Palo Alto Full Guide
9. Log forwarding
9.1. Forward logs from Cortex XDR to external services
9.1.1. Integrate a syslog receiver
9.1.2. Integrate Slack for outbound notifications
9.1.3. Configure notification forwarding
9.1.4. Monitor administrative activity
13.2. Analytics
13.2.1. Analytics engine
13.2.2. Analytics sensors
13.2.3. Coverage of MITRE ATT&CK tactics
13.2.4. Analytics detection time intervals
13.2.5. Analytics alerts and Analytics BIOCs
13.2.6. Identity Analytics
13.2.7. Identity Threat Module
15.8. Dashboards
15.8.1. About dashboards
15.8.2. Predefined dashboards
15.8.3. Custom dashboards
15.8.3.1. Build a custom dashboard
15.8.3.2. Manage your Widget Library
15.8.3.3. Create custom XQL widgets
15.8.3.4. Configure fixed dashboard filters
15.8.3.5. Configure dashboard drilldowns
15.8.3.5.1. Variables in drilldowns
15.8.4. Reports
15.8.4.1. Report templates
15.8.4.2. Run or schedule reports
17.3. Stages
17.3.1. alter
17.3.2. arrayexpand
17.3.3. bin
17.3.4. call
17.3.5. comp
17.3.6. config
17.3.6.1. case_sensitive
17.3.6.2. timeframe
17.3.7. dedup
17.3.8. fields
17.3.9. filter
17.3.10. getrole
17.3.11. iploc
17.3.12. join
17.3.13. limit
17.3.14. replacenull
17.3.15. sort
17.3.16. tag
17.3.17. target
17.3.18. top
17.3.19. transaction
17.3.20. union
17.3.21. view
17.3.22. windowcomp
17.4. Functions
17.4.1. add
17.4.2. approx_count
17.4.3. approx_quantiles
17.4.4. approx_top
17.4.5. array_all
17.4.6. array_any
17.4.7. arrayconcat
17.4.8. arraycreate
17.4.9. arraydistinct
17.4.10. arrayfilter
17.4.11. arrayindex
17.4.12. arrayindexof
17.4.13. array_length
17.4.14. arraymap
17.4.15. arraymerge
17.4.16. arrayrange
17.4.17. arraystring
17.4.18. avg
17.4.19. coalesce
17.4.20. concat
17.4.21. convert_from_base_64
17.4.22. count
17.4.23. count_distinct
17.4.24. current_time
17.4.25. date_floor
17.4.26. divide
17.4.27. earliest
17.4.28. extract_time
17.4.29. extract_url_host
17.4.30. extract_url_pub_suffix
17.4.31. extract_url_registered_domain
17.4.32. first
17.4.33. first_value
17.4.34. floor
17.4.35. format_string
17.4.36. format_timestamp
17.4.37. if
17.4.38. incidr
17.4.39. incidr6
17.4.40. incidrlist
17.4.41. int_to_ip
17.4.42. ip_to_int
18. Reference
18.1. RBAC permissions
18.1.1. Role permissions by components
18.1.2. Default PANW roles
18.1.2.1. Account Admin
18.1.2.2. Deployment Admin
18.1.2.3. Instance Administrator
18.1.2.4. Investigation Admin
18.1.2.5. Investigator
18.1.2.6. IT Admin
18.1.2.7. Privileged Investigator
18.1.2.8. Privileged IT Admin
18.1.2.9. Privileged Responder
18.1.2.10. Privileged Security Admin
18.1.2.11. Responder
18.1.2.12. Scoped Endpoint Admin
18.1.2.13. Security Admin
18.1.2.14. Viewer
19. Glossary
19.1. Alert
19.8. Broker Virtual Machine Fully Qualified Domain Name (Broker VM FQDN)
19.14. Dataset
19.18. Exception
19.22. Filebeat
19.23. Forensics
19.26. Incident
19.33. Notebooks
19.35. Playbook
19.36. Prisma
19.37. Script
Define an endpoint group and then apply policy rules and manage specific endpoints.
You can define an endpoint group and then apply policy rules and manage specific endpoints. If you set up Cloud Identity Engine, you can also leverage your
Active Directory user, group, and computer details to define endpoint groups.
Create a dynamic group by enabling Cortex XDR to populate your endpoint group dynamically using endpoint characteristics, such as an endpoint tag,
partial hostname or alias, full or partial domain or workgroup name, IP address, range or subnets, installation type (VDI, temporary session or standard
endpoint), agent version, endpoint type (workstation, server, mobile), or operating system version.
After you define an endpoint group, you can then use it to target policy and actions to specific recipients. The Endpoint Groups page displays all endpoint
groups along with the number of endpoints and policy rules linked to the endpoint group.
Upload From File: Use a plain text file with newline separators to populate a static endpoint group from a file containing IP addresses,
hostnames, or aliases.
3. Enter a Group Name and optional description to identify the endpoint group. The name you assign to the group will be visible when you assign endpoint
security profiles to endpoints.
Dynamic: Use the filters to define the criteria you want to use to dynamically populate an endpoint group. Dynamic groups support multiple criteria
selections and can use AND or OR operators. For endpoint names and aliases, and domains and workgroups, you can use * to match any string
of characters (see the matching sketch after this list). As you apply filters, Cortex XDR displays any registered endpoint matches to help you validate your filter criteria.
Static: Select specific registered endpoints that you want to include in the endpoint group. Use the filters, as needed, to reduce the number of
results.
When you create a static endpoint group from a file, the IP address, hostname, or alias of the endpoint must match an existing agent that has
registered with Cortex XDR. You can select up to 250 endpoints.
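For illustration only, the following sketch shows one way dynamic-group criteria like those above could be evaluated, assuming fnmatch-style wildcard semantics for * and a simple AND/OR combination. The endpoint fields and the matches helper are hypothetical and are not Cortex XDR APIs.

from fnmatch import fnmatch

# Hypothetical endpoint record; field names are illustrative only.
endpoint = {"hostname": "FIN-LAPTOP-042", "domain": "corp.example.com",
            "os": "windows", "agent_version": "8.2.1"}

# Dynamic-group criteria: each entry is (field, wildcard pattern).
criteria = [("hostname", "FIN-*"), ("domain", "*.example.com")]

def matches(endpoint, criteria, operator="AND"):
    """Return True if the endpoint satisfies the criteria with AND/OR logic."""
    results = [fnmatch(endpoint.get(field, ""), pattern) for field, pattern in criteria]
    return all(results) if operator == "AND" else any(results)

print(matches(endpoint, criteria, "AND"))  # True for this sample record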
Disconnecting Cloud Identity Engine in your Cortex XDR deployment can affect existing endpoint groups and policy rules based on Active Directory
properties.
After you save your endpoint group, it is ready for use to assign security profiles to endpoints and in other places where you can use endpoint groups.
At any time, you can return to the Endpoint Groups page to view and manage your endpoint groups. To manage a group, right-click the group and select the
desired action:
Edit: View the endpoints that match the group definition, and optionally refine the membership criteria using filters.
Save as new: Duplicate the endpoint group and save it as a new group.
Export group: Export the list of endpoints that match the endpoint group criteria to a tab separated values (TSV) file.
View endpoints: Pivot from an endpoint group to a filtered list of endpoints on the Endpoint Administration page where you can quickly view and initiate
actions on the endpoints within the group.
Endpoint security profiles can be used immediately, or customized, to protect your endpoints from threats.
Cortex XDR provides default security profiles that you can use out of the box to immediately begin protecting your endpoints from threats. These profiles are
applied to endpoints by mapping them to policies and then mapping the policies to endpoints.
To aid in endpoint detection and alert investigation, the Cortex XDR agent collects endpoint information when an alert is triggered.
When the Cortex XDR agent raises an alert on endpoint activity, a minimum set of metadata about the endpoint is sent to the server.
When you enable behavioral threat protection or EDR data collection in your endpoint security policy, the Cortex XDR agent can also continuously monitor
endpoint activity for malicious event chains identified by Palo Alto Networks. The endpoint data that the Cortex XDR agent collects when you enable these
capabilities varies by platform type.
Agents with Cortex XDR Pro per Endpoint apply limits and filters on network, file, and registry logs. Expanding these limits and filters requires the Extended
Threat Hunting Data (XTH) add-on.
The tables below note whether specific logs require the XTH add-on.
When the Cortex XDR agent raises an alert on endpoint activity, the following metadata is sent to the server:
Field Description
Process creation time: Part of the process unique ID per boot session (PID + creation time)
Process start: Executable metadata (Traps 6.1 and later), file size
Files (Create, Write, Delete, Modification (Traps 6.1 and later)): Full path of the modified file before and after modification; SHA256 and MD5 hash for the file after modification; file set security (DACL) information (Traps 6.1 and later); symbolic links (Traps 6.1 and later); resolve hostnames on local network (Traps 6.1 and later)
Module and thread data: Base address, target process-id/thread-id, image size, full path
Network: Bind, network connection ID
Registry key: Creation, deletion, rename, addition, restore, save
Process state: Suspend, resume
Endpoint information: OS Version, Domain
User presence (Traps 6.1 and later): Detection when a user is present or idle per active user session on the computer
action_rpc_interface_version_minor
action_rpc_func_opnum
action_rpc_func_str_call_fields (optional)
action_rpc_func_int_call_fields (optional)
action_rpc_interface_name
action_rpc_func_name
action_syscall_target_instance_id
action_syscall_target_image_path
action_syscall_target_image_name
action_syscall_target_os_pid
action_syscall_target_thread_id
address_mapping
Event log: See the table below for the list of Windows Event Logs that can be sent to the server.
In Traps 6.1.3 and later releases, Cortex XDR and Traps agents can send the following Windows Event Logs to the tenant.
For more information on how to set up Windows event logs collection, see Microsoft Windows security auditing setup.
Application, EMET
Application, Windows Error Reporting: Only for Windows Error Reporting (WER) events when an application stops unexpectedly.
Application, Microsoft-Windows-User Profiles Service: 1511: A user logged on with a temporary profile because Windows could not find the user's local profile.
Application, Application Error: 1000: Application unexpected stop/hang events, similar to WER/1001. These events include the full path to the EXE file, or to the module with the fault.
Application, Application Hang: 1002: Application unexpected stop/hang events, similar to WER/1001. These events include the full path to the EXE file, or to the module with the fault.
Microsoft-Windows-DNS-Client/Operational: 3008: A DNS query was completed, without local machine name resolution events and without empty name resolution events.
Microsoft-Windows-PrintService, Microsoft-Windows-PrintService
Microsoft-Windows-Windows Firewall With Advanced Security/Firewall, Microsoft-Windows-Windows Firewall With Advanced Security: 2004, 2005, 2006, 2009, 2033: Windows Firewall With Advanced Security local modifications (Levels 0, 2, 4).
Security, Microsoft-Windows-Eventlog: Event log service events specific to the Security channel.
Security: Routing and Remote Access Service (RRAS) events (these are only generated on a Microsoft IAS server).
Security: 4634: Logoff.
Files (*Requires XTH add-on): Create, Write, Delete, Rename, Move, Open. Collected data: full path of the modified file before and after modification; SHA256 and MD5 hash for the file after modification.
Write: Full path, message. For specific files only and only if the file was written.
Delete: Message.
Learn how to configure the Cortex XDR agent global settings that operate on your endpoints.
In addition to customizable agent settings profiles for each operating system and different endpoint targets, you can set global agent configurations that apply
to all the endpoints in your network.
1. From the Cortex XDR management console, select Settings → Configurations → General → Agent Configurations.
The uninstall password is required to remove a Cortex XDR agent and to grant access to the agent security component on the endpoint. You can use the
default uninstall password, Password1, defined in Cortex XDR, or set a new one and Save. This global uninstall password applies to all the endpoints (excluding
mobile) in your network. If you change the password later, the new default password applies to all new and existing profiles to which the previous default applied. The password must:
Be 8 to 32 characters long.
Contain at least one upper-case letter, at least one lower-case letter, at least one number, and at least one of the following characters: !@#%.
Enable bandwidth control: Palo Alto Networks allows you to control your Cortex XDR agent network consumption by adjusting the bandwidth it is
allocated. Based on the number of agents you want to update with content and upgrade packages (active or future agents), the Cortex XDR
calculator determines the recommended amount of Mbps (megabits per second) required for a connected agent to retrieve a content update over
a 24-hour period or a week. Cortex XDR supports between 20 and 10,000 Mbps; you can enter one of the recommended values or enter one of your
own. For optimized performance and reduced bandwidth consumption, it is recommended that you install and update new agents using SCCM;
Cortex XDR agents 7.3 and later include the content package built in.
Enable minor content version updates: The Cortex XDR research team releases more frequent content updates in-between major content versions
to ensure your network is constantly protected against the latest and newest threats in the wild. Enabled by default, the Cortex XDR agent receives
minor content updates, starting with the next content releases. To learn more about the minor content numbering format, refer to About Content
Updates in the Cortex XDR Administrator Guide.
To control the amount of bandwidth allocated in your network to Cortex XDR content updates, assign a Content bandwidth management value between
20-10,000 Mbps. To help you with this calculation, Cortex XDR recommends the optimal value of Mbps based on the number of active agents in your
network, and including overhead considerations for large content updates. Cortex XDR verifies that agents attempting to download the content update
are within the allocated bandwidth before beginning the distribution. If the bandwidth has reached its cap, the download will be refused and the agents
will attempt again at a later time. After you set the bandwidth, Save the configuration.
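The guide does not publish the calculator's exact formula. The sketch below only illustrates the kind of estimate involved, assuming a given content-package size and distribution window; treat the package size and the result as placeholders rather than the product's own calculation.

# Rough bandwidth estimate: enough Mbps for `agents` endpoints to each pull
# one content package of `package_mb` megabytes within `window_hours`.
def estimated_mbps(agents: int, package_mb: float = 50.0, window_hours: int = 24) -> float:
    total_megabits = agents * package_mb * 8           # MB -> Mb
    seconds = window_hours * 3600
    return total_megabits / seconds

for count in (1_000, 10_000, 50_000):
    print(count, "agents ->", round(estimated_mbps(count), 1), "Mbps over 24h")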
5. Configure the Cortex XDR agent auto upgrade scheduler and number of parallel upgrades.
If agent auto upgrades are enabled for your Cortex XDR agents, you can control the automatic upgrade process in your network. To better control the
rollout of a new Cortex XDR agent release in your organization, only a single batch of agents is upgraded during the first week. After that, auto-upgrades
continue to be deployed across your network with the number of parallel upgrades as configured.
Amount of Parallel Upgrades: Set the number of parallel agent upgrades, where the maximum is 500 agents.
Days in week: You can schedule the upgrade task for specific days of the week and a specific time range. The minimum range is four hours.
6. Configure automated Advanced Analysis of Cortex XDR Agent alerts raised by exploit protection modules.
Advanced Analysis is an additional verification method you can use to validate the verdict issued by the Cortex XDR agent. In addition, Advanced
Analysis also helps Palo Alto Networks researchers tune exploit protection modules for accuracy.
To initiate additional analysis you must retrieve data about the alert from the endpoint. You can do this manually on an alert-by-alert basis or you can
enable Cortex XDR to automatically retrieve the files.
After Cortex XDR receives the data, it automatically analyzes the memory contents and renders a verdict. When the analysis is complete, Cortex XDR
displays the results in the Advanced Analysis field of the Additional data view for the data retrieval action on the Action Center. If the Advanced Analysis
verdict is benign, you can avoid subsequent blocked files for users that encounter the same behavior by enabling Cortex XDR to automatically create
and distribute exceptions based on the Advanced Analysis results.
Enable Cortex XDR to automatically upload defined alert data files for advanced analysis. Advanced Analysis increases the Cortex XDR
exploit protection module accuracy.
Automatically apply Advanced Analysis exceptions to your Global Exceptions list. This will apply all Advanced Analysis exceptions
suggested by Cortex XDR, regardless of the alert data file source.
7. Configure the Cortex XDR Agent license revocation and deletion period.
This configuration applies to standard endpoints only and does not impact the license status of agents for VDIs or Temporary Sessions.
Connection Lost (Days): Configure the number of days after which the license should be returned when an agent loses the connection to
Cortex XDR. Default is 30 days; the range is 2 to 60 days. Day one is counted as the first 24 hours with no connection.
Agent Deletion (Days): Configure the number of days after which the agent and related data are removed from the Cortex XDR management
console and database. Default is 180 days; the range is 3 to 360 days and must exceed the Connection Lost value. Day one is the first
24 hours of lost connection.
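As a quick illustration of the constraints described above (Connection Lost between 2 and 60 days, Agent Deletion between 3 and 360 days and greater than Connection Lost), here is a small validation sketch. It is not part of the product; the function name and error handling are assumptions.

def validate_license_periods(connection_lost_days: int, agent_deletion_days: int) -> None:
    """Raise ValueError if the configured periods violate the documented ranges."""
    if not 2 <= connection_lost_days <= 60:
        raise ValueError("Connection Lost must be between 2 and 60 days")
    if not 3 <= agent_deletion_days <= 360:
        raise ValueError("Agent Deletion must be between 3 and 360 days")
    if agent_deletion_days <= connection_lost_days:
        raise ValueError("Agent Deletion must exceed the Connection Lost value")

validate_license_periods(30, 180)  # the documented defaults pass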
The WildFire analysis score for files with a Benign verdict is used to indicate the level of confidence WildFire has in the Benign verdict. For example, a file
by a trusted signer or a file that was tested manually gets a high confidence Benign score, whereas a file that did not display any suspicious behavior at
the time of testing gets a lower confidence Benign score. To add an additional verification method to such files, enable this setting. Then, when Cortex
XDR receives a Benign Low Confidence verdict, the agent enforces the Malware Security profile settings you currently have in place (Run local analysis
to determine the file verdict, Allow, or Block).
Disabling this capability takes immediate effect on new hashes, fresh agent installations, and existing security policies. It could take up to a week to take
effect on existing agents in your environment pending agent caching.
Behavioral threat protection (BTP) alerts have been given unique and informative names and descriptions, to provide immediate clarity into the events
without having to drill down into each alert. Enable this setting to display the informative BTP rule alert names and descriptions. After you update the settings, new
alerts include the changes while already existing alerts remain unaffected.
If you have any Cortex XDR filters, starring policies, exclusion policies, scoring rules, log forwarding queries, or automation rules configured for
XSIAM/3rd party SIEM, we advise you to update those to support the changes before activating the feature. For example, change the query to include
the previous description that is still available in the new description, instead of searching for an exact match.
10. Configure settings for periodic cleanup of duplicate entities in the endpoint administration table.
When enabled, Periodic duplicate cleanup removes all duplicate entries of an endpoint from the endpoint table based on the defined parameters,
leaving only the last occurrence of the endpoint reporting to the server. This enables you to streamline and improve the management of your endpoints.
For example, when an endpoint reconnects after a hardware change, it may be re-registered, leading to confusion in the endpoint administration table
regarding the real status of the endpoint. The cleanup leaves only the latest record of the endpoint in the table.
Define whether to clean up according to Host Name, Host IP Address, MAC Address, or any combination of them. If not selected, the default is
Host Name. When you select more than one parameter, duplicate entries are removed only if they include all the selected parameters.
Configure the frequency of the cleanup—every 6 hours, 12 hours, 1 day, or 7 days. You can also select to perform an immediate One-time
cleanup.
Data for a deleted endpoint is retained for 90 days since the endpoint’s last connection to the system. If a deleted endpoint reconnects, Cortex XDR
recovers its existing data.
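To make the matching behavior concrete, the sketch below deduplicates a list of endpoint records on the selected parameters, keeping only the last occurrence, mirroring the description above. The record fields, ordering assumption, and sample data are illustrative only.

def cleanup_duplicates(records, keys=("hostname",)):
    """Keep only the last record per unique combination of the selected keys.

    `records` is assumed to be ordered oldest-first, so the last occurrence
    is the most recent report to the server.
    """
    latest = {}
    for record in records:
        identity = tuple(record.get(k) for k in keys)
        latest[identity] = record            # later records overwrite earlier ones
    return list(latest.values())

endpoints = [
    {"hostname": "host-1", "ip": "10.0.0.5", "mac": "aa:bb:cc:00:00:01", "last_seen": "2024-01-01"},
    {"hostname": "host-1", "ip": "10.0.0.9", "mac": "aa:bb:cc:00:00:02", "last_seen": "2024-03-01"},
]
print(cleanup_duplicates(endpoints, keys=("hostname",)))  # only the March record remains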
Learn more about how to control Cortex XDR agent and content upgrades.
This document covers a recommended strategy and best practices for managing agent and content updates to help reduce the risk of downtime in a
production environment, while helping ensure timely delivery of security content and capabilities.
Keeping Cortex XDR agents up-to-date is essential for protecting against evolving threats and vulnerabilities. Regular updates ensure the latest security
features for malware and exploit prevention, and compatibility with the latest software environments, which helps reduce the risk of attacks. This can also help
organizations meet regulatory standards while maintaining strong overall protection.
Content updates, such as new threat intelligence or detection logic, are critical for defending against newly discovered cyber threats and malware and are
designed to ensure that systems remain protected against the latest attacks. Content updates address compatibility issues as well, helping achieve smooth
operations alongside the Cortex XDR agent. Without regular content updates, security solutions may fail to detect new or evolving threats, leaving systems
vulnerable to attacks.
When planning Cortex XDR agent upgrades and content updates, consult with the appropriate stakeholders and teams and follow the change management
strategy in your organization.
The Cortex XDR agent can retrieve content updates immediately as they become available, after a pre-configured delay period of up to 30 days, or you can
choose to select a specific version.
Cortex XDR can be configured to manage the deployment of agent and content updates by adjusting the following settings:
Upgrade Rollout includes two options: Immediate, where the Cortex XDR agent automatically receives new releases, including maintenance updates
and features, and Delayed, which lets you set a delay of 7 to 45 days after a version is released before upgrading endpoints.
Global agent settings: Configure the Cortex XDR agent upgrade scheduler and the number of parallel upgrades to apply to all endpoints in your
organization. You can also schedule the upgrade task for specific days of the week and set a specific time range for the upgrades.
Content Auto-Update is enabled by default and automatically retrieves the latest content before deploying it on the endpoint. If you disable content
updates, the agent will stop fetching updates from the Cortex XDR tenant and will continue to operate with the existing content on the endpoint.
Content Rollout: The Cortex XDR agent can retrieve content updates immediately as they become available, after a pre-configured delay period of up
to 30 days, or you can choose to select a specific version.
Global content updates: Configure the content update cadence and bandwidth allocation within your organization. To enforce immediate protection against
the latest threats, enable minor content updates. Otherwise, the content updates in your network occur only on major releases.
Use a phased rollout plan by creating batches for deploying updates. The specifics may vary based on your organization and its structure. Start with a control
group, then deploy to 10% of your organization. Subsequently, allocate the remaining upgrades in batches that best suit your organization until achieving a full
100% rollout.
Example 3.
The following is an example of a rollout plan for deploying a Cortex XDR agent upgrade:
Phase 1: Control group rollout: Start by selecting a control group of endpoints as early adopters. This group should consist of a diverse range of operating
systems, devices, applications, and servers, with a focus on low-risk endpoints. After a defined testing period, such as one week, assess for any issues. If no
problems are found, move to the next phase.
Phase 2: 10% rollout: Expand the rollout to 10% of the organization’s endpoints. This group should maintain the same variety as the control group but include
low- to medium-risk endpoints. Monitor performance during the set period. If the rollout is successful with no issues, proceed to the next phase.
Phase 3: 40% rollout: After confirming the success of the 10% rollout, extend the deployment to 40% of the organization. Continue including a variety of
endpoints while gradually incorporating some medium-risk endpoints. Ensure thorough testing during this phase before moving forward.
Phase 4: 80% rollout: Extend the deployment to 80% of the organization's endpoints. This batch should include a wide variety of endpoints, incorporating
both medium and high-risk systems. After a careful monitoring period and confirmation that everything is stable, move to the final phase.
Phase 5: Full rollout: Complete the rollout by updating the remaining 20% of the organization’s endpoints. By this point, the majority of systems should have
been thoroughly tested, reducing the risk of issues in the final stage. Once complete, 100% of the organization will be updated.
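As a planning aid only, the following sketch splits an endpoint inventory into the phases described in this example (control group, then 10%, 40%, 80%, and 100%). The control-group size and the inventory contents are assumptions.

def rollout_batches(endpoints, control_size=20, cumulative=(0.10, 0.40, 0.80, 1.00)):
    """Split an endpoint list into phased batches: control group, then growing percentages."""
    batches = [endpoints[:control_size]]
    start = control_size
    for fraction in cumulative:
        end = max(start, int(len(endpoints) * fraction))
        batches.append(endpoints[start:end])
        start = end
    return batches

inventory = [f"endpoint-{i:04d}" for i in range(1, 1001)]
for phase, batch in enumerate(rollout_batches(inventory), start=1):
    print(f"Phase {phase}: {len(batch)} endpoints")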
Content updates are typically provided on a weekly basis. Use a phased rollout plan by creating batches for deploying updates. Start with a control group,
then deploy to 10% of your organization. Subsequently, allocate the remaining upgrades in batches that best suit your organization until achieving a full 100%
rollout.
Example 4.
The following is an example of a rollout plan over a period of one week for deploying content updates:
Phase 1: Control group rollout: Keep the default configuration set to deploy content updates immediately.
Phase 2: 10% rollout: Content is automatically deployed on day 2 following a delay period defined in the profile.
Phase 3: 60% rollout: Content is automatically deployed on day 3 following a delay period defined in the profile.
The following information will help you select and configure the update settings.
Configure one or more of the settings described in this section to keep your Cortex XDR agents up-to-date.
1. Create an agent installation package for each operating system version for which you want to upgrade the Cortex XDR agent.
If needed, filter the list of endpoints. To reduce the number of results, use the endpoint name search and the Filters at the top of the page.
You can also select endpoints running different operating systems to upgrade the agents at the same time.
4. Right-click your selection and select Endpoint Control → Upgrade Agent Version.
For each platform, select the name of the installation package you want to push to the selected endpoints.
You can install the Cortex XDR agent on Linux endpoints using a package manager. If you do not want to use the package manager, clear the option
Upgrade to installation by package manager.
When you upgrade an agent on a Linux endpoint that is not using a package manager, Cortex XDR upgrades the installation process by default
according to the endpoint Linux distribution.
The Cortex XDR agent keeps the name of the original installation package after every upgrade.
5. Upgrade.
Cortex XDR distributes the installation package to the selected endpoints at the next heartbeat communication with the agent. To monitor the status of
the upgrades, go to Response → Action Center.
From the Action Center you can also view additional information about the upgrade (right-click the action and select Additional data) or cancel the
upgrade (right-click the action and select Cancel Agent Upgrade).
Custom dashboards that include upgrade status widgets, and the All Endpoints page display upgrade status.
During the upgrade process, the endpoint operating system might request a reboot. However, you do not have to perform the reboot for the Cortex
XDR agent upgrade process to complete successfully.
After you upgrade on an endpoint with Cortex XDR Device Control rules, you need to reboot the endpoint for the rules to take effect.
These profiles can be configured on one or more endpoints, static/dynamic groups, tags, IP ranges, endpoint names, or other parameters that allow the
creation of logical endpoint groups. See how to define an endpoint group.
1. Go to Endpoints → Policy Management → Profiles, and then edit an existing profile, add a new profile, or import from a file.
2. Choose the operating system, and select Agent Settings. Then click Next.
Before enabling Auto-Update for Cortex XDR agents, make sure to consult with all relevant stakeholders in your organization.
Automatic Upgrade Scope: Latest agent release (Default), One release before the latest one, or Only maintenance releases. For One release before the latest one, Cortex XDR upgrades the agent to the previous release before the latest, including maintenance releases. Major releases are numbered X.X, such as release 8.0 or 8.2. Maintenance releases are numbered X.X.X, such as release 8.2.2.
Upgrade Rollout: Immediate (Default) or Delayed. With Immediate, the Cortex XDR agent automatically gets any new agent release, including maintenance releases and new features. For Delayed, set the delay period (number of days) to wait after the version release before upgrading endpoints. Choose a value between 7 and 45.
Configure the Cortex XDR agent upgrade scheduler and the number of parallel upgrades to apply to all endpoints in your organization.
2. Configure the Cortex XDR agent upgrade scheduler and the number of parallel upgrades.
Item Description
Amount of parallel upgrades: During the first week of a new Cortex XDR agent release rollout, only a single batch of agents is upgraded. After that, auto-upgrades continue to be deployed across your network with the number of parallel upgrades as configured. Set the number of parallel agent upgrades, where the maximum is 500 agents.
Days in week Schedule the upgrade task for specific days of the week.
Schedule Schedule a specific time range. The minimum range is four hours.
Content updates
When a new content update is available, Cortex XDR notifies the Cortex XDR agent. The Cortex XDR agent then randomly chooses a time within a six-hour
window during which it will retrieve the content update from Cortex XDR. By staggering the distribution of content updates, Cortex XDR reduces the bandwidth
load and prevents bandwidth saturation due to the high volume and size of the content updates across many endpoints. You can view the distribution of
endpoints by content update version from the dashboard.
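The sketch below imitates the staggering behavior described above: each agent picks a random offset within a six-hour window before fetching the update. It is only a model of the idea, not agent code.

import random
from datetime import datetime, timedelta

def schedule_content_fetch(notified_at: datetime, window_hours: int = 6) -> datetime:
    """Pick a random fetch time within the window following the update notification."""
    offset = timedelta(seconds=random.uniform(0, window_hours * 3600))
    return notified_at + offset

now = datetime.utcnow()
fetch_times = [schedule_content_fetch(now) for _ in range(5)]
print(sorted(t.strftime("%H:%M:%S") for t in fetch_times))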
You can configure whether to update content per endpoint or use the global settings. The information in this topic will help you select and configure these
methods.
Configure content update options for agents within the organization to ensure they are always protected with the latest security measures.
These profiles can be configured on one or more endpoints, static/dynamic groups, tags, IP ranges, endpoint names, or other parameters that allow the
creation of logical endpoint groups.
1. Go to Endpoints → Policy Management → Profiles , and then edit an existing profile, add a new profile, or import from a file.
2. Choose the operating system, and select Agent Settings. Then click Next.
Content Auto-Update: Enabled (Default) or Disabled. When Content Auto-Update is enabled, the Cortex XDR agent retrieves the most updated content and deploys it on the endpoint. If you disable content updates, the agent stops retrieving them from the Cortex XDR tenant, and keeps working with the current content on the endpoint.
Staging Content: Enabled or Disabled (Default). Enable users to deploy agent staging content on selected test environments. Staging content is released before production content, allowing for early evaluation of the latest content update.
Content Rollout: Immediate (Default), Delayed, or Specific. The Cortex XDR agent can retrieve content updates immediately as they are available, after a pre-configured delay period of up to 30 days, or you can select a specific version. When you delay content updates, the Cortex XDR agent will retrieve the content according to the configured delay. For example, if you configure a delay period of two days, the agent will not use any content released in the last 48 hours.
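To illustrate the delayed rollout behavior (for example, a two-day delay skipping content released in the last 48 hours), here is a small sketch that filters a list of content releases by age. The release versions and field names are made up for the example.

from datetime import datetime, timedelta

def eligible_content(releases, delay_days: int, now: datetime):
    """Return releases old enough to be used under the configured delay period."""
    cutoff = now - timedelta(days=delay_days)
    return [r for r in releases if r["released"] <= cutoff]

now = datetime(2024, 6, 10)
releases = [
    {"version": "1240-98765", "released": datetime(2024, 6, 9)},   # too new for a 2-day delay
    {"version": "1230-98111", "released": datetime(2024, 6, 5)},
]
print(eligible_content(releases, delay_days=2, now=now))  # only the June 5 release qualifies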
2. Configure the content update cadence and bandwidth allocation within your organization.
Item Description
Enable bandwidth control: Based on the number of agents you want to update with content and upgrade packages (active or future agents), the Cortex XDR calculator determines the recommended amount of Mbps (megabits per second) required for a connected agent to retrieve a content update over a 24-hour period or a week. Cortex XDR supports between 20 and 10,000 Mbps; you can enter one of the recommended values or enter one of your own. For optimized performance and reduced bandwidth consumption, it is recommended that you install and update new agents using SCCM; Cortex XDR agents 7.3 and later include the content package built in.
XDR Calculator for Recommended Bandwidth: Based on the number of agents you want to update with content and upgrade packages (active or future agents), the Cortex XDR calculator determines the recommended amount of Mbps required for a connected agent to retrieve a content update over 24 hours or a week. This calculation is based on connected agents and includes an overhead for large content updates.
Enable minor content version updates: To enforce immediate protection against the latest threats, enable minor content updates. Otherwise, the content updates in your network occur only on major releases.
Learn how to configure Cortex XDR to ingest data from a variety of Palo Alto Networks and third-party sources.
To provide you with a more complete and detailed picture of the activity involved in an incident, you can configure Cortex XDR to ingest data from a variety of
Palo Alto Networks and third-party sources.
Configure server settings such as keyboard shortcuts, timezone, and timestamp format.
You can configure server settings such as keyboard shortcuts, timezone, and timestamp format, to create a more personalized user experience in Cortex XDR.
Go to Settings → Configurations → General → Server Settings.
Keyboard shortcuts, timezone, and timestamp format are not set universally and only apply to the user who sets them.
Keyboard Shortcuts Enables you to change the default shortcut settings. The shortcut value
must be a keyboard letter, A through Z, and cannot be the same for both
shortcuts.
Timezone Select a specific timezone. The timezone affects the timestamps displayed
in Cortex XDR, auditing logs, and when exporting files.
Timestamp Format The format in which to display Cortex XDR data. The format affects the
timestamps displayed in Cortex XDR, auditing logs, and when exporting
files.
Email Contacts A list of email addresses Cortex XDR can use as distribution lists. The
defined email addresses are used to send product maintenance, updates,
and new version notifications. These addresses are in addition to email
addresses registered with your Customer Support Portal account.
Password Protection (for downloaded files) Enable password protection when downloading retrieved files from an
endpoint. This prevents users from opening potentially malicious files.
Scoped Server Access Enforces access restrictions on users with an assigned scope. A user can
inherit scope permissions from a group, or have a scope assigned directly
on top of the role assigned from the group.
If enabled, you must select the SBAC Mode, which is defined per tenant:
Permissive: Enables users with at least one scope tag to access the
relevant entity with that same tag.
Restrictive: Users must have all the scope tags that are assigned to the relevant entity in the system.
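The difference between the two modes can be summarized as an any-tag versus all-tags check. The sketch below expresses that logic; it is a simplification for illustration and not the product's implementation.

def can_access(user_tags: set, entity_tags: set, mode: str = "Permissive") -> bool:
    """Permissive: any shared tag grants access. Restrictive: user needs every entity tag."""
    if mode == "Permissive":
        return bool(user_tags & entity_tags)
    return entity_tags <= user_tags  # Restrictive

print(can_access({"emea"}, {"emea", "pci"}, "Permissive"))   # True: one tag matches
print(can_access({"emea"}, {"emea", "pci"}, "Restrictive"))  # False: "pci" is missing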
By default, this setting is set to false and field values are evaluated as
case insensitive. This setting overwrites any other default configuration
except for BIOCs, which will remain case insensitive no matter what this
configuration is set to.
From Cortex XDR version 3.3, the default case sensitivity setting was
changed to case insensitive (config case_sensitive = false). If
you've been using Cortex XDR before this version was released, the default
case sensitivity setting is still configured to be case sensitive (config
case_sensitive = true).
Define the incidents target MTTR per incident severity: Determines within how many days and hours you want incidents resolved, per incident severity (Critical, High, Medium, and Low).
Impersonation Role The type of role permissions granted to the Palo Alto Networks Support
team when opening support tickets. We recommend that role permissions
are granted only for a specific time frame, and full administrative
permissions are granted only when specifically requested by the Support
team.
Prisma Cloud Compute Tenant Pairing Requires a Cortex XSIAM or Cortex XDR Pro license
For more information, see Pairing Prisma Cloud with Cortex XDR .
Configure security settings such as session expiration, user login expiration, and dashboard expiration.
You can configure security settings such as how long users can be logged into Cortex XDR, and from which domains and IP ranges users can log in.
Session Expiration, User Login Expiration: The number of hours (between 1 and 24) after which the user login session expires. You can also choose to automatically log users out after a specified period of inactivity.
Allowed Sessions, Approved Domains: The domains from which you want to allow user access (login) to Cortex XDR. You can add or remove domains as necessary.
Approved IP Ranges: The IP ranges from which you want to allow user access (login) to Cortex XDR. You can also choose to limit API access from specific IP addresses.
User Expiration, Deactivate Inactive User: Deactivate an inactive user, and also set the user deactivation trigger period. By default, user expiration is disabled. When enabled, enter the number of days after which inactive users should be deactivated.
Allowed Domains, Domain Name: The domain names that can be used in your distribution lists for reports. For example, when generating a report, ensure the reports are not sent to email addresses outside your organization.
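As a simple illustration of that check, the sketch below filters a report distribution list against allowed domains. It is only an example of the idea, not how Cortex XDR performs the validation internally.

def filter_recipients(recipients, allowed_domains):
    """Keep only addresses whose domain is in the allowed list."""
    allowed = {d.lower() for d in allowed_domains}
    return [r for r in recipients if r.rsplit("@", 1)[-1].lower() in allowed]

print(filter_recipients(
    ["soc@example.com", "auditor@partner.org"],
    ["example.com"],
))  # only soc@example.com remains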
9 | Log forwarding
Abstract
Stay informed and updated about events in your system by forwarding alerts and reports to an external service, such as a syslog receiver, a Slack channel, or
an email account.
Logs provide information about events that occur in the system. These logs are a valuable tool in troubleshooting issues that might arise in your Cortex XDR
tenant.
To stay informed about important alerts and events, you can configure your notifications and specify the type of logs you want to forward. You can choose to
receive these notifications through a syslog receiver, a Slack channel, or an email account.
Learn how to forward logs from Cortex XDR to external services such as email, Slack, or a syslog receiver.
You can forward logs from Cortex XDR to an external service. This allows you to stay updated on important alerts and events. Available services include the
following:
Slack channel and/or syslog receiver: Integrate the service with Cortex XDR. Once the integration is complete, configure notification forwarding
specifying the log type you want to forward.
Email distribution list: Configure notification forwarding specifying the log type you want to forward.
The following table shows the log types supported for each notification type:
Alerts ✓ ✓ ✓
Reports ✓ ✓ —
Abstract
Define syslog settings and then configure notification forwarding to receive notifications about alerts and reports.
A syslog receiver can be a physical or virtual server, a SaaS solution, or any service that accepts syslog messages.
To send Cortex XDR notifications to your syslog receiver, you first need to define the settings for the syslog receiver. Once this is complete, you can configure
notification forwarding.
Before you begin, enable access to the following Cortex XDR IP addresses for your region in your firewall.
35.224.66.220
35.246.192.146
34.90.105.250
35.203.52.255
34.105.149.197
34.87.125.227
35.243.76.189
34.87.219.39
35.239.59.210
34.93.183.131
34.65.74.83
34.118.126.170
35.185.171.91
34.18.43.40
34.155.72.149
34.165.101.105
34.166.55.72
34.101.176.232
34.175.230.150
Parameter Description
Destination IP address or fully qualified domain name (FQDN) of the syslog receiver.
Facility Select one of the syslog standard values. The value maps to how your syslog server uses the facility field to manage messages. For
details on the facility field, see RFC 5424.
TCP: No validation is made on the connection with the syslog receiver. However, if an error occurred with the domain used to
make the connection, the Test connection will fail.
UDP: No error checking, error correction, or acknowledgment. No validation is done for the connection or when sending data.
TCP + SSL: Cortex XDR validates the syslog receiver certificate and uses the certificate signature and public key to encrypt the
data sent over the connection.
Certificate The communication between Cortex XDR and the syslog destination can use TLS. In this case, upon connection, Cortex XDR validates
that the syslog receiver has a certificate signed by either a trusted root CA or a self-signed certificate. You may need to merge the
Root and Intermediate certificate if you receive a certificate error when using a public certificate.
If your syslog receiver uses a self-signed CA, upload your self-signed syslog receiver CA. If you only use a trusted root CA leave
the certificate field empty.
You can ignore certificate errors. For security reasons, this is not recommended. If you choose this option, logs will be forwarded even
if the certificate contains errors.
4. Test the parameters to ensure a valid connection, and click Create when ready.
You can define up to five syslog receivers. Upon success, the table displays the syslog servers and their status.
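Before wiring Cortex XDR to a receiver, it can help to confirm that something is actually listening on the destination host and port. The sketch below is a throwaway plain-TCP listener for that sanity check; it does not implement TLS or the full syslog protocol, and the port is an assumption to adjust for your environment.

import socketserver

class SyslogTCPHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Print whatever the sender (for example, the Cortex XDR test message) delivers.
        for line in self.rfile:
            print(f"{self.client_address[0]}: {line.decode(errors='replace').rstrip()}")

if __name__ == "__main__":
    # 514 is the conventional syslog port; adjust to match your receiver configuration.
    with socketserver.TCPServer(("0.0.0.0", 514), SyslogTCPHandler) as server:
        server.serve_forever()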
What to do next
After you integrate with your syslog receiver, configure your forwarding settings. For more information, see Configure notification forwarding.
When configuring a syslog message, Cortex XDR sends a test message. If a test message cannot be sent, Cortex XDR displays an error message to help you
troubleshoot.
The following table includes descriptions and suggested solutions for the error messages:
Host Resolving Failed: The IP address or hostname you provided doesn't exist, or can't be resolved. Suggested solution: Ensure you have the correct IP address or hostname.
Configured Local Address: The IP address or hostname you provided is internal and can't be used. Suggested solution: Ensure you have the correct IP address or hostname.
Wrong Certificate Format: The certificate you uploaded is in an unexpected format and can't be used. The certificate must be an ASCII string or a bytes-like object. Suggested solution: Re-create the certificate in the correct format, for example:
-----BEGIN CERTIFICATE-----
MIIDHTCCAgWgAwIBAgIQSwieRyGdh6BNRQyp406bnTANBgkqhkiG9w0BAQsFADAhMR8wHQYDVQQDExZTVVJTLUNoYXJsaWVBbHBoYS1Sb290MB4XDTIwMDQzMDE4MjEzNFo
-----END CERTIFICATE-----
Connection Timed Out: Cortex XDR didn't connect to the syslog receiver in the expected time. This could be because your firewall blocked the connection or because the configuration of the syslog server caused it to drop the connection. Suggested solution: Check the firewall logs and the connection using Wireshark.
Connection Refused: The syslog receiver refused the connection. This could be because your firewall blocked the connection or because the configuration of the syslog server caused it to drop the connection. Suggested solution: Check the firewall logs and the connection using Wireshark.
Connection Reset: The connection was reset by the syslog receiver. This could be because your firewall blocked the connection or because the configuration of the syslog receiver caused it to drop the connection. Suggested solution: Check the firewall logs and the connection using Wireshark.
Certificate Verification Failed: The uploaded certificate couldn't be verified for one of the following reasons: the certificate doesn't correspond to the certificate on the syslog receiver and can't be validated; the certificate doesn't have the correct hostname; or you are using a certificate chain and didn't merge the certificates into one certificate. Suggested solutions:
Incorrect certificate: To check that the certificate you are uploading corresponds to the syslog server certificate, use the following openssl command:
openssl verify -verbose -CAfile cortex_upload_certificate syslog_certificate
If the certificate is correct, the result is syslog_certificate: OK.
Incorrect hostname: Make sure that the hostname/IP in the certificate matches the syslog server.
Certificate chain: If you are using a list of certificates, merge the chain into one certificate. You can concatenate the certificates:
cat intermediate_cert root_cert > merged_syslog.crt
If the concatenated certificate doesn't work, change the order of the root and intermediate certificates, and try again. To verify that the chain certificate was saved correctly, use the following openssl command:
openssl verify -verbose -CAfile cortex_upload_certificate syslog_certificate
If the certificate is correct, the result is syslog_certificate: OK.
Connection Terminated Abruptly: The firewall or the syslog receiver dropped the connection unexpectedly. This could be because the firewall on the customer side limits the number of connections, the configuration on the syslog receiver drops the connection, or the network is unstable. Suggested solution: Check the firewall logs and the connection using Wireshark.
Host Unreachable: The network configuration is faulty and the connection can't reach the syslog receiver. Suggested solution: Check the network configuration, including any firewall or load balancer that may block the connection, to make sure that everything is configured correctly.
Abstract
Learn how to integrate Cortex XDR with your Slack workspace and stay updated on important alerts and events.
Integrate Cortex XDR with your Slack workspace to manage and highlight your alerts and reports. Creating a Cortex XDR Slack channel ensures that defined
alerts are exposed on laptop and mobile devices using the Slack interface. Unlike email notifications, Slack channels provide dedicated spaces where you can
contact specific members regarding your alerts.
2. Select the provided link to install Cortex XDR on your Slack workspace.
You are directed to the Slack browser to install Cortex XDR. You can only use this link to install Cortex XDR on Slack. Attempting to install from Slack
Marketplace will redirect you to Cortex XDR documentation.
3. Click Submit.
Upon successful installation, Cortex XDR displays the workspace to which you connected.
What to do next
After you integrate with your Slack workspace, configure your forwarding settings. For more information, see Configure notification forwarding.
Abstract
Learn how to create a forwarding configuration that specifies the log type you want to forward.
After you integrate with an external service such as Slack or a syslog receiver, create a forwarding configuration that specifies the log type you want to forward.
You can configure notifications for alerts, agent audit logs, and management audit logs. To receive notifications about reports, see Create a report from
scratch.
Before you can select a syslog receiver or a Slack channel, you need to integrate these external services with Cortex XDR.
Alerts: Send notifications for specific alert types (for example, XDR Agent or BIOC).
To configure notification forwarding for Health alerts, select Log Type = Alerts and filter the Alerts table by Alert Domain = Health.
Agent Audit Logs: Send notifications for audit logs reported by your Cortex XDR agents.
Management Audit Logs: Send notifications for audit logs about events related to your Cortex XDR tenant.
For example, for a filter set to Severity = Medium, Alert Source = XDR Agent, Cortex XDR sends the alerts or events matching this filter as a
notification.
6. Click Next.
a. In the Distribution List, add the email addresses to which you want to send email notifications.
b. In the Grouping Timeframe, define the time frame, in minutes, to specify how often Cortex XDR sends notifications. Every 20 alerts or 20 events
aggregated within this time frame are sent together in one notification, sorted according to the severity. To send a notification when one alert or
event is generated, set the time frame to 0.
d. If you previously used log forwarding and want to continue forwarding logs in the same format, select Use Legacy Log Format. For more
information, see Log format for IOC and BIOC alerts.
8. Depending on the notification integrations supported by the log type, configure the Slack channel or syslog receiver notification settings. For a list of log
types supported in each notification type, see Forward logs from Cortex XDR to external services.
Enter the Slack channel name and select from the list of available channels. Slack channels are managed independently of Cortex XDR in your
Slack workspace. After integrating your Slack account with your Cortex XDR tenant, Cortex XDR displays a list of specific Slack channels
associated with the integrated Slack workspace.
Select a syslog receiver. Cortex XDR displays the list of receivers integrated with your Cortex XDR tenant.
Abstract
View all Cortex XDR administrator-initiated actions taken on alerts, incidents, and live terminal sessions.
From Settings → Management Auditing, you can track the status of all administrative and investigative actions. Cortex XDR stores audit logs for 365 days
(instead of 180 days, which was the retention period in the past). Use the page filters to narrow the results or manage tables to add or remove fields as
needed.
To ensure you and your colleagues stay informed about administrative activity, you can configure notification forwarding to forward your Management Audit log
to an email distribution list, Syslog server, or Slack channel.
The following table describes the default and optional fields that you can view in alphabetical order.
Field Description
Description Descriptive summary of the administrative action. Hover over this field to view more
detailed information in a popup tooltip. This enables you to know exactly what has
changed, and, if necessary, roll back the change.
Authentication: User sessions started, along with the user name that started the
session.
Host Insights: Initiation of Host Insights data collection scan (Host Inventory and
Vulnerability Assessment).
Incident Management: Actions taken on incidents and on the assets, alerts, and
artifacts in incidents.
Live Terminal: Remote terminal sessions created and actions taken in the file
manager or task manager, a complete history of commands issued, their
success, and the response.
Public API: Authentication activity using an associated Cortex XDR API key.
Response: Remedial actions taken. For example: Isolate a host, undo host
isolation, add a file hash signature to the block list, or undo the addition to the
block list.
Cortex XDR provides you with different formats for its log notifications.
When Cortex XDR alerts and audit logs are forwarded to an external data source, notifications are sent according to the necessary format (syslog messages,
email, or Slack notifications). If you prefer Cortex XDR to forward logs in legacy format, select the legacy option in your log forwarding configuration.
Abstract
View the types of Cortex XDR management audit log messages that are sent.
Cortex XDR management audit log messages are sent based on the various components, for example, Action Center, Alert Rules, or Authentication. It is
recommended that you review the subtype, status, and informational fields for each message.
List of components
Agent Configuration
Agent Installation
Alert Exclusions
Alert Notifications
Alert Rules
API Key
Authentication
Broker API
Broker VMs
Dashboards
EDL Management
Endpoint Administration
Endpoint Groups
Event Forwarding
Extensions Policy
Extensions Profile
Global Exceptions
Host Insights
Incident Management
Ingest Data
Integrations
Licensing
Live Terminal
MSSP
Permission
Public API
Query Center
Remediation
Reporting
Response
Rules
SaaS Collection
Server Settings
Scoring Rules
Security Settings
Starred Incidents
System
Abstract
Learn about the formats used to forward Cortex XDR agent, BIOC, IOC, analytics, correlation, and third-party alerts.
Cortex XDR agent, BIOC, IOC, analytics, correlation, and third-party alerts are forwarded to external data resources according to the email, Slack, or syslog
format.
Email account
Alert notifications are sent to email accounts according to the settings you configured. Email messages also include an alert code snippet of the fields
according to the columns in the Alert table.
If only one alert exists in the queue, a single alert email format is sent.
If more than one alert was grouped in the time frame, all the alerts in the queue are forwarded together in a grouped email format.
Example 5.
Example 6.
Example 7.
{
"original_alert_json":{
"uuid":"<UUID Value>",
"recordType":"threat",
"customerId":"<Customer ID>",
"severity":4,
"...",
"is_pcap":null,
"contains_featured_host":[
"NO"
],
"contains_featured_user":[
"YES"
],
"contains_featured_ip":[
"YES"
],
"events_length":1,
"is_excluded":false
Slack channel
You can send alert notifications to a single Slack contact or a Slack channel. Notifications are similar to the email format.
Syslog receiver
Alert notifications forwarded to a syslog receiver are sent in a CEF format RFC 5425.
Section Description
Example 8.
Abstract
Email accounts and syslog receivers are the notification channels through which the Agent Audit log is communicated.
Forwarding agent audit logs requires a Cortex XDR Pro per Endpoint license.
Cortex XDR forwards the Agent Audit log to these external data resources:
Syslog receiver: Sent in a CEF format RFC 5425 according to the following mapping:
Section Description
Example 9.
<182>1 2020-10-04T10:41:14.608731Z cortexxdr - - - - CEF:0|Palo Alto Networks|Cortex XDR Agent|Cortex XDR Agent 7.2.0.63060|Agent Audit Logs|Agent
Service|9|dvchost=WORKGROUP shost=Test-Agent cat=Monitoring end=1601808073102 rt=1601808074596 cs1Label=agentversion cs1=7.2.0.63060 cs2Label=subtype cs2=Stop
cs3Label=result cs3=N\/A cs4Label=reason cs4=None msg=XDR service cyserver was stopped on Test-Agent tenantname=Test tenantCDLid=123456 CSPaccountname=1234
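To work with these forwarded messages programmatically, you generally split the pipe-delimited CEF header from the space-separated key=value extension. The sketch below does a simplified version of that split for a message shaped like the example above; it does not handle every CEF escaping rule, and the sample string is abbreviated.

import re

def parse_cef(message: str) -> dict:
    """Naive CEF parser: split the 7 pipe-delimited header fields, then key=value pairs."""
    cef = message[message.index("CEF:"):]
    parts = cef.split("|", 7)
    header = dict(zip(
        ["cef_version", "vendor", "product", "version", "signature", "name", "severity"],
        parts[:7],
    ))
    extension = parts[7] if len(parts) > 7 else ""
    # key=value pairs; each value runs until the next " key=" token (ignores escaping).
    fields = {m.group(1): m.group(2).strip()
              for m in re.finditer(r"(\w+)=(.*?)(?=\s+\w+=|$)", extension)}
    return {**header, **fields}

sample = ("CEF:0|Palo Alto Networks|Cortex XDR Agent|7.2.0|Agent Audit Logs|Agent Service|9|"
          "dvchost=WORKGROUP shost=Test-Agent cat=Monitoring cs2Label=subtype cs2=Stop")
print(parse_cef(sample)["shost"])  # Test-Agent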
Abstract
Email accounts and syslog receivers are the notification channels through which the Management Audit log is communicated.
Cortex XDR forwards the Management Audit log to these external data sources:
Syslog receiver: Sent in a CEF format RFC 5425 according to the following mapping:
Section Description
Example 10.
3/18/2012:05:17.567 PM<14>1 2020-03-18T12:05:17.567590Z cortexxdr - - - CEF:0|Palo Alto Networks|Cortex XDR|Cortex XDR x.x |Management Audit
Logs|REPORTING|6|suser=test end=1584533117501 externalId=5820 cs1Label=email cs1=test@paloaltonetworks.com cs2Label=subtype cs2=Slack Report cs3Label=result
cs3=SUCCESS cs4Label=reason cs4=None msg=Slack report 'scheduled_1584533112442' ID 00 to ['CUXM741BK', 'C01022YU00L', 'CV51Y1E2X', 'CRK3VASN9'] tenantname=test
tenantCDLid=11111 CSPaccountname=00000
Abstract
Email accounts and syslog receivers are the notification channels through which IOC and BIOC alerts are communicated.
Cortex XDR logs IOC and BIOC alerts. If you configure Cortex XDR to forward logs in the legacy format, when alert logs are forwarded from Cortex XDR, each
log record has the following format:
Example 11.
edrData/action_country:
edrData/action_download:
edrData/action_external_hostname:
edrData/action_external_port:
edrData/action_file_extension: pdf
edrData/action_file_md5: null
edrData/action_file_name: XORXOR2614081980.pdf
...
xdr_sub_type: BIOC - Credential Access
bioc_category_enum_key: null
alert_action_status: null
agent_data_collection_status: null
attempt_counter: null
case_id: null
global_content_version_id:
global_rule_id:
is_whitelisted: false
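If you post-process these legacy records, a minimal way to read the key: value lines back into a mapping is shown below. The sample lines are taken from the excerpt above and the parsing is deliberately simplistic; it is not an official parser for this format.

record = """\
edrData/action_file_extension: pdf
edrData/action_file_md5: null
xdr_sub_type: BIOC - Credential Access
is_whitelisted: false
"""

def parse_legacy_record(text: str) -> dict:
    """Split each 'key: value' line; empty and 'null' values become None."""
    fields = {}
    for line in text.splitlines():
        if ": " not in line and not line.endswith(":"):
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        fields[key.strip()] = None if value in ("", "null") else value
    return fields

print(parse_legacy_record(record)["xdr_sub_type"])  # BIOC - Credential Access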
Syslog format
Example 12.
"/edrData/action_country","/edrData/action_download","/edrData/action_external_hostname","/edrData/action_external_port","/edrData/action_file_extension","
/edrData/action_file_md5","/edrData/action_file_name","/edrData/action_file_path","/edrData/action_file_previous_file_extension","/edrData/action_file_prev
ious_file_name","/edrData/action_file_previous_file_path","/edrData/action_file_sha256","/edrData/action_file_size","/edrData/action_file_remote_ip","/edrD
ata/action_file_remote_port","/edrData/action_is_injected_thread","/edrData/action_local_ip","/edrData/action_local_port","/edrData/action_module_base_addr
ess","/edrData/action_module_image_size","/edrData/action_module_is_remote","/edrData/action_module_is_replay","/edrData/action_module_path","/edrData/acti
on_module_process_causality_id","/edrData/action_module_process_image_command_line","/edrData/action_module_process_image_extension","/edrData/action_modul
e_process_image_md5","/edrData/action_module_process_image_name","/edrData/action_module_process_image_path","/edrData/action_module_process_image_sha256",
"/edrData/action_module_process_instance_id","/edrData/action_module_process_is_causality_root","/edrData/action_module_process_os_pid","/edrData/action_mo
dule_process_signature_product","/edrData/action_module_process_signature_status","/edrData/action_module_process_signature_vendor","/edrData/action_networ
k_connection_id","/edrData/action_network_creation_time","/edrData/action_network_is_ipv6","/edrData/action_process_causality_id","/edrData/action_process_
image_command_line","/edrData/action_process_image_extension","/edrData/action_process_image_md5","/edrData/action_process_image_name","/edrData/action_pro
cess_image_path","/edrData/action_process_image_sha256","/edrData/action_process_instance_id","/edrData/action_process_integrity_level","/edrData/action_pr
ocess_is_causality_root","/edrData/action_process_is_replay","/edrData/action_process_is_special","/edrData/action_process_os_pid","/edrData/action_process
_signature_product","/edrData/action_process_signature_status","/edrData/action_process_signature_vendor","/edrData/action_proxy","/edrData/action_registry
_data","/edrData/action_registry_file_path","/edrData/action_registry_key_name","/edrData/action_registry_value_name","/edrData/action_registry_value_type"
,"/edrData/action_remote_ip","/edrData/action_remote_port","/edrData/action_remote_process_causality_id","/edrData/action_remote_process_image_command_line
","/edrData/action_remote_process_image_extension","/edrData/action_remote_process_image_md5","/edrData/action_remote_process_image_name","/edrData/action_
remote_process_image_path","/edrData/action_remote_process_image_sha256","/edrData/action_remote_process_is_causality_root","/edrData/action_remote_process
_os_pid","/edrData/action_remote_process_signature_product","/edrData/action_remote_process_signature_status","/edrData/action_remote_process_signature_ven
dor","/edrData/action_remote_process_thread_id","/edrData/action_remote_process_thread_start_address","/edrData/action_thread_thread_id","/edrData/action_t
otal_download","/edrData/action_total_upload","/edrData/action_upload","/edrData/action_user_status","/edrData/action_username","/edrData/actor_causality_i
d","/edrData/actor_effective_user_sid","/edrData/actor_effective_username","/edrData/actor_is_injected_thread","/edrData/actor_primary_user_sid","/edrData/
actor_primary_username","/edrData/actor_process_causality_id","/edrData/actor_process_command_line","/edrData/actor_process_execution_time","/edrData/actor
_process_image_command_line","/edrData/actor_process_image_extension","/edrData/actor_process_image_md5","/edrData/actor_process_image_name","/edrData/acto
r_process_image_path","/edrData/actor_process_image_sha256","/edrData/actor_process_instance_id","/edrData/actor_process_integrity_level","/edrData/actor_p
rocess_is_special","/edrData/actor_process_os_pid","/edrData/actor_process_signature_product","/edrData/actor_process_signature_status","/edrData/actor_pro
cess_signature_vendor","/edrData/actor_thread_thread_id","/edrData/agent_content_version","/edrData/agent_host_boot_time","/edrData/agent_hostname","/edrDa
ta/agent_id","/edrData/agent_ip_addresses","/edrData/agent_is_vdi","/edrData/agent_os_sub_type","/edrData/agent_os_type","/edrData/agent_session_start_time
","/edrData/agent_version","/edrData/causality_actor_causality_id","/edrData/causality_actor_effective_user_sid","/edrData/causality_actor_effective_userna
me","/edrData/causality_actor_primary_user_sid","/edrData/causality_actor_primary_username","/edrData/causality_actor_process_causality_id","/edrData/causa
lity_actor_process_command_line","/edrData/causality_actor_process_execution_time","/edrData/causality_actor_process_image_command_line","/edrData/causalit
y_actor_process_image_extension","/edrData/causality_actor_process_image_md5","/edrData/causality_actor_process_image_name","/edrData/causality_actor_proce
ss_image_path","/edrData/causality_actor_process_image_sha256","/edrData/causality_actor_process_instance_id","/edrData/causality_actor_process_integrity_l
evel","/edrData/causality_actor_process_is_special","/edrData/causality_actor_process_os_pid","/edrData/causality_actor_process_signature_product","/edrDat
a/causality_actor_process_signature_status","/edrData/causality_actor_process_signature_vendor","/edrData/event_id","/edrData/event_is_simulated","/edrData
/event_sub_type","/edrData/event_timestamp","/edrData/event_type","/edrData/event_utc_diff_minutes","/edrData/event_version","/edrData/host_metadata_hostna
me","/edrData/missing_action_remote_process_instance_id","/facility","/generatedTime","/recordType","/recsize","/trapsId","/uuid","/xdr_unique_id","/meta_i
nternal_id","/external_id","/is_visible","/is_secdo_event","/severity","/alert_source","/internal_id","/matching_status","/local_insert_ts","/source_insert
_ts","/alert_name","/alert_category","/alert_description","/bioc_indicator","/matching_service_rule_id","/external_url","/xdr_sub_type","/bioc_category_enu
m_key","/alert_action_status","/agent_data_collection_status","/attempt_counter","/case_id","/global_content_version_id","/global_rule_id","/is_whitelisted
"
/edrData/action_file* Fields that begin with this prefix describe attributes of a file for which Traps reported activity.
edrData/action_module* Fields that begin with this prefix describe attributes of a module for which Traps reported module loading activity.
edrData/action_module_process* Fields that begin with this prefix describe attributes and activity related to processes reported by Traps that load modules such as DLLs on the endpoint.
edrData/action_process_image* Fields that begin with this prefix describe attributes of a process image for which Traps reported activity.
edrData/action_registry* Fields that begin with this prefix describe registry activity and attributes such as key name, data, and previous value for which Traps reported activity.
edrData/action_network Fields that begin with this prefix describe network attributes for which Traps reported activity.
edrData/action_remote_process* Fields that begin with this prefix describe attributes of remote processes for which Traps reported activity.
edrData/actor* Fields that begin with this prefix describe attributes about the acting user that initiated the activity on the endpoint.
edrData/agent* Fields that begin with this prefix describe attributes about the Traps agent deployed on the endpoint.
edrData/causality_actor* Fields that begin with this prefix describe attributes about the causality group owner.
SEV_010_INFO
SEV_020_LOW
SEV_030_MEDIUM
SEV_040_HIGH
SEV_090_UNKNOWN
/local_insert_ts Date and time when Cortex XDR – Investigation and Response ingested the alert.
/source_insert_ts Date and time the alert was reported by the alert source.
/alert_name If the alert was generated by Cortex XDR – Investigation and Response, the
alert name will be the specific Cortex XDR rule that created the alert (BIOC or
IOC rule name). If from an external system, it will carry the name assigned to
it by Cortex XDR .
OTHER
PERSISTENCE
EVASION
TAMPERING
FILE_TYPE_OBFUSCATION
PRIVILEGE_ESCALATION
CREDENTIAL_ACCESS
LATERAL_MOVEMENT
EXECUTION
COLLECTION
EXFILTRATION
INFILTRATION
DROPPER
FILE_PRIVILEGE_MANIPULATION
RECONNAISSANCE
HASH
IP
PATH
DOMAIN_NAME
FILENAME
MIXED
/alert_description Text summary of the event including the alert source, alert name, severity,
and file path. For alerts triggered by BIOC and IOC rules, Cortex XDR
displays detailed information about the rule.
[{""pretty_name"":""File"",""data_type"":null,
""render_type"":""entity"",""entity_map"":null},
{""pretty_name"":""action type"",
""data_type"":null,""render_type"":""attribute"",
""entity_map"":null},{""pretty_name"":""="",
""data_type"":null,""render_type"":""operator"",
""entity_map"":null},{""pretty_name"":""all"",
""data_type"":null,""render_type"":""value"",
""entity_map"":null},{""pretty_name"":""AND"",
""data_type"":null,""render_type"":""connector"",
""entity_map"":null},{""pretty_name"":""name"",
""data_type"":""TEXT"",
""render_type"":""attribute"",
""entity_map"":""attributes""},
{""pretty_name"":""="",""data_type"":null,
""render_type"":""operator"",
""entity_map"":""attributes""},
{""pretty_name"":""*.pdf"",""data_type"":null,
""render_type"":""value"",
""entity_map"":""attributes""}]"
/bioc_category_enum_key Alert category based on the alert source. An example of a BIOC alert
category is Evasion. An example of a Traps alert category is Exploit Modules.
/alert_action_status Action taken by the alert sensor with action status displayed in parentheses:
Detected
Detected (Download)
Detected (Reported)
Detected (Scanned)
Prevented (Blocked)
/global_content_version_id Unique identifier for the content version in which a Palo Alto Networks global
BIOC rule was released.
/global_rule_id Unique identifier for an alert triggered by a Palo Alto Networks global BIOC
rule.
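Because these records are CSV-quoted, the inner quotation marks in JSON-valued fields such as /bioc_indicator are doubled, as in the example above. The following is a minimal sketch to recover the JSON array, assuming standard CSV quote escaping and using only the Python standard library; the sample value is abbreviated.

import csv
import io
import json

# The /bioc_indicator value as it appears inside the CSV record, with doubled inner quotes (abbreviated).
raw = '"[{""pretty_name"":""File"",""data_type"":null,""render_type"":""entity"",""entity_map"":null}]"'

# Let the csv module undo the quote doubling, then parse the remaining JSON array.
unescaped = next(csv.reader(io.StringIO(raw)))[0]
indicator = json.loads(unescaped)
print(indicator[0]["pretty_name"])  # File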
Abstract
Learn about the syntax and different variables that are used in the analytics log format.
Cortex XDR Analytics logs alerts as analytics alert logs. If you configure Cortex XDR to forward logs in the legacy format, each log record has the following
format:
Example 13.
sub_type,time_generated,id,version_info/document_version,version_info/magnifier_version,version_info/detection_version,alert/url,alert/category,alert/type,
alert/name,alert/description/html,alert/description/text,alert/severity,alert/state,alert/is_whitelisted,alert/ports,alert/internal_destinations/single_des
tinations,alert/internal_destinations/ip_ranges,alert/external_destinations,alert/app_id,alert/schedule/activity_first_seen_at,alert/schedule/activity_last
_seen_at,alert/schedule/first_detected_at,alert/schedule/last_detected_at,user/user_name,user/url,user/display_name,user/org_unit,device/id,device/url,devi
ce/mac,device/hostname,device/ip,device/ip_ranges,device/owner,device/org_unit,files
Example 14.
sub_type: Update
time_generated: 1547717480
id: 4
version_info/document_version: 1
version_info/magnifier_version: 1.8
version_info/detection_version: 2019.2.0rc1
alert/url: https:\/\/ddc1...
alert/category: Recon
alert/type: Port Scan
alert/name: Port Scan
alert/description/html: \t<ul>\n\t\t<li>The device....
alert/description/text: The device ...
...
device/id: 2-85e40edd-b2d1-1f25-2c1e-a3dd576c8a7e
device/url: https:\/\/ddc1 ...
device/mac: 00-50-56-a5-db-b2
device/hostname: DC1ENV3APC42
device/ip: 10.201.102.17
device/ip_ranges: "[{""max_ip"":""..."",""name"":""..."",""min_ip"":""..."",""asset"":""""}]"
device/owner:
device/org_unit:
files: []
New: First log record for the alert with this record id.
time_generated Time the log record was sent to the Cortex XDR tenant. Value is a Unix Epoch
timestamp.
id Unique identifier for the alert. Any given alert can generate multiple log records
—one when the alert is initially raised, and then additional records every time
the alert status changes. This ID remains constant for all such alert records.
You can obtain the current status of the alert by looking for log records with
this id and the most recent alert/schedule/last_detected_at
timestamp.
version_info/document_version Identifies the log schema version number used for this log record.
version_info/magnifier_version The version number of the Cortex XDR – Analytics instance that wrote this log
record.
version_info/detection_version Identifies the version of the Cortex XDR – Analytics detection software used to
raise the alert.
alert/url Provides the full URL to the alert page in the Cortex XDR – Analytics user
interface.
alert/category Identifies the alert category, which is a reflection of the anomalous network
activity location in the attack life cycle. Possible categories are:
alert/type Identifies the categorization to which the alert belongs. For example Tunneling
Process, Sandbox Detection, Malware, and so forth.
alert/name The alert name as it appears in the Cortex XDR – Analytics user interface.
alert/severity Identifies the alert severity. These severities indicate the likelihood that the
anomalous network activity is a real attack.
alert/is_whitelisted Indicates whether the alert is whitelisted. Whitelisting indicates that anomalous-
appearing network activity is legitimate. If an alert is whitelisted, then it is not
visible in the Cortex XDR – Analytics user interface. Alerts can be dismissed or
archived and still have a whitelist rule.
alert/ports List of ports accessed by the network entity during its anomalous behavior.
alert/internal_destinations/single_destinations Network destinations that the entity reached, or tried to reach, during the
course of the network activity that caused Cortex XDR – Analytics to raise the
alert. This field contains a sequence of JSON objects, each of which contains
the following fields:
alert/internal_destinations/ip_ranges IP address range subnets that the entity reached, or tried to reach, during the
course of the network activity that caused Cortex XDR – Analytics to raise the
alert. This field contains a sequence of JSON objects, each of which contains
the following fields:
alert/external_destinations Provides a list of destinations external to the monitored network that the entity
tried to reach, or actually reached, during the activity that raised this alert. This
list can contain IP addresses or fully qualified domain names.
alert/schedule/activity_first_seen_at Time when Cortex XDR – Analytics first detected the network activity that caused it to raise the alert. Be aware that there is frequently a delay between this timestamp and the time when Cortex XDR – Analytics raises an alert (see the alert/schedule/first_detected_at field).
alert/schedule/activity_last_seen_at Time when Cortex XDR – Analytics last detected the network activity that
caused it to raise the alert.
alert/schedule/first_detected_at Time when Cortex XDR – Analytics first alerted on the network activity.
alert/schedule/last_detected_at Time when Cortex XDR – Analytics last alerted on the network activity.
user/user_name The name of the user associated with this alert. This name is obtained from
Active Directory.
user/url Provides the full URL to the user page in the Cortex XDR – Analytics user
interface for the user who is associated with the alert.
user/display_name The user name as retrieved from Active Directory. This is the user name
displayed within the Cortex XDR – Analytics user interface for the user who is
associated with this alert.
user/org_unit The organizational unit of the user associated with this alert, as identified using
Active Directory.
device/id A unique ID assigned by Cortex XDR – Analytics to the device. All alerts raised
due to activity occurring on this endpoint will share this ID.
device/url Provides the full URL to the device page in the Cortex XDR – Analytics user
interface.
device/mac The MAC address of the network card in use on the device.
device/ip_ranges Identifies the subnet or subnets that the device is on. This sequence can
contain multiple inclusive subnets. Each element in this sequence is a JSON
object with the following fields:
asset: The asset name assigned to the device from within the Cortex
XDR – Analytics user interface.
device/owner The user name of the person who owns the device.
device/org_unit The organizational unit that owns the device, as identified by Active Directory.
files Identifies the files associated with the alert. Each element in this sequence is a
JSON object with the following fields:
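The alert/schedule timestamps described above can be compared to estimate how long detection took. The following is a minimal sketch, assuming the forwarded values are Unix epoch timestamps in seconds, like time_generated; the record values shown are illustrative only.

from datetime import datetime, timezone

# Illustrative values only; in a real record these come from the parsed analytics alert log.
record = {
    "alert/schedule/activity_first_seen_at": 1547710000,
    "alert/schedule/first_detected_at": 1547717480,
}

first_seen = datetime.fromtimestamp(record["alert/schedule/activity_first_seen_at"], tz=timezone.utc)
first_detected = datetime.fromtimestamp(record["alert/schedule/first_detected_at"], tz=timezone.utc)
print("detection delay:", first_detected - first_seen)  # 2:04:40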
Abstract
Learn about the different log formats that Cortex XDR can forward to an external server or email account.
The following lists the fields for each log type that Cortex XDR can forward to an external server or email destination.
The FUTURE_USE tag applies to fields that Cortex XDR does not currently implement.
When log forwarding to an email account, Cortex XDR sends an email with each field on a separate line in the email body.
Threat logs
recordType, class, FUTURE_USE, eventType, generatedTime, serverTime, agentTime, tzOffset, FUTURE_USE, facility, customerId, trapsId, serverHost,
serverComponentVersion, regionId, isEndpoint, agentId, osType, isVdi, osVersion, is64, agentIp, deviceName, deviceDomain, severity, trapsSeverity, agentVersion,
contentVersion, protectionStatus, preventionKey, moduleId, profile, moduleStatusId, verdict, preventionMode, terminate, terminateTarget, quarantine, block,
postDetected, eventParameters(Array), sourceProcessIdx(Array), targetProcessIdx(Array), fileIdx(Array), processes(Array), files(Array), users(Array), urls(Array),
description(Array)
recordType Record type associated with the event and that you can use when
managing logging quotas. In this case, the record type is threat which
includes logs related to security events that occur on the endpoints.
class Class of Cortex XDR agent log: config, policy, system, or agent_log.
generatedTime Coordinated Universal Time (UTC) equivalent of the time at which an event
was logged. For agent events, this represents the time on the endpoint. For
policy, configuration, and system events, this represents the time on Cortex
XDR in ISO-8601 string representation (for example, 2017-01-
24T09:08:59Z).
serverTime Coordinated Universal Time (UTC) equivalent of the time at which the
server generated the log. If the log was generated on an endpoint, this field
identifies the time the server received the log in ISO-8601 string
representation (for example, 2017-01-24T09:08:59Z).
agentTime Coordinated Universal Time (UTC) equivalent of the time at which an agent
logged an event in ISO-8601 string representation.
facility The Cortex XDR system component that initiated the event, for example:
TrapsAgent, TrapsServiceCore, TrapsServiceManagement, and
TrapsServiceBackend.
customerId The ID that uniquely identifies the Cortex XDR tenant instance which
received this log record.
1: Windows
2: OS X/macOS
3: Android
4: Linux
osVersion Full version number of the operating system running on the endpoint. For
example, 6.1.7601.19135.
5: Notice. Used for normal but significant events that can require
attention.
Each event also has an associated Cortex XDR severity. See the
messageData.trapsSeverity field for details.
trapsSeverity Severity level associated with the event defined for Cortex XDR. Each of
these severities corresponds to a syslog severity level:
1: Low. Used for normal but significant events that can require
attention. Corresponds to the syslog 5 (Notice) severity level.
0: Protected
1: OsVersionIncompatible
2: AgentIncompatible
CYSTATUS_ABNORMAL_PROCESS_TERMINATION
CYSTATUS_ALIGNED_HEAP_SPRAY_DETECTED
CYSTATUS_CHILD_PROCESS_BLOCKED
CYSTATUS_CORE_LIBRARY_LOADED
CYSTATUS_CORE_LIBRARY_UNLOADING
CYSTATUS_CPLPROT_BLACKLIST
CYSTATUS_CPLPROT_REMOTE_DRIVE
CYSTATUS_CPLPROT_REMOVABLE_DRIVE
CYSTATUS_CYINJCT_DISPATCH
CYSTATUS_CYINJCT_MAPPING
CYSTATUS_CYVERA_PREVENTION
CYSTATUS_DANGEROUS_SYSTEM_SERVICE_CALLED
CYSTATUS_DEMO_EVENT
CYSTATUS_DEP_SEH_INF_VIOLATION
CYSTATUS_DEP_SEH_VIOLATION
CYSTATUS_DEP_VIOLATION
CYSTATUS_DEP_VIOLATION_UNALLOCATED
CYSTATUS_DEVICE_BLOCKED
CYSTATUS_DLLPROT_BLACKLIST
CYSTATUS_DLLPROT_CURRENT_WORKING_DIRECTORY
CYSTATUS_DLLPROT_REMOTE_DRIVE
CYSTATUS_DLLPROT_REMVABLE_DRIVE
CYSTATUS_DOTNET_CRITICAL
CYSTATUS_DSE
CYSTATUS_EPM_INIT_FAILED
CYSTATUS_FAILED_CHECK_MEDIA
CYSTATUS_FILE_DELETION_BOOT_DONE
CYSTATUS_FILE_DELETION_FAILED
CYSTATUS_FILE_DELETION_SUCCEEDED
CYSTATUS_FINGERPRINTING_ATTEMPT
CYSTATUS_FONT_PROT_DUQU
CYSTATUS_FORBIDDEN_MEDIA
CYSTATUS_FORBIDDEN_OPTICAL_MEDIA
CYSTATUS_FORBIDDEN_REMOTE_MEDIA
CYSTATUS_FORBIDDEN_REMOVABLE_MEDIA
CYSTATUS_GS_COOKIE_CORRUPTED_COOKIE
CYSTATUS_GUARD_PAGE_VIOLATION
CYSTATUS_HASH_CONTROL
CYSTATUS_HEAP_CORRUPTION
CYSTATUS_HOOKING_ENTRY_POINT_FAILED
CYSTATUS_HOTPATCH_HIJACKING
CYSTATUS_ILLEGAL_EXECUTABLE
CYSTATUS_ILLEGAL_UNSIGNED_EXECUTABLE
CYSTATUS_INJ_APPCONTAINER_FAILURE
CYSTATUS_INJ_CTX_FAILURE
CYSTATUS_JAVA_FILE
CYSTATUS_JAVA_PROC
CYSTATUS_JAVA_REG
CYSTATUS_JIT_EXCEPTION
CYSTATUS_LINUX_BRUTEFORCE_PREVENTED
CYSTATUS_LINUX_ROOT_ESCALATION_PREVENTED
CYSTATUS_LINUX_SHELLCODE_PREVENTED
CYSTATUS_LINUX_SOCKET_SHELL_PREVENTED
CYSTATUS_LOCAL_ANALYSIS
CYSTATUS_MACOS_DLPROT_CWD_HIJACK
CYSTATUS_MACOS_DLPROT_DUPLICATE_PATH_CHECK
CYSTATUS_MACOS_G02_BLOCK_ALL
CYSTATUS_MACOS_G02_SIGNER_NAME_MISMATCH
CYSTATUS_MACOS_G02_SIGN_LEVEL_BELOW_MIN
CYSTATUS_MACOS_G02_SIGN_LEVEL_BELOW_PARENT
CYSTATUS_MACOS_MALICIOUS_DYLIB
CYSTATUS_MACOS_ROOT_ESCALATION_PREVENTED
CYSTATUS_MALICIOUS_APK
CYSTATUS_MALICIOUS_DLL
CYSTATUS_MALICIOUS_EXE
CYSTATUS_MALICIOUS_EXE_ASYNC
CYSTATUS_MALICIOUS_MACRO
CYSTATUS_MALICIOUS_STRING_DETECTED
CYSTATUS_MEMORY_USAGE_LIMIT_EXCEEDED
CYSTATUS_NOP_SLED_DETECTED
CYSTATUS_NO_MEMORY
CYSTATUS_NO_REGISTER_CORRECTED
CYSTATUS_PREALLOCATED_ADDR_ACCESSED
CYSTATUS_PROCESS_CREATION_VIOLATION
CYSTATUS_QUARANTINE_FAILED
CYSTATUS_QUARANTINE_SUCCEEDED
CYSTATUS_RANSOMWARE
CYSTATUS_RESTORE_FAILED
CYSTATUS_RESTORE_SUCCEEDED
CYSTATUS_ROP_MITIGATION
CYSTATUS_SEH_CRITICAL
CYSTATUS_SEH_INF_CRITICAL
CYSTATUS_SHELL_CODE_TRAP_CALLED
CYSTATUS_STACK_OVERFLOW
CYSTATUS_SUSPENDED_PROCESS_BLOCKED
CYSTATUS_SUSPICIOUS_APC
CYSTATUS_SUSPICIOUS_LINK_FILE
CYSTATUS_SYSTEM_SCAN_FINISHED
CYSTATUS_SYSTEM_SCAN_STARTED
CYSTATUS_THREAD_INJECTION
CYSTATUS_TLA_MODEL_NOT_LOADED
CYSTATUS_TOKEN_THEFT_FILE_OPERATION
CYSTATUS_TOKEN_THEFT_PROCESS_CREATED
CYSTATUS_TOKEN_THEFT_REGISTRY_OPERATION
CYSTATUS_TOKEN_THEFT_THREAD_CREATED
CYSTATUS_TOKEN_THEFT_THREAD_INJECTED
CYSTATUS_TOKEN_THEFT_THREAD_STARTED
CYSTATUS_UASLR_CRITICAL
CYSTATUS_UNALLOWED_CODE_SEGMENT
CYSTATUS_UNAUTHORIZED_CALL_TO_SYSTEM_SERVICE
CYSTATUS_UNSIGNED_CHILD_PROCESS_BLOCKED
CYSTATUS_WILDFIRE_GRAYWARE
CYSTATUS_WILDFIRE_MALWARE
CYSTATUS_WILDFIRE_UNKNOWN
0: Benign
1: Malware
2: Grayware
4: Phishing
99: Unknown
preventionMode Action carried out by the Cortex XDR agent (block or notify). The prevention
mode is specified in the rule configuration.
terminateTarget Termination action taken on the target file (relevant for some child process
execution events where we terminate the child process but not the parent
process):
0: Initial prevention.
eventParameters(Array) Parameters associated with the type of event. For example, username,
endpoint hostname, and filename.
targetProcessIdx(Array) Target process index in the processes array. A missing or negative value
means there is no target process.
fileIdx(Array) Index of target files for specific security events such as: Scanning,
Malicious DLL, Malicious Macro events.
processes(Array) All related details for the process file that triggered an event:
1: System process ID
2: Parent process ID
users(Array) Details about the active user on the endpoint when the event occurred:
1: Raw URL
3: Hostname in punycode
4: Host port
description(Array) (Mac only) Description of components related to Cortex XDR . For example,
the description of the ROP, JIT, Dylib hijacking modules for Mac endpoints
is Memory Corruption Exploit.
Config logs
recordType, class, FUTURE_USE, subClassId, eventType, eventCategory, generatedTime, serverTime, FUTURE_USE, facility, customerId, trapsId, serverHost,
serverComponentVersion, regionId, isEndpoint, severity, trapsSeverity, messageCode, friendlyName, FUTURE_USE, msgTextEn, userFullName, userName, userRole,
userDomain, additionalData(Array), messageCode, errorText, errorData, resultData
recordType Record type associated with the event and that you can use when
managing logging quotas. In this case, the record type is config which
includes logs related to Cortex XDR administration and configuration
changes.
class Class of Cortex XDR log. System logs have a value of system.
subClassId Numeric representation of the subClass field for easy sorting and filtering.
eventCategory Category of event, used internally for processing the flow of logs. Event
categories vary by class:
agent_log: agentFlow
generatedTime Coordinated Universal Time (UTC) equivalent of the time at which an event
was logged. For agent events, this represents the time on the endpoint. For
policy, configuration, and system events, this represents the time on Cortex
XDR in ISO-8601 string representation (for example, 2017-01-
24T09:08:59Z).
serverTime Coordinated Universal Time (UTC) equivalent of the time at which the
server generated the log. If the log was generated on an endpoint, this field
identifies the time the server received the log in ISO-8601 string
representation (for example, 2017-01-24T09:08:59Z).
facility The Cortex XDR system component that initiated the event, for example:
TrapsAgent, TrapsServiceCore, TrapsServiceManagement, and
TrapsServiceBackend.
customerId The ID that uniquely identifies the Cortex XDR tenant instance which
received this log record.
5: Notice. Used for normal but significant events that can require
attention.
trapsSeverity Severity level associated with the event defined for Cortex XDR. Each of
these severities corresponds to a syslog severity level:
1: Low. Used for normal but significant events that can require
attention. Corresponds to the syslog 5 (Notice) severity level.
agentTime Coordinated Universal Time (UTC) equivalent of the time at which an agent
logged an event in ISO-8601 string representation.
1: Windows
2: OS X/macOS
3: Android
4: Linux
osVersion Full version number of the operating system running on the endpoint. For
example, 6.1.7601.19135.
0: Protected
1: OsVersionIncompatible
2: AgentIncompatible
Analytics logs
recordType, class, FUTURE_USE, eventType, eventCategory, generatedTime, serverTime, agentTime, tzOffset, FUTURE_USE, facility, customerId, trapsId, serverHost,
serverComponentVersion, regionId, isEndpoint, agentId, osType, isVdi, osVersion, is64, agentIp, deviceName, deviceDomain, severity, agentVersion, contentVersion,
protectionStatus, sha256, type, parentSha256, lastSeen, fileName, filePath, fileSize, localAnalysisResult, reported, blocked, executionCount
recordType Record type associated with the event and that you can use when
managing logging quotas. In this case, the record type is analytics which
includes hash execution reports from the agent.
class Class of Cortex XDR log: config, policy, system, and agent_log.
eventCategory Category of event, used internally for processing the flow of logs. Event
categories vary by class:
agent_log: agentFlow
generatedTime Coordinated Universal Time (UTC) equivalent of the time at which an event
was logged. For agent events, this represents the time on the endpoint. For
policy, configuration, and system events, this represents the time on Cortex
XDR in ISO-8601 string representation (for example, 2017-01-
24T09:08:59Z).
serverTime Coordinated Universal Time (UTC) equivalent of the time at which the
server generated the log. If the log was generated on an endpoint, this field
identifies the time the server received the log in ISO-8601 string
representation (for example, 2017-01-24T09:08:59Z).
agentTime Coordinated Universal Time (UTC) equivalent of the time at which an agent
logged an event in ISO-8601 string representation.
facility The Cortex XDR system component that initiated the event, for example:
TrapsAgent, TrapsServiceCore, TrapsServiceManagement, and
TrapsServiceBackend.
customerId The ID that uniquely identifies the Cortex XDR tenant instance which
received this log record.
1: Windows
2: OS X/macOS
3: Android
4: Linux
osVersion Full version number of the operating system running on the endpoint. For
example, 6.1.7601.19135.
5: Notice. Used for normal but significant events that can require
attention.
0: Protected
1: OsVersionIncompatible
2: AgentIncompatible
0: Unknown
1: PE
2: Mach-o
3: DLL
lastSeen Coordinated Universal Time (UTC) equivalent of the time when the file last
ran on an endpoint in ISO-8601 string representation (for example, 2017-
01-24T09:08:59Z).
fileName File name, without the path or the file type extension.
localAnalysisResult This object includes the content version, local analysis module version,
verdict result, file signer, and trusted signer result. The trusted signer result
is an integer value:
executionCount The total number of times a file identified by a specific hash was executed.
System logs
recordType, class, FUTURE_USE, subClassId, eventType, eventCategory, generatedTime, serverTime, FUTURE_USE, facility, customerId, trapsId, serverHost,
serverComponentVersion, regionId, isEndpoint, agentId, severity, trapsSeverity, messageCode, friendlyName, FUTURE_USE, msgTextEn, userFullName, username,
userRole, userDomain, agentTime, tzOffset, osType, isVdi, osVersion, is64, agentIp, deviceName, deviceDomain, agentVersion, contentVersion, protectionStatus,
userFullName, username, userRole, userDomain, messageName, messageId, processStatus, errorText, errorData, resultData, parameters, additionalData(Array)
recordType Record type associated with the event and that you can use when
managing logging quotas. In this case, the record type is system which
includes logs related to automated system management and agent
reporting events.
class Class of Cortex XDR log. System logs have a value of system.
subClass Subclass of event. Used to categorize logs in Cortex XDR user interface.
subClassId Numeric representation of the subClass field for easy sorting and filtering.
eventCategory Category of event, used internally for processing the flow of logs. Event
categories vary by class:
agent_log: agentFlow
generatedTime Coordinated Universal Time (UTC) equivalent of the time at which an event
was logged. For agent events, this represents the time on the endpoint. For
policy, configuration, and system events, this represents the time on Cortex
XDR in ISO-8601 string representation (for example, 2017-01-
24T09:08:59Z).
serverTime Coordinated Universal Time (UTC) equivalent of the time at which the
server generated the log. If the log was generated on an endpoint, this field
identifies the time the server received the log in ISO-8601 string
representation (for example, 2017-01-24T09:08:59Z).
facility The Cortex XDR system component that initiated the event, for example:
TrapsAgent, TrapsServiceCore, TrapsServiceManagement, and
TrapsServiceBackend.
customerId The ID that uniquely identifies the Cortex XDR tenant instance which
received this log record.
70 EMEA (Frankfurt)
5: Notice. Used for normal but significant events that can require
attention.
trapsSeverity Severity level associated with the event defined for Cortex XDR. Each of
these severities corresponds to a syslog severity level:
1: Low. Used for normal but significant events that can require
attention. Corresponds to the syslog 5 (Notice) severity level.
agentTime Coordinated Universal Time (UTC) equivalent of the time at which an agent
logged an event in ISO-8601 string representation.
1: Windows
2: OS X/macOS
3: Android
4: Linux
osVersion Full version number of the operating system running on the endpoint. For
example, 6.1.7601.19135.
0: Protected
1: OsVersionIncompatible
2: AgentIncompatible
10 | Automation rules
Abstract
Use automation rules to define alert conditions that trigger an action that you specify within the rule.
Cortex XDR provides an easy way to automate the day-to-day activities of SOC analysts. Automation rules enable you to define alert conditions that trigger the action that you specify within the rule. As alerts are created, Cortex XDR checks whether each alert matches any of the conditions in the automation rules, and if there is a match, the corresponding action is triggered. Automation rules apply only to new alerts, which either create a new incident or are combined with an existing one.
Automation rules only apply to alerts that the system groups into incidents. For most alerts with Low or Informational severity, automation rules are not executed automatically.
Automation rules run in the order in which they were created. You can drag the rules to change the order. If you select the Stop processing after this rule setting within a rule, that rule is still processed, but the rules that follow it are not processed when its alert conditions are met.
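Conceptually, this ordered evaluation behaves like the following sketch. It is an illustration of the documented behavior, not Cortex XDR code; the Rule class and the matches and action callables are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    matches: Callable[[dict], bool]   # alert condition, e.g. Severity=Critical
    action: Callable[[dict], None]    # action to trigger, e.g. send email
    stop_processing: bool = False     # "Stop processing after this rule"

def evaluate(rules, alert):
    # Rules are evaluated in their configured order (creation order unless dragged).
    for rule in rules:
        if rule.matches(alert):
            rule.action(alert)
            if rule.stop_processing:
                break  # the rules that follow are not processed for this alert

rules = [
    Rule("critical alerts", lambda a: a.get("severity") == "Critical",
         lambda a: print("send email"), stop_processing=True),
    Rule("all alerts", lambda a: True, lambda a: print("syslog forwarding")),
]
evaluate(rules, {"severity": "Critical"})  # only "send email" runs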
Automation rules support SBAC (scoped based access control). The following parameters are considered when editing a rule:
If Scoped Server Access is enabled and set to restrictive mode, you can edit a rule if you are scoped to all tags in the rule.
If Scoped Server Access is enabled and set to permissive mode, you can edit a rule if you are scoped to at least one tag listed in the rule.
To change the order of a rule, you must have permissions for the other rules whose order you want to change.
If a rule was added while the mode was set to restrictive and the mode was later changed to permissive (or vice versa), you will only have view permissions.
The Automation Rules page displays a table of all the rules created.
Field Description
Action The action that is triggered when the alert matches the condition configured within the automation rule:
Communication
Send email
Syslog forwarding
Assign incident
Forensics
Forensic Triage
Endpoint Response
Isolate endpoint
Retrieve File
Action Parameters Required information for the action. For example, for the action Send email, you must enter the email of the person receiving the
notification.
Conditions The rule condition defined for the automation rule. For example Severity=Critical, where the rule triggers the action on all alerts
where Severity=Critical.
Stop Processing Indicates that the Stop processing after this rule setting is selected.
Created by Displays the name of the user that created the automation rule.
Modification Time Time when the automation rule was last modified.
Before you create or manage automation rules, go to Settings → Configuration → Automation Settings and configure the settings for Endpoint Action Limit
Thresholds and Automation Rules Notifications.
Add or edit an automation rule to trigger an action when the alert matches the condition of the rule created.
b. From the Alerts table, use the filter to retrieve the criteria to define the condition of the automation rule.
c. Click Next.
4. From the Action list, select the relevant action to initiate when the alert condition is triggered.
5. In the Exclude Endpoints page, select the endpoint and click Next.
Before you begin creating automation rules, consider setting thresholds for the following endpoint actions:
Isolate endpoint on up to _ endpoints in _ hour/s When an alert condition is triggered, and the action specified is to isolate the endpoint, the
limit threshold defined enables the set number of endpoints to be isolated for the period of
time defined. This is to prevent an overflow of endpoints isolated from the network at the
same time.
If the setting is turned off, there is no threshold for the isolation of endpoints.
Run endpoint script on up to _ endpoints in _ hour/s When an alert condition is triggered, and the action specified is to run the endpoint script,
the limit threshold defined enables the set number of endpoints to run the script for the
period of time defined. This is to prevent an overflow of endpoints running scripts at the
same time.
If the setting is turned off, there is no threshold for running scripts on the endpoints.
Terminate Causality (CGO) on up to _ endpoints in _ hour/s When an alert condition is triggered, and the action specified is to terminate causality, the
limit threshold defined enables the set number of endpoints to terminate the causality chain
of processes for the period of time defined. This is to prevent an overflow of endpoints
terminating causality chain of processes at the same time.
If the setting is turned off, there is no threshold for terminating causality on the endpoints.
Forensic Triage on up to _ endpoints in _ hour/s When an alert condition is triggered, and the action specified is set to Forensic Triage, the
limit threshold defined enables the set number of endpoints to triage for the period of time
defined. This is to prevent an overflow of endpoints to triage at the same time.
If the setting is turned off, there is no threshold for forensic triage on the endpoints.
This option is only accessible to users that have the forensics add-on license.
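Conceptually, these limit thresholds act as a rolling rate cap on automated endpoint actions. The following is a minimal sketch of that idea using a simple in-memory counter; it is illustrative only, not Cortex XDR logic, and the threshold values are placeholders.

from collections import deque
from datetime import datetime, timedelta, timezone

class ActionLimiter:
    # Allow at most max_endpoints automated actions within a rolling window.
    def __init__(self, max_endpoints, window_hours):
        self.max_endpoints = max_endpoints
        self.window = timedelta(hours=window_hours)
        self.history = deque()  # timestamps of recent automated actions

    def allow(self):
        now = datetime.now(timezone.utc)
        # Discard actions that fall outside the rolling window.
        while self.history and now - self.history[0] > self.window:
            self.history.popleft()
        if len(self.history) >= self.max_endpoints:
            return False  # threshold reached; the action is not executed
        self.history.append(now)
        return True

# "Isolate endpoint on up to 5 endpoints in 1 hour/s" (placeholder values)
isolate_limiter = ActionLimiter(max_endpoints=5, window_hours=1)
print(isolate_limiter.allow())  # True until the threshold is reached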
Lists the actions to take when the alert condition of the automation rule is triggered in Cortex XDR.
When creating the automation rule, the action is triggered when an alert matches the condition of the automation rule.
Action Settings
Communication Choose one of the options to receive notifications to keep up with alerts.
Send email
Assign Condition
Always assign
Assign To—Select the person from the list to assign the incident.
Set alert status Alert Status—Select alert status to override the present status of the alert.
New
Under Investigation
Resolved
Set alert severity Alert Severity—Select alert severity to override the present severity of the
alert.
Critical
High
Medium
Low
Forensics
Action Settings
Endpoint Response
Alert initiator host—The host specified under Host of the alert from the
Alerts table.
Alert remote host—The host specified under Remote Host of the alert
in the Alerts table.
Script.
Script Library
Code Snippet
Alert initiator host—The host specified under Host of the alert from the
Alerts table.
Alert remote host—The host specified under Remote Host of the alert
in the Alerts table.
Alert initiator file—The file specified under Initiator Path of the alert
from the Alerts table.
Terminate Causality (CGO) Select this option to terminate the causality chain of processes associated
with the alert/s of the automation rule.
Stop processing after this rule If the rule is triggered, it is the last rule to be processed.
Lists the fields included in the Automation Audit Log for Cortex XDR.
The Automation Audit Log shows all the records of all the automation rule executions, including successful, failed, and paused actions.
Right-click a record and select View triggering alert to view the details of the alert in the Alerts table. Only if the record is an Endpoint Response action can you select View in Action Center to view details of the action in the Action Center.
Field Description
Timestamp The date and time of the last time the automation rule was triggered.
Triggering Alert ID The ID of the alert that triggered the automation rule.
Automation Rule Version The version number that is updated every time the rule's conditions or actions are modified.
Learn how to manage access for users, user roles, user groups, and Single Sign-On (SSO) for users on a specific Cortex XDR tenant.
You can manage access for users, and create and assign user roles and user groups for a specific tenant. When Single Sign-On (SSO) is enabled, you can
manage SSO for users.
Users
You can manage access permissions and activities for users allocated to a specific Customer Support Portal account and tenant.
User roles
User roles enable you to define the type of access and actions a user can perform. User roles are assigned to users, or to user groups.
Cortex XDR provides predefined built-in user roles that provide specific access rights that cannot be modified. You can also create custom, editable user
roles.
You can also set dataset access permissions using user roles or set specific permissions using role-based access control (RBAC). Configuring administrative
access depends on the security requirements of your organization. Dataset permissions control dataset access for all components, while RBAC controls
access to a specific component. By default, dataset access management is disabled, and users have access to all datasets. If you enable dataset access
management, you must configure access permissions for each dataset type, and for each user role. When a dataset component is enabled for a particular
role, the Alert and Incidents pages include information about datasets. For more information on how to set dataset access permissions, see Manage user roles.
User groups
You can use user groups to streamline configuration activities by grouping together users whose access permission requirements are similar. Import user
groups from Active Directory, or create them from scratch in Cortex XDR.
Single Sign-On
Manage your SSO integration with the Security Assertion Markup Language (SAML) 2.0 standard to securely authenticate system users across enterprise-wide
applications and websites, with one set of credentials. This configuration allows system users to authenticate using your organization's Identity Provider (IdP),
such as Okta or PingOne. You can integrate any IdP with Cortex XDR supported by SAML 2.0.
SSO with SAML 2.0 configuration activities are dependent on your organization’s IdP. Some of the field values need to be obtained from your organization’s IdP,
and some values need to be added to your organization’s IdP. It is your responsibility to understand how to access your organization’s IdP to provide these
fields, and to add any fields from Cortex XDR to your IdP.
Manage user roles that are assigned to Cortex XDR users or user groups in Cortex XDR Access Management.
Managing Roles requires an Account Admin or Instance Administrator role. For more information, see Predefined user roles.
Manage user roles that are assigned to Cortex XDR users or user groups in Cortex XDR Access Management. User roles enable you to define the type of
access and actions a user can perform.
You can only set dataset access permissions from a user role in Cortex XDR Access Management. When creating user roles from the Cortex Gateway, these
settings are disabled. By default, dataset access management is disabled, and users have access to all datasets. If you enable dataset access management,
you must configure access permissions for each dataset type, and for each user role. When a dataset component is enabled for a particular role, the Alert and
Incidents pages include information about datasets.
5. Under Components, expand each list and select the permissions for each of the components. For more information, see Role-based Permission Levels for Cortex XDR/XSIAM.
6. Under Datasets (Disabled), you have two options for setting the Cortex Query Language (XQL) dataset access permissions for the user role:
Set the user role with access to all XQL datasets by leaving the dataset access management as disabled (default).
Set the user role with limited access to certain XQL datasets by selecting the Enable dataset access management toggle and selecting the
datasets under the different dataset category headings.
7. Click Save.
3. (Optional) Under Role Name, modify the name for the user role.
4. (Optional) Under Description, enter a description for the user role or modify the current description.
5. Under Components, expand each list and select the permissions for each of the components. For more information, see Role-based Permission Levels for Cortex XDR/XSIAM.
6. Under Datasets, you have two options for setting the Cortex Query Language (XQL) dataset access permissions for the user role:
Set the user role with access to all XQL datasets by disabling the Enable dataset access management toggle.
Set the user role with limited access to certain XQL datasets by selecting the Enable dataset access management toggle and selecting the
datasets under the different dataset category headings.
7. Click Save.
2. Right-click the relevant user role, and select Save As New Role.
3. (Optional) Under Role Name, modify the name for the user role.
4. (Optional) Under Description, enter a description for the user role or modify the current description.
5. Under Components, expand each list and select the permissions for each of the components. For more information, see Role-based Permission Levels for Cortex XDR/XSIAM.
Set the user role with access to all XQL datasets by disabling the Enable dataset access management toggle.
Set the user role with limited access to certain XQL datasets by selecting the Enable dataset access management toggle and selecting the
datasets under the different dataset category headings.
7. Click Save.
Update a user's role, add a user to a user group, and view permissions based on the role and user groups assigned to the user.
If Scope-Based Access Control (SBAC) is enabled for the tenant, you can use specific tags to assign user permissions. For more information, see Manage
user scope.
You can only reduce the permissions of an Account Admin user via Cortex Gateway.
To apply the same settings to multiple users, select them, and then right-click and select Edit User Permissions.
Select all to view the combined permissions for every role and user group assigned to the user.
Select a specific role assigned to the user to view the available permissions for that role.
b. Under Components, expand each list to view the permissions to the various Cortex XDR components.
c. Under Datasets, there are two possibilities for viewing a user's dataset access permissions:
When dataset access management is enabled and the user has access to certain Cortex Query Language (XQL) datasets, the datasets are
listed.
When dataset access management is disabled and users have access to all XQL datasets, the text No dataset has been selected is
displayed.
User permissions for components and datasets are based on the access permissions set in the user role. For more information on editing these user role permissions, see Manage user roles.
6. (Optional) If Scope-Based Access Control is enabled for the tenant, click Scope and select a tag family and the corresponding tags.
If you select a tag family without specific tags, permissions apply to all tags in the family.
The scope is based only on the selected tag families. If you scope only based on tags from Family A, then Family B is disregarded in scope
calculations and is considered as allowed.
7. Click Save.
Use a CSV file to import users who belong to a Customer Support Portal account, and assign them roles that are defined in Cortex XDR. You can use the CSV
template provided in Cortex XDR, or prepare a CSV file from scratch.
To use the CSV template, click Download example file, and replace the example values with your values.
Prepare a CSV file from scratch. Make sure the file includes these columns:
User email: Email address of the user belonging to a Customer Support Portal account, for example, john.smith1@exampleCompany.com.
Role name: Name of the role that you want to assign to this user, for example, Privileged Responder. The role must already exist in Cortex
XDR.
Is an account role: A boolean value that defines whether the user is designated with an Account Admin role in Cortex Gateway. Set the value to TRUE to designate the user as an Account Admin; otherwise, leave the value as FALSE (default).
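A minimal CSV that follows these columns might look like the following; the user names and role assignments are illustrative, and each role must already exist in Cortex XDR.

User email,Role name,Is an account role
john.smith1@exampleCompany.com,Privileged Responder,FALSE
jane.doe@exampleCompany.com,Account Admin,TRUE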
5. Click Import.
To apply the same settings to multiple users, select them, and then right-click and select Edit User Permissions.
Select all to view the combined permissions for every role and user group assigned to the user.
Select a specific role assigned to the user to view the available permissions for that role.
4. Under Components, expand each list to view the permissions to the various Cortex XDR components.
5. Under Datasets, there are two possibilities for viewing a user's dataset access permissions:
When dataset access management is enabled and the user has access to certain Cortex Query Language (XQL) datasets, the datasets are listed.
When dataset access management is disabled and users have access to all XQL datasets, the text No dataset has been selected is displayed.
Hide user
There might be instances where you want to hide a user from the list of users, for example, a user that has a Customer Support Portal Super User role but isn't
active on your Cortex XDR tenant. Once you hide a user, they will no longer be displayed in the list of users when Show User Subset is selected on the Users
page.
To apply the same settings to multiple users, select them, and then right-click and select Edit User Permissions.
4. Click Save.
Deactivate user
3. Click Deactivate.
3. Click Remove.
Field Description
User Type Indicates whether a user was defined in Cortex XDR using the Customer Support Portal, SSO (single sign-on) using your organization’s
IdP, or both Customer Support Portal/SSO.
Direct XDR Role Name of the role specifically assigned to a user. When a user does not have any Cortex XDR access permissions assigned specifically to them, the field displays No-Role.
Groups Lists the groups to which a user belongs. Any group that was imported from Active Directory displays AD beside the group name.
If a user group has scoping permissions, the users in the group are granted permissions according to the user group settings, even if
the user does not have configured scope settings.
Group Roles Lists the group roles based on the groups to which a user belongs. Hovering over the group role displays the group associated with this
role.
Scope Only visible if Scope-Based Access Control is enabled for the tenant.
Lists the scope assigned to the user either directly or through a group, based on tags. The family includes the tag types and the related
tags of the selected family.
Learn about Scope-Based Access Control (SBAC) and how to assign users to specific tags of different types in your organization.
With Scope-Based Access Control (SBAC), Cortex XDR enables you to assign users to specific tags of different types in your organization. By default, all users have management access to all tags in the tenant. However, after you (as an administrator) assign a management scope to a non-administrator Cortex XDR user, that user can manage only the specific tags, and their associated entities, that are predefined within that scope. To enable SBAC per server, see Configure server settings.
The permissions in user or group settings define which entity the user can access, and the scope defines what the user can view within the entity.
Policy Management: Create and edit Prevention policies and profiles, Extension policies and profiles, and global and device Exceptions that are within
the scope of the user.
Action Center: View and take actions only on endpoints that are within the scope of the user.
Incidents and Alerts: View and manage incidents and alerts filtered according to the scope of the user or group.
Also, note that the Agent Installation widget is not available for scoped users.
The currently assigned scope of each user is displayed in the Scope column of the Users table.
3. In the Scope tab, select one or all of the following for Tag Family. The user's permissions are based on the tags assigned to them.
Select All
Endpoint Groups: User is scoped according to endpoint groups. The tag selected refers to the specific endpoint group.
Endpoint Tags: User is scoped according to endpoint tags. The tag selected refers to the specific endpoint tag.
4. If you selected a Tag Family option, from the Tags field, select the relevant tags associated with the family.
If you select a tag family without specific tags, permissions apply to all tags in the family.
The scope is based only on the selected Tag Families. If you scope only based on tags from Family A, then Family B is disregarded in scope
calculations and considered as allowed.
5. Click Save.
The users to whom you have scoped particular endpoints are now able to use Cortex XDR only within the scope of their assigned endpoints.
Make sure to assign the required default permissions for scoped users. This depends on the structure and divisions within your organization and the particular
purpose of each organizational unit to which scoped users belong.
12 | Endpoint security
Abstract
This topic provides an overview of traditional endpoint protection versus the protection of endpoints using Cortex XDR.
Cyberattacks target endpoints to inflict damage, steal information or achieve other goals that involve taking control of computer systems. Attackers perpetrate
cyberattacks either by causing a user to unintentionally run a malicious executable file, known as malware, or by exploiting a weakness in a legitimate
executable file to run malicious code behind the scenes without the knowledge of the user.
One way to prevent these attacks is to identify executable files, dynamic-link libraries (DLLs), and other pieces of code to determine if they are malicious and, if
so, to prevent the execution of these components by first matching each potentially dangerous code module against a list of specific, known threat signatures.
The weakness of this method is that it is time-consuming for signature-based antivirus (AV) solutions to identify newly created threats that are known only to the
attacker (also known as zero-day attacks or exploits) and add them to the lists of known threats, which leaves endpoints vulnerable until signatures are
updated.
Cortex XDR takes a more efficient and effective approach to prevent attacks that eliminates the need for traditional AV. Rather than try to keep up with the ever-
growing list of known threats, Cortex XDR sets up a series of roadblocks—also referred to as traps—that prevent the attacks at their initial entry points—the
point where legitimate executable files are about to unknowingly allow malicious access to the system.
Cortex XDR provides a multi-method protection solution with exploit protection modules that target software vulnerabilities in processes that open non-
executable files and malware protection modules that examine executable files, DLLs, and macros for malicious signatures and behavior. Using this multi-
method approach, Cortex XDR can prevent all types of attacks, whether these are known or unknown threats.
Abstract
Cortex XDR prevents malware attacks and provides protection on endpoints based on the different operating systems.
Malicious files, known as malware, are often disguised as or embedded in non-malicious files. These files can attempt to gain control, gather sensitive
information, or disrupt the normal operations of the system. Cortex XDR prevents malware by employing the Malware Prevention Engine. This approach
combines several layers of protection to prevent both known and unknown malware from causing harm to your endpoints. The mitigation techniques that the
Malware Prevention Engine employs vary by endpoint type.
The Malware Prevention Engine uses mitigation methods that implement malware protection on endpoints, and these methods vary by operating system.
Windows
Anti webshell protection Enables Cortex XDR to protect endpoint processes from dropping
malicious web shells.
Credential gathering protection Enables Cortex XDR to protect endpoints from processes trying to access
or steal passwords and other credentials.
Cryptominers protection Enables Cortex XDR to protect against attempts to locate or steal
cryptocurrencies.
Endpoint scanning Enables Cortex XDR to scan endpoints and attached removable drives for
dormant, inactive malware.
Financial malware threat protection Enables Cortex XDR to protect against techniques specific to financial and
banking malware.
Global behavioral threat protection rules Enables Cortex XDR to use rules to protect endpoints from malicious
causality chains.
In-process shellcode protection Enables Cortex XDR to protect against in-process shellcode attack threats.
Malicious device protection Enables Cortex XDR to protect against the connection of potentially
malicious devices to endpoints.
Office files with macros examination Enables Cortex XDR to analyze and prevent malicious macros embedded
in Microsoft Office files (Word, Excel) from running on Windows endpoints.
On-write file protection Enables Cortex XDR to monitor and take action on malicious files during the
on-write process.
Portable executable and DLL Enables Cortex XDR to analyze and prevent malicious executable files and
DLL files from running on Windows endpoints.
PowerShell script file examination Enables Cortex XDR to analyze and prevent malicious PowerShell script
files from running on Windows endpoints.
UAC bypass prevention Enables Cortex XDR to protect against the User Access Control (UAC)
bypass mechanism that is associated with privilege elevation attempts.
macOS
Endpoint scanning Enables Cortex XDR to scan endpoints and attached removable drives for dormant, inactive malware.
Global behavioral threat protection rules Enables Cortex XDR to use rules to protect endpoints from malicious causality chains.
Credential gathering protection Enables Cortex XDR to protect endpoints from processes trying to access or steal passwords and other credentials.
Anti webshell protection Enables Cortex XDR to protect endpoint processes from dropping malicious web shells.
Financial malware threat protection Enables Cortex XDR to protect against techniques specific to financial and banking malware.
Cryptominers protection Enables Cortex XDR to protect against attempts to locate or steal cryptocurrencies.
Anti tampering protection Enables Cortex XDR to protect against tampering attempts.
Ransomware protection Enables Cortex XDR to protect against encryption-based activity associated with ransomware attacks.
Malicious child process protection Enables Cortex XDR to prevent script-based attacks that can be used to deliver malware, by blocking targeted processes that are commonly used to bypass traditional security methods.
Mach-O file examination Enables Cortex XDR to check Mach-O files for malware.
Local file threat examination Enables Cortex XDR to detect malicious files on the endpoint.
DMG file examination Enables Cortex XDR to check DMG files for malware.
Linux
Endpoint scanning: Enables Cortex XDR to scan endpoints and attached removable drives for dormant, inactive malware.
Global behavioral threat protection rules: Enables Cortex XDR to use rules to protect endpoints from malicious causality chains.
Credential gathering protection: Enables Cortex XDR to protect endpoints from processes trying to access or steal passwords and other credentials.
Anti webshell protection: Enables Cortex XDR to protect endpoint processes from dropping malicious web shells.
Financial malware threat protection: Enables Cortex XDR to protect against techniques specific to financial and banking malware.
Cryptominers protection: Enables Cortex XDR to protect against attempts to locate or steal cryptocurrencies.
Container escaping protection: Enables Cortex XDR to protect against container-escaping attempts.
ELF file examination: Enables Cortex XDR to examine ELF files on endpoints and perform additional actions on them.
Local file threat examination: Enables Cortex XDR to detect malicious files on the endpoint.
Reverse shell protection: Enables Cortex XDR to prevent attempts to redirect standard input and output streams to network sockets.
Android
APK files examination: Enables Cortex XDR to analyze and prevent malicious APK files from running on endpoints.
iOS
URL filtering: Enables Cortex XDR to analyze and block or report malicious URLs, and to block or allow custom URLs.
Spam reports: Enables Cortex XDR to report calls and messages as spam.
Call and messages blocking: Enables Cortex XDR to act on incoming calls and messages from known spam numbers.
Safari browser security module: This security module can provide proactive gating of suspicious sites accessed using Safari, and provides informative site analysis to the device user. This option is recommended for iOS devices that do not belong to your organization and do not use the Network Shield feature.
Network and EDR security module: This module lets you configure granular control and monitoring of network traffic on iOS-based supervised devices. The devices' profiles must also be configured for this on the MDM side, as explained in the Cortex XDR Agent iOS Guide.
Abstract
Cortex XDR prevents exploit attempts and provides protection on endpoints based on the different operating systems.
An exploit is a sequence of commands that takes advantage of a bug or vulnerability in software or hardware to gain unauthorized access or control.
To combat an attack in which an attacker takes advantage of a software exploit or vulnerability, Cortex XDR employs Endpoint Protection Modules (EPM). Each
EPM targets a specific exploit type in the attack chain. Some capabilities that Cortex XDR EPMs provide are reconnaissance prevention, memory corruption
prevention, code execution prevention, and kernel protection.
The following table lists the types of exploits for which Cortex XDR provides protection.
Reconnaissance prevention: Prevents attackers from probing the network for vulnerabilities while preserving the option to perform internal reconnaissance testing.
Code execution prevention: Prevents malicious code that could allow attackers to deploy additional malware to steal sensitive data.
Kernel protection: Protects the kernel against kernel threats and exploits.
Abstract
The Cortex XDR agent utilizes advanced multi-method protection and prevention techniques to protect from both known and unknown malware and software
exploits.
The Cortex XDR agent utilizes advanced multi-method protection and prevention techniques to protect your endpoints from both known and unknown malware
and software exploits.
In a typical attack scenario, an attacker attempts to gain control of a system by first corrupting or bypassing memory allocation or handlers. Using memory-
corruption techniques, such as buffer overflows and heap corruption, a hacker can trigger a bug in the software or exploit a vulnerability in a process. The
attacker must then manipulate a program to run code provided or specified by the attacker while evading detection. If the attacker gains access to the
operating system, the attacker can then upload malware, such as Trojan horses (programs that contain malicious executable files), or can otherwise use the
system to their advantage. The Cortex XDR agent prevents such exploit attempts by employing roadblocks—or traps—at each stage of an exploitation
attempt.
When a user opens a non-executable file, such as a PDF or Word document, and the process that opened the file is protected, the Cortex XDR agent
seamlessly injects code into the software. This occurs at the earliest possible stage before any files belonging to the process are loaded into memory. The
Cortex XDR agent then activates one or more protection modules inside the protected process. Each protection module targets a specific exploitation
technique and is designed to prevent attacks on program vulnerabilities based on memory corruption or logic flaws.
In addition to automatically protecting processes from such attacks, the Cortex XDR agent reports any security events to Cortex XDR and performs additional
actions as defined in the endpoint security policy. Common actions performed by the Cortex XDR agent include collecting forensic data and notifying the user
about the event.
The default endpoint security policy protects the most vulnerable and most commonly used applications but you can also add other third-party and proprietary
applications to the list of protected processes.
Malware Protection
The Cortex XDR agent provides malware protection in a series of four evaluation phases:
Phase 1: Evaluation of the child process protection policy. When a user attempts to run an executable, the operating system attempts to run the executable as a process. If the process tries to launch any child processes, the Cortex XDR agent first evaluates the child process protection policy. If the parent process is a known targeted process that attempts to launch a restricted child process, the Cortex XDR agent blocks the child process from running and reports the security event to Cortex XDR. For example, if a user tries to open a Microsoft Word document (using the winword.exe process) and the document has a macro that tries to run a blocked child process (such as WScript), the Cortex XDR agent blocks the child process and reports the event to Cortex XDR. If the parent process does not try to launch any child processes, or tries to launch a child process that is not restricted, the Cortex XDR agent moves to Phase 2: Evaluation of the restriction policy.
Phase 2: Evaluation of the restriction policy. The Cortex XDR agent verifies that the executable file does not violate any restriction rules. For example, you might have a restriction rule that blocks executable files launched from network locations. If a restriction rule applies to an executable file, the Cortex XDR agent blocks the file from executing and reports the security event to Cortex XDR; depending on the configuration of each restriction rule, the Cortex XDR agent can also notify the user about the prevention event. If no restriction rules apply to an executable file, the Cortex XDR agent moves to Phase 3: Hash verdict determination.
Phase 3: Hash verdict determination. The Cortex XDR agent calculates a unique hash using the SHA-256 algorithm for every file that attempts to run on the endpoint. Depending on the features that you enable, the Cortex XDR agent performs additional analysis to determine whether an unknown file is malicious or benign. The Cortex XDR agent can also submit unknown files to Cortex XDR for in-depth analysis by WildFire.
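For illustration only, the following Python sketch shows how a SHA-256 digest can be computed for a file in the same way a unique hash would be derived for verdict lookups; the function name and chunk size are arbitrary, and this is not the agent's actual implementation.

    import hashlib

    def sha256_of_file(path, chunk_size=65536):
        """Compute the SHA-256 digest of a file by streaming it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Example (hypothetical path): print(sha256_of_file(r"C:\Windows\notepad.exe"))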
To enhance performance and efficiency, hash verdict requests from the Cortex XDR agent will be routed to the WildFire service with the lowest latency. File
uploads for analysis will strictly adhere to the designated Cortex XDR and WildFire regions, ensuring data remains within the appropriate geographical
boundaries.
To determine a verdict for a file, the Cortex XDR agent evaluates the file in the following order:
1. Hash exception: A hash exception enables you to override the verdict for a specific file without affecting the settings in your Malware Security profile.
The hash exception policy is evaluated first and takes precedence over all other methods to determine the hash verdict.
For example, you may want to configure a hash exception for any of the following situations:
You want to allow a file that has a malware verdict to run. In general, we recommend that you only override the verdict for malware after you use
available threat intelligence resources—such as WildFire and AutoFocus—to determine that the file is not malicious.
You want to specify a verdict for a file that has not yet received an official WildFire verdict.
After you configure a hash exception, Cortex XDR distributes it at the next heartbeat communication with any endpoints that have previously opened the
file.
When a file launches on the endpoint, the Cortex XDR agent first evaluates any relevant hash exception for the file. The hash exception specifies whether
to treat the file as malware. If the file is assigned a benign verdict, the Cortex XDR agent permits it to open.
If a hash exception is not configured for the file, the Cortex XDR agent next evaluates the verdict to determine the likelihood of malware.
2. Highly trusted signers (Windows and Mac): The Cortex XDR agent distinguishes highly trusted signers such as Microsoft from other known signers. To
keep parity with the signers defined in WildFire, Palo Alto Networks regularly reviews the list of highly trusted and known signers and delivers any
changes with content updates. The list of highly trusted signers also includes signers that are included in the allow list from Cortex XDR. When an
unknown file attempts to run, the Cortex XDR agent applies the following evaluation criteria: Files signed by highly trusted signers are permitted to run,
and files signed by prevented signers are blocked, regardless of the WildFire verdict. Otherwise, when a file is not signed by a highly trusted signer or by
a signer included in the block list, the Cortex XDR agent next evaluates the WildFire verdict. For Windows endpoints, evaluation of other known signers
takes place if the WildFire evaluation returns an unknown verdict for the file.
3. WildFire verdict: If a file is not signed by a highly trusted signer on Windows and Mac endpoints, the Cortex XDR agent performs a hash verdict lookup
to determine if a verdict already exists in its local cache.
If the executable file has a malware verdict, the Cortex XDR agent reports the security event to Cortex XDR and, depending on the configured behavior for malicious files, performs one of the following actions.
Notifies the user about the file but still allows the file to execute.
Logs the issue without notifying the user and allows the file to execute.
If the verdict is benign, the Cortex XDR agent moves on to the next stage of evaluation Phase 4: Evaluation of Malware Protection Policy.
If the hash does not exist in the local cache or has an unknown verdict, the Cortex XDR agent next evaluates whether the file is signed by a known
signer.
4. Local analysis: When an unknown executable, DLL, or macro attempts to run on a Windows or Mac endpoint, the Cortex XDR agent uses local analysis
to determine if it is likely to be malware. On Windows endpoints, if the file is signed by a known signer, the Cortex XDR agent permits the file to run and
does not perform additional analysis. For files on Mac endpoints and files that are not signed by a known signer on Windows endpoints, the Cortex XDR
agent performs local analysis to determine whether the file is malware. Local analysis uses a static set of pattern-matching rules that inspect multiple file
features and attributes, and a statistical model that was developed with machine learning on WildFire threat intelligence. The model enables the Cortex
XDR agent to examine hundreds of characteristics for a file and issue a local verdict (benign or malicious) while the endpoint is offline or Cortex XDR is
unreachable. The Cortex XDR agent can rely on the local analysis verdict until it receives an official WildFire verdict or hash exception.
Local analysis is enabled by default in a Malware Security profile. Because local analysis always returns a verdict for an unknown file, if you enable the
Cortex XDR agent to Block files with unknown verdict, the agent only blocks unknown files if a local analysis error occurs or local analysis is disabled. To
change the default settings (not recommended), see Set up malware prevention profiles.
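The following Python sketch models the evaluation order described above (hash exception first, then highly trusted or blocked signers, then the cached WildFire verdict, then local analysis). The data structures and helper names are hypothetical stand-ins that only illustrate the precedence, not the agent's internal logic.

    # Hypothetical lookup tables standing in for policy and cache state.
    hash_exceptions = {}          # sha256 -> "benign" | "malware"
    trusted_signers = {"Microsoft Corporation"}
    blocked_signers = set()
    wildfire_cache = {}           # sha256 -> "benign" | "malware" | "unknown"

    def evaluate_file(sha256, signer, local_analysis):
        """Return a verdict following the documented precedence."""
        if sha256 in hash_exceptions:                 # 1. Hash exception wins
            return hash_exceptions[sha256]
        if signer in trusted_signers:                 # 2. Highly trusted signer
            return "benign"
        if signer in blocked_signers:
            return "malware"
        verdict = wildfire_cache.get(sha256, "unknown")
        if verdict != "unknown":                      # 3. Cached WildFire verdict
            return verdict
        return local_analysis(sha256)                 # 4. Local analysis fallback

    # Example: evaluate_file("ab" * 32, "Unknown Publisher", lambda h: "benign")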
Phase 4: Evaluation of the malware protection policy. If the prior evaluation phases do not identify a file as malware, the Cortex XDR agent observes the behavior of the file and applies additional malware protection rules. If a file exhibits malicious behavior, such as encryption-based activity common with ransomware, the Cortex XDR agent blocks the file and reports the security event to Cortex XDR.
Abstract
The endpoint protection capabilities vary depending on the platform (operating system) that is used on each of your endpoints.
Each security profile provides a tailored list of protection capabilities that you can configure for the platform you select. The following describes the protection capabilities you can customize in a security profile; support for each capability varies by platform.
Browsers can be subject to exploitation attempts from malicious web pages and exploit kits that are
embedded in compromised websites. By enabling this capability, the Cortex XDR agent automatically
protects browsers from common exploitation attempts.
Attackers can use existing mechanisms in the operating system, such as DLL-loading processes or built-in system processes, to execute malicious code. By enabling this capability, the Cortex XDR agent automatically protects endpoints from attacks that try to leverage common operating system mechanisms for malicious purposes.
Common applications in the operating system, such as PDF readers, Office applications, and even
processes that are a part of the operating system itself can contain bugs and vulnerabilities that an
attacker can exploit. By enabling this capability, the Cortex XDR agent protects these processes from
attacks which try to exploit known process vulnerabilities.
To extend protection to third-party processes that are not protected by the default policy from exploitation
attempts, you can add additional processes to this capability.
Attackers commonly leverage the operating system itself to accomplish a malicious action. By enabling
this capability, the Cortex XDR agent protects operating system mechanisms such as privilege escalation
and prevents them from being used for malicious purposes.
If you have Windows endpoints in your network that are unpatched and exposed to a known vulnerability,
Palo Alto Networks strongly recommends that you upgrade to the latest Windows Update that has a fix
for that vulnerability. If you choose not to patch the endpoint, the Unpatched Vulnerabilities Protection
capability allows the Cortex XDR agent to apply a workaround to protect the endpoints from the known
vulnerability.
Prevents sophisticated attacks that leverage built-in OS executables and common administration utilities
by continuously monitoring endpoint activity for malicious causality chains.
Prevents web shell attacks by continuously monitoring endpoints for processes that try to drop malicious
files.
Cryptominers protection: Prevents cryptomining by monitoring for processes that attempt to locate or steal cryptocurrencies.
Ransomware protection: Targets encryption-based activity associated with ransomware to analyze and halt ransomware before any data loss occurs.
Prevents script-based attacks used to deliver malware by blocking known targeted processes from launching child processes commonly used to bypass traditional security approaches.
Analyzes and prevents malicious executable and DLL files from running.
Analyzes and quarantines malicious PHP files arriving from the web server.
Analyzes and prevents malicious macros embedded in PDF files from running.
Analyzes and prevents malicious macros embedded in Microsoft Office files from running.
Detects suspicious or abnormal network activity from shell processes and terminates the malicious shell process.
Protects the endpoint from kernel-level threats such as bootkits, rootkits, and susceptible drivers.
Spam reports
Container-escaping attempts
Execution paths: Many attack scenarios are based on writing malicious executable files to certain folders, such as the local temp or download folder, and then running them. Use this capability to restrict the locations from which executable files can run.
Network locations: To prevent attack scenarios that are based on writing malicious files to remote folders, you can restrict access to all network locations except for those that you explicitly trust.
Removable media: To prevent malicious code from gaining access to endpoints using external media such as a removable drive, you can restrict the executable files that users can launch from external drives attached to the endpoints in your network.
Optical drive: To prevent malicious code from gaining access to endpoints using optical disc drives (CD, DVD, and Blu-ray), you can restrict the executable files that users can launch from optical disc drives connected to the endpoints in your network.
Abstract
Security modules are activated for your endpoints depending on the chosen security profile and the operating system on the endpoint.
Each security profile applies multiple security modules to protect your endpoints from a wide range of attack techniques. While the settings for each security
module are not configurable, the Cortex XDR agent activates a specific protection module depending on the type of attack, the configuration of your security
policy, and the operating system of the endpoint.
When a security event occurs, the Cortex XDR agent logs details about the event including the security module employed by the Cortex XDR agent to detect
and prevent the attack based on the technique. To help you understand the nature of the attack, the alert identifies the protection module the Cortex XDR
agent employed.
The following security modules are employed by the Cortex XDR agent; support for each module varies by platform: Anti-Ransomware, APC protection, Behavioral threat, CPL protection, DLL hijacking, DLL security, Dylib hijacking, Font protection, Gatekeeper enhancement, Hash exception, Java deserialization, JIT, Local analysis, Null dereference, ROP, SEH, Shellcode protection, ShellLink, SO hijacking protection, SysExit, UASLR, UEFI BTP, and WildFire.
Abstract
Application processes that run on your endpoint are protected by the exploit security policy.
By default, your exploit security profile protects endpoints from attack techniques that target specific processes. Each exploit protection capability protects a
different set of processes that Palo Alto Networks researchers determine are susceptible to attack. The following tables display the processes that are
protected by each exploit protection capability for each operating system.
For example, protected processes include infopath.exe, skypeapp.exe, skypehost.exe, msmpeng.exe, ibserver, identd, lighttpd, java, kamailio, proftpd, qmgr, rpcbind, and rsync.
Abstract
File forwarding
Cortex XDR sends unknown samples for in-depth analysis to WildFire. WildFire accepts up to 1,000,000 sample uploads per day and up to 1,000,000 verdict
queries per day from each Cortex XDR tenant. The daily limit resets at 23:59:00 UTC. Uploads that exceed the sample limit are queued for analysis after the
limit resets. WildFire also limits sample sizes to 100MB. For more information, see the WildFire documentation.
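As a rough illustration of the documented limits (1,000,000 sample uploads per day, a 100 MB sample size limit, and a daily reset at 23:59:00 UTC), the sketch below decides whether a sample would be uploaded now or queued. The constants and function name are hypothetical; this is not a Cortex XDR or WildFire API.

    MAX_DAILY_UPLOADS = 1_000_000
    MAX_SAMPLE_BYTES = 100 * 1024 * 1024   # 100 MB WildFire size limit

    def upload_decision(sample_size_bytes, uploads_today):
        """Return how a sample would be handled under the documented limits."""
        if sample_size_bytes > MAX_SAMPLE_BYTES:
            return "skip: exceeds 100 MB sample size limit"
        if uploads_today >= MAX_DAILY_UPLOADS:
            return "queue: daily quota reached, retry after the 23:59:00 UTC reset"
        return "upload now"

    print(upload_decision(5 * 1024 * 1024, 999_999))   # upload now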
For samples that the Cortex XDR agent reports, the agent first checks its local cache of hashes to determine if it has an existing verdict for that sample. If the
Cortex XDR agent does not have a local verdict, the Cortex XDR agent queries Cortex XDR to determine if WildFire has previously analyzed the sample. If the
sample is identified as malware, it is blocked. If the sample remains unknown after comparing it against existing WildFire signatures, Cortex XDR forwards the
sample for WildFire analysis.
The Cortex XDR agent analyzes files based on the type of file, regardless of the file’s extension. For deep inspection and analysis, you can also configure your
Cortex XDR to forward samples to WildFire. A sample can be:
Executable files
Object code
FON (Fonts)
Microsoft Office files containing macros opened in Microsoft Word (winword.exe) and Microsoft Excel (excel.exe):
Microsoft Office 2010 and later releases—.docm, .docx, .xlsm, and .xlsx
.dll files
.ocx files
Mach-o files
DMG files
Verdicts
WildFire delivers verdicts to identify samples it analyzes as safe, malicious, or unwanted (grayware is considered obtrusive but not malicious):
Unknown: Initial verdict for a sample that WildFire has received but has not yet analyzed.
Benign: The sample is safe and does not exhibit malicious behavior. If Low Confidence is indicated for the Benign verdict, Cortex XDR can treat this
hash as if the verdict is unknown and further run Local Analysis to get a verdict with higher confidence.
Malware: The sample is malware and poses a security threat. Malware can include viruses, worms, Trojans, Remote Access Tools (RATs), rootkits,
botnets, and malicious macros. For files identified as malware, WildFire generates and distributes a signature to prevent future exposure to the threat.
Grayware: The sample does not pose a direct security threat but might display otherwise obtrusive behavior. Grayware typically includes adware,
spyware, and Browser Helper Objects (BHOs).
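The sketch below maps the verdicts described above to a coarse handling decision, including the low-confidence benign case and the optional treatment of grayware as malware described later in this guide. It is a hypothetical illustration of the descriptions, not agent logic.

    def handle_wildfire_verdict(verdict, low_confidence=False, treat_grayware_as_malware=False):
        """Map a WildFire verdict to a coarse handling decision, per the descriptions above."""
        if verdict == "malware":
            return "block"
        if verdict == "grayware":
            return "block" if treat_grayware_as_malware else "allow"
        if verdict == "benign":
            # A low-confidence benign verdict can be re-checked with local analysis.
            return "run local analysis" if low_confidence else "allow"
        return "run local analysis"    # unknown: not yet analyzed

    print(handle_wildfire_verdict("benign", low_confidence=True))   # run local analysis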
In cases where the Cortex XDR agent receives a failed status from the WildFire service due to a general error or an unsupported file type, and Local Analysis is disabled or not applicable, Cortex XDR does not generate an alert on the file.
When WildFire is not available or integration is disabled, the Cortex XDR agent can also assign a local verdict for the sample using additional methods of
evaluation: When the Cortex XDR agent performs local analysis on a file, it uses pattern-matching rules and machine learning to determine the verdict. The
Cortex XDR agent can also compare the signer of a file with a local list of trusted signers to determine whether a file is malicious:
Benign: Local analysis determined the sample is safe and does not exhibit malicious behavior.
Malware: The sample is malware and poses a security threat. Malware can include viruses, worms, Trojans, Remote Access Tools (RATs), rootkits,
botnets, and malicious macros.
The Cortex XDR agent stores hashes and the corresponding verdicts for all files that attempt to run on the endpoint in its local cache. The local cache scales in
size to accommodate the number of unique executable files opened on the endpoint. On Windows endpoints, the cache is stored in the
C:\ProgramData\Cyvera\LocalSystem folder on the endpoint. When service protection is enabled (see Set up agent settings profiles), the local cache is
accessible only by the Cortex XDR agent and cannot be changed.
Each time a file attempts to run, the Cortex XDR agent performs a lookup in its local cache to determine if a verdict already exists. If known, the verdict is either
the official WildFire verdict or manually set as a hash exception. Hash exceptions take precedence over any additional verdict analysis.
If the file is unknown in the local cache, the Cortex XDR agent queries Cortex XDR for the verdict. If Cortex XDR receives a verdict request for a file that was
already analyzed, Cortex XDR immediately responds to the Cortex XDR agent with the verdict.
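The sketch below mimics the lookup flow just described: the agent consults its local hash cache first and queries the tenant only for hashes it does not recognize. The cache structure and the query function are hypothetical stand-ins for the agent's local cache and its communication with Cortex XDR.

    local_cache = {}   # sha256 -> {"verdict": ..., "source": "wildfire" | "hash_exception"}

    def resolve_verdict(sha256, query_tenant):
        """Return a cached verdict if present, otherwise ask Cortex XDR."""
        entry = local_cache.get(sha256)
        if entry is not None:
            return entry["verdict"]        # official WildFire verdict or hash exception
        verdict = query_tenant(sha256)     # the server answers immediately if already analyzed
        if verdict != "unknown":
            local_cache[sha256] = {"verdict": verdict, "source": "wildfire"}
        return verdict

    # Example: resolve_verdict("cd" * 32, lambda h: "unknown")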
12.1.8 | Guidelines for keeping Cortex XDR agents and content updated
Abstract
Learn more about how to control Cortex XDR agent and content upgrades.
This document covers a recommended strategy and best practices for managing agent and content updates to help reduce the risk of downtime in a
production environment, while helping ensure timely delivery of security content and capabilities.
Keeping Cortex XDR agents up-to-date is essential for protecting against evolving threats and vulnerabilities. Regular updates ensure the latest security
features for malware and exploit prevention, and compatibility with the latest software environments, which helps reduce the risk of attacks. This can also help
organizations meet regulatory standards while maintaining strong overall protection.
Content updates, such as new threat intelligence or detection logic, are critical for defending against newly discovered cyber threats and malware and are
designed to ensure that systems remain protected against the latest attacks. Content updates address compatibility issues as well, helping achieve smooth
operations alongside the Cortex XDR agent. Without regular content updates, security solutions may fail to detect new or evolving threats, leaving systems
vulnerable to attacks.
When planning Cortex XDR agent upgrades and content updates, consult with the appropriate stakeholders and teams and follow the change management
strategy in your organization.
The Cortex XDR agent can retrieve content updates immediately as they become available, after a pre-configured delay period of up to 30 days, or you can
choose to select a specific version.
Cortex XDR can be configured to manage the deployment of agent and content updates by adjusting the following settings:
Agent Auto-Upgrade is disabled by default. Before enabling agent auto-upgrade for Cortex XDR agents, make sure to consult with all relevant
stakeholders in your organization. Enabling this option allows you to define the scope of the automatic updates, such as upgrading to the latest agent
release, one release prior, only maintenance releases, or maintenance releases within a specific version.
Upgrade Rollout includes two options: Immediate, where the Cortex XDR agent automatically receives new releases, including maintenance updates
and features, and Delayed, which lets you set a delay of 7 to 45 days after a version is released before upgrading endpoints.
Global agent settings: Configure the Cortex XDR agent upgrade scheduler and the number of parallel upgrades to apply to all endpoints in your
organization. You can also schedule the upgrade task for specific days of the week and set a specific time range for the upgrades.
Content Auto-Update is enabled by default and automatically retrieves the latest content before deploying it on the endpoint. If you disable content
updates, the agent will stop fetching updates from the Cortex XDR tenant and will continue to operate with the existing content on the endpoint.
Content Rollout: The Cortex XDR agent can retrieve content updates immediately as they become available, after a pre-configured delay period of up to 30 days, or you can choose to select a specific version (see the sketch after this list).
Global content updates: Configure the content update cadence and bandwidth allocation within your organization. To enforce immediate protection against
the latest threats, enable minor content updates. Otherwise, the content updates in your network occur only on major releases.
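To make the delayed Content Rollout behavior concrete, the sketch below checks whether a content release is old enough to be applied under a configured delay. The function is a hypothetical illustration of the rule, not an agent setting or API.

    from datetime import datetime, timedelta, timezone

    def content_is_eligible(release_time, delay_days):
        """A delayed rollout only uses content released more than delay_days ago."""
        cutoff = datetime.now(timezone.utc) - timedelta(days=delay_days)
        return release_time <= cutoff

    # Example: content released 3 days ago with a 2-day delay is eligible.
    released = datetime.now(timezone.utc) - timedelta(days=3)
    print(content_is_eligible(released, delay_days=2))   # True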
Use a phased rollout plan by creating batches for deploying updates. The specifics may vary based on your organization and its structure. Start with a control
group, then deploy to 10% of your organization. Subsequently, allocate the remaining upgrades in batches that best suit your organization until achieving a full
100% rollout.
Example 15.
The following is an example of a rollout plan for deploying a Cortex XDR agent upgrade:
Phase 1: Control group rollout: Start by selecting a control group of endpoints as early adopters. This group should consist of a diverse range of operating
systems, devices, applications, and servers, with a focus on low-risk endpoints. After a defined testing period, such as one week, assess for any issues. If no
problems are found, move to the next phase.
Phase 2: 10% rollout: Expand the rollout to 10% of the organization’s endpoints. This group should maintain the same variety as the control group but include
low- to medium-risk endpoints. Monitor performance during the set period. If the rollout is successful with no issues, proceed to the next phase.
Phase 4: 80% rollout: Extend the deployment to 80% of the organization's endpoints. This batch should include a wide variety of endpoints, incorporating
both medium and high-risk systems. After a careful monitoring period and confirmation that everything is stable, move to the final phase.
Phase 5: Full rollout: Complete the rollout by updating the remaining 20% of the organization’s endpoints. By this point, the majority of systems should have
been thoroughly tested, reducing the risk of issues in the final stage. Once complete, 100% of the organization will be updated.
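A minimal sketch of how endpoints might be split into the phased batches from the example above (a small control group, then cumulative coverage of 10%, 80%, and 100%). The 1% control-group size and the grouping logic are illustrative assumptions; in practice the batches would be driven by endpoint groups and policy targets.

    def split_into_phases(endpoints, cumulative_targets=(0.01, 0.10, 0.80, 1.00)):
        """Split endpoints into rollout waves that hit cumulative coverage targets."""
        batches, start, total = [], 0, len(endpoints)
        for target in cumulative_targets:
            end = max(start, int(round(total * target)))
            batches.append(endpoints[start:end])
            start = end
        return batches

    phases = split_into_phases([f"host-{i}" for i in range(1000)])
    print([len(p) for p in phases])   # [10, 90, 700, 200]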
Content updates are typically provided on a weekly basis. Use a phased rollout plan by creating batches for deploying updates. Start with a control group,
then deploy to 10% of your organization. Subsequently, allocate the remaining upgrades in batches that best suit your organization until achieving a full 100%
rollout.
Example 16.
The following is an example of a rollout plan over a period of one week for deploying content updates:
Phase 1: Control group rollout: Keep the default configuration set to deploy content updates immediately.
Phase 2: 10% rollout: Content is automatically deployed on day 2 following a delay period defined in the profile.
Phase 3: 60% rollout: Content is automatically deployed on day 3 following a delay period defined in the profile.
Phase 4: Full rollout: Increase the deployment to include medium and high-risk systems, until the entire organization is updated.
The following information will help you select and configure the update settings.
Configure one or more of the settings described in this section to keep your Cortex XDR agents up-to-date.
1. Create an agent installation package for each operating system version for which you want to upgrade the Cortex XDR agent.
If needed, filter the list of endpoints. To reduce the number of results, use the endpoint name search and the Filters at the top of the page.
You can also select endpoints running different operating systems to upgrade the agents at the same time.
4. Right-click your selection and select Endpoint Control → Upgrade Agent Version.
For each platform, select the name of the installation package you want to push to the selected endpoints.
When you upgrade an agent on a Linux endpoint that is not using a package manager, Cortex XDR selects the upgrade installation package by default according to the endpoint's Linux distribution.
The Cortex XDR agent keeps the name of the original installation package after every upgrade.
5. Upgrade.
Cortex XDR distributes the installation package to the selected endpoints at the next heartbeat communication with the agent. To monitor the status of
the upgrades, go to Response → Action Center.
From the Action Center you can also view additional information about the upgrade (right-click the action and select Additional data) or cancel the
upgrade (right-click the action and select Cancel Agent Upgrade).
Custom dashboards that include upgrade status widgets, and the All Endpoints page display upgrade status.
During the upgrade process, the endpoint operating system might request a reboot. However, you do not have to perform the reboot for the Cortex XDR agent upgrade process to complete successfully.
After you upgrade on an endpoint with Cortex XDR Device Control rules, you need to reboot the endpoint for the rules to take effect.
Agent settings per endpoint
These profiles can be configured for one or more endpoints, static or dynamic groups, tags, IP ranges, endpoint names, or other parameters that allow the creation of logical endpoint groups. See how to define an endpoint group.
1. Go to Endpoints → Policy Management → Profiles, and then edit an existing profile, add a new profile, or import from a file.
2. Choose the operating system, and select Agent Settings. Then click Next.
Before enabling Auto-Update for Cortex XDR agents, make sure to consult with all relevant stakeholders in your organization.
Automatic Upgrade Scope (Latest agent release (Default), One release before the latest one, or Only maintenance releases): For One release before the latest one, Cortex XDR upgrades the agent to the release prior to the latest, including maintenance releases. Major releases are numbered X.X, such as release 8.0 or 8.2. Maintenance releases are numbered X.X.X, such as release 8.2.2.
Upgrade Rollout (Immediate (Default) or Delayed): With Immediate, the Cortex XDR agent automatically gets any new agent release, including maintenance releases and new features. For Delayed, set the delay period (number of days) to wait after the version release before upgrading endpoints. Choose a value between 7 and 45.
Configure the Cortex XDR agent upgrade scheduler and the number of parallel upgrades to apply to all endpoints in your organization.
2. Configure the Cortex XDR agent upgrade scheduler and the number of parallel upgrades.
Amount of parallel upgrades: During the first week of a new Cortex XDR agent release rollout, only a single batch of agents is upgraded. After that, auto-upgrades continue to be deployed across your network with the number of parallel upgrades as configured. Set the number of parallel agent upgrades, up to a maximum of 500 agents.
Days in week: Schedule the upgrade task for specific days of the week.
Schedule: Schedule a specific time range. The minimum range is four hours.
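To illustrate how the parallel-upgrade setting bounds each wave, the sketch below groups pending endpoints into batches capped at the configured limit (maximum 500). The batching function is a hypothetical illustration, not a Cortex XDR API.

    def upgrade_batches(pending_endpoints, parallel_upgrades=500):
        """Yield successive batches no larger than the configured parallel limit."""
        parallel_upgrades = min(parallel_upgrades, 500)   # documented maximum
        for i in range(0, len(pending_endpoints), parallel_upgrades):
            yield pending_endpoints[i:i + parallel_upgrades]

    batches = list(upgrade_batches([f"ep-{n}" for n in range(1200)], parallel_upgrades=500))
    print([len(b) for b in batches])   # [500, 500, 200]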
Content updates
When a new content update is available, Cortex XDR notifies the Cortex XDR agent. The Cortex XDR agent then randomly chooses a time within a six-hour
window during which it will retrieve the content update from Cortex XDR. By staggering the distribution of content updates, Cortex XDR reduces the bandwidth
load and prevents bandwidth saturation due to the high volume and size of the content updates across many endpoints. You can view the distribution of
endpoints by content update version from the dashboard.
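The staggering behavior can be pictured with a short sketch that picks a random retrieval time inside a six-hour window after the notification. The window length comes from the text above; everything else is illustrative.

    import random
    from datetime import datetime, timedelta, timezone

    def pick_retrieval_time(notified_at, window_hours=6):
        """Choose a random moment within the stagger window to fetch the content update."""
        offset_seconds = random.uniform(0, window_hours * 3600)
        return notified_at + timedelta(seconds=offset_seconds)

    now = datetime.now(timezone.utc)
    print(pick_retrieval_time(now))   # some time within the next six hours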
You can configure whether to update content per endpoint or use the global settings. The information in this topic will help you select and configure these
methods.
Configure content update options for agents within the organization to ensure they are always protected with the latest security measures.
These profiles can be configured on one or more endpoints, static/dynamic groups, tags, IP ranges, endpoint names, or other parameters that allow the
creation of logical endpoint groups.
1. Go to Endpoints → Policy Management → Profiles, and then edit an existing profile, add a new profile, or import from a file.
2. Choose the operating system, and select Agent Settings. Then click Next.
Content Auto-Update (Enabled (Default) or Disabled): When Content Auto-Update is enabled, the Cortex XDR agent retrieves the most updated content and deploys it on the endpoint. If you disable content updates, the agent stops retrieving them from the Cortex XDR tenant and keeps working with the current content on the endpoint.
Staging Content (Enabled or Disabled (Default)): Enables users to deploy agent staging content on selected test environments. Staging content is released before production content, allowing for early evaluation of the latest content update.
Content Rollout (Immediate (Default), Delayed, or Specific): The Cortex XDR agent can retrieve content updates immediately as they become available, after a pre-configured delay period of up to 30 days, or you can select a specific version. When you delay content updates, the Cortex XDR agent retrieves content according to the configured delay. For example, if you configure a delay period of two days, the agent will not use any content released in the last 48 hours.
2. Configure the content update cadence and bandwidth allocation within your organization.
Enable bandwidth control: Based on the number of agents (active or future) that you want to update with content and upgrade packages, the Cortex XDR calculator recommends the amount of Mbps (megabits per second) required for connected agents to retrieve a content update over a 24-hour period or a week. Cortex XDR supports between 20 and 10,000 Mbps; you can enter one of the recommended values or enter one of your own. For optimized performance and reduced bandwidth consumption, it is recommended that you install and update new agents with the content package built in (included with Cortex XDR agents 7.3 and later), for example when deploying using SCCM.
XDR Calculator for Recommended Bandwidth: Based on the number of agents (active or future) that you want to update with content and upgrade packages, the Cortex XDR calculator recommends the amount of Mbps required for a connected agent to retrieve a content update over 24 hours or a week. This calculation is based on connected agents and includes an overhead for large content updates.
Enable minor content version updates: To enforce immediate protection against the latest threats, enable minor content updates. Otherwise, the content updates in your network occur only on major releases.
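As a rough, hypothetical way to reason about the recommended bandwidth: if every connected agent must pull a content package within a chosen window, the required rate scales with agent count and package size. The formula below is an illustration of that arithmetic, not the actual Cortex XDR calculator; the package size and overhead factor are assumptions.

    def required_mbps(agent_count, package_mb, window_hours=24, overhead=1.2):
        """Estimate megabits per second needed for all agents to fetch one package in the window."""
        total_megabits = agent_count * package_mb * 8 * overhead
        mbps = total_megabits / (window_hours * 3600)
        # Clamp to the supported configuration range of 20 - 10,000 Mbps.
        return min(max(mbps, 20), 10_000)

    print(round(required_mbps(agent_count=20_000, package_mb=50, window_hours=24), 1))   # about 111.1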
Abstract
To increase security coverage and quickly resolve any issues in policy, Palo Alto Networks can seamlessly deliver software packages called content updates.
To increase security coverage and quickly resolve any issues in policy, Palo Alto Networks can seamlessly deliver software packages for Cortex XDR called
content updates. Content updates can contain changes or updates to any of the following:
Protected processes
Trusted signers
Ransomware module logic, including Windows network folders susceptible to ransomware attacks
Event Log for Windows event logs and Linux system authentication logs
Maximum file size for hash calculations in File search and destroy
Cortex XDR delivers the content update to the agent in parts and not as a single file, allowing the agent to retrieve only the updates and additions it needs.
When a new update is available, Cortex XDR notifies the Cortex XDR agent. The Cortex XDR agent then randomly chooses a time within a six-hour window
during which it will retrieve the content update from Cortex XDR. By staggering the distribution of content updates, Cortex XDR reduces the bandwidth load
and prevents bandwidth saturation due to the high volume and size of the content updates across many endpoints. You can view the distribution of endpoints
by content update version from the dashboard.
The Cortex XDR research team releases more frequent content updates in-between major content versions to ensure your network is constantly protected
against the latest and newest threats in the wild. When you enable minor content updates, the Cortex XDR agent receives minor content updates, starting with
the next content releases. Otherwise, if you do not wish to deploy minor content updates, your Cortex XDR agents will keep receiving content updates for
major releases which usually occur on a weekly basis. The content version numbering format remains XXX-YYYY, where XXX indicates the version and YYYY
indicates the build number. To distinguish between major and minor releases, XXX is rounded up to the nearest ten for every major release, and incremented
by one for a minor release. For example, 1280-<build_num> and 1290-<build_num> are major releases, and 1281-<build_num> , 1282-
<build_num>, and 1291-<build_num> are minor releases.
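Following the numbering convention just described (the XXX part is rounded up to the nearest ten for a major release and incremented by one for a minor release), the sketch below classifies a content version string. The parsing helper is a hypothetical illustration.

    def classify_content_version(version):
        """Classify a content version in XXX-YYYY format as a major or minor release."""
        xxx, _build = version.split("-", 1)
        return "major" if int(xxx) % 10 == 0 else "minor"

    print(classify_content_version("1280-12345"))   # major
    print(classify_content_version("1281-12345"))   # minor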
To adjust content update distribution for your environment, you can configure the following optional settings:
Content management settings as part of the Cortex XDR global agent configurations.
Content download source, as part of the Cortex XDR agent setting profile.
Otherwise, if you want the Cortex XDR agent to retrieve the latest content from the server immediately, you can force the Cortex XDR agent to connect to the
server using one of the following methods.
(Windows and Mac only) Perform manual check-in from the Cortex XDR agent console.
Abstract
To aid in endpoint detection and alert investigation, the Cortex XDR agent collects endpoint information when an alert is triggered.
When the Cortex XDR agent raises an alert on endpoint activity, a minimum set of metadata about the endpoint is sent to the server.
When you enable behavioral threat protection or EDR data collection in your endpoint security policy, the Cortex XDR agent can also continuously monitor
endpoint activity for malicious event chains identified by Palo Alto Networks. The endpoint data that the Cortex XDR agent collects when you enable these
capabilities varies by platform type.
Agents with Cortex XDR Pro per Endpoint apply limits and filters on network, file, and registry logs. Expanding these limits and filters requires the Extended Threat Hunting Data (XTH) add-on.
The tables below note whether specific logs require the XTH add-on.
When the Cortex XDR agent raises an alert on endpoint activity, the following metadata is sent to the server:
Field Description
Process creation time: Part of the process unique ID per boot session (PID + creation time).
Executable metadata (Traps 6.1 and later): Process start, file size.
Files (Create, Write, Delete, and Modification (Traps 6.1 and later)): Full path of the modified file before and after modification; SHA256 and MD5 hash for the file after modification; file set security (DACL) information (Traps 6.1 and later); symbolic links (Traps 6.1 and later); resolve hostnames on local network (Traps 6.1 and later).
Base address
Target process-id/thread-id
Image size
Full path
Bind
Network connection ID
Registry key:
Creation
Deletion
Rename
Addition
Restore
Save
Suspend / Resume: OS version, domain.
User presence (Traps 6.1 and later): Detection when a user is present or idle per active user session on the computer.
action_rpc_interface_version_minor
action_rpc_func_opnum
action_rpc_func_str_call_fields (optional)
action_rpc_func_int_call_fields (optional)
action_rpc_interface_name
action_rpc_func_name
action_syscall_target_instance_id
action_syscall_target_image_path
action_syscall_target_image_name
action_syscall_target_os_pid
action_syscall_target_thread_id
address_mapping
Event log See the table below for the list of Windows Event Logs that can be sent to the server.
In Traps 6.1.3 and later releases, Cortex XDR and Traps agents can send the following Windows Event Logs to the tenant.
For more information on how to set up Windows event logs collection, see Microsoft Windows security auditing setup.
Application, EMET
Application, Windows Error Reporting: Only for Windows Error Reporting (WER) events when an application stops unexpectedly.
Application, Microsoft-Windows-User Profiles Service: 1511: A user logged on with a temporary profile because Windows could not find the user's local profile.
Application, Application Error: 1000: Application unexpected stop/hang events, similar to WER/1001. These events include the full path to the EXE file, or to the module with the fault.
Application, Application Hang: 1002: Application unexpected stop/hang events, similar to WER/1001. These events include the full path to the EXE file, or to the module with the fault.
Microsoft-Windows-DNS-Client/Operational: 3008: A DNS query was completed, without local machine name resolution events and without empty name resolution events.
Microsoft-Windows-PrintService
Microsoft-Windows-Windows Firewall With Advanced Security/Firewall: 2004, 2005, 2006, 2009, 2033: Windows Firewall With Advanced Security local modifications (Levels 0, 2, 4).
Security, Microsoft-Windows-Eventlog: Event log service events specific to the Security channel.
Security: Routing and Remote Access Service (RRAS) events (these are only generated on a Microsoft IAS server).
Security: 4634: Logoff
Files (Create, Write, Delete, Rename, Move, Open) *Requires XTH add-on: Full path of the modified file before and after modification; SHA256 and MD5 hash for the file after modification.
Full path
Message
Write: For specific files only, and only if the file was written.
Delete
Message
Learn how to set up profiles, policies and other settings for endpoint protection, how to install Cortex XDR agent on endpoints, and how to manage them after
installation.
Endpoint protection starts with the Cortex XDR agent that is installed on each endpoint in your environment. The agent package that you install on endpoints
contains many settings that are configured by default, out-of-the-box, to enable you to get protection up and running quickly. However, these settings can also
be modified and used in different combinations, by using profiles, which are then mapped to policies, and by configuring global settings.
Several endpoint management tasks can be performed remotely by administrators, from Cortex XDR. These include tasks such as applying tags and aliases to
endpoints, upgrading the Cortex XDR agent, uninstalling and deleting the Cortex XDR agent, and more.
To stay up to date with the latest policy and endpoint status, Cortex XDR communicates regularly with your Cortex XDR agents. For example, when you
upgrade your endpoints to the latest release, Cortex XDR creates an installation package and distributes it to the agent on their next communication. Similarly,
Abstract
Set up endpoint protection profiles and policies, exceptions, endpoint hardening, and other endpoint settings.
Abstract
Endpoint security profiles can be used immediately, or customized, to protect your endpoints from threats.
Cortex XDR provides default security profiles that you can use out-of-the-box to immediately begin protecting your endpoints from threats. These profiles are
applied to endpoints by mapping them to policies, and then mapping the policies to endpoints.
While security rules enable you to block or allow files to run on your endpoints, security profiles help you customize and reuse settings across different groups
of endpoints. When the Cortex XDR agent detects behavior that matches a rule defined in your security policy, the Cortex XDR agent applies the security
profile that is attached to the rule for further inspection.
Profiles associated with one or more targets that are beyond the scope of your defined user permissions are locked, and cannot be edited.
Abstract
Configure malware prevention profiles to control the actions taken by Cortex XDR agents when known malware, macros, and unknown files try to run.
Malware prevention profiles protect against the execution of malware including trojans, viruses, worms, and grayware. Malware prevention profiles serve two
main purposes: to define how to treat behavior common with malware, such as ransomware or script-based attacks, and to define how to treat known malware
and unknown files.
You can configure the action that Cortex XDR agents take when known malware, macros, and unknown files try to run on endpoints. By default, the Cortex
XDR agent will receive the default profile that contains a pre-defined configuration for each malware protection capability supported by the platform. The
default setting for each capability is shown in parentheses in the user interface. To fine-tune your malware prevention policy, you can override the configuration
of each capability to block the malicious behavior or file, allow but report it, or disable the module.
For each setting that you override, clear the Use Default option, and select the setting of your choice.
In this profile, the Report options configure the endpoints to report the corresponding suspicious files, actions, processes, or behaviors to Cortex XDR, without
blocking them. The Disabled options configure the endpoints to neither analyze nor report the corresponding malware or behavior.
The tasks below are organized according to the operating systems used by your organization's endpoints.
Windows
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile,
or to import a profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
2. Configure Portable Executable and DLL Examination. The Cortex XDR agent can analyze and prevent malicious executable files and DLL files from
running on Windows endpoints.
As part of the anti-malware security flow, the Cortex XDR agent leverages the operating system's capability to identify revoked certificates for executables and DLL files that attempt to run on the endpoint by accessing the Windows Certificate Revocation List (CRL). To allow the Cortex XDR agent to access the CRL, you must enable internet access over port 80 for Windows endpoints. If the endpoint is not connected to the internet, or you experience delays with executables and DLLs running on the endpoint, contact Customer Support.
Action Mode (Block, Report, or Disabled): When the Cortex XDR agent detects attempts to run malware, it performs the configured action.
Quarantine Malicious Executables (Disabled (Default), Quarantine WildFire malware verdict, or Quarantine WildFire and Local Analysis malware verdict): By default, the Cortex XDR agent blocks malware from running, but does not quarantine the file. You can enable one of the options to quarantine files, depending on the verdict issuer. The Quarantine Malicious Executables feature is not available for malware identified on network drives.
Action when file is unknown to WildFire (Allow, Run Local Analysis, or Block): Allow: unknown files are not blocked and local verdicts are not issued for them. Run Local Analysis: the Cortex XDR agent uses embedded machine learning to determine the likelihood that an unknown file is malware, and issues a local verdict for the file. Block: block unknown files but do not run local analysis; in this case, unknown files remain blocked until the Cortex XDR agent receives an official WildFire verdict.
Action when file is benign with low confidence (Allow, Run Local Analysis, or Block): Select the action to take when a file with a Benign Low Confidence verdict from WildFire tries to run on the endpoint. When local analysis is enabled, the Cortex XDR agent uses embedded machine learning to determine the likelihood that an unknown file is malware, and issues a local verdict for the file. If you block this file but do not run a local analysis, the file remains blocked until the Cortex XDR agent receives a high-confidence WildFire verdict. For optimal user experience, we recommend that you set the action mode to either Allow or Run Local Analysis.
Upload unknown files to WildFire (Enabled or Disabled): When enabled, the Cortex XDR agent sends unknown files to Cortex XDR, and Cortex XDR sends the files to WildFire for analysis. The file types that the Cortex XDR agent analyzes depend on the platform type. WildFire accepts files up to 100 MB in size.
Treat Grayware as Malware (Enabled or Disabled): When enabled, Cortex XDR treats all grayware with the same Action Mode as configured for malware. When disabled, grayware is considered benign, and is not blocked.
3. Configure options for Office Files with Macros Examination. The Cortex XDR agent can analyze and prevent malicious macros embedded in Microsoft
Office files (Word, Excel) from running on Windows endpoints.
Action Mode (Block, Report, or Disabled): When the Cortex XDR agent detects attempts to run malware, it performs the configured action.
Action when file is unknown to WildFire (Allow, Run Local Analysis, or Block): Select the action to take when a file is not recognized by WildFire. When local analysis is enabled, the Cortex XDR agent uses embedded machine learning to determine the likelihood that an unknown file is malware, and issues a local verdict for the file. If you block unknown files, but do not run local analysis, unknown files remain blocked until the Cortex XDR agent receives an official WildFire verdict.
Action when WildFire verdict is Benign Low Confidence (Allow, Run Local Analysis, or Block): Select the action to take when a file with a Benign Low Confidence verdict from WildFire tries to run on the endpoint. When local analysis is enabled, the Cortex XDR agent uses embedded machine learning to determine the likelihood that an unknown file is malware, and issues a local verdict for the file. If you block this file but do not run a local analysis, the file remains blocked until the Cortex XDR agent receives a high-confidence WildFire verdict. For optimal user experience, we recommend that you set the action mode to either Allow or Run Local Analysis.
Upload unknown files to WildFire (Enabled or Disabled): When enabled, the Cortex XDR agent sends unknown files to Cortex XDR, and Cortex XDR sends the files to WildFire for analysis. For macro analysis, the Cortex XDR agent sends the Microsoft Office file containing the macro. The file types that the Cortex XDR agent analyzes depend on the platform type. WildFire accepts files up to 100 MB in size.
Examine Office files from network drives (Enabled or Disabled): You can enable the Cortex XDR agent to examine Microsoft Office files on network drives when they contain a macro that attempts to run.
4. Configure PowerShell Script Files to analyze and prevent malicious PowerShell script files from running on Windows-based endpoints.
Action Mode (Block, Report, or Disabled): When the Cortex XDR agent detects attempts to run PowerShell script files, it performs the configured action.
Quarantine Malicious Script Files (Disabled (Default), Quarantine WildFire malware verdict, or Quarantine WildFire and Local Analysis malware verdict): By default, the Cortex XDR agent blocks malware from running, but does not quarantine the file. You can enable one of the options to quarantine files, depending on the verdict issuer. The Quarantine Malicious Script Files feature is not available for malware identified on network drives.
Action when file is unknown to WildFire (Allow, Run Local Analysis, or Block): Allow: unknown files are not blocked and local verdicts are not issued for them. Run Local Analysis: the Cortex XDR agent uses embedded machine learning to determine the likelihood that an unknown file is malware, and issues a local verdict for the file. Block: block unknown files but do not run local analysis; in this case, unknown files remain blocked until the Cortex XDR agent receives an official WildFire verdict.
Upload unknown files to WildFire (Enabled or Disabled): When enabled, the Cortex XDR agent sends unknown files to Cortex XDR, and Cortex XDR sends the files to WildFire for analysis. The file types that the Cortex XDR agent analyzes depend on the platform type. WildFire accepts files up to 100 MB in size.
5. Configure On-write File Protection to monitor and take action on malicious files during the on-write process.
Action Mode (Enabled or Disabled): When enabled, the Cortex XDR agent monitors for malicious files during the on-write process and, if it finds any, sends alerts and quarantines the files.
6. Configure Endpoint Scanning to scan endpoints and attached removable drives for dormant, inactive malware.
End-User Initiated Local Enabled When enabled, the endpoint user can perform a local scan on the
Scan endpoint.
Disabled
Periodic Scan Enabled We recommend that you disable scheduled scanning. VDI machine
scans are based on the golden image and additional files will be
Disabled
examined upon execution.
When periodic scanning is enabled in your profile, the Cortex XDR agent
initiates an initial scan when it is first installed on the endpoint,
regardless of the periodic scanning scheduling time.
7. Configure the Global Behavioral Threat Protection Rules. Use these rules to protect endpoints from malicious causality chains.
Action Mode Block The Cortex XDR agent protects against malicious causality chains, using
behavioral threat protection rules. When the action mode is set to Block,
Report the Cortex XDR agent terminates all processes and threads in the event
Disabled chain up to the causality group owner (CGO).
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes and
the artifacts, such as files, related to the CGO.
Disabled
When disabled, the Cortex XDR agent does not quarantine the CGO of
an event chain, nor any scripts or files called by the CGO.
Action Mode for Vulnerable Drivers Protection (Block / Report / Disabled): Behavioral threat protection rules can also detect attempts to load vulnerable drivers which can be used to bypass the Cortex XDR agent. As with other rules, Palo Alto Networks threat researchers can deliver changes to vulnerable driver rules with content updates.
Advanced API Monitoring Enabled When enabled, the Cortex XDR agent adds additional hooks in user
mode processes for increased coverage of anti-exploit and anti-malware
Disabled modules.
8. Configure Credential Gathering Protection to protect endpoints from processes trying to access or steal passwords and other credentials.
Action Mode Block The Cortex XDR agent protects against all processes and threads in the
event chain up to the credential gathering process or file.
Report
When this module is disabled, the Cortex XDR agent does not analyze
Disabled
the event chain and does not block credential gathering.
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the process or file
related to the credential gathering event chain.
Disabled
9. Configure Anti Webshell Protection to protect endpoint processes from dropping malicious web shells.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a process that
attempts to drop malicious web shells, it performs the configured action.
Report
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes or files
that are related to the web shell drop event chain, and any scripts or
Disabled files called by the web shell dropping process.
10. Configure Financial Malware Threat Protection to protect against techniques specific to financial and banking malware.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a process that
attempts to access or steal financial or banking information, the Cortex
Report XDR agent performs the configured action.
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes or files
related to the financial information gathering event chain, and scripts or
Disabled
files called by the financial information gathering process.
Crypto Wallet Protection Enabled When enabled, provides protection for cryptocurrency wallets that are
stored on endpoints. Cryptocurrency wallets store private keys that are
Disabled used to access crypto assets.
11. Configure Cryptominers Protection to protect against attempts to locate or steal cryptocurrencies.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a cryptomining
process or file, the Cortex XDR agent performs the configured action.
Report
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the process or file
detected during a cryptocurrency gathering attempt.
Disabled
12. Configure In-process shellcode protection to protect against in-process shellcode attack threats.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a process that
attempts to run in-process shellcodes to load malicious code, the Cortex
Report
XDR agent performs the configured action.
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the in-process
shellcode processes or files related to a causality chain.
Disabled
Process Injection 32 Bit Enabled When enabled, the Cortex XDR agent quarantines 32 bit in-process
shellcode processes or files related to a causality chain.
Disabled
Process injection 32 bit is set to Enabled by default for all new tenants
created after 25 June 2023. For tenants created before this date, the
default was set to Disabled.
Shellcode AI Protection (Enabled / Disabled): When enabled, Precision AI-based detection rules use machine learning to detect and prevent in-memory shellcode attacks.
13. Configure Malicious Device Prevention to protect against the connection of potentially malicious devices to endpoints.
Action Mode Block When the Cortex XDR agent detects the connection of potentially
malicious external device to an endpoint, the Cortex XDR agent
Report
performs the configured action.
Disabled
14. Configure UAC Bypass Prevention to protect against the User Account Control (UAC) bypass mechanism that is associated with privilege elevation attempts.
Action Mode Block When the Cortex XDR agent detects a UAC bypass
mechanism, the Cortex XDR agent performs the configured action. The
Report
Block option blocks all processes and threads in the event chain up to
Disabled the UAC bypass mechanism.
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the UAC bypass
processes or files related to the chain, and any scripts or files released
Disabled to the UAC bypass mechanism.
15. Configure XDR Agent Tampering Protection to protect the Cortex XDR agent from tampering attempts.
Action Mode Block When the Cortex XDR agent detects a tampering attempt, including
modification and/or termination of the Cortex XDR agent, it
Report
performs the configured action.
Disabled If you choose the Block option, you must also enable XDR Agent
Tampering Protection in the Agent Settings profile, and ensure that both
profiles are assigned to the same endpoints.
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes or files
that are related to the tampering attempt.
Disabled
Malicious Safe Mode Rebooting Protection (Block / Report / Disabled): Define the action to take when the Cortex XDR agent detects suspicious safe mode reboot attempts made by other apps.
16. Configure IIS Protection to protect against Internet Information Server (IIS) attacks.
Action Mode Block When the Cortex XDR agent detects a threat that targets an Internet
Information Server (IIS), the Cortex XDR agent performs the configured
Report action.
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes or files
that are related to the IIS attack.
Disabled
17. Configure UEFI Protection, to protect the endpoint from Unified Extensible Firmware Interface (UEFI) manipulation attempts.
Action Mode Block When the Cortex XDR agent detects UEFI manipulation attempts, it
performs the configured action. When Block is selected, the Cortex XDR
Report agent blocks all processes and threads in the event chain, up to the
UEFI threat.
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes or files
that are related to the UEFI threat.
Disabled
18. Configure Ransomware Protection to protect against encryption-based activity associated with ransomware attacks.
Action Mode Block When the Cortex XDR agent detects ransomware activity locally on the
endpoint or in pre-defined network folders, the Cortex XDR agent
Report
performs the configured action.
Disabled
Quarantine Malicious Enabled When enabled, the Cortex XDR agent quarantines the processes that
Process are related to the ransomware activity.
Disabled
The Quarantine Malicious Process option is only available if Action
Mode is set to Block.
Protection Mode Normal By default, Protection Mode is set to Normal, where the decoy files on
the endpoint are present, but do not interfere with benign applications
Aggressive and end user activity on the endpoint. If you suspect your network has
been infected with ransomware, and you need to provide better
coverage, you can apply the Aggressive protection mode. Aggressive
mode exposes more applications in your environment to the Cortex XDR
agent decoy files. However, it also increases the likelihood that benign
software is exposed to decoy files, raising false ransomware alerts, and
impairing user experience.
19. Configure Malicious Child Process Protection to prevent script-based attacks that are used to deliver malware, by blocking targeted processes that are commonly used to bypass traditional security methods.
Action Mode Block When the Cortex XDR agent detects known suspicious parent-child
relationships that are used to bypass security, the Cortex XDR agent
Report
performs the configured action. When Block is selected, known
Disabled suspicious child processes are blocked from starting.
20. To prevent attacks that extract passwords from memory using the Mimikatz tool, set Password Theft Protection to Enabled.
21. Configure Respond to Malicious Causality Chains options, which define the automatic response actions taken by the Cortex XDR agent when it identifies
malicious causality chains.
Terminate Connection and Block IP Address of Remote Causality Group Owner (Enabled / Disabled): When the Cortex XDR agent identifies a remote network connection that attempts to perform malicious activity—such as encrypting endpoint files—the agent can automatically block the IP address to close all existing communication, and to block new connections from this IP address to the endpoint. When Cortex XDR blocks an IP address per endpoint, that address remains blocked throughout all agent profiles and policies, including any host-firewall policy rules. You can view the list of all blocked IP addresses per endpoint from the Action Center, as well as unblock them to re-enable communication as appropriate.
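The per-endpoint nature of this block list can be illustrated with a small sketch. The following Python snippet is illustrative only; the class and method names are hypothetical and do not represent Cortex XDR internals. It shows an IP address blocked for one endpoint staying blocked regardless of which profile or policy applies, while other endpoints are unaffected, and being cleared again by an unblock (the Action Center operation described above).

```python
# Illustrative sketch of a per-endpoint IP block list; names are hypothetical.
class EndpointIpBlockList:
    def __init__(self):
        self.blocked = {}   # endpoint ID -> set of blocked IP addresses

    def block(self, endpoint_id, ip_address):
        self.blocked.setdefault(endpoint_id, set()).add(ip_address)

    def unblock(self, endpoint_id, ip_address):
        self.blocked.get(endpoint_id, set()).discard(ip_address)

    def is_blocked(self, endpoint_id, ip_address):
        # The block applies to the endpoint as a whole, regardless of which
        # profile or policy rule is currently in effect.
        return ip_address in self.blocked.get(endpoint_id, set())

blocks = EndpointIpBlockList()
blocks.block("endpoint-01", "203.0.113.50")
print(blocks.is_blocked("endpoint-01", "203.0.113.50"))  # True
print(blocks.is_blocked("endpoint-02", "203.0.113.50"))  # False: blocks are per endpoint
blocks.unblock("endpoint-01", "203.0.113.50")
print(blocks.is_blocked("endpoint-01", "203.0.113.50"))  # False after unblocking
```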
22. Configure the Network Packet Inspection Engine to analyze network packet data for malicious behavior.
Action Mode (Terminate session / Report / Disabled): By analyzing network packet data, the Cortex XDR agent can detect malicious behavior at the network level, and provide protection at the growing corporate network boundaries. The engine leverages both Palo Alto Networks NGFW content rules and new Cortex XDR content rules created by the Cortex XDR Research Team. The Cortex XDR content rules are updated through the security content. This feature focuses on detecting outbound C2 activity.
23. Configure Dynamic Kernel Protection to protect the endpoint from kernel-level threats such as bootkits, rootkits, and susceptible drivers.
Action Mode Block When set to Block, this protection module loads during the boot process
to protect the endpoint against malicious processes running at boot
Report time.
Disabled
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
macOS
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile,
or to import a profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
2. Configure Endpoint Scanning to scan endpoints and attached removable drives for dormant, inactive malware.
Periodic Scan Enabled We recommend that you disable scheduled scanning. VDI machine
scans are based on the golden image and additional files will be
Disabled
examined upon execution.
When periodic scanning is enabled in your profile, the Cortex XDR agent
initiates an initial scan when it is first installed on the endpoint,
regardless of the periodic scanning scheduling time.
3. Configure the Global Behavioral Threat Protection Rules. These rules can be used to protect endpoints from malicious causality chains.
Action Mode Block The Cortex XDR agent protects against malicious causality chains, using
behavioral threat protection rules. When the action mode is set to Block,
Report the Cortex XDR agent terminates all processes and threads in the event
chain up to the causality group owner (CGO).
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes and
the artifacts, such as files, related to the CGO.
Disabled
When disabled, the Cortex XDR agent does not quarantine the CGO of
an event chain, nor any scripts or files called by the CGO.
4. Configure Credential Gathering Protection to protect endpoints from processes trying to access or steal passwords and other credentials.
Action Mode Block The Cortex XDR agent protects against all processes and threads in the
event chain up to the credential gathering process or file.
Report
When this module is disabled, the Cortex XDR agent does not analyze
Disabled the event chain and does not block credential gathering.
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the process or file
related to the credential gathering event chain.
Disabled
5. Configure Anti Webshell Protection to protect endpoint processes from dropping malicious web shells.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a process that
attempts to drop malicious web shells, it performs the configured action.
Report
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes or files
that are related to the web shell drop event chain, and any scripts or
Disabled files called by the web shell dropping process.
6. Configure Financial Malware Threat Protection to protect against techniques specific to financial and banking malware.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a process that
attempts to access or steal financial or banking information, the Cortex
Report
XDR agent performs the configured action.
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes or files
related to the financial information gathering event chain, and scripts or
Disabled files called by the financial information gathering process.
Crypto Wallet Protection Enabled When enabled, provides protection for cryptocurrency wallets that are
stored on endpoints. Cryptocurrency wallets store private keys that are
Disabled
used to access crypto assets.
7. Configure Cryptominers Protection to protect against attempts to locate or steal cryptocurrencies.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a cryptomining
process or file, the Cortex XDR agent performs the configured action.
Report
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the process or file
detected during a cryptocurrency gathering attempt.
Disabled
8. Configure XDR Agent Tampering Protection to protect the Cortex XDR agent from tampering attempts.
Action Mode Block When the Cortex XDR agent detects a tampering attempt, including
modification and/or termination of the Cortex XDR agent, it
Report performs the configured action.
Disabled If you choose the Block option, you must also enable XDR Agent
Tampering Protection in the Agent Settings profile, and ensure that both
profiles are assigned to the same endpoints.
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes or files
that are related to the tampering attempt.
Disabled
9. Configure Ransomware Protection to protect against encryption-based activity associated with ransomware attacks.
Action Mode Block When the Cortex XDR agent detects ransomware activity locally on the
endpoint or in pre-defined network folders, the Cortex XDR agent
Report
performs the configured action.
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the files that are
related to the ransomware activity.
Disabled
10. Configure Malicious Child Process Protection to prevent script-based attacks that are used to deliver malware, by blocking targeted processes that are commonly used to bypass traditional security methods.
Action Mode Block When the Cortex XDR agent detects known suspicious parent-child
relationships that are used to bypass security, the Cortex XDR agent
Report
performs the configured action. When Block is selected, known
Disabled suspicious child processes are blocked from starting.
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the files that are
related to a malicious child process.
Disabled
11. Configure Mach-O Files Examination to check Mach-O files for malware.
Action Mode Block When the Cortex XDR agent detects attempts to run malware, it
performs the configured action.
Report
Disabled
Quarantine malicious Mach-O files (Disabled / Quarantine WildFire malware verdict / Quarantine WildFire and Local Analysis malware verdict): By default, the Cortex XDR agent blocks malware from running, but does not quarantine the file. You can enable one of the options to quarantine files, depending on the verdict issuer. The Quarantine Malicious Mach-O Files feature is not available for malware identified on network drives.
Action on unknown Mach-O files (Allow / Run Local Analysis / Block):
Allow: Unknown files are not blocked and local verdicts are not issued to WildFire for them.
Run Local Analysis: The Cortex XDR agent uses embedded machine learning to determine the likelihood that an unknown file is malware, and issues a local verdict for the file.
Block: Block unknown files but do not run local analysis. In this case, unknown files remain blocked until the Cortex XDR agent receives an official WildFire verdict.
Action when WildFire verdict is Benign Low Confidence (Allow / Run Local Analysis / Block): Select the action to take when a file with a Benign Low Confidence verdict from WildFire tries to run on the endpoint. When local analysis is enabled, the Cortex XDR agent uses embedded machine learning to determine the likelihood that an unknown file is malware, and issues a local verdict for the file. If you block this file but do not run a local analysis, the file remains blocked until the Cortex XDR agent receives a high-confidence WildFire verdict. For optimal user experience, we recommend that you set the action mode to either Allow or Run Local Analysis.
Upload Mach-O files for Enabled When enabled, the Cortex XDR agent sends unknown files to Cortex
cloud analysis XDR, and Cortex XDR sends the files to WildFire for analysis.
Disabled
The file types that the Cortex XDR agent analyzes depend on the
platform type. WildFire accepts files up to 100 MB in size.
Treat Grayware as Enabled When enabled, Cortex XDR treats all grayware with the same Action
Malware Mode as configured for malware.
Disabled
When disabled, grayware is considered benign, and is not blocked.
12. Configure Local File Threat Examination to enable detection of malicious files on the endpoint.
This module is supported by Cortex XDR agent 8.1.0 and later releases.
Action Mode Enabled When enabled, the Local Threat-Evaluation Engine (LTEE) analyzes the
endpoint for PHP files arriving from a web server and alerts about any
Disabled malicious PHP scripts.
Terminate Malicious Enabled When enabled, the Cortex XDR agent terminates malicious PHP files on
Processes the endpoint.
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines malicious files on the
endpoint and does not quarantine updated files.
Disabled
13. Configure DMG File Examination to check DMG files for malware.
Action Mode Block When the Cortex XDR agent detects attempts to run malware in DMG
files, it performs the configured action.
Report
Disabled
Quarantine Malicious Enabled When enabled, the Cortex XDR agent quarantines malicious executable
Executables DMG files.
Disabled
The Quarantine Malicious Executables feature is not available for
malware identified on network drives.
Upload unknown files to Enabled When enabled, the Cortex XDR agent sends unknown files to Cortex
WildFire XDR, and Cortex XDR sends the files to WildFire for analysis.
Disabled
The file types that the Cortex XDR agent analyzes depend on the
platform type. WildFire accepts files up to 100 MB in size.
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
Linux
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile,
or to import a profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
2. Configure Endpoint Scanning to scan endpoints and attached removable drives for dormant, inactive malware.
Endpoint scanning is enabled by default on the following: /etc, /tmp, /home, /usr, /bin, /sbin, /lib, /var, /opt, /dev, /root, /boot.
Periodic Scan Enabled We recommend that you disable scheduled scanning. VDI machine
scans are based on the golden image and additional files will be
Disabled
examined upon execution.
When periodic scanning is enabled in your profile, the Cortex XDR agent
initiates an initial scan when it is first installed on the endpoint,
regardless of the periodic scanning scheduling time.
Scan Timeout Number of hours If a scan exceeds the number of hours configured here, the Cortex XDR
agent stops the scan.
3. Configure the Global Behavioral Threat Protection Rules. These rules can be used to protect endpoints from malicious causality chains.
Action Mode Block The Cortex XDR agent protects against malicious causality chains, using
behavioral threat protection rules. When the action mode is set to Block,
Report
the Cortex XDR agent terminates all processes and threads in the event
Disabled chain up to the causality group owner (CGO).
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes and
the artifacts, such as files, related to the CGO.
Disabled
When disabled, the Cortex XDR agent does not quarantine the CGO of
an event chain, nor any scripts or files called by the CGO.
4. Configure Credential Gathering Protection to protect endpoints from processes trying to access or steal passwords and other credentials.
Action Mode Block The Cortex XDR agent protects against all processes and threads in the
event chain up to the credential gathering process or file.
Report
When this module is disabled, the Cortex XDR agent does not analyze
Disabled the event chain and does not block credential gathering.
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the process or file
related to the credential gathering event chain.
Disabled
5. Configure Anti Webshell Protection to protect endpoint processes from dropping malicious web shells.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a process that
attempts to drop malicious web shells, it performs the configured action.
Report
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the processes or files
that are related to the web shell drop event chain, and any scripts or
Disabled files called by the web shell dropping process.
6. Configure Financial Malware Threat Protection to protect against techniques specific to financial and banking malware.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a process that
attempts to access or steal financial or banking information, the Cortex
Report XDR agent performs the configured action.
Disabled
Quarantine Malicious Files (Enabled / Disabled): When enabled, the Cortex XDR agent quarantines the processes or files related to the financial information gathering event chain, and scripts or files called by the financial information gathering process.
7. Configure Cryptominers Protection to protect against attempts to locate or steal cryptocurrencies.
Action Mode Block In a causality chain, when the Cortex XDR agent detects a cryptomining
process or file, the Cortex XDR agent performs the configured action.
Report
Disabled
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines the process or file
detected during a cryptocurrency gathering attempt.
Disabled
8. Configure Container Escape Protection to protect against container escaping attempts.
Action Mode Block When the Cortex XDR agent detects container escaping attempts, it
performs the configured action.
Report
Disabled
9. Configure ELF File Examination to examine ELF files on endpoints and perform additional actions on them.
Action Mode Block When the Cortex XDR agent detects attempts to run malware in ELF
files, it performs the configured action.
Report
Disabled
Quarantine malicious ELF files (Disabled / Quarantine WildFire malware verdict / Quarantine WildFire and Local Analysis malware verdict): By default, the Cortex XDR agent blocks malware from running, but does not quarantine the file. You can enable one of the options to quarantine files, depending on the verdict issuer. The Quarantine Malicious ELF Files feature is not available for malware identified on network drives.
Action on unknown ELF files (Allow / Run Local Analysis / Block):
Allow: Unknown files are not blocked and local verdicts are not issued to WildFire for them.
Run Local Analysis: The Cortex XDR agent uses embedded machine learning to determine the likelihood that an unknown file is malware, and issues a local verdict for the file.
Block: Block unknown files but do not run local analysis. In this case, unknown files remain blocked until the Cortex XDR agent receives an official WildFire verdict.
Upload ELF files for cloud Enabled When enabled, the Cortex XDR agent sends unknown files to Cortex
analysis XDR, and Cortex XDR sends the files to WildFire for analysis.
Disabled
The file types that the Cortex XDR agent analyzes depend on the
platform type. WildFire accepts files up to 100 MB in size.
Treat Grayware as Enabled When enabled, Cortex XDR treats all grayware with the same Action
Malware Mode as configured for malware.
Disabled
When disabled, grayware is considered benign, and is not blocked.
10. Configure Local File Threat Examination to enable detection of malicious files on the endpoint.
This module is supported by Cortex XDR agent 8.1.0 and later releases.
Action Mode Enabled When enabled, the Local Threat-Evaluation Engine (LTEE) analyzes the
endpoint for PHP files arriving from a web server and alerts about any
Disabled malicious PHP scripts.
Quarantine Malicious Files Enabled When enabled, the Cortex XDR agent quarantines malicious files on the
endpoint and does not quarantine updated files.
Disabled
11. Configure Reverse Shell Protection to prevent attempts to redirect standard input and output streams to network sockets.
Action Mode Block When the Cortex XDR agent detects attempts to redirect standard input
and output streams to network sockets, it performs the configured
Report
action.
Disabled
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
Android
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile,
or to import a profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example, you might include an incident identification number or a link to a help desk ticket.
2. Configure APK Files Examination, to analyze and prevent malicious APK files from running on endpoints.
Action Mode Block When the Cortex XDR agent detects attempts to run malicious APK files,
it performs the configured action.
Report
Disabled
Action on unknown APK Allow Allow: Unknown files are not blocked and local verdicts are not issued
files to WildFire for them.
Run Local Analysis
Run Local Analysis: The Cortex XDR agent uses embedded machine
Block learning to determine the likelihood that an unknown file is malware, and
issues a local verdict for the file.
Block: Block unknown files but do not run local analysis. In this case,
unknown files remain blocked until the Cortex XDR agent receives an
official WildFire verdict.
Upload APK files for cloud Enabled When enabled, the Cortex XDR agent sends unknown files to Cortex
analysis XDR, and Cortex XDR sends the files to WildFire for analysis.
Disabled
The file types that the Cortex XDR agent analyzes depend on the
platform type. WildFire accepts files up to 100 MB in size.
Treat Grayware as Enabled When enabled, Cortex XDR treats all grayware with the same Action
Malware Mode as configured for malware.
Disabled
When disabled, grayware is considered benign, and is not blocked.
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
iOS
a. Select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile, or to import a
profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
2. Configure URL filtering to analyze and block or report malicious URLs, and to block or allow custom URLs.
Blocking functionality is different for each security module. For SMS/MMS, the Cortex XDR agent moves detected messages containing such URLs from unknown senders to the Junk folder.
Action Mode (Block / Report / Disabled): When the Cortex XDR agent detects malicious URLs, the Cortex XDR agent performs the configured action. To add URLs to the Block List, click +Add and enter the URL. Press Enter to add more URLs. To add URLs to the Allow List, define a list on the Legacy Agent Exceptions page.
Spam Report Enabled Configure reporting of spam calls and messages to Cortex analysts.
Disabled
4. Configure Call and Messages Blocking for incoming calls and messages from known spam numbers.
Action Mode Block When the Cortex XDR agent detects incoming calls or messages from
known spam numbers, the Cortex XDR agent performs the configured
Report action.
Disabled
To add numbers to the Block List, click +Add and enter the phone
number. Press Enter to add more numbers.
Ensure that the same numbers are not added multiple times with
different leading zeros.
5. Configure Safari Browser Security Module. This security module can provide proactive gating of suspicious sites accessed using Safari, and provides
informative site analysis to the device user. This option is recommended for iOS devices that do not belong to your organization and do not use the
Network Shield feature.
To fully enable the Safari browser security module on the device side, each iOS device user must enable the Safari Safeguard module on the device, and
grant it permission to work on all websites. If the iOS device user does not do this, the endpoint's operation status is reported as Partially Protected.
The Safari browser security module will only function when the URL filtering module (see earlier in this procedure) is set to Block.
Enforce use of Safari Enabled When set to Enabled, the Safari Safeguard security module displays
Security Module "Required" on the Modules screen of the app. Full protection for Safari
Disabled
will only be active after the iOS device user has also activated it on the
device. When this module is also activated on the device, alerts are
forwarded to the tenant.
When set to Disabled, and users decide to enable the module on their
devices, alerts are visible locally on the iOS device only, and are not
forwarded to the tenant.
Safari malicious JS Enabled When set to Enabled, the Cortex XDR agent blocks the entire page in
blocking Safari where malicious JS files are detected.
Disabled
6. Configure Network and EDR Security Module. This module lets you configure granular control and monitoring of network traffic on iOS-based supervised devices. The devices' profiles must also be configured for this on the MDM side, as explained in the Cortex XDR Agent iOS Guide.
Cortex XDR agent version 8.4 or higher is required for this feature.
Auto detected malicious Enabled When set to Enabled, the Cortex XDR agent automatically filters known
URL filtering malicious URLs.
Disabled
URL filtering Enabled When set to Enabled, the Cortex XDR agent filters URLs according to
the lists of allowed and blocked URLs configured in the URL Filtering
Disabled section above.
Predefined Blocked Apps List of apps A list of commonly known apps that your organization may be interested
in blocking on supervised devices is provided here. The Cortex XDR
agent will block use of the selected apps. You can select one or more
apps.
Blocked Bundle IDs: A Bundle ID is an app's unique identifier, in string format, that is used to identify the app in an app store. Communication will be blocked for any process with exactly the Bundle ID defined here, or for a Bundle ID that has the defined string as a suffix. For example, defining com.apple.calculator also covers H3DT34.com.apple.calculator and widget.com.apple.calculator.
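The suffix rule is easier to see with a small example. The following Python sketch is illustrative only (the function and variable names are hypothetical and not part of the product); it shows how the three sample Bundle IDs above all match a single blocked entry, com.apple.calculator.

```python
# Illustrative only: a minimal sketch of the documented matching rule,
# not the actual Cortex XDR agent implementation.
def is_blocked(bundle_id: str, blocked_entries: list) -> bool:
    """Block on an exact match, or when a blocked entry is a suffix of the Bundle ID."""
    return any(bundle_id == entry or bundle_id.endswith(entry) for entry in blocked_entries)

blocked = ["com.apple.calculator"]
for candidate in ("com.apple.calculator",
                  "H3DT34.com.apple.calculator",
                  "widget.com.apple.calculator",
                  "com.example.notes"):
    print(candidate, is_blocked(candidate, blocked))
# The first three are blocked (exact or suffix match); the last one is not.
```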
Block List of Remote IPV4/IPV6 IP Address: The Cortex XDR agent will block the IP addresses that you add to this field. Both IPv4 and IPv6 addresses are supported.
Digest alerts (Enabled / Disabled): Digest alerts contain a summary of blocked network activity over a prolonged time period. When set to Enabled, the Cortex XDR agent sends digest alerts to the tenant.
Digest alerts max frequency (1 to 7 days): When Digest alerts is enabled, you can limit the digest alert to no more than one per <selected number of days>.
Max alerts per app (Hours / Minutes): Limit alert notifications by the Cortex XDR agent app to one alert for each app per <selected period of time>.
Max user notifications (Hours): Limit alert notifications by the Cortex XDR agent app to one user notification per <selected number of hours>.
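The notification limits above are essentially per-app rate limits. The following Python sketch is illustrative only; the class name and period handling are assumptions, not the agent's implementation. It shows the "one alert per app per period" behavior described for Max alerts per app.

```python
import time

# Illustrative sketch of a per-app alert rate limiter; names are hypothetical.
class PerAppAlertLimiter:
    def __init__(self, period_seconds):
        self.period = period_seconds
        self.last_alert = {}        # app identifier -> time of last shown alert

    def should_notify(self, app_id, now=None):
        now = time.time() if now is None else now
        last = self.last_alert.get(app_id)
        if last is None or now - last >= self.period:
            self.last_alert[app_id] = now
            return True
        return False

limiter = PerAppAlertLimiter(period_seconds=3600)    # one notification per app per hour
print(limiter.should_notify("com.example.app", now=0))      # True: first alert is shown
print(limiter.should_notify("com.example.app", now=600))    # False: still inside the hour
print(limiter.should_notify("com.example.app", now=7200))   # True: a new period has started
```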
Abstract
Exploit prevention profiles control the action that the Cortex XDR agent takes when attempts to exploit software vulnerabilities or flaws occur.
Exploit prevention profiles block attempts to exploit system flaws in browsers, and in the operating system. For example, exploit prevention profiles help protect
against exploit kits, illegal code execution, and other attempts to exploit process and system vulnerabilities.
You can configure the action that the Cortex XDR agent takes when attempts to exploit software vulnerabilities or flaws occur. To protect against specific exploit
techniques, you can customize exploit protection capabilities in each exploit prevention profile. Default settings are shown in parentheses. To fine-tune your
exploit prevention policy, you can override the configuration of each capability to block the malicious exploit, allow but report it, or disable the module.
To view which processes are protected by each capability, see Processes Protected by Exploit Security Policy.
For each setting that you override, clear the corresponding option to Use Default, and select the setting of your choice.
In this profile, the Report options configure the endpoints to report the corresponding exploit attempts to Cortex XDR, without blocking them. The Disabled
options configure the endpoints to neither analyze nor report the corresponding malware or behavior.
The tasks below are organized according to the operating systems used by your organization's endpoints.
Windows
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile,
or to import a profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
2. Configure Browser Exploits Protection, to protect endpoints from malicious or compromised websites.
Action Mode Block When the Cortex XDR agent detects attempts to exploit browser
processes for malicious purposes, it performs the configured action.
Report
Disabled
3. Configure Logical Exploits Protection to prevent execution of malicious code using common operating system mechanisms.
Action Mode Block When the Cortex XDR agent detects attempts to execute malicious code
using operating system mechanisms, it performs the configured action.
Report
Disabled
Block List (DLLs): The block list blocks the specified DLLs when they are run by a protected process, using the DLL Hijacking module. The DLL folder or file must include the complete path. To complete the path, you can use environment variables or the asterisk (*) as a wildcard to match any string of characters (for example, */windows32/).
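As a rough illustration of how such a wildcard entry might be evaluated, the following Python sketch uses shell-style globbing (fnmatch). The pattern, the normalization, and the case-insensitive comparison are assumptions for this example; the DLL Hijacking module's actual matching logic is not documented here.

```python
import fnmatch

# Hypothetical block-list entry; * matches any string of characters.
block_list = ["*/windows32/*"]

def is_dll_blocked(dll_path, patterns):
    # Normalize to forward slashes and lowercase so the comparison is
    # path-separator and case insensitive (an assumption for this sketch).
    normalized = dll_path.replace("\\", "/").lower()
    return any(fnmatch.fnmatch(normalized, pat.lower()) for pat in patterns)

print(is_dll_blocked(r"C:\temp\windows32\payload.dll", block_list))      # True
print(is_dll_blocked(r"C:\Windows\System32\kernel32.dll", block_list))   # False
```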
4. Configure Known Vulnerable Processes Protection to automatically protect endpoints from attacks that try to leverage common operating system
mechanisms for malicious purposes.
Action Mode Block Attackers can use existing mechanisms in the operating system to
execute malicious code. When you set this option to Block, in order to
Report block such code, you can also configure Java Deserialization Protection.
Disabled
Java Deserialization Enabled When enabled, the same action mode defined for the Known Vulnerable
Protection Process Protection is inherited here.
Disabled
5. Configure Operating System Exploit Protection to prevent attackers from using operating system mechanisms for malicious purposes.
Action Mode Block When the Cortex XDR agent detects attempts to use the operating
system's own mechanisms to perform an attack, the Cortex XDR agent
Report
performs the configured action.
Disabled
6. Configure Exploit Protection for Additional Processes to protect third-party processes running on endpoints.
Action Mode Block The Cortex XDR agent can protect third-party processes from
exploitation. To protect these processes, define them in the Processes
Report list below this field. If you select the Block option, we recommend that
you perform testing and validation to ensure that there are no
Disabled
compatibility issues with the third-party processes that you have defined.
Java Deserialization
ROP
SO Hijacking
Processes If you want to add exploit protection for one or more additional third-
party processes, add them here.
2. Enter the file name of the process that you want to block, and
press ENTER.
7. Configure Unpatched Vulnerabilities Protection to provide a temporary workaround for protecting unpatched endpoints from known vulnerabilities.
This step provides a temporary workaround for the following publicly known information-security vulnerabilities and exposures: CVE-2021-24074, CVE-
2021-24086 and CVE-2021-24094.
If you choose not to patch the endpoint, the Unpatched Vulnerabilities Protection capability allows the Cortex XDR agent to apply a workaround to
protect the endpoints from the known vulnerability. It takes the Cortex XDR agent up to 6 hours to enforce your configured policy on the endpoints.
If you have Windows endpoints in your network that are unpatched and exposed to a known vulnerability, we strongly recommend that you upgrade to
the latest Windows Update that has a fix for that vulnerability.
Modify IPv4 and IPv6 Settings (Do not modify system settings / Modify settings until the endpoint is patched / Revert system settings to your previous settings): To address known vulnerabilities CVE-2021-24074, CVE-2021-24086, and CVE-2021-24094, you can modify IPv4 and IPv6 settings as follows:
Do not modify system settings (default): Do not modify the IPv4 and IPv6 settings currently set on the endpoint, whether the current values are your original values or values that were modified as part of this workaround.
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
macOS
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile,
or to import a profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
d. Enter a unique Profile Name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30 characters. The
name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
2. Configure Browser Exploits Protection, to protect endpoints from malicious or compromised websites.
Action Mode Block When the Cortex XDR agent detects attempts to exploit browser
processes for malicious purposes, it performs the configured action.
Report
Disabled
3. Configure Logical Exploits Protection to prevent execution of malicious code using common operating system mechanisms.
Action Mode Block When the Cortex XDR agent detects attempts to execute malicious code
using operating system mechanisms, it performs the configured action.
Report
Disabled
4. Configure Known Vulnerable Processes Protection to automatically protect endpoints from attacks that try to leverage common operating system
mechanisms for malicious purposes.
Action Mode Block Attackers can use existing mechanisms in the operating system to
execute malicious code. When you set this option to Block, in order to
Report block such code, you can also configure Java Deserialization Protection.
Disabled
5. Configure Operating System Exploit Protection to prevent attackers from using operating system mechanisms for malicious purposes.
Action Mode Block When the Cortex XDR agent detects attempts to use the operating
system's own mechanisms to perform an attack, the Cortex XDR agent
Report performs the configured action.
Disabled
6. Configure Exploit Protection for Additional Processes to protect third-party processes running on endpoints.
Action Mode Block The Cortex XDR agent can protect third-party processes from
exploitation. To protect these processes, define them in the Processes
Report
list below this field. If you select the Block option, we recommend that
Disabled you perform testing and validation to ensure that there are no
compatibility issues with the third-party processes that you have defined.
Java Deserialization
ROP
SO Hijacking
Processes If you want to add exploit protection for one or more additional third-
party processes, add them here.
2. Enter the file name of the process that you want to block, and
press ENTER.
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
Linux
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile,
or to import a profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
d. Enter a unique Profile Name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30 characters. The
name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
2. Configure Known Vulnerable Processes Protection to automatically protect endpoints from attacks that try to leverage common operating system
mechanisms for malicious purposes.
Action Mode Block Attackers can use existing mechanisms in the operating system to
execute malicious code. When you set this option to Block, in order to
Report block such code, you can also configure Java Deserialization Protection.
Disabled
3. Configure Operating System Exploit Protection to prevent attackers from using operating system mechanisms for malicious purposes.
Action Mode Block When the Cortex XDR agent detects attempts to use the operating
system's own mechanisms to perform an attack, the Cortex XDR agent
Report performs the configured action.
Disabled
4. Configure Exploit Protection for Additional Processes to protect third-party processes running on endpoints.
Action Mode Block The Cortex XDR agent can protect third-party processes from
exploitation. To protect these processes, define them in the Processes
Report list below this field. If you select the Block option, we recommend that
you perform testing and validation to ensure that there are no
Disabled
compatibility issues with the third-party processes that you have defined.
Java Deserialization
ROP
SO Hijacking
Processes If you want to add exploit protection for one or more additional third-
party processes, add them here.
2. Enter the file name of the process that you want to block, and
press ENTER.
2. Right-click your new profile, and select Create a new policy rule using this profile.
Abstract
Use agent settings profiles to customize Cortex XDR agent settings for different platforms and groups of users.
The tasks below are organized according to the operating systems used by your organization's endpoints.
Windows
a. Select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile or import a profile
from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
b. Select the Windows platform, and Agent Settings as the profile type.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
2. For Disk Quota, configure the amount of disk space to allot for Cortex XDR agent logs. Specify a value in MB from 100 to 10,000 (default is 5,000).
By default, Cortex XDR uses the settings specified in the default agent settings profile and displays the default configuration in parentheses. When you
select a setting other than the default, you override the default configuration for the profile.
Tray Icon (Visible (default) / Hidden): Choose whether you want the Cortex XDR agent icon to be Visible or Hidden in the notification area (system tray).
XDR Agent Console Access (Enabled / Disabled): When enabled, allows access to the Cortex XDR agent console on the endpoint.
XDR Agent User Notifications (Enabled / Disabled): Enable this option to display notifications in the notification area on the endpoint. When you enable notifications, you can use the default notification messages that are displayed for each option, or provide custom text for each notification type. You can also customize a notification footer. Options include:
4. Customize Agent Security settings. By default, the Cortex XDR agent protects all agent components. However, you can configure protection with more
granularity for Cortex XDR agent services, processes, files, registry values and tampering protection.
In Traps 5.0.6 and later releases, when protection is enabled, access will be read-only. In earlier Traps releases, enabling protection disables all access
to services, processes, files, and registry values.
If you choose the Enable option, you must also enable XDR Agent Tampering Protection in the malware profile and set it to Block. Ensure that both
profiles are assigned to the same endpoints.
Service Protection Enabled Protects against stopping agent services. When this protection is
enabled, agent services won't accept operating system stop requests.
Disabled
Process Protection Enabled Protects against attempts to tamper with agent processes; injecting into
them, terminating them, reading, or writing into their virtual memory.
Disabled
File Protection Enabled Protects against attempts to tamper with agent files; deleting, replacing,
renaming, moving, or writing files/directories.
Disabled
Registry Protection Enabled Protects against attempts to tamper with agent registry settings and
agent policies, such as deleting, adding, and renaming registry keys or
Disabled
values which belong to the agent.
Pipe Protection Enabled Protects against attempts to tamper with the agent's pipe-based inter-
process communication (IPC) mechanism.
Disabled
Define and confirm an encrypted password that the user must specify to uninstall the Cortex XDR agent. The uninstall password, also known as the
supervisor password, is also used to protect against tampering attempts using Cytool commands. The password must contain:
8 to 32 characters
Lower-case letter
Upper-case letter
Number
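A minimal check of these password rules can be sketched as follows. This Python example is illustrative only and is not the agent's own validation; the function name is hypothetical.

```python
import re

# Illustrative check of the documented complexity rules: 8 to 32 characters,
# at least one lowercase letter, one uppercase letter, and one number.
def meets_uninstall_password_rules(password):
    return (
        8 <= len(password) <= 32
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
    )

print(meets_uninstall_password_rules("Example2024"))  # True
print(meets_uninstall_password_rules("short1A"))      # False: only 7 characters
```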
The Windows Security Center is a reporting tool that monitors the system health and security state of Windows endpoints on Windows 7 and later
releases.
When you enable Cortex XDR agent registration with the Windows Security Center, Windows automatically shuts down Microsoft Defender on Windows-
based workstation endpoints. If you still want to allow Microsoft Defender to run on a workstation endpoint where Cortex XDR is installed, you must use
the Disable option. However, Palo Alto Networks does not recommend running Windows Defender and the Cortex XDR agent on the same endpoint,
because this might cause performance and incompatibility issues with GlobalProtect and other applications.
On Windows-based servers, ensure that Windows Defender is disabled. This can be done using a Group Policy Object (GPO) or another group
management tool of your choice.
Windows Security Integration:
Enabled: The Cortex XDR agent registers with the Windows Security Center as an official Antivirus (AV) software product. As a result, Windows automatically shuts down Microsoft Defender on the endpoint, except for endpoints that are running Windows Server versions.
Enabled No Patches: (Traps 5.0 release only) Select this option if you want to register the agent with the Windows Security Center, but prevent Windows from automatically installing Meltdown/Spectre vulnerability patches on the endpoint.
Disabled: The Cortex XDR agent does not register with the Windows Action Center. As a result, Windows Action Center might indicate that virus protection is off, depending on other security products that are installed on the endpoint.
When the Cortex XDR agent raises alerts on process-related activity on the endpoint, the agent collects the contents of memory and other data about the
event, in what is known as an alert data dump file. You can configure the Cortex XDR agent to automatically upload alert data dump files to Cortex XDR.
Alert Data Dump File Size (Small / Medium / Full): The Full option creates the largest and most complete set of information.
Automatically Upload Alert Data Dump File (Enabled / Disabled): During event investigation, if automatic upload was disabled, you can still manually retrieve this data.
8. Enable XDR Pro Endpoint Capabilities, and then configure the capabilities required by your organization. The Cortex XDR Pro features are hidden until
you enable this option.
Requires a Cortex XDR Pro per Endpoint license. When you enable this feature, a Cortex XDR Pro per Endpoint license is consumed.
Endpoint Information Collection (Enabled / Disabled): When enabled, the Cortex XDR agent collects host inventory information such as users, groups, services, drivers, hardware, and network shares, as well as information about applications installed on the endpoint, including CVE and installed KBs for Vulnerability Assessment.
With this option you can also select the File Search and Destroy Monitored File Types, where Cortex XDR monitors all the files on the endpoint, or only common file types. If you choose Common file types, Cortex XDR monitors the following file types: bin, msi, doc, docx, docm, rtf, xls, xlsx, xlsm, pdf, ppt, pptx, pptm, ppsm, pps, ppsx, mpp, mppx, vsd, vsdx, and wsf. A hash will also be computed for these file types: zip, pe, and ole. Additionally, you can exclude files that exist under a specific local path on the endpoint from inclusion in the files database.
When enabled, the Cortex XDR agent collects detailed information about what happened on your endpoint, to create a forensics database. Define collection and collection time intervals for the following entity types:
Process Execution
File Access
Persistence
Command History
Network
Remote Access
Search Collections
Disabled: To enable access to these options, scroll down to Network Location Configuration, and set Action Mode to Enabled. When enabled, the Cortex XDR agent scans your network using Ping or Nmap to provide updated identifiers of your unmanaged network assets. Ping scans return the IP address, MAC address, Hostname, and Platform, whereas Nmap will scan the most common ports for the IP address, Hostname, Platform, and OS version. Ping is a lighter scan that generates ICMP requests to peers and does not use external tools. Nmap will make more noise on the network, but the results can be better, and it also supports operating system detection. Depending on the type of scan you defined, the agent Ping scan takes 30 minutes, and Nmap takes 60 minutes. Following each scan, Cortex XDR aggregates the IP addresses that were collected, and displays the results in the Asset Management table.
9. Configure XDR Cloud for hosts running on cloud platforms. By default (auto-detect mode), the agent detects whether an endpoint is a cloud-based
(container) installation or a permanent installation, and uses license allocation accordingly.
XDR Cloud Auto-detect If you set this to Enabled in the profile, any agent using this profile will be
treated as if it is a cloud-based agent for licensing purposes.
Enabled
This feature requires a Cortex XDR Cloud per Host license. This license is required for both cloud-based and on-prem use of K8 nodes.
10. Configure Response Actions for specific applications or processes, using an Allow list.
If you need to isolate an endpoint, but want to allow access for a specific application or process, add it to the Network Isolation Allow List. Keep the
following considerations in mind:
When you add a specific application to your allow list from network isolation, the Cortex XDR agent continues to block some internal system
processes. This is because some applications, for example, ping.exe, can use other processes to facilitate network communication. As a result, if
the Cortex XDR agent continues to block an application you included in your allow list, you may need to perform additional network monitoring to
determine the process that facilitates the communication, and then add that process to the allow list.
For VDI sessions, use of the network isolation response action can disrupt communication with the VDI host management system, thereby stopping
access to the VDI session. Therefore, before using the response action, you must add the VDI processes and corresponding IP addresses to your
allow list.
b. Specify the Process Path that you want to allow, and the IPv4 or IPv6 address of the endpoint. Use the * wildcard on either side to match any
process or IP address. For example, specify * as the process path and an IP address to allow any process to run on the isolated endpoint with
that IP address. Conversely, specify * as the IP address and a specific process path to allow the process to run on any isolated endpoint that
receives this profile.
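To illustrate the wildcard behavior described above, here is a minimal sketch assuming hypothetical allow-list entries; it is an illustration only, not the agent's matching code.

```python
from fnmatch import fnmatch

# Hypothetical allow-list entries: (process path pattern, IP pattern).
# "*" on either side matches any process or any endpoint IP address.
ALLOW_LIST = [
    (r"C:\Windows\System32\ping.exe", "10.1.2.3"),  # one process on one endpoint
    ("*", "10.1.2.3"),                               # any process on this endpoint
    (r"C:\Tools\backup.exe", "*"),                   # this process on any isolated endpoint
]

def is_allowed(process_path: str, endpoint_ip: str) -> bool:
    """Return True if any allow-list entry matches both the process and the IP."""
    return any(
        fnmatch(process_path.lower(), proc.lower()) and fnmatch(endpoint_ip, ip)
        for proc, ip in ALLOW_LIST
    )

print(is_allowed(r"C:\Tools\backup.exe", "192.168.7.9"))        # True
print(is_allowed(r"C:\Windows\System32\calc.exe", "10.9.9.9"))  # False
```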
Shadowcopy Activation (Enabled / Disabled): When enabled, the Cortex XDR agent automatically turns on the system protection of the endpoint. This ensures that the data is backed up and may be recovered in case of a security breach or loss of data.
Disk Space Limitation (disk space in MB): Limits the amount of disk space, in MB, that can be used for endpoint data backup.
If you disable or delay automatic-content updates provided by Palo Alto Networks, it may affect the security level in your organization.
If you disable content updates for a newly installed agent, the agent retrieves the content for the first time from Cortex XDR, and then disables
content updates on the endpoint.
When you add a Cortex XDR agent to an endpoint group with a disabled content auto-upgrades policy, the policy is applied to the added agent as
well.
Content Auto-update (Enabled (default) / Disabled): By default, the Cortex XDR agent always retrieves the most updated content and deploys it on the endpoint, to ensure that it is always protected with the latest security measures. If you disable content updates, the agent stops retrieving them from the Cortex XDR tenant, and keeps working with the current content on the endpoint.
Content Staging (Enabled / Disabled (default)): Enables users to deploy agent staging content on selected test environments. Staging content is released before production content, allowing for early evaluation of the latest content update.
Content Rollout (Immediately / Delayed): The Cortex XDR agent can retrieve content updates immediately as they become available, or after a pre-configured delay period. When you delay content updates, the Cortex XDR agent retrieves the content according to the configured delay. For example, if you configure a delay period of two days, the agent will not use any content released in the last 48 hours.
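As a rough illustration of the delay logic, the sketch below keeps only content released at least the configured number of days ago. The version names and timestamps are made up; this is not the agent's implementation.

```python
from datetime import datetime, timedelta

def eligible_content(releases: dict, delay_days: int, now: datetime) -> list:
    """Return content versions old enough to be used under a delayed rollout."""
    cutoff = now - timedelta(days=delay_days)
    return [version for version, released in releases.items() if released <= cutoff]

now = datetime(2024, 6, 10, 12, 0)
releases = {
    "content-1050": datetime(2024, 6, 5),   # 5 days old -> eligible
    "content-1051": datetime(2024, 6, 9),   # 1 day old  -> held back
}
print(eligible_content(releases, delay_days=2, now=now))  # ['content-1050']
```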
13. Agent Auto-Upgrade is disabled by default. Before enabling Auto-Update for Cortex XDR agents, make sure to consult with all relevant stakeholders in
your organization.
Automatic upgrades are not supported with non-persistent VDI and temporary sessions.
Agent Auto-Upgrade: Disabled (default).
Automatic Upgrade Scope (Latest agent release / One release before the latest one / Only maintenance releases / Only maintenance releases in a specific version): For One release before the latest one, Cortex XDR upgrades the agent to the previous release before the latest, including maintenance releases. Major releases are numbered X.X, such as release 8.0 or 8.2; maintenance releases are numbered X.X.X, such as release 8.2.2. For Only maintenance releases in a specific version, select the required release version.
Upgrade Rollout (Immediate / Delayed): For Delayed, set the delay period (number of days) to wait after the version release before upgrading endpoints. Choose a value between 7 and 45.
14. Specify a Download Source, or multiple sources, from which Cortex XDR agent retrieves agent and content updates. The options provided help you to
reduce external network bandwidth loads during updates. When all sources are selected, the download sources are prioritized in the following order:
P2P > Broker VM > Cortex XDR Server.
To ensure your agents remain protected, the Cortex Server download source is always enabled to allow all Cortex XDR agents in your network to retrieve
the content directly from the Cortex XDR server on their following heartbeat.
When you install the Cortex XDR agent, the agent retrieves the latest content update version available. A freshly installed agent can take between
five to ten minutes (depending on your network and content update settings) to retrieve the content for the first time. During this time, your endpoint
is not protected.
When you upgrade a Cortex XDR agent to a newer Cortex XDR agent version, if the new agent cannot use the content version running on the
endpoint, the new content update will start within one minute in P2P, and within five minutes from Cortex XDR.
Select all (Selected / Clear): When selected, all download source options are enabled.
P2P (33221 (default port) / custom port): Cortex XDR deploys serverless peer-to-peer distribution to Cortex XDR agents in your LAN network by default. Within the six-hour randomization window during which the Cortex XDR agent attempts to retrieve the new version, it broadcasts to its peer agents on the same subnet twice: once within the first hour, and once again during the following five hours. If the agent did not retrieve the files from other agents in both queries, it proceeds to the next download source defined in your profile. To enable P2P, you must enable UDP and TCP over the port specified for P2P Port. By default, Cortex XDR uses port 33221. You can change the port number if required by your organization.
Brokers / Clusters (only Broker VMs that are connected and configured for caching can be selected): If you have a Palo Alto Networks Broker VM in your network, you can leverage the Local Agent Settings applet to cache release upgrades and content updates. When the Broker VM is enabled and configured appropriately (refer to Activate the Local Agent Settings), it retrieves the latest installers and content every 15 minutes. The Broker VM stores them for a 30-day retention period from when an agent last asked for them. If the files are not available on the Broker VM at the time of the request, the agent proceeds to download the files directly from the Cortex XDR server. When you select multiple Broker VMs, the agent chooses a Broker VM randomly for each download request.
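To make the documented fallback order concrete, here is a minimal sketch of the P2P > Broker VM > Cortex XDR server priority. It assumes three hypothetical callables that each return the content or None when their source cannot serve it; it is an illustration, not the agent's actual implementation.

```python
def download_content(fetch_p2p, fetch_broker_vm, fetch_xdr_server):
    """Try each download source in the documented priority order."""
    for source_name, fetch in (
        ("P2P", fetch_p2p),
        ("Broker VM", fetch_broker_vm),
        ("Cortex XDR server", fetch_xdr_server),
    ):
        content = fetch()
        if content is not None:
            return source_name, content
    raise RuntimeError("no download source could provide the content")

# Example: peers and the Broker VM miss, the server answers on the next heartbeat.
source, data = download_content(lambda: None, lambda: None, lambda: b"content-update")
print(source)  # Cortex XDR server
```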
15. Configure Network Location Configuration for your Cortex XDR agents. If you configure host firewall rules in your network, you must:
Enable Network Location Configuration Action Mode, so that Cortex XDR can test the network location of your device.
If the Cortex XDR agent detects a network change on the endpoint, the agent triggers the device location test and re-calculates the policy according to
the new location.
Action Mode (Enabled / Disabled): When Enabled, a domain controller (DC) test checks whether the device is connected to the internal network. If the device is connected to the internal network, it is determined to be in the organization. If the DC test fails or returns an external domain, Cortex XDR performs a DNS connectivity test.
DNS Name (your network's DNS name): The Cortex XDR agent tests network location by submitting a Domain Name Server (DNS) name that is known only to the internal network. If the DNS returns the pre-configured internal IP address, the device is determined to be within the organization. If the DNS IP address cannot be resolved, the device is deemed to be located elsewhere.
IP Address (your network's internal DNS IP address): Enter the internal DNS IP address to be used by the DNS test.
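Conceptually, the DNS test can be pictured with the sketch below. The DNS name and IP address are hypothetical, and the agent's real test is internal to the product; this only illustrates the resolve-and-compare idea.

```python
import socket

def appears_internal(internal_dns_name: str, expected_internal_ip: str) -> bool:
    """Resolve a name known only to the internal network and compare the answer
    to the pre-configured internal IP address."""
    try:
        resolved = socket.gethostbyname(internal_dns_name)
    except socket.gaierror:
        return False  # name cannot be resolved -> treat the device as external
    return resolved == expected_internal_ip

# Hypothetical values; on a machine outside such a network this prints False.
print(appears_internal("intranet.corp.example", "10.20.30.40"))
```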
17. Configure Agent Certificates. For improved security, enforce the use of the root CA provided by Palo Alto Networks rather than a root CA from the local machine store.
(Disabled / Disabled (Notify)): If the Cortex XDR agent is initially unable to communicate without the local store, enforcement is not enabled and the agent will show as partially protected.
When set to Disabled (Notify), Cortex XDR agents with this policy will
trigger a banner in the server to notify customers about potential risk,
and will direct them to change the certificate and the setting. The Last
Certificate Enforcement Fallback column of the All Endpoints table is
updated, and management audit logs related to the local store fallback
are received by the server.
When set to Disabled, Cortex XDR agents with this policy will trigger a
banner in the server to notify customers about potential risk, and will
direct them to change the certificate and the setting. The Last Certificate
Enforcement Fallback column of the All Endpoints table is not updated,
and no management audit logs related to the local store fallback are
received by the server.
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
macOS
a. Select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile or import a profile
from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
b. Select the macOS platform, and Agent Settings as the profile type.
c. Click Next.
d. Enter a unique Profile Name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30 characters. The
name will be visible from the list of profiles when you configure a policy rule.
2. For Disk Quota, configure the amount of disk space to allot for Cortex XDR agent logs. Specify a value in MB from 100 to 10,000 (default is 5,000).
By default, Cortex XDR uses the settings specified in the default agent settings profile and displays the default configuration in parentheses. When you
select a setting other than the default, you override the default configuration for the profile.
Tray Icon (Visible (default) / Hidden): Choose whether you want the Cortex XDR agent icon to be visible or hidden in the notification area (system tray).
XDR Agent Console Access (Enabled / Disabled): When enabled, allows access to the Cortex XDR agent console on the endpoint.
XDR Agent User Notifications (Enabled / Disabled): Enable this option to display notifications in the notification area on the endpoint. When you enable notifications, you can use the default notification messages that are displayed for each option, or provide custom text for each notification type. You can also customize a notification footer. Options include:
4. For Agent Security, configure XDR Agent Tampering Protection (default is Enabled). By default, the Cortex XDR agent protects all agent components.
In Traps 5.0.6 and later releases, when protection is enabled, access will be read-only. In earlier Traps releases, enabling protection disables all access
to services, processes, files, and registry values.
Define and confirm an encrypted password that the user must specify to uninstall the Cortex XDR agent. The uninstall password, also known as the supervisor password, is also used to protect against tampering attempts via Cytool commands. The password must contain (a minimal validation sketch follows this list):
8 to 32 characters
Lower-case letter
Upper-case letter
Number
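A minimal validation sketch for these requirements, assuming plain regular-expression checks; this is not the product's validator.

```python
import re

def meets_uninstall_password_rules(password: str) -> bool:
    """Check the documented requirements: 8 to 32 characters with at least one
    lower-case letter, one upper-case letter, and one number."""
    return (
        8 <= len(password) <= 32
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
    )

print(meets_uninstall_password_rules("Xdr-Agent7"))  # True
print(meets_uninstall_password_rules("short1A"))     # False: only 7 characters
```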
When the Cortex XDR agent raises alerts on process-related activity on the endpoint, the agent collects the contents of memory and other data about the
event, in what is known as an alert data dump file. You can configure the Cortex XDR agent to automatically upload alert data dump files to Cortex XDR.
Alert Data Dump File Size (Small / Medium / Full): The Full option creates the largest and most complete set of information.
Automatically Upload Alert Data Dump File (Enabled / Disabled): During event investigation, if automatic upload was disabled, you can still manually retrieve this data.
7. Enable XDR Pro Endpoint Capabilities, and then configure the capabilities required by your organization. The Cortex XDR Pro features are hidden until you enable this option.
Requires a Cortex XDR Pro per Endpoint license. When you enable this feature, a Cortex XDR Pro per Endpoint license is consumed.
Endpoint Information Collection (Enabled / Disabled): When enabled, the Cortex XDR agent collects Host Inventory information such as users, groups, services, drivers, hardware, and network shares, as well as information about applications installed on the endpoint, including CVE and installed KBs for Vulnerability Assessment.
With this option you can also select the File Search and Destroy Monitored File Types, where Cortex XDR monitors all the files on the endpoint or only common file types. If you choose Common file types, Cortex XDR monitors the following file types: acm, apk, ax, bat, bin, bundle, csv, dll, dmg, doc, docm, docx, dylib, efi, hta, jar, js, jse, jsf, lua, mpp, mppx, mui, o, ocx, pdf, pkg, pl, plx, pps, ppsm, ppsx, ppt, pptm, pptx, py, pyc, pyo, rb, rtf, scr, sh, vds, vsd, wsf, xls, xlsm, xlsx, xsdx, and zip.
Additionally, you can exclude files that exist under a specific local path on the endpoint from inclusion in the files database.
When enabled, the Cortex XDR agent collects detailed information about what happened on your endpoint to create a forensics database. Define the following settings to enable collection, and set collection time intervals, for these entity types:
Process Execution
File Access
Persistence
Command History
Network
Search Collections
8. Configure XDR Cloud for hosts running on cloud platforms. By default (auto-detect mode), the agent detects whether an endpoint is a cloud-based
(container) installation or a permanent installation, and uses license allocation accordingly.
XDR Cloud (Auto-detect / Enabled): If you set this to Enabled in the profile, any agent using this profile will be treated as a cloud-based agent for licensing purposes.
This feature requires a Cortex XDR Cloud per Host license. This license is required for both cloud-based and on-prem use of K8 nodes.
9. Configure Response Actions for specific applications or processes, using an Allow list.
If you need to isolate an endpoint, but want to allow access for a specific application or process, add it to the Network Isolation Allow List. Keep the
following considerations in mind:
When you add a specific application to your allow list from network isolation, the Cortex XDR agent continues to block some internal system processes.
This is because some applications, for example, ping.exe, can use other processes to facilitate network communication. As a result, if the Cortex XDR
agent continues to block an application you included in your allow list, you may need to perform additional network monitoring to determine the process
that facilitates the communication, and then add that process to the allow list.
b. Specify the Process Path that you want to allow, and the IPv4 or IPv6 address of the endpoint. Use the * wildcard on either side to match any
process or IP address. For example, specify * as the process path and an IP address to allow any process to run on the isolated endpoint with
that IP address. Conversely, specify * as the IP address and a specific process path to allow the process to run on any isolated endpoint that
receives this profile.
Time Machine Activation (Enabled / Disabled): When enabled, this option automatically turns on the Time Machine setting of the endpoint. This ensures that the data is backed up and may be recovered in case of a security breach or loss of data.
If you disable or delay automatic-content updates provided by Palo Alto Networks, it may affect the security level in your organization.
If you disable content updates for a newly installed agent, the agent retrieves the content for the first time from Cortex XDR, and then disables
content updates on the endpoint.
When you add a Cortex XDR agent to an endpoint group with a disabled content auto-upgrades policy, the policy is applied to the added agent as
well.
Content Auto-update (Enabled (default) / Disabled): By default, the Cortex XDR agent always retrieves the most updated content and deploys it on the endpoint, to ensure that it is always protected with the latest security measures. If you disable content updates, the agent stops retrieving them from the Cortex XDR tenant, and keeps working with the current content on the endpoint.
Staging Content (Enabled / Disabled (default)): Enables users to deploy agent staging content on selected test environments. Staging content is released before production content, allowing for early evaluation of the latest content update.
Content Rollout (Immediately / Delayed): The Cortex XDR agent can retrieve content updates immediately as they become available, or after a pre-configured delay period. When you delay content updates, the Cortex XDR agent retrieves the content according to the configured delay. For example, if you configure a delay period of two days, the agent will not use any content released in the last 48 hours.
12. Agent Auto-Upgrade is disabled by default. Before enabling Auto-Update for Cortex XDR agents, make sure to consult with all relevant stakeholders in
your organization.
Automatic upgrades are not supported with non-persistent VDI and temporary sessions.
Agent Auto-Upgrade: Disabled (default).
Automatic Upgrade Scope (Latest agent release / One release before the latest one / Only maintenance releases): For One release before the latest one, Cortex XDR upgrades the agent to the previous release before the latest, including maintenance releases. Major releases are numbered X.X, such as release 8.0 or 8.2; maintenance releases are numbered X.X.X, such as release 8.2.2.
Upgrade Rollout (Immediate / Delayed): For Delayed, set the delay period (number of days) to wait after the version release before upgrading endpoints. Choose a value between 7 and 45.
13. Specify a Download Source, or multiple sources, from which Cortex XDR agent retrieves agent and content updates. The options provided help you to
reduce external network bandwidth loads during updates. When all sources are selected, the download sources are prioritized in the following order:
P2P > Broker VM > Cortex XDR Server.
To ensure your agents remain protected, the Cortex Server download source is always enabled to allow all Cortex XDR agents in your network to retrieve
the content directly from the Cortex XDR server on their following heartbeat.
When you install the Cortex XDR agent, the agent retrieves the latest content update version available. A freshly installed agent can take between
five to ten minutes (depending on your network and content update settings) to retrieve the content for the first time. During this time, your endpoint
is not protected.
When you upgrade a Cortex XDR agent to a newer Cortex XDR agent version, if the new agent cannot use the content version running on the
endpoint, the new content update will start within one minute in P2P, and within five minutes from Cortex XDR.
Select all (Selected / Clear): When selected, all download source options are enabled.
P2P (33221 (default port) / custom port): Cortex XDR deploys serverless peer-to-peer distribution to Cortex XDR agents in your LAN network by default. Within the six-hour randomization window during which the Cortex XDR agent attempts to retrieve the new version, it broadcasts to its peer agents on the same subnet twice: once within the first hour, and once again during the following five hours. If the agent did not retrieve the files from other agents in both queries, it proceeds to the next download source defined in your profile. To enable P2P, you must enable UDP and TCP over the port specified for P2P Port. By default, Cortex XDR uses port 33221. You can change the port number if required by your organization.
Brokers / Clusters (only Broker VMs that are connected and configured for caching can be selected): If you have a Palo Alto Networks Broker VM in your network, you can leverage the Local Agent Settings applet to cache release upgrades and content updates. When the Broker VM is enabled and configured appropriately (refer to Activate the Local Agent Settings), it retrieves the latest installers and content every 15 minutes. The Broker VM stores them for a 30-day retention period from when an agent last asked for them. If the files are not available on the Broker VM at the time of the request, the agent proceeds to download the files directly from the Cortex XDR server. When you select multiple Broker VMs, the agent chooses a Broker VM randomly for each download request.
14. Configure Network Location Configuration for your Cortex XDR agents. If you configure host firewall rules in your network, you must:
Enable Network Location Configuration Action Mode, so that Cortex XDR can test the network location of your device.
If the Cortex XDR agent detects a network change on the endpoint, the agent triggers the device location test and re-calculates the policy according to
the new location.
Action Mode (Enabled / Disabled): When Enabled, a domain controller (DC) test checks whether the device is connected to the internal network. If the device is connected to the internal network, it is determined to be in the organization. If the DC test fails or returns an external domain, Cortex XDR performs a DNS connectivity test.
DNS Name (your network's DNS name): The Cortex XDR agent tests network location by submitting a Domain Name Server (DNS) name that is known only to the internal network. If the DNS returns the pre-configured internal IP address, the device is determined to be within the organization. If the DNS IP address cannot be resolved, the device is deemed to be located elsewhere.
IP Address (your network's internal DNS IP address): Enter the internal DNS IP address to be used by the DNS test.
Select whether to Enable or Disable Direct Server Access for the agent when connected using a proxy.
16. Configure Agent Certificates. For improved security, enforce the use of the root CA provided by Palo Alto Networks rather than a root CA from the local machine store.
(Disabled / Disabled (Notify)): If the Cortex XDR agent is initially unable to communicate without the local store, enforcement is not enabled and the agent will show as partially protected.
When set to Disabled (Notify), Cortex XDR agents with this policy will
trigger a banner in the server to notify customers about potential risk,
and will direct them to change the certificate and the setting. The Last
Certificate Enforcement Fallback column of the All Endpoints table is
updated, and management audit logs related to the local store fallback
are received by the server.
When set to Disabled, Cortex XDR agents with this policy will trigger a
banner in the server to notify customers about potential risk, and will
direct them to change the certificate and the setting. The Last Certificate
Enforcement Fallback column of the All Endpoints table is not updated,
and no management audit logs related to the local store fallback are
received by the server.
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
Linux
a. Select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile or import a profile
from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
b. Select the Linux platform, and Agent Settings as the profile type.
c. Click Next.
d. Enter a unique Profile Name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30 characters. The
name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
2. For Disk Quota, configure the amount of disk space to allot for Cortex XDR agent logs. Specify a value in MB from 100 to 10,000 (default is 5,000).
Alert Data Dump File Size (Small / Medium / Full): The Full option creates the largest and most complete set of information.
Automatically Upload Alert Data Dump File (Enabled / Disabled): During event investigation, if automatic upload was disabled, you can still manually retrieve this data.
4. Enable XDR Pro Endpoint Capabilities, and then configure the capabilities required by your organization. The Cortex XDR Pro features are hidden until you enable this option.
Requires a Cortex XDR Pro per Endpoint license. When you enable this feature, a Cortex XDR Pro per Endpoint license is consumed.
Enable Host Insights Capabilities (Enabled / Disabled): Requires the Host Insights add-on; not supported in Traps 5.0.x. When enabled, the various Host Insights capabilities can be configured.
Endpoint Information Collection (Enabled / Disabled): When enabled, the Cortex XDR agent collects Host Inventory information such as users, groups, services, drivers, hardware, and network shares, as well as information about applications installed on the endpoint, including CVE and installed KBs for Vulnerability Assessment.
5. Configure XDR Cloud for hosts running on cloud platforms. By default (auto-detect mode), the agent detects whether an endpoint is a cloud-based
(container) installation or a permanent installation, and uses license allocation accordingly.
XDR Cloud (Auto-detect / Enabled): If you set this to Enabled in the profile, any agent using this profile will be treated as a cloud-based agent for licensing purposes.
This feature requires a Cortex XDR Cloud per Host license. This license is required for both cloud-based and on-prem use of K8 nodes.
6. Configure Response Actions for specific applications or processes, using an Allow list.
If you need to isolate an endpoint, but want to allow access for a specific application or process, add it to the Network Isolation Allow List. Keep the
following considerations in mind:
When you add a specific application to your allow list from network isolation, the Cortex XDR agent continues to block some internal system
processes. This is because some applications, for example, ping.exe, can use other processes to facilitate network communication. As a result, if
the Cortex XDR agent continues to block an application you included in your allow list, you may need to perform additional network monitoring to
determine the process that facilitates the communication, and then add that process to the allow list.
b. Specify the Process Path that you want to allow, and the IPv4 or IPv6 address of the endpoint. Use the * wildcard on either side to match any
process or IP address. For example, specify * as the process path and an IP address to allow any process to run on the isolated endpoint with
that IP address. Conversely, specify * as the IP address and a specific process path to allow the process to run on any isolated endpoint that
receives this profile.
7. Configure settings to automatically Revert Endpoint Isolation of an agent. When this feature is enabled, agent isolation will be cancelled when a
connection with the managing server is lost for the defined continuous period of time.
a. Either keep the recommended default setting (Enabled), or change it by selecting Disabled in the Revert Isolation field.
b. Set a time unit and enter the number of hours or days. We recommend 24 hours (default).
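A minimal sketch of the revert decision, assuming the agent tracks the time of its last successful contact with the managing server; the timestamps are made up and this is not the agent's implementation.

```python
from datetime import datetime, timedelta

def should_revert_isolation(last_seen: datetime, now: datetime,
                            threshold: timedelta = timedelta(hours=24)) -> bool:
    """Revert isolation once the agent has been out of contact with the
    managing server for the full configured period (24 hours by default)."""
    return (now - last_seen) >= threshold

now = datetime(2024, 6, 10, 9, 0)
print(should_revert_isolation(datetime(2024, 6, 9, 8, 0), now))   # True  (25h without contact)
print(should_revert_isolation(datetime(2024, 6, 10, 1, 0), now))  # False (8h without contact)
```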
If you disable or delay automatic-content updates provided by Palo Alto Networks, it may affect the security level in your organization.
If you disable content updates for a newly installed agent, the agent retrieves the content for the first time from Cortex XDR, and then disables
content updates on the endpoint.
When you add a Cortex XDR agent to an endpoint group with a disabled content auto-upgrades policy, the policy is applied to the added agent as
well.
Content Auto-update (Enabled (default) / Disabled): By default, the Cortex XDR agent always retrieves the most updated content and deploys it on the endpoint, to ensure that it is always protected with the latest security measures. If you disable content updates, the agent stops retrieving them from the Cortex XDR tenant, and keeps working with the current content on the endpoint.
Staging Content (Enabled / Disabled (default)): Enables users to deploy agent staging content on selected test environments. Staging content is released before production content, allowing for early evaluation of the latest content update.
Content Rollout (Immediately / Delayed): The Cortex XDR agent can retrieve content updates immediately as they become available, or after a pre-configured delay period. When you delay content updates, the Cortex XDR agent retrieves the content according to the configured delay. For example, if you configure a delay period of two days, the agent will not use any content released in the last 48 hours.
9. Agent Auto-Upgrade is disabled by default. Before enabling Auto-Update for Cortex XDR agents, make sure to consult with all relevant stakeholders in
your organization.
Automatic upgrades are not supported with non-persistent VDI and temporary sessions.
Agent Auto-Upgrade: Disabled (default).
Automatic Upgrade Scope (Latest agent release / One release before the latest one / Only maintenance releases): For One release before the latest one, Cortex XDR upgrades the agent to the previous release before the latest, including maintenance releases. Major releases are numbered X.X, such as release 8.0 or 8.2; maintenance releases are numbered X.X.X, such as release 8.2.2.
Upgrade Rollout (Immediate / Delayed): For Delayed, set the delay period (number of days) to wait after the version release before upgrading endpoints. Choose a value between 7 and 45.
10. Specify a Download Source, or multiple sources, from which Cortex XDR agent retrieves agent and content updates. The options provided help you to
reduce external network bandwidth loads during updates. When all sources are selected, the download sources are prioritized in the following order:
P2P > Broker VM > Cortex XDR Server.
To ensure your agents remain protected, the Cortex Server download source is always enabled to allow all Cortex XDR agents in your network to retrieve
the content directly from the Cortex XDR server on their following heartbeat.
When you install the Cortex XDR agent, the agent retrieves the latest content update version available. A freshly installed agent can take between
five to ten minutes (depending on your network and content update settings) to retrieve the content for the first time. During this time, your endpoint
is not protected.
When you upgrade a Cortex XDR agent to a newer Cortex XDR agent version, if the new agent cannot use the content version running on the
endpoint, the new content update will start within one minute in P2P, and within five minutes from Cortex XDR.
Select all (Selected / Clear): When selected, all download source options are enabled.
P2P (33221 (default port) / custom port): Cortex XDR deploys serverless peer-to-peer distribution to Cortex XDR agents in your LAN network by default. Within the six-hour randomization window during which the Cortex XDR agent attempts to retrieve the new version, it broadcasts to its peer agents on the same subnet twice: once within the first hour, and once again during the following five hours. If the agent did not retrieve the files from other agents in both queries, it proceeds to the next download source defined in your profile. To enable P2P, you must enable UDP and TCP over the port specified for P2P Port. By default, Cortex XDR uses port 33221. You can change the port number if required by your organization.
Brokers / Clusters (only Broker VMs that are connected and configured for caching can be selected): If you have a Palo Alto Networks Broker VM in your network, you can leverage the Local Agent Settings applet to cache release upgrades and content updates. When the Broker VM is enabled and configured appropriately (refer to Activate the Local Agent Settings), it retrieves the latest installers and content every 15 minutes. The Broker VM stores them for a 30-day retention period from when an agent last asked for them. If the files are not available on the Broker VM at the time of the request, the agent proceeds to download the files directly from the Cortex XDR server. When you select multiple Broker VMs, the agent chooses a Broker VM randomly for each download request.
Select whether to Enable or Disable Direct Server Access for the agent when connected using a proxy.
12. Configure Advanced Vulnerability Scanning for periodic Active Vulnerability Analysis (AVA) scans. This option is only available for tenants that are paired
with Prisma Cloud.
Custom: For other time frames, select Custom, and then configure the desired time frame. Where relevant, select the start day and time for the periodic scans. If you select monthly scans, you can also configure a timeout period, in hours.
Kernel module-based operation: offers synchronous anti-malware protection, event collection from the kernel level, and anti-LPE protection.
User Space Agent: a user-mode agent, for agents running Linux kernel 5.0.0 or higher, offering synchronous anti-malware protection and event collection from the kernel level.
Neither of the above. When working in kernel module-based operation on an endpoint with an unsupported kernel, or installing with the installation flag --no-km, or when working in User Space Agent mode on a Linux kernel older than 5.0.0, the agent runs in Asynchronous mode. In such cases, the anti-malware protection is asynchronous, and there is no event collection, no BTP, no EDR, and no anti-LPE. This operation mode frequently shows "partially protected" endpoints. To avoid this, you can configure the profile to give preference to Kernel mode, but to switch to User Space Agent mode when the kernel module for an endpoint is not supported by a content update, and to switch back when the kernel module in use is supported in a newer content update.
User Space Agent: User Space Agent mode requires Linux kernel 5.0.0 or higher.
When Kernel Mode is unavailable, use User Space Mode (Enabled / Disabled): When Kernel mode is used, to ensure continued full protection when a kernel version is not supported by a content update, select the Enabled option. User Space Agent mode requires Linux kernel 5.0.0 or higher. Endpoints running an older Linux kernel version with this fallback enabled will not start using User Space Agent mode, and will operate asynchronously.
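The mode preference described above can be summarized in a small sketch. It is an illustration only, with simplified version parsing and hypothetical parameter names, not the agent's actual selection logic.

```python
def select_agent_mode(kernel_version: str, kernel_module_supported: bool,
                      no_km_flag: bool, fallback_to_user_space: bool) -> str:
    """Sketch of the documented preference: kernel module when supported,
    User Space Agent on kernel >= 5.0.0, otherwise asynchronous operation."""
    major = int(kernel_version.split(".")[0])
    user_space_capable = major >= 5  # simplification of the 5.0.0 requirement

    if kernel_module_supported and not no_km_flag:
        return "kernel module"
    if fallback_to_user_space and user_space_capable:
        return "user space agent"
    return "asynchronous"  # no event collection, no BTP, no EDR, no anti-LPE

print(select_agent_mode("5.15.0", kernel_module_supported=False,
                        no_km_flag=False, fallback_to_user_space=True))  # user space agent
print(select_agent_mode("4.18.0", kernel_module_supported=False,
                        no_km_flag=False, fallback_to_user_space=True))  # asynchronous
```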
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
Android
a. Select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile or import a profile
from a file.
b. Select the Android platform, and Agent Settings as the profile type.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
If you disable or delay automatic-content updates provided by Palo Alto Networks, it may affect the security level in your organization.
When you add a Cortex XDR agent to an endpoint group with a disabled content auto-upgrades policy, the policy is applied to the added agent as
well.
Content Auto-update (Enabled (default) / Disabled): By default, the Cortex XDR agent always retrieves the most updated content and deploys it on the endpoint, to ensure that it is always protected with the latest security measures. If you disable content updates, the agent stops retrieving them from the Cortex XDR tenant, and keeps working with the current content on the endpoint.
Content Rollout (Immediately / Delayed): The Cortex XDR agent can retrieve content updates immediately as they become available, or after a pre-configured delay period. When you delay content updates, the Cortex XDR agent retrieves the content according to the configured delay. For example, if you configure a delay period of two days, the agent will not use any content released in the last 48 hours.
When the option Upload Using Cellular Data is enabled, the Cortex XDR agent uses cellular data to send unknown apps to Cortex XDR for inspection. Standard data charges may apply. When this option is disabled, the Cortex XDR agent queues any unknown files and sends them when the endpoint connects to a Wi-Fi network. If configured, the data usage setting on the Android endpoint takes precedence over this configuration.
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
iOS
a. Select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile or import a profile
from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
b. Select the iOS platform, and Agent Settings as the profile type.
c. Click Next.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
2. Configure the following notifications that can be pushed to the iOS device.
App Notifications (Enabled / Disabled): Select whether to enable or disable notifications from the app on the iOS device.
Jailbreak Detection (Enabled / Disabled): Select whether to enable or disable the Jailbreak Detection notification on the device.
Restart Recommendation (Enabled / Disabled): Select whether to enable or disable a reboot notification on the device. You can set a reminder interval in days; the default is 15 days.
Stationary Device Indicators (Enabled / Disabled): Select whether to enable or disable notifications for stationary iOS devices, such as iPads that are expected to remain in a fixed location. Options include:
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
Abstract
Restrictions prevention profiles limit the locations from which executables can run on an endpoint.
Windows
By default, the Cortex XDR agent receives a default profile that contains a pre-defined configuration for each restriction capability. The default setting for each
capability is shown in parentheses in the user interface. To fine-tune your restrictions prevention policy, you can override the default configuration of each
capability as follows. For each setting that you override, clear the Use Default option, and select the setting of your choice.
Notify: Allow file execution, but notify the user that the file is attempting to run from a suspicious location. The Cortex XDR agent also reports the event to
Cortex XDR.
Disabled: Disable the module, and do not analyze or report execution attempts from restricted locations.
To customize the configuration for specific Cortex XDR agents, configure a new restrictions prevention profile and assign it to one or more policy rules. You can
restrict files from running from specific local folders, or from removable media.
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile
or import a profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
Action Mode (Block / Notify / Report / Disabled): When the Cortex XDR agent detects execution of files from outside the pre-defined locations, it performs the configured action.
To add files or folders to the Block List, click +Add, enter the path, and press Enter. To add more files or folders, click +Add again. You can use a wildcard to match a partial name for the folder, and environment variables.
To add files or folders to the Allow List, define a list on the Legacy Agent Exceptions page.
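To illustrate how a Block List entry with a wildcard and an environment variable might be evaluated, here is a sketch with hypothetical entries; it is not the agent's matcher, and the real expansion happens on the endpoint.

```python
import os
import ntpath
from fnmatch import fnmatch

def path_is_blocked(path: str, block_list: list) -> bool:
    """Expand Windows-style environment variables in each Block List entry,
    then apply wildcard matching against the executed file's path."""
    return any(fnmatch(path.lower(), ntpath.expandvars(entry).lower())
               for entry in block_list)

# Hypothetical entries: a wildcard partial path and an environment variable.
os.environ["TEMP"] = r"C:\Windows\Temp"   # example value so the expansion is visible
block_list = [r"C:\Users\*\Downloads\*.exe", r"%TEMP%\dropper.exe"]
print(path_is_blocked(r"C:\Users\alice\Downloads\tool.exe", block_list))  # True
print(path_is_blocked(r"C:\Windows\Temp\dropper.exe", block_list))        # True
print(path_is_blocked(r"C:\Program Files\app\app.exe", block_list))       # False
```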
3. Configure Network Location Files to restrict access to all network locations except for explicitly trusted ones.
Action Mode (Block / Notify / Report / Disabled): When the Cortex XDR agent detects execution of files from network locations that are not trusted, it performs the configured action.
To add files or folders to the Allow List, define a list on the Legacy Agent Exceptions page.
4. Configure Removable Media Files to restrict file execution launched from external drives that are attached to endpoints in your network.
Action Mode (Block / Notify / Report / Disabled): When the Cortex XDR agent detects execution of files from removable media, it performs the configured action.
To add files or folders to the Allow List, define a list on the Legacy Agent Exceptions page.
5. Configure Optical Drive Files to restrict file execution launched from optical disc drives that are attached to endpoints in your network.
Action Mode (Block / Notify / Report / Disabled): When the Cortex XDR agent detects execution of files from an optical disc drive, it performs the configured action.
To add files or folders to the Allow List, define a list on the Legacy Agent Exceptions page.
Action Mode (Enabled / Disabled): When user-defined BIOC prevention rules are present in the system, you can enable them here.
Configure custom BIOC prevention rules here.
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
macOS
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles. Click +Add Profile, and select whether to create a new profile
or import a profile from a file.
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
Action Mode (Enabled / Disabled): When user-defined BIOC prevention rules are present in the system, you can enable them here.
Configure custom BIOC prevention rules here.
What to do next
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
Linux
New profiles based on imported profiles are added, and do not replace existing ones.
c. Click Next.
d. For Profile Name, enter a unique name for the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
e. For Description, to provide additional context for the purpose or business reason for creating the profile, enter a profile description. For example,
you might include an incident identification number or a link to a help desk ticket.
Action Mode (Enabled / Disabled): When user-defined BIOC prevention rules are present in the system, you can enable them here.
Configure custom BIOC prevention rules here.
If you are ready to apply your new profile to endpoints, you do this by adding it to a policy rule. If you still need to define other profiles, you can do this later.
During policy rule creation or editing, you select the endpoints to which to assign the policy. There are different ways of doing this, such as:
2. Right-click your new profile, and select Create a new policy rule using this profile.
Abstract
Exception profiles can be configured to override security policies for known processes, files, digital signers, URLs, BTP rules, telephone numbers, and other
exceptions.
Abstract
To allow full granularity, Cortex XDR enables you to create exceptions from your baseline policy. With these exceptions, you can remove specific folders or
paths from evaluation, or disable specific security modules. You can configure exception rules for Cortex XDR protection and prevention actions in a
centralized location, and apply them across multiple profiles. The exceptions can be configured from Settings → Exception Configuration.
Alert Exclusion rules specify match criteria for alerts that you want to suppress.
IOC/BIOC Suppression rules exclude one or more indicators from an IOC or BIOC rule that takes action on specific behaviors.
Disable Injection and Prevention rules specify exceptions that bypass a process from prevention modules and injections.
Disable Prevention rules specify granular exceptions to prevention actions triggered for your endpoints.
Legacy Agent Exceptions define prevention profile exception rules for all endpoints.
Support Exception rules generate exceptions based on files provided by the support team.
Prior to Cortex XDR version 3.5, Legacy Agent Exceptions and Support Exceptions were configured through their relevant profiles.
Starting with version 3.5, Cortex XDR enables you to manage the Legacy Agent Exceptions and Support Exception configurations from a central location and
easily apply them across multiple profiles in the Agent Exceptions Management page.
To manage the Prevention profile exceptions from Exception Configuration, you must first migrate your existing exceptions configured via profiles. Your existing
exception profiles are migrated per module.
Cortex XDR simulates the migration to enable you to review the results before activating the migration.
1. Select Settings → Exception Configuration → Legacy Exceptions and click Start Simulation.
2. Review the Legacy Agent Exceptions and the Support Exception Rules.
3. You can then Activate the new agent management page or Cancel to continue using the Prevention Profiles to configure individual exceptions.
If you don't migrate the legacy exceptions, you can continue to create exceptions through the profiles.
After the migration, you can Add a support exception rule or Add a legacy exception rule.
Abstract
The Settings → Exception Configuration → Alert Exclusions page displays the alert exclusion rules in Cortex XDR.
An Alert Exclusion is a rule that contains a set of alert match criteria that you want to suppress from Cortex XDR. You can add an Alert Exclusion rule from
scratch or base the exclusion on alerts you investigate in an incident. After you create an exclusion rule, Cortex XDR excludes and no longer saves any of the
future alerts that match the criteria from incidents and search query results. If you select to apply the policy to historic results as well as future alerts, Cortex
XDR identifies the historic alerts as grayed out.
You can also set up alert exceptions by creating global endpoint policy exceptions. For more information, see Add a global endpoint policy exception.
The following table describes both the default fields and additional optional fields that you can add to the alert exclusions table and lists the fields in
alphabetical order.
Checkbox: Select one or more alert exclusions on which you want to perform actions.
Backward Scan Status: Exclusion policy status for historic data; either enabled, if you want to apply the policy to previous alerts, or disabled, if you don't want to apply the policy to previous alerts.
Comment: Administrator-provided comment that identifies the purpose or reason for the exclusion policy.
Description: Text summary of the policy that displays the match criteria.
Modification Date: Date and time when the exclusion policy was created or modified.
Abstract
Learn how to create a rule to exclude certain criteria from raising alerts in Cortex XDR.
Through the process of triaging alerts or resolving an incident, you may determine whether a specific alert does not indicate a threat. If you do not want Cortex
XDR to display alerts that match certain criteria, you can create an alert exclusion rule.
After you create an exclusion rule, Cortex XDR hides any future alerts that match the criteria, and excludes the alerts from incidents and search query results. If
you choose to apply the rule to historic results as well as future alerts, the app identifies any historic alerts as grayed out.
If an incident contains only alerts with exclusions, Cortex XDR changes the incident status to Resolved - False Positive and sends an email notification to the
incident assignee (if set).
There are two ways to create an exclusion rule. You can define the exclusion criteria when you investigate an incident or you can create an alert exclusion from
scratch.
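As a rough illustration of how exclusion criteria behave, the sketch below treats a rule as a set of field/value criteria, hides future matches, and grays out historic ones. The field names and values are hypothetical; this is not how Cortex XDR stores or evaluates rules.

```python
# Conceptual sketch only; field names and values are made up for illustration.
exclusion_rule = {"alert_name": "Suspicious PowerShell", "host_name": "build-server-01"}

def matches(alert: dict, rule: dict) -> bool:
    """An alert is excluded when it matches every criterion in the rule."""
    return all(alert.get(field) == value for field, value in rule.items())

alerts = [
    {"alert_name": "Suspicious PowerShell", "host_name": "build-server-01", "historic": True},
    {"alert_name": "Suspicious PowerShell", "host_name": "laptop-17", "historic": False},
]

for alert in alerts:
    if matches(alert, exclusion_rule):
        # Historic matches are grayed out; future matches are no longer shown.
        print("grayed out" if alert["historic"] else "hidden", alert["host_name"])
    else:
        print("kept", alert["host_name"])
```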
You can also set up alert exceptions by creating global endpoint policy exceptions. For more information, see Add a global endpoint policy exception.
Alert exclusions support Scope-Based Access Control (SBAC). For more information, see Manage user scope.
If Scoped Server Access is enabled and set to restrictive mode, you can edit a rule if you are scoped to all tags in the rule.
If Scoped Server Access is enabled and set to permissive mode, you can edit a rule if you are scoped to at least one tag listed in the rule.
If a rule was added when set to restrictive mode, and then changed to permissive (or vice versa), you will only have view permissions.
If, after reviewing the incident details, you want to suppress one or more alerts from appearing in the future, create an exclusion policy based on the alerts in the incident. When you create an exclusion from the incident view, you can define the criteria based on the alerts in the incident. If desired, you can also create an Alert Exclusion Policy from scratch.
1. In Incident Response → Incidents, from the Incident view, click the menu icon and select Create Exclusion.
3. Enter a description that specifies the reason or purpose of the alert exclusion rule.
4. Use the alert filters to add any match criteria for the alert exclusion policy.
You can also right-click a specific value in the alert to add it as match criteria. The app refreshes to show you which alerts in the incident would be
excluded. To see all matching alerts including those not related to the incident, clear the option to Show only alerts in the named incident.
If you later need to make changes, you can view, modify, or delete the exclusion policy from the Settings → Exception Configuration → Alert Exclusions
page.
Use the filters at the top of the table to build your exclusion criteria.
Use existing alert values to populate your exclusion criteria. To do so, right-click the column value on which you want to base your rule and select
Add alerts with <value> to configuration.
As you define the criteria, the app filters the results to display matches.
The alerts in the table will be excluded from appearing in the app after the rule is created and optionally, any existing alert matches will be grayed out.
This action is irreversible. All historically excluded alerts will remain excluded if you disable or delete the rule.
Abstract
If you want to create a rule to take action on specific behaviors but also want to exclude one or more indicators from the rule, you can create an IOC or BIOC
rule exception. An indicator can include the SHA256 hash of a process, process name, process path, vendor name, user name, causality group owner (CGO)
full path, or process command-line arguments. For more information about these indicators, see Detection rules. For each exception, you also specify the rule
scope to which the exception applies.
In case you need to map fields returned in an XQL process query to your exception configuration, the following table provides a matrix for the criteria
mentioned in this procedure to the fields returned in a process query.
Cortex XDR only supports exceptions with one attribute. See Add an alert exclusion rule to create advanced exceptions based on your filtered criteria.
5. Select the scope of the exception, whether the exception applies to IOCs, BIOCs, or both.
By default, all BIOC rules that match the criteria are excluded. To exclude only specific BIOC rules, select them from the provided rule list. You can add
multiple rules.
By default, activity matching the indicators does not trigger any rule. As an alternative, you can select one or more rules. After you save the exception,
the Exceptions count for the rule increments. If you later edit the rule, you will also see the exception defined in the rule summary.
2. In the Exceptions table, locate the exception rule you want to export. You can select multiple rules.
If one or more of the selected exceptions are applied to a specific BIOC rule, select one of the following options:
Export anyway
Export only non-specific Exceptions: Export only the exceptions that are applied to all BIOC rules.
Export all Exceptions as non-specific: Export all exceptions, including those applied to specific BIOC rules, as non-specific exceptions.
Abstract
You can generate granular exceptions to prevention actions defined for your endpoints.
You can generate granular exceptions to prevention actions defined for your endpoints. You can specify signers, command line, or processes to exclude from
the prevention actions triggered by specific security modules. This may be useful when you have processes that are essential to your organization and must
not be terminated. Cortex XDR still generates alerts from the disabled rules.
All applicable prevention actions are skipped for files and processes that match the properties defined in the rule.
Consider the consequences of disabling a prevention rule before you add the exception and monitor it over time.
You can only apply a Disable Prevention Rule to agents version 7.9 and later.
2. Specify an optional description for the reason or intent for the rule.
3. Select the platform. To cover all your endpoints, you can create different exception rules per platform.
4. Under Target Properties, specify the Hash, Path, Command Line argument, or trusted Signer Name, or any combination of these. When you specify two or more values, the exception is applied only if the file satisfies all the specified target properties (see the sketch after these steps).
5. Select one or more security Modules that won't trigger prevention actions.
6. Select the Scope for the rule. If you want to apply the rule to only specific Exception Profiles, select them from the list.
8. Review the configurations for the exception, and if the risks are acceptable to you, select I understand the risk.
Abstract
You can generate a temporary exception to bypass a process from prevention modules and injections.
You can generate a temporary exception to bypass a process from prevention modules and injections. You can specify paths or command-line arguments for
both prevention and injection. This may be useful when you have processes that are essential to your organization and must not be terminated. Cortex XDR still
generates alerts from data collections.
Consider the consequences of disabling a prevention rule before you add the exception and monitor it over time.
You can only apply a Disable Prevention Rule to agents version 7.9 and later.
4. Select the platform. To cover all your endpoints, you can create different exception rules per platform.
7. Select the Scope for the rule. If you want to apply the rule to only specific Exception Profiles, select them from the list.
9. Click Yes to confirm that you acknowledge that the selected rules will be disabled.
Abstract
You can define and manage exceptions based on files received from the customer support team. You can apply the rule across all of your endpoints or to
specific profiles.
Prior to Cortex XDR version 3.5, support exceptions were configured through profiles.
Starting with version 3.5, Cortex XDR enables you to manage the support exceptions from a central location and easily apply them across multiple
profiles on the Support Exception Rules page.
To manage the prevention profile exceptions from Exception Configuration, you must first migrate your existing exceptions configured via the Prevention
profiles.
1. From Settings → Exception Configuration → Support Exception Rules, click + Import from file.
2. Locate the JSON file you received from the customer support team.
3. Select to apply the rule to specific Profiles or select Global to apply to all endpoints.
If you don't migrate the legacy exceptions, you can continue to create exceptions through the profiles.
Abstract
Learn how to use Cortex XDR Legacy Exception rules to configure an exception to prevention and protection modules on endpoints for selected profiles.
Legacy Exception rules enable you to configure an exception to prevention and protection modules on endpoints for selected profiles.
Items included in allow lists may continue to generate Cortex XDR security events. If you want to exclude event reporting, configure this on the Alert Exclusions
page (Settings → Exception Configurations → Alert Exclusions).
Prior to Cortex XDR version 3.5, legacy exceptions were configured through profiles.
Starting with version 3.5, Cortex XDR enables you to manage the malware security exceptions from a central location and easily apply them across
multiple profiles in the Legacy Agent Exceptions Management page.
To manage the prevention profile exceptions from Exception Configuration, you must first migrate your existing exceptions configured via the prevention
profiles.
Your migrated rules are displayed on the Settings → Exception Configurations → Legacy Agent Exceptions page. For more information about the migration,
see Exception configuration.
1. Select Settings → Exception Configurations → Legacy Agent Exceptions, and then click + Add Rule.
2. Select the platform for which you want to create an agent exception.
3. Select the module for which you want to create an exception. Optionally, select Select all to apply the exception to all profiles for this module or select
specific profiles.
Malware modules:
Respond to Malicious Causality Chains (Windows): Add to your allow list specific, known safe IP addresses or IP address ranges that you do not want Cortex XDR to block.
Behavioral Threat Protection (Windows, macOS, Linux): Add to your allow list the file or folder path you want to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Office Files with Macros Examination (Windows): Add to your allow list the file or folder path you want to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Portable Executable and DLL Examination (Windows): Add to your allow list the file or folder path and the signers you want to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Malicious Child Process Protection (Windows): Add to your allow list the parent processes that can launch child processes, with optional execution criteria. Specify the allow list criteria, including the Parent Process Name, Child Process Name, and Command Line Params. Use ? to match a single character or * to match any string of characters.
Endpoint Scanning (Windows, macOS, Linux): Add to your allow list the file or folder path and the signers you want to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Credential Gathering Protection (Windows, macOS, Linux): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Anti Webshell Protection (Windows, macOS, Linux): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Financial Malware Threat Protection (Windows, macOS, Linux): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Cryptominers Protection (Windows, macOS, Linux): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
In-process Shellcode Protection (Windows): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Malicious Device Prevention (Windows): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
UAC Bypass Prevention (Windows): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Anti Tampering Protection (Windows, macOS): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
PowerShell Script Files (Windows): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Mach-o Files Examination (macOS): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
DMG File Examination (macOS): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Local File Threat Examination (Linux): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
ELF File Examination (Linux): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
SMS and MMS Malicious URL Filtering Allow List (iOS): Add to your allow list known safe URLs that you do not want Cortex XDR to block.
Call and Messages Blocking Allow List (iOS): Add to your allow list names and phone numbers of contacts that you do not want Cortex XDR to block.
Dynamic Kernel Protection (Windows): Add to your allow list the file or folder path you want to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Restrictions modules:
Executable Files (Windows): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Network Location Files (Windows): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Optical Drive Files (Windows): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Removable Media Files (Windows): Add to your allow list the file or folder paths to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
Exceptions:
Process Exceptions (Windows, macOS, Linux): Add to your allow list the process and the module names to exclude from evaluation. Use ? to match a single character or * to match any string of characters.
4. For each module, enter the file or folder path that you want to add to the exception rule, and press ENTER. Repeat this step to add additional paths to
the rule.
5. Select the endpoint profiles to which you want to apply this rule.
6. Click Next.
7. Review the rule, and then select the warning message checkbox.
8. Click Create.
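For example, the paths entered in step 4 can use the ? and * wildcards. The following are hypothetical paths, shown only to illustrate the syntax:
C:\Program Files\ExampleVendor\*\updater.exe
C:\Users\*\AppData\Local\Temp\build??.tmp
/opt/example-agent/bin/*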
If you don't migrate the legacy exceptions, you can continue to create exceptions through the profiles.
Abstract
You can configure exceptions that apply to specific groups of endpoints or you can add a global endpoint policy exception.
Starting with version 3.5, Cortex XDR enables you to manage the exception security rules from a central location and easily apply them across multiple profiles
in the Legacy Agent Exceptions management page.
To manage the exceptions from Exception Configuration, you must first migrate your existing exceptions configured via the exceptions security profiles.
To create new exception security profile rules using the Legacy Agent Exceptions management page, see Add a legacy exception rule.
If you don't migrate the legacy exceptions, you can continue to create exceptions as described below.
a. From Cortex XDR, select Endpoints → Policy Management → Prevention → Profiles → +Add Profile and select whether to Create New or Import
from File a new profile.
b. Select the platform to which the profile applies and Exceptions as the profile type.
a. Select a unique Profile Name to identify the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name will be visible from the list of profiles when you configure a policy rule.
b. To provide additional context for the purpose or business reason for creating the profile, specify a profile Description. For example, you might
include an incident identification number or a link to a help desk ticket.
3. Select one or more endpoint protection modules that will allow this process to run. The modules displayed in the list are the modules relevant to the
operating system defined for this profile.
To apply the process exception on the following exploit modules, select Disable Injection.
APC Guard, CPL Execution Protection, DEP, DLL Hijacking Protection, DLL Security, EPM D02, Exception Heap Spray Check, Exception
SysExist Check, Exploit Kit Fingerprinting Protection, Font Protection, Hot Patch Protection, JIT Mitigation, Library Preallocation, Memory Limit
Heap Spray Check, Null Dereference Protection, Password Theft Protection, ROP Mitigation, SEH Protection, Shellcode Preallocation, UASLR
You can return to the Process Execution profile from the Endpoint Profile page at any point and edit the settings, for example, to add or remove
security modules.
1. Import the JSON file you received from the Palo Alto Networks support team by either browsing for it in your files or by dragging the file onto the page.
2. Click Create.
Behavioral Threat Protection Rule Exception: When you view an alert for a Behavioral Threat event that you want to allow in your network from now
on, right-click the alert and Create alert exception. Review the alert data (Platform and Rule name) and select from the following options as
needed.
CGO signer: CGO signer entity (for Windows and Mac only)
CGO command arguments: CGO command arguments. This option is available only if CGO process path is selected, and only if you are
using Cortex XDR Agent 7.5 or later on your endpoints. After selecting this option, check the full path of each relevant command argument
within quote marks. You can edit the displayed paths if needed.
Digital Signer Exception: When you view an alert for a Digital Signer Restriction that you want to allow in your network from now on, right-click the
alert and Create alert exception. Cortex XDR displays the alert data (Platform, Signer, and Generating Alert ID). Select Exception Scope: Profile
and select the exception profile name. Click Add.
Java Deserialization Exception: When you identify a Suspicious Input Deserialization alert that you believe to be benign and want to suppress
future alerts, right-click the alert and Create alert exception. Cortex XDR displays the alert data (Platform, Process, Java executable, and
Generating Alert ID). Select Exception Scope: Profile and select the exception profile name. Click Add.
Local File Threat Examination Exception: When you view an alert for a PHP file that you want to allow in your network from now on, right-click the
alert and Create alert exception. Cortex XDR displays the alert data (Process, Path, and Hash). Select Exception Scope: Profile and select the
exception profile name. Click Add.
Gatekeeper Enhancement Exception: When you view a Gatekeeper Enhancement security alert for a bundle or specific source-child combination
you want to allow in your network from now on, right-click the alert and Create alert exception. Cortex XDR displays the alert data (Platform, Source
Process, Target Process, and Alert ID). Select Exception Scope: Profile and select the exception profile name. Click Add. This exception allows
Cortex XDR to continue enforcing the Gatekeeper Enhancement protection module on the source process running other child processes.
If you want to remove an exceptions profile from your network, go to the Profiles page, right-click, and select Delete.
Abstract
Learn how to define and manage global endpoint policy exceptions in Cortex XDR.
As an alternative to adding an endpoint-specific exception in policy rules, you can define and manage global exceptions that apply across all of your
endpoints. On the Global Exception page, you can manage all the global exceptions in your organization for all platforms. Profiles associated with one or more
targets that are beyond your defined user scope are locked and cannot be edited.
Starting with version 3.5, Cortex XDR enables you to manage the Global Endpoint Policy exceptions from a central location and easily apply them across
multiple profiles in the Legacy Agent Exceptions management page.
To manage the prevention profile exceptions from Exception Configuration, you must first migrate your existing exceptions configured via the Global
exceptions.
Your migrated rules are displayed on the Settings → Exception Configurations → Legacy Agent Exceptions page. For more information about the
migration, see Exception configuration.
To create new global endpoint policy exceptions using the Legacy Agent Exceptions page, see Add a legacy exception rule.
If you don't migrate the legacy exceptions, you can continue to add exceptions as described below.
Configure exception rules for Cortex XDR protection and prevention actions in a centralized location, and apply them across multiple profiles.
c. Select one or more Endpoint Protection Modules that will allow this process to run. The modules displayed on the list are the modules relevant to
the operating system defined for this profile. To apply the process exception on all security modules, Select all. To apply the process exception on
all exploit security modules, select Disable Injection. Click the adjacent arrow to add the exception.
The new process exception is added to the Global Exceptions in your network and will be applied across all rules and policies. To edit the exception,
select it and click the edit icon. To delete it, select it and click the delete icon.
Configure support exception rules for Cortex XDR protection and prevention actions in a centralized location, and apply them across multiple profiles.
Import the JSON file you received from the Palo Alto Networks support team by either browsing for it in your files or by dragging and
dropping the file on the page.
3. Click Save.
The new support exception is added to the Global Exceptions in your network and will be applied across all rules and policies.
When you view a Behavioral Threat alert in the Alerts table which you want to allow across your organization, you can create a global exception for that rule.
2. Review the alert data (platform and rule name) and then select from the following options as needed:
b. CGO signer: CGO signer entity (for Windows and Mac only).
d. CGO command arguments: CGO command arguments. This option is available only if CGO process path is selected, and only if you are using
Cortex XDR Agent 7.5 or later on your endpoints. After selecting this option, check the full path of each relevant command argument within quote
marks. You can edit the displayed paths if needed.
3. Click Create.
The relevant BTP exception is added to the Global Exceptions in your network and will be applied across all rules and policies. At any point, you can
click the Generating Alert ID to return to the original alert from which the exception originated. To delete a specific global exception, select it and click X.
You cannot edit global exceptions generated from a BTP security event.
Add a global credential gathering protection exception
When you view a Credential Gathering Protection alert in the Alerts table that you want to allow across your organization, you can create a global exception for
that rule.
1. Right-click the Credential Gathering Protection alert and select Create alert exception.
2. Review the alert data (platform and module name) and then select from the following options as needed:
b. CGO signer: CGO signer entity (for Windows and Mac only).
d. CGO command arguments: CGO command arguments. This option is available only if CGO process path is selected, and only if you are using
Cortex XDR agent 7.5 or later on your endpoints. After selecting this option, check the full path of each relevant command argument within quote
marks. You can edit the displayed paths if needed.
3. Click Create.
The relevant exception is added to the Global Exceptions in your network and will be applied across all rules and policies. At any point, you can click the
Generating Alert ID to return to the original alert from which the exception originated. To delete a specific global exception, select it and click X.
You cannot edit global exceptions generated from a Credential Gathering Protection security event.
When you view an Anti Webshell Protection alert in the Alerts table that you want to allow across your organization, you can create a global exception for that
rule.
1. Right-click the Anti Webshell Protection alert and select Create alert exception.
2. Review the alert data (platform and module name) and then select from the following options as needed:
b. CGO signer: CGO signer entity (for Windows and Mac only).
d. CGO command arguments: CGO command arguments. This option is available only if CGO process path is selected, and only if you are using
Cortex XDR Agent 7.5 or later on your endpoints. After selecting this option, check the full path of each relevant command argument within quote
marks. You can edit the displayed paths if needed.
3. Click Create.
The relevant exception is added to the Global Exceptions in your network and will be applied across all rules and policies. At any point, you can click the
Generating Alert ID to return to the original alert from which the exception originated. To delete a specific global exception, select it and click X.
You cannot edit global exceptions generated from an Anti Webshell Protection security event.
When you view in the Alerts table a Local Analysis alert that was triggered as a result of local analysis rules, you can create a global exception to allow the
rules across your organization.
2. Review the alert data (platform and rule name) and select Exception Scope: Global.
The relevant Local Analysis Rules exception is added to the Global Exceptions in your network and will be applied across all rules and policies. The
exception allows all the rules that triggered the alert, and you cannot choose to allow only specific rules within the alert. At any point, you can click the
Generating Alert ID to return to the original alert from which the exception originated. To delete a specific global exception, select it and click X. You
cannot edit global exceptions generated from a local analysis security event.
Review advanced analysis exceptions
With Advanced Analysis, Cortex XDR can provide a secondary validation of Cortex XDR agent alerts raised by exploit protection modules. To perform the
additional analysis, Cortex XDR analyzes alert data sent by the Cortex XDR agent. If Advanced Analysis indicates an alert is benign, Cortex XDR can
automatically create exceptions and distribute the updated security policy to your endpoints.
By enabling Cortex XDR to automatically create and distribute global exceptions you can minimize disruption for users when they subsequently encounter the
same benign activity. To enable the automatic creation of Advanced Analysis Exceptions, configure the Advanced Analysis options in Settings →
Configurations → General → Agent Configurations.
For each exception, Cortex XDR displays the affected platform, exception name, and the relevant alert ID for which Cortex XDR determined activity was
benign. To drill down into the alert details, click the Generating Alert ID.
When you view in the Alerts table a Digital Signer Restriction alert for a digital signer you trust and want to allow from now on across your network, create a
Global Exception for that digital signer directly from the alert.
Review the alert data (Platform, Signer, and Alert ID) and select Exception Scope: Global.
2. Click Add.
The relevant digital signer exception is added to the Global Exceptions in your network and will be applied across all rules and policies. At any point, you
can click the Generating Alert ID to return to the original alert from which the exception originated. To delete a specific global exception, select it and
click X. You cannot edit global exceptions generated from a digital signer restriction security event.
When you view in the Alerts table a Suspicious Input Deserialization alert for a Java executable you want to allow from now on across your network, create a
global exception for that executable directly from the alert of the security event that prevented it.
Review the alert data (Platform, Process, Java executable, and alert ID) and select Exception Scope: Global.
2. Click Add.
The relevant Java deserialization exception is added to the Global Exceptions in your network and will be applied across all rules and policies. At any point, you
can click the Generating Alert ID to return to the original alert from which the exception originated. To delete a specific global exception, select it and
click X. You cannot edit global exceptions generated from a Suspicious Input Deserialization security event.
When you view in the Alerts table a Local Threat Detected alert for a PHP file you want to allow from now on across your network, create a global exception for
that file directly from the alert of the security event that prevented it.
Review the alert data (Process, Path, and Hash) and select Exception Scope: Global.
2. Click Add.
The relevant PHP file is added to the Global Exceptions in your network and will be applied across all rules and policies. At any point, you can click the
Generating Alert ID to return to the original alert from which the exception originated. To delete a specific global exception, select it and click X. You
cannot edit global exceptions generated from a Local File Threat Examination security event.
When you view a Gatekeeper Enhancement security alert in the Alerts table, you can create a global exception for this specific bundle or source-child
combination only, while allowing Cortex XDR to continue enforcing the Gatekeeper Enhancement protection module on the source process running other child
processes.
Review the alert data (Platform, Source Process, Target Process, and Alert ID) and select Exception Scope: Global.
2. Click Add.
The relevant source and target processes are added to the Global Exceptions in your network and will be applied across all rules and policies. At any
point, you can click the Generating Alert ID to return to the original alert from which the exception originated. To delete a specific global exception, select it and click X.
Select + Import/Export to Export your exceptions list and/or Import from File.
Abstract
Define an endpoint group and then apply policy rules and manage specific endpoints.
You can define an endpoint group and then apply policy rules and manage specific endpoints. If you set up Cloud Identity Engine, you can also leverage your
Active Directory user, group, and computer details to define endpoint groups.
Create a dynamic group by enabling Cortex XDR to populate your endpoint group dynamically using endpoint characteristics, such as an endpoint tag,
partial hostname or alias, full or partial domain or workgroup name, IP address, range or subnets, installation type (VDI, temporary session or standard
endpoint), agent version, endpoint type (workstation, server, mobile), or operating system version.
After you define an endpoint group, you can then use it to target policy and actions to specific recipients. The Endpoint Groups page displays all endpoint
groups along with the number of endpoints and policy rules linked to the endpoint group.
Upload From File using plain text files with a new line separator, to populate a static endpoint group from a file containing IP addresses,
hostnames, or aliases.
3. Enter a Group Name and optional description to identify the endpoint group. The name you assign to the group will be visible when you assign endpoint
security profiles to endpoints.
Dynamic: Use the filters to define the criteria you want to use to dynamically populate an endpoint group. Dynamic groups support multiple criteria
selections and can use AND or OR operators. For endpoint names and aliases, and domains and workgroups, you can use * to match any string
of characters. As you apply filters, Cortex XDR displays any registered endpoint matches to help you validate your filter criteria.
Static: Select specific registered endpoints that you want to include in the endpoint group. Use the filters, as needed, to reduce the number of
results.
When you create a static endpoint group from a file, the IP address, hostname, or alias of the endpoint must match an existing agent that has
registered with Cortex XDR. You can select up to 250 endpoints.
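As an illustration, such a plain text upload file simply lists one IP address, hostname, or alias per line; the following values are hypothetical:
10.1.2.15
finance-ws-041
lab-server-07.example.local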
Disconnecting Cloud Identity Engine in your Cortex XDR deployment can affect existing endpoint groups and policy rules based on Active Directory
properties.
After you save your endpoint group, it is ready for use to assign security profiles to endpoints and in other places where you can use endpoint groups.
At any time, you can return to the Endpoint Groups page to view and manage your endpoint groups. To manage a group, right-click the group and select the
desired action:
Edit: View the endpoints that match the group definition, and optionally refine the membership criteria using filters.
Save as new: Duplicate the endpoint group and save it as a new group.
Export group: Export the list of endpoints that match the endpoint group criteria to a tab separated values (TSV) file.
View endpoints: Pivot from an endpoint group to a filtered list of endpoints on the Endpoint Administration page where you can quickly view and initiate
actions on the endpoints within the group.
Abstract
The different Cortex XDR agents that operate on your endpoints require configuration of different global settings.
In addition to the customizable Agent Settings Profiles for each Operating System and different endpoint targets, you can configure global Agent
Configurations that apply to all the endpoints in your network.
The uninstall password is required to remove a Cortex XDR agent and to grant access to the agent security component on the endpoint. You can use the
default uninstall Password1 defined in Cortex XDR or set a new one and Save. This global uninstall password applies to all the endpoints (excluding
mobile) in your network. If you change the password later, the new default password applies to all new profiles and to all existing profiles that previously used the default.
If you want to use a different password to uninstall specific agents, you can override the default global uninstall password by setting a different password
for those agents in the Agent Settings profile. The selected password must satisfy the requirements enforced by Password Strength indicator.
A new password must satisfy the following Password Strength indicator requirements:
It must be 8 to 32 characters.
It must contain at least one upper-case, at least one lower-case letter, at least one number, and at least one of the following characters: !@#%.
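For example, a password such as Cortex#2024x (hypothetical) satisfies these requirements: it is 12 characters long and contains upper-case and lower-case letters, numbers, and the # character.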
Enable bandwidth control: Palo Alto Networks enables you to control your Cortex XDR agent network consumption by adjusting the bandwidth it is
allocated. Based on the number of agents (active or future) that you want to update with content and upgrade packages, the Cortex XDR
calculator recommends the amount of Mbps (megabits per second) required for a connected agent to retrieve a content update over
a 24-hour period or a week. Cortex XDR supports between 20 and 10,000 Mbps; you can enter one of the recommended values or enter one of your
own. For optimized performance and reduced bandwidth consumption, it is recommended that you install and update new agents using SCCM; Cortex
XDR agents 7.3 and later include the content package built in.
Enable minor content version updates: The Cortex XDR research team releases more frequent content updates in-between major content versions
to ensure your network is constantly protected against the latest and newest threats in the wild. Enabled by default, the Cortex XDR agent receives
minor content updates, starting with the next content releases. To learn more about the minor content numbering format, refer to the About content
updates topic.
To control the amount of bandwidth allocated in your network to Cortex XDR content updates, assign a Content bandwidth management value between
20-10,000 Mbps. To help you with this calculation, Cortex XDR recommends the optimal value of Mbps based on the number of active agents in your
network, and including overhead considerations for large content updates. Cortex XDR verifies that agents attempting to download the content update
are within the allocated bandwidth before beginning the distribution. If the bandwidth has reached its cap, the download will be refused and the agents
will attempt again at a later time. After you set the bandwidth, Save the configuration.
5. Configure the Cortex XDR agent auto upgrade scheduler and number of parallel upgrades.
If Agent auto upgrades are enabled for your Cortex XDR agents, you can control the automatic upgrade process in your network. To better control the
rollout of a new Cortex XDR agent release in your organization, during the first week only a single batch of agents is upgraded. After that, auto-upgrades
continue to be deployed across your network with the number of parallel upgrades you configured.
Amount of Parallel Upgrades: Set the number of parallel agent upgrades, where the maximum is 500 agents.
Days in week: You can schedule the upgrade task for specific days of the week and a specific time range. The minimum range is four hours.
6. Configure automated Advanced Analysis of Cortex XDR Agent alerts raised by exploit protection modules.
Advanced Analysis is an additional verification method you can use to validate the verdict issued by the Cortex XDR agent. In addition, Advanced
Analysis also helps Palo Alto Networks researchers tune exploit protection modules for accuracy.
To initiate additional analysis you must retrieve data about the alert from the endpoint. You can do this manually on an alert-by-alert basis or you can
enable Cortex XDR to automatically retrieve the files.
After Cortex XDR receives the data, it automatically analyzes the memory contents and renders a verdict. When the analysis is complete, Cortex XDR
displays the results in the Advanced Analysis field of the Additional data view for the data retrieval action on the Action Center. If the Advanced Analysis
verdict is benign, you can avoid subsequent blocked files for users that encounter the same behavior by enabling Cortex XDR to automatically create
and distribute exceptions based on the Advanced Analysis results.
Automatically apply Advanced Analysis exceptions to your Global Exceptions list. This will apply all Advanced Analysis exceptions
suggested by Cortex XDR, regardless of the alert data file source.
7. Configure the Cortex XDR Agent license revocation and deletion period.
This configuration applies to standard endpoints only and does not impact the license status of agents for VDIs or Temporary Sessions.
Connection Lost (Days): Configure the number of days after which the license should be returned when an agent loses the connection to
Cortex XDR. Default is 30 days; Range is 2 to 60 days. Day one is counted as the first 24 hours with no connection.
Agent Deletion (Days): Configure the number of days after which the agent and related data is removed from the Cortex XDR management
console and database. Default is 180 days; Range is 3 to 360 days and must exceed the Connection Lost value. Day one is the first 24
hours of lost connection.
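As a worked example with the default values, an agent that loses its connection on January 1 returns its license after 30 days of no connection (around January 31) and, if it never reconnects, is removed from the management console and database after 180 days (around June 30); the deletion period must always exceed the Connection Lost value.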
The WildFire analysis score for files with a Benign verdict is used to indicate the level of confidence WildFire has in the Benign verdict. For example, a file
signed by a trusted signer or a file that was tested manually gets a high confidence Benign score, whereas a file that did not display any suspicious behavior at
the time of testing gets a lower confidence Benign score. To add an additional verification method to such files, enable this setting. After this, when
Cortex XDR receives a Benign Low Confidence verdict, the agent enforces the Malware Security profile settings you currently have in place (Run local
analysis to determine the file verdict, Allow, or Block).
Disabling this capability takes immediate effect on new hashes, fresh agent installations, and existing security policies. It could take up to a week to take
effect on existing agents in your environment pending agent caching.
Behavioral threat protection (BTP) alerts have been given unique and informative names and descriptions, to provide immediate clarity into the events
without having to drill down into each alert. Enable this setting to display the informative BTP rule alert names and descriptions. After you update the settings, new
alerts include the changes while already existing alerts remain unaffected.
If you have any Cortex XDR filters, starring policies, exclusion policies, scoring rules, log forwarding queries, or automation rules configured for
XSOAR/3rd party SIEM, we advise you to update those to support the changes before activating the feature. For example, change the query to include
the previous description that is still available in the new description, instead of searching for an exact match.
10. Configure settings for periodic cleanup of duplicate entities in the endpoint administration table.
When enabled, Periodic duplicate cleanup removes all duplicate entries of an endpoint from the endpoint table based on the defined parameters,
leaving only the last occurrence of the endpoint reporting to the server. This enables you to streamline and improve the management of your endpoints.
For example, when an endpoint reconnects after a hardware change, it may be re-registered, leading to confusion in the endpoint administration table
regarding the real status of the endpoint. The cleanup leaves only the latest record of the endpoint in the table.
Define whether to clean up according to Host Name, Host IP Address, MAC Address, or any combination of them. If not selected, the default is
Host Name. When you select more than one parameter, duplicate entries are removed only if they include all the selected parameters.
Configure the frequency of the cleanup—every 6 hours, 12 hours, 1 day, or 7 days. You can also select to perform an immediate One-time
cleanup.
Data for a deleted endpoint is retained for 90 days since the endpoint’s last connection to the system. If a deleted endpoint reconnects, Cortex XDR
recovers its existing data.
Abstract
Learn how to apply security profiles to your endpoints, depending on the platform used.
Cortex XDR provides out-of-the-box protection for all registered endpoints with a default security policy customized for each supported platform type. To
configure your security policy, customize the settings in a security profile and attach the profile to a policy.
Each policy you create must apply to one or more endpoints or endpoint groups. The Prevention Policy Rules table lists all the policy rules per operating
system. Rules associated with one or more targets that are beyond your defined user scope are locked and cannot be edited.
When importing a policy, select whether to enable the associated policy targets. Rules within the imported policy are managed as follows:
Rules without a defined target are disabled until the target is specified.
Select Endpoints → Policy Management → Prevention → Profiles, right-click the profile you want to assign and click Create a new policy rule using
this profile.
2. Define a Policy Name and optional Description that describes the purpose or intent of the policy.
3. Select the Platform for which you want to create a new policy.
4. Select the desired Exploit, Malware, Restrictions, and Agent Settings profiles you want to apply in this policy.
If you do not specify a profile, the Cortex XDR agent uses the default profile.
5. Click Next.
6. Use the filters to assign the policy to one or more endpoints or endpoint groups.
Cortex XDR automatically applies the platform filter you selected and, if it exists, the Group Name according to the groups within your defined user
scope.
7. Click Done.
8. In the Policy Rules table, change the rule position, if needed, to order the policy relative to other policies.
The Cortex XDR agent evaluates policies from top to bottom. When the Cortex XDR agent finds the first match it applies that policy as the active policy.
To move the rule, select the arrows and drag the policy to the desired location in the policy hierarchy.
Right-click to View Policy Details, Edit, Save as New, Disable, and Delete.
9. Export policy.
Select one or more policies, right-click and select Export Policies. You can include the associated Policy Targets, Global Exceptions, and endpoint
groups.
Abstract
Learn how to create a Cortex XDR agent installation package to deploy to your endpoints.
To install the Cortex XDR agent on the endpoint for the first time, create an agent installation package. Review Where can I install the Cortex XDR agent for
supported versions and operating systems.
To install the Cortex XDR agent software, you must use a valid installation package that exists in your Cortex XDR management console. If you delete an
installation package, new agents installed from this package are not able to register to Cortex XDR; however, existing agents may re-register using the Agent
ID generated by the installation package.
3. Enter a unique name and an optional description to identify the installation package.
The package name can contain letters, numbers, hyphens, underscores, commas, and spaces, and should not exceed 100 characters.
Upgrade from ESM: Use this package to upgrade Traps agents which connect to the on-premises Traps Endpoint Security Manager to Cortex
XDR. For more information, see Migrate from Traps Endpoint Security Manager.
(Linux only) Kubernetes Installer: Use for fresh installations and upgrades of Cortex XDR agents running on Kubernetes clusters.
Settings for the Kubernetes installer cannot be changed after you create the installation package.
For the Agent Daemonset Namespace, it is recommended to use the default cortex-xdr namespace.
For a more granular deployment, enter any labels or selectors in the Node Selector. The Cortex XDR agent will be deployed only on these
nodes.
To configure the Cortex XDR agent to communicate through a proxy, enter either the IP address and port number or enter the FQDN and
port number. When you enter the FQDN, you can use both lowercase and uppercase letters. Avoid using special characters or spaces. Use
commas to separate multiple addresses.
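For example (hypothetical values, with the host:port form shown only for illustration), a proxy entry might be 203.0.113.10:8080 or proxy1.example.com:8080, and multiple entries can be separated with commas, such as proxy1.example.com:8080, proxy2.example.com:8080.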
Helm Installer: Use this package for fresh installations and upgrades of Cortex XDR agents running on Kubernetes clusters.
5. Select the platform and relevant settings, and then click Create.
Cortex XDR prepares your installation package and displays it on the Agent Installations page.
When the status of the package shows Completed, right-click the package, and click Download.
Cortex XSIAM provides out-of-the-box exploit and malware protection. However, at minimum, you must enable Data Collection in an Agent Settings profile to
leverage endpoint data in Cortex XSIAM apps.
Abstract
Learn how to make changes such as deleting an agent installation package or editing the package name.
You can manage agent installation packages on the Agent Installations page. To manage a specific package, right-click the agent version, and select the
desired action:
Delete the installation package. Deleting an installation package does not uninstall the Cortex XDR agent software from any endpoints.
Since Cortex XDR relies on the installation package ID to approve agent registration during the installation, we recommend that you don't delete the
installation package of active endpoints. If you install the Cortex XDR agent from a package after you delete it, Cortex XDR denies the registration
request, leaving the agent in an unprotected state. Hiding the installation package removes it from the default list of available installation packages, and
can be useful for preventing confusion within the management console main view. The hidden installation package can be viewed by removing the default filter.
Copy text to clipboard to copy the text from a specific field in the row of an installation package.
Hide installation packages. Using the Hide option provides a quick method to filter out results based on a specific value in the table. You can also use
the filters at the top of the page to build a filter from scratch. To create a persistent filter, save ( ) it.
Abstract
By hardening your endpoints with Cortex XDR agent, you can make these endpoints more secure and safer from attackers.
You can extend the security on your endpoints beyond the Cortex XDR agent built-in prevention capabilities to provide increased network security coverage
within your organization. By leveraging existing mechanisms and added capabilities, the Cortex XDR agent can enforce additional protections on your
endpoints to provide a comprehensive security posture.
From Endpoints → Policy Management → Extensions → Profiles, you can create profiles for the following hardened endpoint security capabilities.
Device control
Host firewall
Disk encryption
Field: Description
Created Time: Date and time at which the profile was created
Modification Time: Date and time at which the profile was modified
To apply the profiles, from Endpoints → Policy Management → Extensions → Policy Rules, you can view all the policy rules per operating system. Rules
associated with one or more targets that are beyond your defined user scope are locked and cannot be edited.
The following table describes for each capability the supported platforms and minimal agent version. A dash (—) indicates the setting is not supported.
Hardened endpoint security capabilities are not supported for Android endpoints.
Device Control: Windows ✓, macOS ✓, Linux –
Protects endpoints from loading malicious files from USB-connected removable devices (CD-ROM, disk drives, floppy disks, and Windows Portable Devices drives).
Minimum agent version: Cortex XDR agent 7.0 and later (Windows); Cortex XDR agent 7.2 and later (macOS). For VDI, Cortex XDR agent 7.3 and later.
Host Firewall: Windows ✓, macOS ✓, Linux –
Protects endpoints from attacks originating in network communications to and from the endpoint.
Minimum agent version: Cortex XDR agent 7.1 and later (Windows); Cortex XDR agent 7.2 and later (macOS).
Disk Encryption: Windows ✓, macOS ✓, Linux –
Provides visibility into endpoints that encrypt their hard drives using BitLocker or FileVault.
Minimum agent version: Cortex XDR agent 7.1 and later (Windows); Cortex XDR agent 7.2 and later (macOS).
Abstract
Protect your Windows and macOS-based endpoints from connecting to malicious USB-connected removable devices, to Bluetooth devices, and to print jobs.
By default, all external USB and Bluetooth devices are allowed to connect to your Windows and macOS-based Cortex XDR endpoints, and all print jobs are
allowed. To protect endpoints from connecting to removable devices that can contain malicious files, such as disk drives, CD-ROM drives, floppy disk drives,
Bluetooth devices, and other portable devices, Cortex XDR provides device control. Different types of print jobs can also be blocked.
(Windows and macOS) Block all supported USB-connected devices for an endpoint group.
(Windows and macOS) Block a USB device type but add to your allow list a specific vendor from that list that will be accessible from the endpoint.
(Windows only) Block connections to Classic Bluetooth devices or Low Energy Bluetooth services. These are two different Bluetooth protocols used for
short-range wireless connections.
Some examples of Classic Bluetooth devices include: laptop computers, tablets, telephones, audio/video devices, wearables, peripherals,
imaging devices, health devices, toys, and so on.
Some examples of Low Energy Bluetooth devices include: telephone alert status, microphone control, health sensors, insulin delivery, location and
navigation, object transfer, and so on.
(Windows and macOS) Block some, or all, print jobs to local or network printers, or to file.
Depending on your defined user scope permissions, creating device profiles, policies, exceptions, and violations may be disabled.
When you enable device control protection for the first time, some devices that are already connected to the endpoint (or paired, in case of Bluetooth) will not
be immediately affected by the change.
The profile change will affect the connected device after one of the following occurs:
A computer restart
For Bluetooth, toggling Bluetooth off and on, or manually unpairing the device
The following are prerequisites to enforce device control policy rules on your endpoints:
Platform: Prerequisites
Windows: For VMware Horizon, you must disable Sharing → Allow access to removable storage in your VMware Horizon client settings.
Mac: No prerequisites
Platform: Limitations
Windows VDI: Virtual environments leverage different stacks that might not be subject to the Device Control policy rules that are enforced by the Cortex XDR agent and, therefore, could lead to USB devices being allowed to connect to the VDI instance in contrast to the configured policy rules. The Cortex XDR agent provides best-effort enforcement of the Device Control policy rules on VDI instances that are running on physical endpoints where a Cortex XDR agent is not deployed. For Citrix Virtual Apps and Desktops, Cortex XDR Device Control is supported on generic virtual channels only.
Windows: If a profile is set to block specific Bluetooth Low Energy (BLE) services, Cortex XDR only blocks the services set to Block, and not the functionality of the entire device. This means that if a device has multiple services, some of them might still be accessible, while others are blocked. Cortex XDR attempts to aggregate all related BLE services so that they appear under a single logical Bluetooth device control violation report. However, some Bluetooth devices might be reported in a separate violation report due to the way these devices are paired in the Windows operating system and because they reside outside the device container. Cortex XDR cannot block low energy services or report device control violations on devices that do not report any LE services. The devices can, however, be blocked completely by setting the entire Bluetooth device to Block. Exceptions can only be created when the Vendor field for the device is available in a violation report. Exceptions for specific BLE devices cannot be created from a violation report; exceptions for such devices can only be created by disabling the blocked LE services in the policy.
macOS: No limitations
To apply device control in your organization, define device control profiles that determine which device types Cortex XDR blocks, and which it permits. There
are two types of profiles:
Profile: Description
Device Configuration Profile: Configures the action for each group of device types.
When Print Jobs is set to Block, all print jobs sent from the endpoint will be blocked. Printing to file (Windows only) blocks print jobs that are saved as a file; this option only blocks the print driver. For network printer print jobs, ensure that you also configure the Agent Settings profile Network Location Configuration option; this setting must be set to Enabled, and configured. If you do not enable and configure this setting, all network printer operations will be treated as internal network print jobs. The Print Job option does not block connections to a printer, but blocks print jobs according to the type of print job. You cannot block use of a specific printer with this feature. Any print job that is not sent via the endpoint's printer spooler, such as a file uploaded to a remote software-based printing service, will not be blocked.
The Cortex XDR agent relies on the device class assigned by the operating system. For Windows endpoints only, you can configure additional device classes.
Exceptions Profile: Allow specific devices according to device types and vendor. You can further specify a specific product and/or product serial number.
Device Configuration and Device Exceptions profiles are configured for each operating system separately. After you configure a device control profile, Apply
device control profiles to your endpoints.
1. In Endpoints → Policy management → Extensions → Profiles, select +Add Profile and then select either Create New or Import from File.
Assign the profile Name and add an optional Description. The profile Type and Platform are set by Cortex XDR.
For each group of device types, select the desired action. To use the default option defined by Palo Alto Networks, leave Use Default selected.
For Disk Drives only, you can also allow connecting in Read-only mode.
For Print Jobs, you can choose the Custom option, and then select the desired print job type.
For Bluetooth Devices, you can choose the Custom option, and then select the desired Bluetooth Classes or Low Energy Services type.
Currently, the default is set to Use Default (Allow); however, Palo Alto Networks may change the default definition at any time.
In XQL Search, to view connect and disconnect events of USB devices that are reported by the agent, the Device Configuration must be set to
Block. Otherwise, the USB events are not captured. The events are also captured when a group of device types are blocked on the endpoints with
a permanent or temporary exception in place. For more information, see Ingest connect and disconnect events of USB devices.
You cannot edit or delete the default profiles pre-defined in Cortex XDR.
6. (Optional) To define exceptions to your Device Configuration profile, Add a new exceptions profile.
1. In Endpoints → Policy management → Extensions → Profiles, select + New Profile or Import from File.
Assign the profile Name and add an optional Description. The profile Type and Platform are set by the system.
You can add devices to your allow list according to different sets of identifiers: vendor, product, and serial numbers.
Type: Select the device type that you want to add to the allow list: Bluetooth, CD-ROM, Disk Drive, Floppy Disk, or Windows Portable Devices
(Windows only).
(Disk Drives only) Permission: Select the permissions you want to grant: Read only or Read/Write.
Vendor: Select a specific vendor from the list or enter the vendor ID in hexadecimal code.
(Optional) Product: Select a specific product (filtered by the selected vendor) to add to your allow list, or add your product ID in hexadecimal
code.
(Optional) Serial Number: Enter a specific serial number (pertaining to the selected product) to add to your allow list. Only devices with this serial
number are included in the allow list. If you want to add serial number where the last character is a space character, use quotation marks. For
example, "K04M1972138 ".
After you define the required profiles for Device Configuration and Exceptions, you must configure Device Control policies and enforce them on your endpoints.
Cortex XDR applies Device Control policies on endpoints from beginning to end, as you’ve ordered them on the page. The first policy that matches the
endpoint is applied. If no policies match, the default policy that enables all devices is applied.
When enabling Device Control protection for the first time, some devices that are already connected (or paired in case of Bluetooth) to the machine will not be
immediately affected by the change. The profile change will affect the connected device after one of the following occurs:
A computer restart
For Bluetooth devices only: Bluetooth toggled off and on, or manual unpairing of the device.
1. In Endpoints → Policy management → Extensions → Policy Rules, select + New Policy or Import from File.
When importing a policy, select whether to enable the associated policy targets. Rules within the imported policy are managed as follows:
Rules without a defined target are disabled until the target is specified.
a. Assign a policy name and select the platform. You can add a description.
b. Assign the Device Type profile you want to use in this rule.
c. Click Next.
d. Use filters or manual endpoint selection to define the exact target endpoints of the policy rules. If it exists, the Group Name is filtered according to
the groups within your defined user scope.
e. Click Done.
Drag the policies in the desired order of execution. The default policy that enables all devices on all endpoints is always the last one on the page and is
applied to endpoints that don’t match the criteria in the other policies.
After the policy is saved and applied to the agents, Cortex XDR enforces the device control policies on your environment.
In the Protection Policy Rules table, you can view and edit the policy you created and the policy hierarchy.
b. Right-click to View Policy Details, Edit, Save as New, Disable, and Delete.
c. Select one or more policies, right-click and select Export Policies. You can choose to include the associated Policy Targets, Global Exceptions,
and endpoint groups.
After you apply Device Control rules in your environment, you can use the Endpoints → Device Control Violations page to monitor all instances where
end users attempted to connect restricted devices or print jobs, and Cortex XDR blocked them on the endpoint. All violation logs are displayed on the
page. You can sort the results and use the filters menu to narrow down the results. For each violation event, Cortex XDR logs the following event details,
where relevant and available for each device or print job:
Agent ID
User name
IP address
Type of device
Product name
Additional Information
Major Class
Minor Class
Vendor Type
If you see a violation for which you’d like to define an exception on the device that triggered it, right-click the violation and select one of the following
options:
Add device to permanent exceptions: To ensure this device is always allowed in your network, select this option to add the device to the Device
Permanent Exceptions list, the type of Permissions, and an optional comment.
Add device to temporary exceptions: To allow this device only temporarily on the selected endpoint or on all endpoints, select this option and set
the allowed time frame for the device, the type of Permissions, and an optional comment.
Add device to a profile exception: Select this option to allow the device within an existing Device Exceptions profile, the type of Permissions, and
an optional comment.
To better deploy device control in your network and allow further granularity, you can add devices on your network to your allow list and grant them
access to your endpoints. Device control exceptions are configured per device and you must select the device category, vendor, and type of permission
that you want to allow on the endpoint. Optionally, to limit the exception to a specific device, you can also include the product and/or serial number.
Permanent Exceptions Permanent exceptions approve the device in your network across all Device Control policies and profiles. You can create them directly from the violation event that blocked the device, or through the Permanent Exceptions list. Permanent exceptions apply across platforms, allowing the devices on all operating systems.
Temporary Exceptions Temporary exceptions approve the device for a specific time period up to 30 days. You create a temporary exception directly from the violation event that blocked the device.
Profile Exceptions Profile exceptions approve the device in an existing exceptions profile. You create a profile exception directly from the violation event that blocked the device.
Permanent device control exceptions are managed in the Permanent Exception list and are applied to all devices regardless of the endpoint
platform.
If you know in advance which device you’d like to allow throughout your network, create a general exception from the list:
1. Go to Endpoints → Policy Management → Extensions and select Device Permanent Exceptions on the left menu. The list of existing
Permanent Exceptions is displayed.
3. (Optional) Select a specific product and/or enter a specific serial number for the device.
4. Click the adjacent arrow and Save. The exception is added to the Permanent Exceptions list and will be applied in the next heartbeat.
Otherwise, you can create a permanent exception directly from the violation event that blocked the device in your network:
1. On the Device Control Violations page, right-click the violation event triggered by the device you want to permanently allow.
2. Select Add device to permanent exceptions. Review the exception data and change the defaults if necessary.
3. Click Save.
1. On the Device Control Violations page, right-click the violation event triggered by the device you want to temporarily allow.
2. Select Add device to temporary exceptions. Review the exception data and change the defaults if necessary. For example, you can
configure the exception to this endpoint only or to all endpoints in your network, or set which device identifiers will be included in the
exception.
3. Configure the exception Time Frame by defining the number of days or number of hours during which the exception will be applied, up to 30
days.
4. Click Save. The exception is added to the Device Temporary Exceptions list and will be applied in the next heartbeat.
1. On the Device Control Violations page, right-click the violation event triggered by the device you want to add to a Device Exceptions profile.
3. Save. The exception is added to the exceptions profile and will be applied in the next heartbeat.
(Windows only) You can include custom USB-connected device classes beyond Disk Drive, CD-ROM, Windows Portable Devices, and Floppy Disk Drives,
such as USB-connected network adapters. When you create a custom device class, you must supply Cortex XDR with the official ClassGuid identifier used by
Microsoft. Alternatively, if you configured a GUID value for a specific USB-connected device, you must use this value for the new device class. After you add a
custom device class, you can view it in Device Management and enforce any device control rules and exceptions on this device class.
Select +New Device. Set a Name for the new device class, and supply a valid and unique GUID Identifier. For each GUID value, you can define one
class type only.
3. Save.
The new device class is now available in Cortex XDR as all other device classes.
You can personalize the Cortex XDR notification pop-up on the endpoint when the user attempts to connect a USB device that is either blocked on the
endpoint or allowed in read-only mode. To edit the notifications, refer to Set up agent settings profiles.
The Cortex Query Language (XQL) supports the ingestion of connect and disconnect events of USB devices that are reported by the agent. To view these USB
device events in XQL Search, you must set the Device Configuration of the endpoint profile to Block. Otherwise, the USB events are not captured. The events
are also captured when a group of device types is blocked on the endpoints with a permanent or temporary exception in place. For more information, see
Add a new configuration profile.
You can use XQL Search to query for this data and build widgets based on the xdr_data dataset, where the following use cases are supported:
Displaying devices by Vendor ID, Vendor Name, Product ID, and Product Name.
Displaying the hosts to which a specific device, identified by its serial number, is connected.
Querying for USB devices that are connected to specific hosts or groups of hosts.
This query returns the action_device_usb_product_name field from all xdr_data records, where the event_type is DEVICE and the
event_sub_type is DEVICE_PLUG.
dataset = xdr_data
| filter event_type = DEVICE and event_sub_type = DEVICE_PLUG
| fields action_device_usb_product_name
This query returns the action_device_usb_vendor_name field from all device_control records (preset of the xdr_data dataset) where the
event_type is DEVICE.
preset = device_control
| filter event_type = DEVICE
| fields action_device_usb_vendor_name
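The following is a minimal sketch of the third use case, querying for the hosts to which a specific USB device is connected. Only the dataset, event_type, event_sub_type, and action_device_usb_product_name values come from the examples above; the action_device_usb_serial_number and agent_hostname field names are assumptions based on the same naming convention, so verify them against your xdr_data schema before use.
dataset = xdr_data
| filter event_type = DEVICE and event_sub_type = DEVICE_PLUG
| filter action_device_usb_serial_number = "<serial number>"
| fields agent_hostname, action_device_usb_product_name
| limit 100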
Abstract
Control communications on your endpoints based on the network location of your device by using the Cortex XDR host firewall.
The Cortex XDR host firewall enables you to control communications on your endpoints. To use the host firewall, you set rules that allow or block traffic on
the devices and apply them to your endpoints using host firewall policy rules. Additionally, you can configure different sets of rules based on the current
location of your endpoints, within or outside your organization network. The Cortex XDR host firewall rules leverage the operating system firewall APIs to
enforce these rules on your endpoints, but do not change your Windows or Mac firewall settings.
The following considerations apply when you enforce Cortex XDR host firewall policy rules on your endpoints:
By default, the Cortex XDR host firewall is disabled and the Windows firewall has control. Enforcing Cortex XDR host firewall rules takes control
away from the Windows firewall, and Windows firewall rules no longer apply.
We recommend disabling the Windows firewall on endpoints running Windows 7 SP1 before applying the
Cortex XDR host firewall profile.
After you disable or remove the Cortex XDR host firewall policy on the endpoint, the system firewall on the
endpoint is disabled.
Some Mac host firewall settings cannot be configured with the Cortex XDR host firewall.
Abstract
Control communications on your endpoints based on the network location of your device by using the host firewall.
Enforce the Cortex XDR host firewall policy in your organization to control communications on your endpoints and gain visibility into your network connections.
The host firewall policy consists of unique rules groups that are enforced hierarchically and can be reused across all host firewall profiles. The Cortex XDR host
firewall rules are integrated with the Windows Security Center; they leverage the operating system firewall APIs to enforce the rules on your endpoints, but
do not change your operating system firewall settings. Once you deploy the host firewall, use the Host Firewall Events table to track the enforcement events in your
organization.
To configure the Cortex XDR host firewall in your network, follow this high-level workflow:
Create rules within rule groups: Create host firewall rules groups that you can reuse across all host firewall profiles. Add rules to each group and
prioritize the rules from top to bottom to create an enforcement hierarchy.
Configure a profile: Add one or more rule groups to a host firewall enforcement profile that you later associate with an enforcement policy. The
profile can enforce different rules when the endpoint is located within the organization’s internal network, and when it is outside. Prioritize the groups
within the profile from top to bottom to create an enforcement hierarchy.
Configure a policy: Add your host firewall profile to a new or existing policy that will be enforced on selected target endpoints.
Monitor and troubleshoot: View aggregated host firewall enforcement events, or all single host firewall activities the agent performed in your network.
Cortex XDR Pro customers can also query the host firewall events using the new host_firewall_events dataset in XQL Search for data and network
analysis.
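As a starting point, a minimal query against this dataset might look like the following. Beyond the built-in _time field, no field names are assumed here; extend the query with fields from your own host_firewall_events schema.
dataset = host_firewall_events
| sort desc _time
| limit 100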
Group rules into Rules Groups that you can reuse across all host firewall profiles. A host firewall group includes one or more host firewall unique rules. The
rules are enforced according to their order of appearance within the group, from top to bottom. After you create a rules group, you can assign the group to a
host firewall profile. When you edit, re-prioritize, disable, or delete a rule from a group, the change takes effect in all policies where this group is included. To
support this scalability and structure, every rule in Cortex XDR is assigned a unique ID and must be contained within a group. Additionally, you can import
existing firewall rules into Cortex XDR, or export them in JSON format.
1. Create a group.
From Endpoints → Host Firewall → Host Firewall Rules Groups, click +New Group on the upper bar.
Enter the group name and optional description. To enforce the rules within the group in all policies they are associated with, Enable the group. When
Disabled, the group exists but is not enforced.
Create rules within rules groups to allow or block traffic on the endpoint. Use a variety of parameters to fine-tune your policy, such as specific protocols,
applications, services, and more. Each group has its own list of rules. Each rule is assigned a unique ID and can be associated with
a single group only.
A rule can belong to one rules group only and cannot be reused in different groups.
A host firewall rule allows or blocks the communication to and/or from an endpoint. Enter the rule Name, optional Description, and select the
Platforms you want to associate the rule with.
Any
Custom
TCP
UDP
ICMPv4
ICMPv6
Once you select one of the available protocols or enter the protocol number, you can specify additional parameters per protocol
as needed. For example, for TCP(6) you can set local and remote ports, whereas for ICMPv4(1) you can add the ICMP type and code.
When selecting an ICMP protocol, you must enter the ICMP Type and Code. Without these values, the ICMP protocol is ignored by the
Windows and macOS Cortex XDR agents.
Direction: Select the direction of the communication this rule applies to: Inbound communication to the endpoint, Outbound communication
from the endpoint, or Both.
Action: Select whether the rule action is to Allow or Block the communication on the endpoint.
Local/Remote IP Address: Configure the rule for specific local or remote IP addresses and/or Ports. You can set a single IP address,
multiple IP addresses separated by a comma, a range of IP addresses separated by a hyphen, or a combination of these options.
Depending on the type of platform you selected, define the Application, Service, and Bundle IDs of the Windows Settings and/or macOS
Settings—Configure the rule for all applications/services or specific ones only by entering the full path and name. If you use system variables
in the path definition, you must re-enforce the policy on the endpoint every time the directories and/or system variables on the endpoint
change.
Report Matched Traffic: When Enabled, enforcement events captured by this rule are reported periodically to Cortex XDR and displayed in
the Host Firewall Events table, whether the rule is set to Allow or Block the traffic. When Disabled, the rule is applied but enforcement events
are not reported periodically.
b. Save rule.
After you fill in all the details, you need to save the rule. If you know you need to create a similar rule, click Create another to save this rule and
leave the specified parameters available for edit for the next rule. Otherwise, to save the rule and exit, click Create.
4. Prioritize rules.
The rules within the group are enforced by priority from top to bottom. By default, every new rule is added to the top of the already existing rules in the
group, meaning it is assigned the highest priority and will be enforced first. To change the rules priority and order of enforcement within the group, click
the rule priority number and drag the rule up or down the table to the proper row. Repeat this process to prioritize all the rules.
5. Save.
When you are done, click Create. The new rules group is created and can be associated with a host firewall profile.
After you create a group, you can perform additional actions. From Endpoints → Host Firewall → Host Firewall Rules Groups, click a group:
View group data: From the Host Firewall Rules Groups table you can view details about all the existing rules groups in your organization. The table lists
high level information about the group such as name, mode, and number of rules included. To view all rules within a group and all the profiles the group
is associated with, click the expand icon.
Delete/Disable: To stop enforcing the rules within this group, right-click the group and Delete/Disable it. On the next heartbeat, its rules will be
removed/disabled from all profiles this group is associated with.
Import/Export group rules: Using a JSON file, you can import rules into the Cortex XDR host firewall or export them. Right-click the group and
Import/Export.
After you create a host firewall rule and assign it to a rules group, you can manage the rule settings and enforcement as follows.
Change priority: Change the rule priority within the group by dragging its row up and down the rules list.
Delete/Disable: To stop enforcing the rule, you can right-click the rule and Delete/Disable it. On the next heartbeat, the rule will be removed/disabled in
all profiles where this rules group is included.
Configure host firewall profiles that contain one or more rules groups. The groups are enforced according to their order of appearance within the profile, from
top to bottom (and within each group, the rules are also enforced from top to bottom). You can also configure profiles based on the device location within your
internal network. When you edit, re-prioritize, disable, or delete a rules group from a profile, the change takes effect on the next heartbeat in all policies where
this profile is included.
1. Create a profile.
From Endpoints → Policy Management → Extensions, select + Add Profile or Import from File.
When the profile operates in report mode, Cortex XDR overrides all rules set to Block traffic. Instead, the traffic is allowed to go through, and the
enforcement event is reported as Override Block. You can configure a profile in report mode if, for example, you need to test new block rules before you
actually apply them.
To apply location-based host firewall rules, you must first enable network location configuration in your Agent Settings Profile. When enabled, Cortex XDR
enforces the host firewall rules based on the current location of the device relative to the internal organization network (Internal Rules), enabling you, for
example, to enforce stricter rules when the device is outside the office and in a public place (External Rules). If you disable the Location Based
option, your policy applies only the internal set of rules, regardless of the device location.
To quickly apply the exact same rules in both cases, select Add as external/internal rules groups as well.
The groups are listed according to the order of enforcement from top to bottom. To change this order, click on the group priority number and drag
the group to the desired row.
Field Description
Applicable Rules Count Displays the number of rules in the specific group that are associated with the platform profile
Created by Displays the email address of the user that created the rule
Creation Time Date and time of when the rule was created
Modified by Displays the email address of the last user that made changes to the group
Modification Time Date and time of when the group was modified
d. (Optional) Select View Rules to view a list of all the rule details within the rules group. The table is filtered according to the rules associated with the
platform profile you are creating.
e. Set the Default Action for Inbound/Outbound Traffic in the profile to Allow or Block to determine how to handle network connections that do not
match any other rule in the profile.
When you are done, click Create. You can now configure a host firewall policy.
After you create the host firewall extensions profile, you can perform additional actions. The changes take effect on the next heartbeat. From Endpoints →
Policy Management → Extensions → Policy Rules, right-click to:
Edit: Change the profile settings and Save. The change takes effect in all policies enforcing this profile.
Delete: The profile is deleted from all policies it was associated with, while the rules groups are not deleted and are still available in Cortex XDR.
Export Profile: Select one or more profiles, right-click and select Export Profiles. You can choose to include the associated Policy Targets, Global
Exceptions, and endpoint groups.
After you define the required host firewall profiles, configure host firewall policies that will be enforced on your target endpoints. You can associate the profile
with an existing policy, or create a new one.
1. Create a policy.
From Endpoints → Policy Management → Extensions → Policy Rules, click +New Policy or Import from File.
When importing a policy, select whether to enable the associated policy targets. Rules within the imported policy are managed as follows:
Rules without a defined target are disabled until the target is specified.
3. Select profile.
Select the desired profile for host firewall from the drop-down list, and any other profiles you want to include in this policy. Click Next.
4. Select endpoints.
Select the target endpoints on which to enforce the policy. Use filters or manual endpoint selection to define the exact target endpoints of the policy.
Click Done.
Drag and drop the policies in the desired order of execution, from top to bottom.
The Host Firewall Events table provides an aggregated view of the host firewall enforcement events in your network. An enforcement event represents the
number of rule hits per endpoint in 60 minutes.
The data is aggregated and reported periodically every 60 minutes since the first time the host firewall policy was enforced on the endpoint, not every
round hour.
The table lists enforcement events only for rules set to Report Matched Traffic.
Every enforcement event includes additional data such as the time of the first rule hit, the rule action, protocol, and more.
To gain deeper visibility into all the host firewall activity that occurred on an endpoint, you can retrieve a log file listing all single actions the agent performed for
all rules (whether set to Report Matched Traffic or not). The logs are stored in a cyclic 50MB file on the endpoint, which is constantly rewritten, overriding
older logs. When you upload the file, the logs are loaded into the Host Firewall Events table. You can filter the table using the Event Source field to
view only the aggregated periodic logs, or only non-aggregated on-demand logs.
To collect the log file, right-click the event containing the endpoint you are interested in and select Collect Detailed Host Firewall Logs. Alternatively, you can
perform this action for multiple endpoints from Endpoints Administration.
Abstract
Control communications on your endpoints based on the network location of your device by using the host firewall.
The Cortex XDR host firewall enables you to control communications on your endpoints. To use the host firewall, you set rules that allow or block traffic on
the devices and apply them to your endpoints using Cortex XDR host firewall policy rules. Additionally, you can configure different sets of rules based on the
current location of your endpoints, within or outside your organization network. The Cortex XDR host firewall rules leverage the operating system firewall APIs
to enforce these rules on your endpoints, but do not change your Windows or Mac firewall settings.
In Cortex XDR 3.0, no change was made to the Host Firewall Configuration or operation on macOS endpoints. All existing policies configured in Cortex XDR
2.9 still apply and will continue to work as expected with Cortex XDR agent 7.2 or a later release. Enforcement events triggered by macOS endpoints are not
included in the Host Firewall Events table.
To configure the Cortex XDR host firewall in your network, follow this high-level workflow. Ensure you meet the host firewall requirements.
If you want to apply location-based host firewall rules, you must first enable network location configuration in your agent settings profile. On every heartbeat,
and if the Cortex XDR agent detects a network change on the endpoint, the agent triggers the device location test and re-calculates the policy according to
the new location.
Configure host firewall profiles that contain one or more rules groups. The groups are enforced according to their order of appearance within the profile, from
top to bottom (and within each group, the rules are also enforced from top to bottom). You can also configure profiles based on the device location within your
internal network. When you edit, re-prioritize, disable, or delete a rules group from a profile, the change takes effect on the next heartbeat in all policies where
this profile is included.
Rules created on macOS 10 and Cortex XDR agent 7.5 and prior are managed only in the Legacy Host Firewall Rules and do not appear in the Rule Groups
tables.
1. From Endpoints → Policy Management → Extensions → Profiles, select + New Profile or Import from File. Select the Platform and click Host
Firewall → Next.
When the profile operates in report mode, Cortex XDR overrides all rules set to Block traffic. Instead, the traffic is allowed to go through, and the
enforcement event is reported as Override Block. You can configure a profile in report mode if, for example, you need to test new block rules before you
actually apply them.
To quickly apply the exact same rules in both cases, select Add as external/internal rules groups as well.
The groups are listed according to the order of enforcement from top to bottom. To change this order, click on the group priority number and drag
the group to the desired row.
Field Description
Applicable Rules Count Displays the number of rules in the specific group that are associated with the platform profile
Created by Displays the email address of the user that created the rule
Creation Time Date and time of when the rule was created
Modified by Displays the email address of the last user that made changes to the group
Modification Time Date and time of when the group was modified
d. (Optional) Select View Rules to view a list of all the rule details within the rules group. The table is filtered according to the rules associated with the
platform profile you are creating.
Rules with the Any protocol type and specific ports cannot be edited. If saved as a new rule, the specific ports previously defined are removed from the cloned
rule.
e. Set the Default Action for Inbound/Outbound Traffic in the profile to Allow or Block to determine how to handle network connections that do not
match any other rule in the profile.
Manage Host Firewall Rules created on macOS 10 and Cortex XDR agent 7.5 and earlier.
a. Enable Manage Host Firewall to allow Cortex XDR to manage the host firewall on your Mac endpoints.
The host firewall settings allow or block inbound communication on your Mac endpoints. Enable or Disable the following actions:
Block All Incoming Connections: Select whether to block all incoming communications on the endpoint or not.
Application Exclusions: Allow or block specific programs running on the endpoint using a Bundle ID.
If the profile is location-based, you can define both internal and external settings.
After you define the required host firewall profiles, configure the Protection Policies and enforce them on your endpoints. Cortex XDR applies Protection policies
on endpoints from top to bottom, as you’ve ordered them on the page. The first policy that matches the endpoint is applied. If no policies match, the default
policy that enables all communication to and from the endpoint is applied.
1. From Endpoints → Policy Management → Extensions → Policy Rules, select +New Policy or Import from File.
When importing a policy, select whether to enable the associated policy targets. Rules within the imported policy are managed as follows:
Rules without a defined target are disabled until the target is specified.
b. Assign the host firewall profile you want to use in this rule.
c. Click Next.
d. Use filters or manual endpoint selection to define the exact target endpoints of the policy rules.
e. Click Done.
Alternatively, you can associate the host firewall profile with an existing policy. Right-click the policy and select Edit. Select the Host Firewall profile and
click Next. If needed, you can edit other settings in the rule, such as target endpoints and description. When you’re done, click Done.
After the policy is saved and applied to the agents, Cortex XDR enforces the host firewall policies on your environment.
To view only the communication events on the endpoint to which the Cortex XDR host firewall rules were applied, you can run the Cytool firewall show
command.
Additionally, to monitor the communication on your macOS endpoint, you can use the following operating system utilities: From the endpoint System
Preferences → Security and Privacy → Firewall → Firewall options, you can view the list of blocked and allowed applications in the firewall. The Cortex XDR
host firewall blocks only incoming communications on Mac endpoints, still allowing outbound communication initiated from the endpoint.
Abstract
For enhanced security, you can configure and apply disk encryption profiles to the disks of your Windows and Mac endpoints.
Cortex XDR provides full visibility into Windows and Mac endpoints that were encrypted using BitLocker and FileVault, respectively. Additionally, you
can apply Cortex XDR Disk Encryption rules on the endpoints by creating disk encryption rules and policies that leverage BitLocker and FileVault capabilities.
Before you start applying disk encryption policy rules, ensure you meet the following requirements and refer to these known limitations:
Endpoint Prerequisites (Windows) The endpoint must be running a Microsoft Windows version that supports BitLocker. The endpoint must be within the organization's network domain.
Endpoint Prerequisites (macOS) The endpoint must be running a macOS version that supports FileVault. The endpoint must be running a Cortex XDR agent 7.2 or later.
Disk Encryption Scope (Windows and macOS) You can enforce XDR disk encryption policy rules only on the Operating System volume.
Follow this high-level workflow to deploy the Cortex XDR disk encryption in your network:
You can monitor the Encryption Status of an endpoint in the Endpoints → Disk Encryption Visibility table. For each endpoint, the table lists both system and
custom drives that were encrypted.
The following table describes both the default and additional optional fields that you can view in the Disk Encryption Visibility table per endpoint. The fields are
in alphabetical order.
Field Description
Endpoint Status Status of the endpoint. For more information, see Manage endpoints.
Last Reported Date and time of the last change in the agent's status. For more information, see Manage endpoints.
Volume Status Lists all the disks on the endpoint along with the status per volume, Decrypted or Encrypted. For Windows endpoints, Cortex XDR includes the encryption method.
You can also monitor the endpoint Encryption Status in your Endpoint Administration table.
1. Under Endpoints → Policy Management → Extensions → Profiles, select + New Profile or Import from File. Choose the Platform and select Disk
Encryption. Click Next.
To enable the Cortex XDR agent to apply disk encryption rules using the operating system disk encryption capabilities, Enable the Use disk encryption
option.
For Windows:
For Mac:
In line with the operating system requirements, when the Cortex XDR agent attempts to enforce an encryption profile on an endpoint, the endpoint
user is required to enter the login password. Limit the number of login attempts to one or three. Otherwise, if you do not force login attempts, the
user can continuously dismiss the operating system pop-up and the Cortex XDR agent will never encrypt the endpoint.
For each operating system (Windows 7, Windows 8-10, Windows 10 (1511), and above), select the encryption method from the corresponding list.
You must select the same encryption method configured by the Microsoft Windows Group Policy in your organization for the target endpoints. Otherwise,
if you select a different encryption method than the one already applied through the Windows Group Policy, Cortex XDR displays errors.
To enable the Cortex XDR agent to encrypt your endpoint, or to help users who forgot their password to decrypt the endpoint, you must upload to Cortex
XDR the FileVaultMaster certificate / institutional recovery key (IRK). You must ensure the key is signed by a valid authority and upload a CER file only.
After you define the required disk encryption profiles, configure Protection Policies and enforce them on your endpoints. Cortex XDR applies Protection policies
on endpoints from top to bottom, as you’ve ordered them on the page. The first policy that matches the endpoint is applied. If no policies match, the default
policy that enables all communication to and from the endpoint is applied.
1. Under Endpoints → Policy Management → Extensions → Policy Rules, select +New policy or Import from File.
When importing a policy, select whether to enable the associated policy targets. Rules within the imported policy are managed as follows:
Rules without a defined target are disabled until the target is specified.
b. Assign the disk encryption profile you want to use in this rule.
c. Click Next.
d. Use filters or manual endpoint selection to define the exact target endpoints of the policy rules. If defined, the Group Name is filtered according to
the groups within your defined user scope.
e. Click Done.
Alternatively, you can associate the disk encryption profile with an existing policy. Right-click the policy and select Edit. Select the Disk Encryption profile
and click Next. If needed, you can edit other settings in the rule, such as target endpoints and description. When you’re done, click Done.
After the policy is saved and applied to the agents, Cortex XDR enforces the disk encryption policies on your environment.
5. Select one or more policies, right-click and select Export Policies. You can choose to include the associated Policy Targets, Global Exceptions, and
endpoint groups.
Abstract
Review the inventory of all your hosts (endpoints), and identify IT and security issues in your network.
With Host Inventory, you gain full visibility into the business and IT operational data on all your endpoints. By reviewing the inventory for all your
hosts in a single place, you can quickly identify IT and security issues that exist in your network, such as a suspicious service or autorun that was
added to an endpoint.
The Cortex XDR agent scans the endpoint every 24 hours for any updates and displays the data found over the last 30 days. Alternatively, you can rescan the
endpoint to retrieve the most updated data. It can take Cortex XDR up to 6 hours to collect initial data from all endpoints in your network.
The following are prerequisites to enable Host Inventory for your Cortex XDR instance:
Requirement Description
Setup and Permissions Ensure Host Inventory Data Collection is enabled for your Cortex XDR agent.
The Cortex XDR Host Inventory includes the following entities and information, according to the operating system running on the endpoint:
Entity Windows macOS Linux
Accessibility – ✓ –
Applications ✓ ✓ ✓
Autoruns ✓ ✓ ✓
Daemons – ✓ ✓
Disks ✓ ✓ ✓
Drivers ✓ – ✓
Extensions – ✓ –
Groups ✓ ✓ ✓
Mounts – ✓ ✓
Services ✓ – –
Shares ✓ ✓ ✓
System Information ✓ ✓ ✓
Users ✓ ✓ –
Users to Groups ✓ ✓ ✓
For each entity, Cortex XDR lists all the details about the entity, and the details about the endpoint it applies to. For example, the default Services view lists a
separate row for every service on every endpoint:
Alternatively, to better understand the overall presence of each entity on the total number of endpoints, you can switch to an aggregated view and
group the data by the main entity. You can also sort and filter according to the number of affected endpoints. For example, in the Services aggregated view,
you can sort by the number of affected endpoints to identify the least commonly deployed service in your network. To get a closer view of all endpoints,
right-click and select View affected endpoints.
To view the Host inventory, go to Incident Response → Investigation → Host Inventory. You can export the tables and respective asset views to a tab-
separated values (TSV) file.
Data Description
Accessibility Details about installed applications that require and were allowed special permissions to enable a camera, microphone, accessibility features, full disk access, or screen captures.
Applications For each application, Cortex XDR lists the existing CVEs and the vulnerability severity score that reflects the highest NIST vulnerability score detected for the application.
Autoruns Details about executables that start automatically when the user logs in or boots the endpoint. Cortex XDR displays information about autoruns that are configured in the endpoint Registry, startup folders, scheduled tasks, services, drivers, daemons, extensions, Crond tasks, login items, and login and logout hooks. For each autorun, Cortex XDR lists the autorun type and configuration, such as startup method, CMD, user details, and image path.
Daemons Information about the daemon, such as the name, type, and path.
Disks For each disk that exists on an endpoint, Cortex XDR lists details such as the drive type, name, file system, free space, and total size.
Drivers For each driver, Cortex XDR lists all the following details:
Information about the driver, such as the driver name, type, and path
Driver type
Whether the driver is currently running, in which mode, and the runtime state
Extensions Details about the system and kernel extensions currently running on your Mac endpoints.
Groups For each group, Cortex XDR lists identifying details, such as name, SID/GID name, and type.
Mounts Details about all the drives, volumes, and disks that were mounted on endpoints. For each mount, Cortex XDR lists the mount point directory, file system type, mount spec, and GUID.
Services For each service, Cortex XDR lists all the following details:
Information about the service, such as the service name, type, and path
Whether the service is currently running and what is the runtime state
Whether you can stop, pause, or delay the service start time
The name of the user who started the service and the start mode
Shares For each folder, Cortex XDR lists all the following details:
Shared network folder type: Disk Drive, Print Queue, Device, IPC, Disk Drive Admin, Print Queue Admin, Device Admin, IPC Admin
Whether the folder is limited to a maximum number of shares, and the maximum number of allowed shares
System Information For each endpoint, Cortex XDR lists all the following details:
Information about the endpoint hardware, such as manufacturer, model, physical memory, processor architecture, and CPU
Users For each user, Cortex XDR lists all the following details:
Details about the account, such as whether the account is active and the account type
Information about the password set for this user account, such as whether it is required to log in, has an expiration date, or can be changed
Users to Groups A list mapping all the users, local and in your domain, to the existing user groups on an endpoint.
Cortex XDR includes only the first 10,000 results per endpoint.
Cortex XDR lists only users that belong to each group directly, and does not include users who belong to a group within the main group.
If a local users group includes a domain user (whose credentials are stored on the Domain Controller server and not on the endpoint), Cortex XDR includes this user in the user-to-group mapping, but does not include it in the user's insights view.
Abstract
Perform a vulnerability assessment of all endpoints in your network using Cortex XDR. This includes CVE, endpoint, and application analysis.
Cortex XDR vulnerability assessment enables you to identify and quantify the security vulnerabilities on an endpoint. After evaluating the risks to which each
endpoint is exposed and the vulnerability status of an installed application in your network, you can mitigate and patch these vulnerabilities on all the endpoints
in your organization.
For a comprehensive understanding of the vulnerability severity, Cortex XDR retrieves the latest data for each Common Vulnerabilities and Exposures (CVE)
from the NIST National Vulnerability Database, including CVE severity and metrics.
The following are prerequisites for Cortex XDR to perform a vulnerability assessment of your endpoints.
Requirement Description
Windows
Cortex XDR lists only CVEs relating to the operating system, and not CVEs relating to applications provided by other vendors.
Cortex XDR retrieves the latest data for each CVE from the NIST National Vulnerability Database as well as from the Microsoft Security Response Center (MSRC).
Cortex XDR collects KB and application information from the agents but calculates CVEs only for KBs, based on the data collected from MSRC and other sources.
For endpoints running Windows Insider, Cortex XDR cannot guarantee an accurate CVE assessment.
Cortex XDR does not display open CVEs for endpoints running Windows releases for which Microsoft no longer fixes CVEs.
Linux
Cortex XDR collects all the information about the operating system and the installed applications, and calculates CVEs based on the latest data retrieved from the NIST.
macOS
Cortex XDR collects only the applications list from macOS, without CVE calculation.
If Cortex XDR doesn't match any CVE to its corresponding application, an error message is displayed: "No CVEs Found".
Setup and Permissions Ensure Host Inventory Data Collection is enabled for your Cortex XDR agent.
Limitations Cortex XDR calculates CVEs for applications according to the application version, and not according to application build numbers.
The Enhanced Vulnerability Assessment mode uses an advanced algorithm to collect extensive details on CVEs from comprehensive databases and to
produce an in-depth analysis of the endpoint vulnerabilities. Turn on the Enhanced Vulnerability Assessment mode from Settings → Configurations →
Vulnerability Assessment. This option may be disabled for the first few days after updating Cortex XDR as the Enhanced Vulnerability Assessment engine is
initialized.
The following are prerequisites for Cortex XDR to perform an Enhanced Vulnerability Assessment of your endpoints.
Requirement Description
Cortex XDR collects all the information about the operating system and the installed applications, and calculates CVEs based on the latest data retrieved from the NIST.
CVEs that apply to applications that are installed by one user aren't detected when another user without the application installed is logged in during the scan.
macOS
Cortex XDR collects all the information about the operating system and the installed applications, and calculates CVEs based on the latest data retrieved from the NIST.
Setup and Permissions Ensure Host Inventory Data Collection is enabled for your Cortex XDR agent.
Limitations Some CVEs may be outdated if the Cortex XDR agent wasn't updated recently.
Application versions that have reached end-of-life (EOL) may have their version listed as 0. This doesn't affect the detection of the CVEs.
Some applications are listed twice. One of the instances may display an invalid version; however, this doesn't affect the functionality.
The scanning process may impact the performance of the Cortex XDR agent while the scan is in progress. The scan may take up to two minutes.
You can access the Vulnerability Assessment panel from Assets → Vulnerability Assessment.
Collecting the initial data from all endpoints in your network could take up to 6 hours. After that, Cortex XDR initiates periodical recalculations to rescan the
endpoints and retrieve the updated data. If at any point you want to force data recalculation, click Recalculate. The recalculation performed by any user on a
tenant updates the list displayed to every user on the same tenant.
CVE Analysis
To evaluate the extent and severity of each CVE across your endpoints, you can drill down into each CVE in Cortex XDR and view all the endpoints and
applications in your environment that are impacted by the CVE. Cortex XDR retrieves the latest information from the NIST public database. From Assets → Host
Insights → Vulnerability Assessment, select CVEs on the upper-right bar. This information is also available in the va_cves dataset, which you can use to build
queries in XQL Search.
If you have the Identity Threat Module enabled, you can also view the CVE analysis in the Host Risk View. To do so, from Assets → Asset Scores, select
the Hosts tab, right click on any endpoint, and select Open Host Risk View.
For each vulnerability, Cortex XDR displays the following default and optional values.
Value Description
Affected endpoints The number of endpoints that are currently affected by this CVE. For excluded CVEs, the affected endpoints are N/A.
You can click each individual CVE to view in-depth details about it on a panel that appears on the right.
Excluded Indicates whether this CVE is excluded from all endpoint and application views and filters, and from all Host Insights widgets.
Platforms The name and version of the operating system affected by this CVE.
Severity The severity level (Critical, High, Medium, or Low) of the CVE as ranked in the NIST database.
Severity score The CVE severity score is based on the NIST Common Vulnerability Scoring System (CVSS). Click the score to see the full CVSS description.
You can perform the following actions from Cortex XDR as you analyze the existing vulnerabilities:
View CVE details—Left-click the CVE to view in-depth details about it on a panel that appears on the right. Use the in-panel links as needed.
View a complete list of all endpoints in your network that are impacted by a CVE—Right-click the CVE and then select View affected endpoints.
Learn more about the applications in your network that are impacted by a CVE—Right-click the CVE and then select View applications.
Exclude irrelevant CVEs from your endpoints and applications analysis—Right-click the CVE and then select Exclude. You can add a comment if
needed, as well as Report CVE as incorrect for further analysis and investigation by Palo Alto Networks. The CVE is grayed out and labeled Excluded
and no longer appears on the Endpoints and Applications views in Vulnerability Assessment, or in the Host Insights widgets. To restore the CVE, you can
right-click the CVE and Undo exclusion at any time.
The CVE will be removed/reinstated to all views, filters, and widgets after the next vulnerability recalculation.
Endpoint Analysis
To help you assess the vulnerability status of an endpoint, Cortex XDR provides a full list of all installed applications and existing CVEs per endpoint and also
assigns each endpoint a vulnerability severity score that reflects the highest NIST vulnerability score detected on the endpoint. This information helps you to
determine the best course of action for remediating each endpoint. From Assets → Vulnerability Assessment, select Endpoints on the upper-right bar. This
information is also available in the va_endpoints dataset. In addition, the host_inventory_endpoints preset lists all endpoints, CVE data, and additional
metadata regarding the endpoint information. You can use this dataset and preset to build queries in XQL Search.
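For example, the sketch below pulls records from the host_inventory_endpoints preset mentioned above. Only the preset name comes from this guide; the endpoint_name field used in the filter is an assumed name for illustration, so adjust it to the fields that actually appear in your tenant.
preset = host_inventory_endpoints
| filter endpoint_name = "<host name>"
| limit 50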
For each vulnerability, Cortex XDR displays the following default and optional values.
Value Description
CVEs A list of all CVEs that exist on applications that are installed on the endpoint.
You can click each individual endpoint to view in-depth details about it on a panel that appears on the right.
Last Reported Timestamp The date and time of the last time the Cortex XDR agent started the process of reporting its application inventory to Cortex XDR.
Severity The severity level (Critical, High, Medium, or Low) of the CVE as ranked in the NIST database.
Severity score The CVE severity score based on the NIST Common Vulnerability Scoring System (CVSS). Click the score to see the full CVSS description.
You can perform the following actions from Cortex XDR as you investigate and remediate your endpoints:
View endpoint details—Left-click the endpoint to view in-depth details about it on a panel that appears on the right. Use the in-panel links as needed.
View a complete list of all applications installed on an endpoint—Right-click the endpoint and then select View installed applications. This list
includes the name and version of the applications on the endpoint. If an installed application has known vulnerabilities, Cortex XDR also
displays the list of CVEs and the highest Severity.
(Windows only) Isolate an endpoint from your network—Right-click the endpoint and then select Isolate the endpoint before or during your
remediation to allow the Cortex XDR agent to communicate only with Cortex XDR.
(Windows only) View a complete list of all KBs installed on an endpoint—Right-click the endpoint and then select View installed KBs. This list
includes all the Microsoft Windows patches that were installed on the endpoint and a link to the Microsoft official Knowledge Base (KB) support article.
This information is also available in the host_inventory_kbs preset, which you can use to build queries in XQL Search (see the example query after this list).
Retrieve an updated list of applications installed on an endpoint—Right-click the endpoint and then select Rescan endpoint.
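As referenced in the list above, a minimal sketch of a query against the host_inventory_kbs preset follows. Only the preset name comes from this guide; the agent_hostname field in the filter is an assumption and may need to be replaced with the field names in your schema.
preset = host_inventory_kbs
| filter agent_hostname = "<host name>"
| limit 100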
Application Analysis
You can assess the vulnerability status of applications in your network using the Host inventory. Cortex XDR compiles an application inventory of all the
applications installed in your network by collecting from each Cortex XDR agent the list of installed applications. For each application on the list, you can see
the existing CVEs and the vulnerability severity score that reflects the highest NIST vulnerability score detected for the application. Any new application
installed on the endpoint will appear in Cortex XDR within 24 hours. Alternatively, you can re-scan the endpoint to retrieve the most updated list.
Starting with macOS 10.15, Mac built-in system applications are not reported by the Cortex XDR agent and are not part of the Cortex XDR Application
Inventory.
To view the details of all the endpoints in your network on which an application is installed, right-click the application and select View endpoints.
To view in-depth details about the application, left-click the application name.
Abstract
After you install the Cortex XDR agent and the agent registers with Cortex XDR, you can set endpoints to run with a Cortex XDR agent Critical Environment (CE)
version.
CE versions are designed for sensitive and highly regulated environments. These versions receive full content update coverage and contain the same feature
set as the standard line they are based on. Note that some bug fixes that introduce a higher stability risk may not be incorporated into the maintenance
releases of these lines. Support is provided for CE versions for 24 months, while support for standard versions is provided for 9 months.
To ensure the stability of the line, the maintenance release cadence for CE versions is longer than for the standard line; we recommend adjusting your
deployment accordingly.
Setting an endpoint with a CE agent version requires you to define your agent configurations which then allows you to do the following:
b. Click Enable Critical Environment Versions to be Created and Installed in the Tenant.
Navigate to Endpoints → All Endpoints table and locate the Version Type field to view whether the endpoint is defined as a Standard or Critical
Environment agent.
Abstract
Set an application-specific proxy for the Cortex XDR agent without affecting the communication of other applications on the endpoint.
This capability is supported on endpoints with Traps agent 5.0.9 (Windows only) or Cortex XDR agent 7.0 and later releases.
In environments where agents communicate with the Cortex XDR server through a system-wide proxy, you can set an application-specific proxy for the Traps
and Cortex XDR agent without affecting the communication of other applications on the endpoint. You can set the proxy during the agent installation, after
installation using Cytool on the endpoint, or from All Endpoints in Cortex XDR.
You can assign up to five different proxy servers per agent. The proxy server the agent uses is selected randomly and with equal probability. If communication
fails between the agent and the Cortex XDR server through the app-specific proxies, the agent resumes communication through the system-wide proxy
defined on the endpoint. If that also fails, the agent directly resumes communication with Cortex XDR.
3. Select the row of the endpoint for which you want to set a proxy.
4. Right-click the endpoint and select Endpoint Control → Set Agent Proxy.
5. You can assign up to five different proxies per agent. For each proxy, enter the IP address and port number. For Cortex XDR agents 7.2.1 and later, you
can also configure the proxy by entering the FQDN and port number. When you enter the FQDN, you can use all lowercase letters or all uppercase
letters. Avoid using special characters or spaces.
6. Click Set.
7. If required, you can Disable Agent Proxy from the right-click menu.
When you disable the proxy configuration, all proxies associated with that agent are removed. The agent resumes communication with the Cortex XDR
server through the system-wide proxy. If a system-wide proxy is not defined, the agent resumes direct communication with the Cortex XDR server. If
neither a system-wide proxy nor direct communication exists, the agent will disconnect from Cortex XDR.
Abstract
Learn how to pair Prisma Cloud Compute with Cortex XDR for use with the Cortex XDR Agent for Cloud.
Cortex XDR and Prisma Cloud Compute offer a unified cloud security agent for Windows and Linux. The Cortex XDR Agent for Cloud provides end-to-end
prevention and vulnerability coverage on Linux cloud environments.
Cortex XDR Agent for Cloud has a single management server that is based on a Cortex XDR tenant. Policy management, data, and alerts are first managed
between the Cortex XDR tenant and Cortex XDR Agent for Cloud, and then runtime protection and vulnerability coverage can be provided on Prisma Cloud
Compute and Cortex XDR.
Prerequisites
To enable the capabilities of Cortex XDR Agent for Cloud, the Prisma Cloud Compute tenant must be paired with an existing Cortex XDR tenant. Pairing is one
to one, with the two tenants being in the same region.
Pairing Prisma Cloud Compute to Cortex XDR can only be done when both Cortex XDR and Prisma Cloud Compute tenants are already active.
The following are required in order for Windows agents to be visible in the Prisma Cloud Compute tenant:
The tenant must have the XDR Cloud per Host license and the Host insights add-on.
In the XDR Pro Endpoints section of the Agent Settings profile, enable the XDR Pro Endpoints Capabilities and then enable the Host Insights capabilities.
Vulnerabilities are based on the Cortex vulnerability assessment engine, which may be different than Prisma vulnerability scanning.
Cloud metadata information is not reported for results collected by the Cortex XDR agent running on a Windows endpoint and Compliance Assessment is not
supported by the Cortex XDR agent for Cloud.
1. From the Prisma Cloud Compute console, copy the access pairing key.
b. Click the copy icon to copy the Access Key, which is the pairing key used in Cortex XDR.
a. Select Settings → Configurations → Server Settings, and scroll to Prisma Cloud Compute Tenant Pairing.
After a few seconds, the Cortex XDR and Prisma Cloud Compute tenants are paired.
In Cortex XDR, select Settings → Configurations → Server Settings, and scroll to Prisma Cloud Compute Tenant Pairing.
In Prisma Cloud Compute, select Manage → System, and scroll to Pair Cortex XDR Tenant.
2. Click Unpair.
Note that all Advanced Vulnerability settings (under the Agent Settings profile) will be reset and all Agent Installations created via the Prisma Cloud
Compute console will be deleted.
After a few seconds, the Cortex XDR and Prisma Cloud Compute tenants are unpaired.
When unpairing, the Active Vulnerability Analysis Module under the Agent Settings profile is reset to Disable mode.
If Prisma Cloud and Cortex XDR are to be paired again, the Active Vulnerability Analysis Module must be enabled manually.
Abstract
The Cortex XDR agent is installed on each of your endpoints, and you can manage the agents using Cortex XDR.
The Cortex XDR agent is installed on each of your endpoints, and you can perform various management activities on the agents, using Cortex XDR.
Abstract
Endpoint tags enable multiple layers of segmentation to your endpoints. An endpoint tag is a dynamic entity that is created and assigned to one or more
endpoints. The assigned endpoint tags can then be used to create Endpoint Groups, Policies, and Actions.
The following explanations use Windows operating system installation parameters and Cytool argument examples.
An endpoint tag can be created during installation of the Cortex XDR agent.
An endpoint tag can be created after installation either from the Cortex XDR agent or from the Cortex XDR management console.
Linux does not support tag names with spaces as command line arguments to the shell installer. Instead, tags can be set in the /etc/panw/cortex.conf
configuration file, which supports all Linux installers.
Add an endpoint tag after installation
1. Navigate to the Cytool folder location and open the CLI as an administrator.
2. Select one or more endpoints, right-click, and select Endpoint Control → Assign Endpoint Tags.
3. Select Add tag... and choose one or more tags from the list of existing tags or begin to type a new tag name to Create tag.
4. (This step requires administrator permissions) To assign the tag to users or user groups, select Add selected tags to Users or Groups, and select
the relevant Users and/or User Groups.
Depending on where you created your tag, Server or Agent, you can choose to edit or remove the tags.
If you remove the tag and there are assigned users or user groups with scope settings, this can impact user permissions in the system.
1. Navigate to the Cytool folder location and open the CLI as an administrator.
2. Select one or more endpoints, right-click, and select Endpoint Control → Remove Endpoint Tags.
1. Navigate to the Cytool folder location and open the CLI as an administrator.
All Server and Agent tags associated with the specific endpoint are displayed. Tags created in the XDR agent are displayed with a shield icon.
2. Filter and search the Tags field for the endpoint tags you have created and assigned.
Abstract
To identify one or more endpoints by a name that is different from the endpoint hostname, you can configure an alias. You can set an alias for a single endpoint
or set an alias for multiple endpoints in bulk. To quickly search for the endpoints during an investigation and when you need to take action, you can use either
the endpoint hostname or the alias.
If you change your mind in the future, you can select Endpoint Control → Change Endpoint Alias again, and delete the required aliases.
6. Use the Quick Launcher to search the endpoints by alias across Cortex XDR.
Abstract
You can manage the endpoint prevention profiles of your Cortex XDR agent endpoints in various ways, including editing, duplicating, and populating endpoint
prevention policy rules.
After you create and customize your endpoint prevention profiles, you can manage them from the Prevention Profiles page as needed.
View the prevention policy rules that use a specific prevention profile
Before you modify or delete a profile, you can check which policy rules, if any, use the profile.
From Endpoints → Policy Management → Prevention → Profiles, right-click the profile and select View Policy Rules.
Cortex XDR opens the Prevention Policy Rules page in a new tab. The page is filtered to display only the rules that use the profile that you selected.
Edit a profile:
1. From Endpoints → Policy Management → Prevention → Profiles, right-click the profile and select Edit.
Export a profile:
1. From Endpoints → Policy Management → Prevention → Profiles, right-click the profile and select Export Profile.
Duplicate a profile:
1. From Endpoints → Policy Management → Prevention → Profiles, right-click the prevention profile and select Save as New. A new profile is displayed,
containing the values from the profile that you selected.
2. Edit the profile name and description, edit any values that you want to change, and then click Create.
Delete a profile:
1. If necessary, delete or detach any policy rules that use the profile before attempting to delete it.
2. From Endpoints → Policy Management → Prevention → Profiles, locate the profile that you want to remove. The profile's Usage Count cell must have a 0
(zero) value.
1. From Endpoints → Policy Management → Prevention → Profiles, right-click the profile and select Create a new policy rule using this profile.
Cortex XDR automatically populates the Platform selection based on your profile configuration, and assigns the profile based on the profile type.
3. Assign any additional profiles that you want to apply to your policy rule, and click Next. A list of endpoints is displayed.
4. Select the target endpoints for the policy rule, or use the filters to define criteria for the policy rule to apply, and then click Next.
The following table displays the fields that are available on the Prevention Profiles page, in alphabetical order. The table includes both default fields and
additional fields that are available in the column manager. To view this page, go to Endpoints → Policy Management → Prevention → Profiles.
Field Description
Associated Targets The endpoints or endpoint groups to which the profile is assigned
Created Time The date and time at which the prevention profile was created
Modification Time The date and time at which the prevention profile was modified
Usage Count The number of policy rules that use the profile. If you want to delete a profile, ensure that this cell
displays "0".
Abstract
You can upgrade the Cortex XDR agent software by using the appropriate method for the endpoint operating system.
After you install the Cortex XDR agent and the agent registers with Cortex XDR, you can upgrade the Cortex XDR agent software using a method supported by
the endpoint platform:
Android: Upgrade the app directly from the Google Play Store or push the app to your endpoints from an endpoint management system such as
AirWatch.
iOS: Upgrade the app directly from the Apple App Store (agent version 8.6 or later), or push the app to your endpoints from an endpoint management
system.
Windows, Mac, or Linux: Create new installation packages and push the Cortex XDR agent package to up to 5,000 endpoints from Cortex XDR.
The following list includes important points to take into account when upgrading the Cortex XDR agent:
You cannot upgrade the Cortex XDR agent on VDI endpoints or a Golden Image. You must reinstall (uninstall and install again) the relevant agent version on the Golden Image.
In a Citrix App Layering environment, the Golden Image must be installed on the OS layer only. Every new agent version must be installed on an OS layer version where
the agent was not previously installed; you cannot reinstall the agent on the Golden Image in a Citrix App Layering environment.
You must ensure that the System Extensions were approved on the endpoint. If the extensions were not approved, they remain on the endpoint after the upgrade with no
option to remove them, which could cause the agent to display unexpected behavior. To check whether the extensions were approved, verify that the endpoint is in a
Fully Protected state in Cortex XDR, or run the following command on the endpoint to list the extensions: systemextensionsctl list. If you need to approve the
extensions, follow the workflow for approving System Extensions in the Cortex XDR agent administration guide.
Upgrades are supported using actions that you can initiate from the Action Center or from All Endpoints as described in this workflow.
1. Create an agent installation package for each operating system version for which you want to upgrade the Cortex XDR agent.
If needed, filter the list of endpoints. To reduce the number of results, use the endpoint name search and the Filters at the top of the page.
You can also select endpoints running different operating systems to upgrade the agents at the same time.
4. Right-click your selection and select Endpoint Control → Upgrade Agent Version.
For each platform, select the name of the installation package you want to push to the selected endpoints.
You can install the Cortex XDR agent on Linux endpoints using a package manager. If you do not want to use the package manager, clear the option
Upgrade to installation by package manager.
When you upgrade an agent on a Linux endpoint that is not using a package manager, Cortex XDR by default upgrades the installation according to the
endpoint's Linux distribution.
The Cortex XDR agent keeps the name of the original installation package after every upgrade.
5. Click Upgrade.
Cortex XDR distributes the installation package to the selected endpoints at the next heartbeat communication with the agent. To monitor the status of
the upgrades, go to Response → Action Center.
From the Action Center you can also view additional information about the upgrade (right-click the action and select Additional data) or cancel the
upgrade (right-click the action and select Cancel Agent Upgrade).
Custom dashboards that include upgrade status widgets, and the All Endpoints page display upgrade status.
During the upgrade process, the endpoint operating system might request a reboot. However, you do not have to reboot for the Cortex XDR agent upgrade
process to complete successfully.
After you upgrade an endpoint with Cortex XDR Device Control rules, you must reboot the endpoint for the rules to take effect.
Abstract
You can restart an agent from the Cortex XDR tenant. This action is hidden by default.
As soon as the action is confirmed, the restart command triggers a restart of the agent on the endpoint.
1. From Cortex XDR, navigate to Endpoints → All Endpoints. Select the relevant endpoint to restart, press Alt and right-click, select Endpoint Control →
Restart Agent, and click OK.
2. Select I agree and then click OK to confirm restarting the agent on all selected endpoints.
Abstract
Uninstall Cortex XDR agent from one or more endpoints at any time using the Action Center, or one-by-one using the All Endpoints page.
If you want to uninstall the Cortex XDR agent from an endpoint, you can do so from the Cortex XDR tenant at any time. You can uninstall the agent from an
unlimited number of endpoints in a single bulk action using the Action Center, or uninstall it from endpoints one by one using the All Endpoints page.
Uninstalling the agent from an endpoint triggers the following lifespan flow:
Once you uninstall the agent from the endpoint, the action is immediate. All agent files and protections are removed from the endpoint, leaving the
endpoint unprotected.
The endpoint status changes to Uninstalled, and the license returns immediately to the license pool. After a retention period of 7 days, the agent is
deleted from the database and is displayed in Cortex XDR as Endpoint Name - N/A (Uninstalled).
Data associated with the deleted endpoint is displayed in the Action Center tables and the Causality View for the standard 90-day retention period.
Alerts that already include the endpoint data at the time of the alert creation are not affected.
Before upgrading a Cortex XDR agent 7.0 or later running on macOS 10.15.4 or later, you must ensure that the System Extensions were approved on the
endpoint. If the extensions were not approved, they remain on the endpoint after the upgrade with no option to remove them, which could cause the agent
to display unexpected behavior. To check whether the extensions were approved, verify that the endpoint is in a Fully Protected state in Cortex XDR, or
run the following command on the endpoint to list the extensions: systemextensionsctl list. If you need to approve the extensions, follow the workflow
for approving System Extensions in the Cortex XDR agent administration guide.
For iOS and Android endpoints, uninstallation will reset account registration and data, but the app itself will remain on the device until removed locally by
the user. The endpoint will be disconnected, and the user will no longer be able to connect the app to the tenant account.
4. Click Next.
5. Select the target endpoints (up to 100) for which you want to uninstall the Cortex XDR agent.
6. Click Next.
2. Find and then right-click the agent that you want to uninstall, and select Endpoint Control → Uninstall Agent.
3. In the confirmation dialog box that appears, select I agree, and click OK.
Abstract
If you have an endpoint that you no longer want to track through Cortex XDR, for example, if the endpoint disconnected from Cortex XDR, or an endpoint
where the Cortex XDR agent was uninstalled, you can delete the endpoint from the Cortex XDR tenant views. Deleting an endpoint triggers the following
lifespan flow:
Data associated with the deleted endpoint is displayed in the Action Center tables and in the Causality View for the standard 90-day retention period.
Alerts that already include the endpoint data at the time of alert creation are not affected.
Additionally, Cortex XDR automatically deletes agents after a long period of inactivity.
Standard agents are deleted after 180 days of inactivity, where day one is the first 24 hours of continuous inactivity.
The following workflow describes how to delete the Cortex XDR agent from one or more Windows, Mac, or Linux endpoints.
You can also select multiple endpoints if you want to perform a bulk delete.
Abstract
Manage tokens per agent to retrieve the password used to run functions at the agent.
You can run some of the agent functions that require an administrative password by using a unique token shared between Cortex XDR and the Cortex XDR agent.
Rolling token: Automatically generated per endpoint every fourteen days by the system and then sent to the relevant agent.
Temporary token: Set a temporary token that is valid anywhere from one to twenty-one days.
Agent tokens are supported from Cortex XDR server version 3.3 and Cortex XDR agent version 7.7.1, and only for Windows and Mac.
You can view the password of the selected agent. Whether the password is from a rolling token or a temporary token is indicated in the dialog.
b. Click the copy button to copy the password displayed and then click Ok.
You can now use the password to run functions at the agent.
You can generate a temporary token for any of the agents for a specified number of days, from one to twenty-one. If the agent is disconnected,
it receives the temporary token when it reconnects.
You can select a single endpoint or many endpoints at once to add a temporary token.
b. In the Token Expiration field, add the number of days for which to generate a temporary token for the agent and then click the Add Token
Expiration blue arrow.
c. Click the copy button to copy the password displayed and then click Create to begin generating the token.
d. Go to the Action Center to view which agent received the temporary token.
You can now use the password to run functions at the agent.
3. Retrieve the token using the token hash from the endpoint.
If the endpoint was disconnected from the server at the point the rolling token was updated, it is not possible to run agent functions with the updated
token from the server. You can still retrieve the password to run functions at the agent.
a. From the agent, run the token query command using cytool.exe. This command displays the current token of the endpoint.
b. Copy the token from the command line interface of the agent.
e. Click the copy button to copy the password displayed and then click Ok.
You can now use the password to run functions at the agent.
Abstract
Learn how to retrieve the password to access files from the Tech Support File (TSF), which is generated in a zip format protected by an encrypted password.
From Cortex XDR agent version 7.8 and later, the Tech Support File (TSF) is generated in a zip format protected by an encrypted password. The TSF file is
archived inside another file which also includes a metadata file that contains a token. The token is used to retrieve the password to unzip the TSF file.
To retrieve the password for the TSF file from the endpoint, go to the Cortex XDR server from the Tokens and Passwords option.
To retrieve the password for the TSF file from the server, go to the Action Center.
a. At the top of the page, click Tokens and Passwords and select Retrieve Support File Password.
b. In the Retrieve Support File Password dialog box, in the Encrypted Password field, paste the token that you copied from the metadata file located
in the file saved when you ran the Cytool log collect command.
c. Click the copy button to copy the password displayed and then click Ok. Use the password to unzip the TSF file.
a. Right-click the relevant action of action type Support File Retrieval and select Additional Data.
c. In the Retrieve Support File Password dialog box, in the Encrypted Password field, paste the token that you copied from the metadata file located
in the download file.
d. Click the copy button to copy the password displayed and then click Ok. Use the password to unzip the TSF file.
Abstract
You can move Cortex XDR agents to other Cortex XDR managing servers.
You can move existing agents between Cortex XDR managing servers directly from Cortex XDR. This can be useful during migration, POCs, or to better
manage your agent allocation between tenants. When you change the server that manages the agent, the agent transfers to the new managing server as a
freshly installed agent, without any data that was stored on the original managing server. After the Cortex XDR agent registers with the new server, it can no
longer communicate with the previous one.
Ensure you are running a Cortex XDR agent 7.2 or a later release.
Ensure you have administrator privileges for Cortex XDR in the hub.
To register to another managing server, the Cortex XDR agent requires a distribution ID of an installation package on the target server in order to identify itself
as a valid Cortex XDR agent. The agent must provide an ID of an installation package that matches the same operating system for the same or a previous
agent version. For example, if you want to move a Cortex XDR Agent 7.0.2 for Windows, you can select from the target managing server the ID of an
installation package created for a Cortex XDR Agent 5.0.0 for Windows. The operating system version can be different.
Cortex XDR does not support moving agents between FedRamp and commercial tenants.
a. Log in to Cortex XDR on the target management server, then navigate to Endpoints → Agent Installations.
b. From the agent installations table, locate a valid installation package you can use to register the agent. Alternatively, you can create a new
installation package if required.
Log in to the current managing server of the Cortex XDR agent and navigate to Endpoints → All Endpoints.
a. Select one or more agents that you want to move to the target server.
b. Press Alt and right-click to open the options menu in advanced mode, and select Endpoint Control → Change managing server. This option is available
only for an administrator in Cortex XDR and for Cortex XDR agent 7.2 and later.
c. Enter the ID number of the installation package you obtained in Step 1. If you selected agents running on different operating systems, for example,
Windows and Linux, you must provide an ID for each operating system. When done, click Move.
When you track the action in the Action Center, the original managing server will continue to display the In progress (Sent) status even after the action has
completed successfully, because the agent no longer reports to this managing server. The new managing server adds this as a new agent registration action.
Abstract
In cases where your Cortex XDR agent is having issues, you can attempt a reset by clearing the Cortex XDR agent state of one or more endpoints.
Clearing the agent database is supported on all platforms with Cortex XDR agent version 7.9 or later. It is available only when using debugging mode, and the
action can be tracked in the Action Center.
a. Navigate to Endpoints → All Endpoints and select one or more endpoints for which you want to clear the database.
b. In the All Actions table, filter the Action Type field according to Agent Database Cleanup.
You can right-click to cancel a clear agent database action only while it has a pending status.
Abstract
You can push a notification to the Cortex XDR agent on the iOS device from Cortex XDR.
1. Navigate to Endpoints → All Endpoints and locate the required iOS device or devices.
Notification Action
Device Checkup When the App user taps the received notification, the Cortex XDR app
will open on the device, ready to perform the checkup.
Verify App Permissions If the Phone permissions are not set correctly for full protection, the user
is instructed to allow permission.
The App user must tap Open Permissions Wizard from the iOS device
Home screen and follow the wizard to enable and allow the required
settings for full protection.
Custom message Admin can send a message with a header and body text to designated
Cortex XDR App users. The App user will receive this textual message.
Abstract
You can view the operational status of any Cortex XDR agent that you manage.
From the Cortex XDR management console, you have full visibility into the XDR agent operational status on the endpoint, which indicates whether the agent is
providing protection according to its predefined security policies and profiles. By observing the operational status on the endpoint, you can identify when the
agent may suffer from a technical issue or misconfiguration that interferes with the agent’s protection capabilities or interaction with Cortex XDR and other
applications. The XDR agent reports the operational status as follows:
Protected: Indicates that the XDR agent is running as configured and did not report any exceptions to Cortex XDR.
Partially protected: Indicates that the XDR agent reported one or more exceptions to Cortex XDR.
Unprotected: Indicates the XDR agent is not enforcing protection on the endpoint.
Local Resource Impact: Indicates that the machine resources currently available to the XDR agent are not enough for the agent to operate smoothly.
You can monitor the Cortex XDR agent Operational Status in Endpoints → All Endpoints. If the Operational Status field is missing, add it.
The operational status that the agent reports varies according to the exceptions reported by the XDR agent.
Status Description
Protected (Windows, Mac, and Linux) Indicates all protection modules are running as configured
on the endpoint.
Partially protected (Windows, Mac, and Linux) Indicates the agent reported one or more exceptions. Any of several exception items could lead to a
partially protected state; refer to the Cortex XDR management console for the specific reasons for the state.
Abstract
You can monitor the activity of any Cortex XDR agent that you manage.
Viewing agent audit logs requires either a Cortex XDR Prevent or Cortex XDR Pro per Endpoint license.
The Cortex XDR agent logs entries for events that it monitors, and reports the logs back to Cortex XDR every hour. Cortex XDR stores
the logs for 365 days. To view the XDR agent logs, select Settings → Agent Auditing.
To ensure you and your colleagues stay informed about agent activity, you can Configure notification forwarding to forward your Agent Audit log to an email
distribution list, Syslog server, or Slack channel.
You can customize your view of the logs by adding or removing filters to the Agent Audits Table. You can also filter the page result to narrow down your search.
The following table describes the default and optional fields that you can view in the Cortex XDR Agents Audit Table:
Field Description
Category The XDR agent logs these endpoint events using one of the following categories:
Monitoring: Unsuccessful changes to the agent that may require administrator intervention.
Received Time Date and time when the action was received by the agent and reported back to Cortex XDR.
Severity The severity of the log entry:
Critical
High
Medium
Low
Informational
Type and Sub-Type Additional classification of agent log (Type and Sub-Type):
Installation:
Install
Uninstall
Upgrade
Policy change:
Content Update
Policy Update
Process Exception
Hash Exception
Agent service:
Service start (reported only when the agent fails to start and the RESULT is Fail)
Service stopped
Agent modules:
Module initialization
Agent status:
Fully protected
OS incompatible
Software incompatible
Proxy communication
Quota exceeded (reported when old prevention data is being deleted from the endpoint)
Minimal content
Action:
Endpoint Token
Scan
File retrieval
Terminate process
Isolate
Cancel isolation
Payload execution
Quarantine
Restore
Block IP address
Unblock IP address
Tagging
XDR Agent Version The version of the XDR agent running on the endpoint.
Abstract
From Endpoints → All Endpoints, you can view the upgrade status of any Cortex XDR agent that you manage.
From the Cortex XDR management console, you have full visibility into the XDR agent upgrade status on the endpoint. You can monitor the Cortex XDR agent
statuses in Endpoints → All Endpoints. If the upgrade status fields are missing, add them. XDR agents report upgrade statuses as follows:
Status Description
Last upgrade status Displays the last upgrade status for each endpoint, and can be filtered by:
In Progress: This is the first stage shown when an upgrade is initiated (There is no Pending status).
Completed Successfully
Failed
No Upgrade: No upgrade of any type has been initiated for the endpoint. Newly installed endpoints will also show this status
until an upgrade is initiated by one of the upgrade methods.
Last upgrade status time Displays a timestamp for the last time the upgrade status changed for each endpoint. This column can be filtered by date and time.
Last upgrade failure reason When relevant, displays the reason for an upgrade failure. This column can be filtered by free text.
Last upgrade source Displays the source that initiated the last upgrade, and can be filtered by the following:
Manual Server Upgrade: The upgrade was manually initiated from the server.
Auto Upgrade: The endpoint was automatically upgraded according to the upgrade policy.
Local Manual Upgrade: The upgrade was manually initiated at the endpoint side.
Cortex XDR uses rules to detect the threats in your network and to raise alerts. You can add specific detection rules for which you want Cortex XDR to raise
alerts. The following are the different types of rules available:
Indicators of compromise (IOCs): IOCs are used to alert for known artifacts that are considered malicious or suspicious. IOCs are static, simple, and
based on the detection of criteria such as SHA256 hashes, IP addresses and domains, file names, and paths. You create IOC rules based on
information you gather from various threat-intelligence feeds or as a result of an investigation within Cortex XDR. For example, if you find out that a
certain ransomware uses a certain file hash, you can add the file hash as an IOC and get an alert if it is detected.
Behavioral indicators of compromise (BIOCs): BIOCs detect suspicious behavior. As you identify specific activities (network, process, file, registry,
etc) that indicate a threat, you create BIOCs that can alert you when the behavior is detected. If you enable Cortex XDR Analytics, Cortex XDR can use
Analytics BIOCs (ABIOCs) to establish baseline behavior and detect any deviation from this behavior.
Correlation Rules: Correlation rules help you analyze the relationship between multiple events from multiple sources by using the Cortex Query Language
(XQL) based engine.
Abstract
Indicators of compromise (IOCs) alert you about known malicious objects on your endpoints.
Indicators of compromise (IOCs) enable Cortex XDR to trigger alerts about known malicious objects on endpoints across the organization. You can load
collections of IOCs from threat-intelligence sources into the Cortex XDR app or define them individually, using the following indicator types:
Full path
File name
Domain
Destination IP address
MD5 hash
SHA256 hash
After you load or define IOCs, the tenant checks for matches in the xdr_data dataset that contains all the information collected about the endpoints and the
network. The app looks for IOC matches in all data collected in the past and continues to evaluate any new data it receives in the future.
Alerts for IOCs are identified by the source type of the IOC.
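To make this concrete, the following is a minimal sketch of the kind of manual hunt you could run in XQL Search for a single SHA256 indicator before (or instead of) defining it as an IOC rule. The field names are common xdr_data fields and the hash is a placeholder; this is only an illustration of matching against xdr_data, not the exact query the IOC engine runs.

dataset = xdr_data
| filter event_type = ENUM.FILE and action_file_sha256 = "<known-bad SHA256 hash>"
| fields _time, agent_hostname, action_file_path, action_file_sha256
| sort desc _time
| limit 100

If such a hunt returns hits, adding the hash as an IOC lets Cortex XDR keep evaluating new data for the same indicator automatically.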
Abstract
Manage all indicators of compromise (IOCs) configured from or uploaded to Cortex XDR.
In the Detection Rules → IOC page, you can view all indicators of compromise (IOCs) configured from or uploaded to Cortex XDR. To view the number of IOC
rules, filter by one or more fields in the IOC rules table. You can also manage or clone existing rules.
The following table describes the fields that are available for each IOC rule in alphabetical order.
Field Description
BACKWARDS SCAN STATUS Status of the Cortex XDR search for the first 10,000 matches when the IOC rule was
created or edited. Status can be:
Done
Failed
Pending
Queued
BACKWARDS SCAN TIMESTAMP Timestamp of the Cortex XDR search for the first 10,000 matches in your Cortex XDR
when the IOC rule was created or edited.
BACKWARDS SCAN RETRIES Number of times Cortex XDR searched for the first 10,000 matches in your Cortex XDR
when the IOC rule was created or edited.
COMMENT Free-form comments specified when the IOC was created or modified.
EXPIRATION DATE The date and time at which the IOC will be removed automatically.
INDICATOR The indicator value itself. For example, if the indicator type is a destination IP address,
this could be an IP address such as 1.1.1.1.
INSERTION DATE Date and time when the IOC was created.
MODIFICATION DATE Date and time when the IOC was last modified.
RELIABILITY The reliability rating that was defined when the IOC was created:
A - Completely Reliable
B - Usually Reliable
C - Fairly Reliable
E - Unreliable
SEVERITY IOC severity that was defined when the IOC was created.
SOURCE User who created this IOC, or the file name from which it was created, or one of the
following keywords:
Public API—the indicator was uploaded using the Insert Simple Indicators,
CSV or Insert Simple Indicators, JSON REST APIs.
TYPE Type of indicator: Full path, File name, Host name, Destination IP, MD5 hash.
VENDORS A list of threat intelligence vendors from which this IOC was obtained.
Abstract
From the Cortex XDR management console, you can upload or configure indicator of compromise (IOC) rules criteria.
Create new indicator of compromise (IOC) rules and optionally define rule expiration for all IOC rules. You can create an IOC rule either by configuring a single
one or by uploading a file that contains multiple IOCs.
To ensure your IOC rules raise alerts efficiently and do not overcrowd your Alerts table, Cortex XDR automatically does the following:
Disables any IOC rules that reach 5000 or more hits over a 24 hour period.
Creates a rule exception based on the PROCESS SHA256 field for IOC rules that hit more than 100 endpoints over a 72 hour period.
After investigating a threat, if you identify a malicious artifact, you can create an alert for the Single IOC right away.
2. Configure the IOC TYPE. Options are Full Path, File Name, Domain, Destination IP, and MD5 or SHA256 Hash.
3. Configure the SEVERITY you want to associate with the alert for the IOC.
6. (Optional) Configure the EXPIRATION settings for this IOC. Default, Specific Expiration Date, No Expiration.
7. Click Save.
If you want to match multiple indicators, you can upload the criteria in a CSV file. You can upload IOCs using REST APIs in either CSV or JSON format.
Upload a file, one IOC per line, that contains up to 20,000 IOCs. For example, you can upload multiple file paths and MD5 hashes for an IOC rule. To
help you format the upload file in the syntax that Cortex XDR accepts, you can download the example file.
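As a rough illustration only, a multiple-IOC upload file is a simple one-indicator-per-line CSV. The column names and type values below are assumptions for illustration, not the authoritative syntax, so always start from the example file that you download from the upload dialog:

indicator,type,severity,comment
<SHA256 hash>,SHA256,HIGH,Hash observed during investigation
badupdates.example.com,DOMAIN,MEDIUM,Suspicious domain from a threat feed

The downloadable example file remains the source of truth for the accepted columns and values.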
2. Drag and drop the CSV file containing the IOC criteria in the drop area of the Upload File dialog or Browse for the file.
Cortex XDR supports a file with multiple IOCs in a pre-configured format. For help in determining the format syntax, Cortex XDR provides an
example text file that you can download.
3. Configure the SEVERITY you want to associate with the alert for the IOCs.
4. Define the DATA FORMAT of the IOCs in the CSV file. Options are Mixed, Full Path, File Name, Domain, Destination IP, and MD5 or SHA256 Hash.
6. (Optional) Enter an EXPIRATION for the IOC. Default, Specific Expiration Date, No Expiration.
7. Click Upload.
You can also configure additional expiration criteria per IOC type to apply to all IOC rules of that type. In most cases, IOC types like Destination IP or
Host Name are considered malicious only for a short period of time, since they are soon cleaned and then used by legitimate services, after which they
only cause false positives. For these types of IOCs, you can set a defined expiration period. The expiration criteria you define for an IOC type apply
to all existing rules and to additional rules that you create in the future. By default, Cortex XDR does not set an expiration date on IOCs.
b. Set the expiration for any relevant IOC type. Options are Never, 7 Days, 30 days, 90 days, or 180 days.
c. Click Save.
Abstract
Behavioral indicators of compromise (BIOCs) alert you to respond to potentially compromising behaviors.
Behavioral indicators of compromise (BIOCs) enable you to alert and respond to behaviors—tactics, techniques, and procedures. Instead of hashes and other
traditional indicators of compromise, BIOC rules detect behavior related to processes, registry, files, and network activity.
To benefit from the latest threat research, the Cortex XDR tenant automatically receives pre-configured rules from Palo Alto Networks. These global rules are
delivered to all tenants with content updates. When you need to override a global BIOC rule, you can disable it or set a rule exception. You can also configure
additional BIOC rules as you investigate threats on your network and endpoints. BIOC rules are highly customizable; you can create a BIOC rule that is simple
or quite complex.
As soon as you create or enable a BIOC rule, the tenant begins to monitor input feeds for matches. It also analyzes historical data collected in the tenant.
When there is a match on a BIOC rule, Cortex XDR logs an alert.
To further enhance the BIOC rule capabilities, you can also configure BIOC rules as custom prevention rules and incorporate them with your Restrictions
profiles. The tenant can then trigger behavioral threat prevention alerts based on your custom prevention rules in addition to the BIOC detection alerts.
Abstract
From the Cortex XDR management console, you can define your own rules based on behavior with the behavioral indicator of compromise (BIOC) rules.
Manage your behavioral indicator of compromise (BIOC) rules in Detection Rules → BIOC.
If you are assigned a role that enables Investigation → Rules privileges, you can view all user-defined and preconfigured rules for behavioral indicators of
compromise (BIOCs).
If you have Cortex XDR Analytics enabled, you can also view Analytics BIOCs (ABIOCs) on a separate page. To access this page, click Analytics BIOC Rules
next to the refresh icon at the top of the page.
Each page displays fields that are relevant to the specific rule type.
By default, the BIOC Rules page displays all enabled rules. To search for a specific rule, use the filters above the results table to narrow the results. You can
also manage existing rules using the right-click pivot menu.
The following table describes the fields that are available for each BIOC rule in alphabetical order.
Field Description
BACKWARDS SCAN STATUS Status of the Cortex XDR search for the first 10,000 matches when the BIOC rule
was created or edited. Status can be:
Done
Failed
Pending
Queued
BACKWARDS SCAN TIMESTAMP Timestamp of the Cortex XDR search for the first 10,000 matches in your Cortex
XDR when the BIOC rule was created or edited.
BACKWARDS SCAN RETRIES Number of times Cortex XDR searched for the first 10,000 matches in your
Cortex XDR when the BIOC rule was created or edited.
COMMENT Free-form comments specified when the BIOC was created or modified.
EXCEPTIONS Exceptions to the BIOC rule. When there's a match on the exception, the event
will not trigger an alert.
GLOBAL RULE ID Unique identification number assigned to rules created by Palo Alto Networks.
INSERTION DATE Date and time when the BIOC rule was created.
MITRE ATT&CK TACTIC Displays the type of MITRE ATT&CK tactic the BIOC rule is attempting to trigger
on.
MITRE ATT&CK TECHNIQUE Displays the type of MITRE ATT&CK technique and sub-technique the BIOC rule
is attempting to trigger on.
MODIFICATION DATE Date and time when the BIOC was last modified.
NAME Unique name that describes the rule. Global BIOC rules defined by Palo Alto
Networks are indicated with a blue dot and cannot be modified or deleted.
Field Description
Collection
Credential Access
Dropper
Evasion
Execution
Evasive
Exfiltration
Infiltration
Lateral Movement
Other
Persistence
Privilege Escalation
Reconnaissance
Tampering
SEVERITY BIOC severity that was defined when the BIOC was created.
SOURCE User who created this BIOC, the file name from which it was created, or Palo
Alto Networks if delivered through content updates.
STATUS Enabled
Disabled
When you hover over a rule that's disabled, a pop-up message appears to
provide more information about the Disable action.
USED IN PROFILES Displays if the BIOC rule is associated with a Restriction profile.
By default, the Analytics BIOC Rules page displays all enabled rules. To search for a specific rule, use the filters above the results table to narrow the results.
You can also disable and enable rules using the right-click pivot menu.
The following table describes the fields that are available for each Analytics BIOC rule in alphabetical order.
Field Description
Activation Prerequisites Displays a description of the prerequisites Cortex XDR requires in order to
activate the rule.
NAME Unique name that describes the rule. New rules are identified with a blue badge
icon.
SEVERITY BIOC severity that was defined when the BIOC rule was created. Severity levels
can be Low, Medium, High, Critical, and Multiple.
Multiple severity BIOC rules can raise alerts with different severity levels. Hover
over the flag to see the severities defined for the rule.
Rules that are Pending Activation are in the process of collecting the data
required to enable the rule. Hover over the field to view how much data within a
certain period of time has already been collected.
TAGS Filter the results according to Detector Tags. This tag enables you to filter for
specific detectors such as Identity Threat, Identity Analytics, and others.
Abstract
You can configure rules for behavioral indicators of compromise (BIOCs) to trigger an alert on an identified threat.
When you identify a threat and its characteristics, you can configure rules for behavioral indicators of compromise (BIOCs) for this threat.
You can create a BIOC rule either by configuring a single one or by uploading a file that contains multiple BIOCs.
After you create a BIOC rule, Cortex XDR searches for the first 10,000 matches in your tenant and triggers an alert if a match is detected. After the initial scan,
Cortex XDR triggers alerts every time a new match is detected.
You can also use BIOC rules to create prevention rules that terminate the causality chain of a malicious process and trigger Cortex XDR Agent behavioral
prevention type alerts.
To ensure your BIOC rules trigger alerts efficiently and do not overcrowd your Alerts table, Cortex XDR automatically does the following:
Disables BIOC rules that reach 5000 or more hits over a 24-hour period.
Creates a rule exception based on the PROCESS SHA256 field for BIOC rules that hit more than 100 endpoints over a 72 hour period.
You can create a new BIOC rule in a similar way as you create a search with Query Builder or by building the rule query with XQL Search. In both methods,
you use Cortex Query Language (XQL) to define the rule using XQL syntax. The XQL query must at a minimum filter on the event_type field in order for it to
be a valid BIOC rule. In addition, you can create BIOC rules using the xdr_data and cloud_audit_log datasets and presets for these datasets.
Currently, you cannot create a BIOC rule on custom datasets, and only the filter stage, alter stage, and functions without aggregations are
supported in XQL queries that define a BIOC.
For BIOC rules, the field values in XQL are evaluated as case insensitive (config case_sensitive = false).
The following describes the event_type values for which you can create a BIOC rule.
FILE—Events relating to file create, write, read, and rename according to the file name and path.
NETWORK—Events relating to incoming and outgoing network connections, filtered by IP addresses, port, host name, and protocol.
PROCESS—Events relating to execution and injection of a process name, hash, path, and CMD.
REGISTRY—Events relating to registry write, rename and delete according to registry path.
STORY—Events relating to a combination of firewall and endpoint logs over the network.
EVENT_LOG—Events relating to Windows event logs and Linux system authentication logs.
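For example, a minimal BIOC rule query might look like the following sketch, which filters on event_type as required and uses only filter and alter stages. The process names are illustrative values and the field names are common xdr_data fields; this is not a recommended detection, and any rule like it should be tested and refined before saving.

dataset = xdr_data
| filter event_type = ENUM.PROCESS
| filter actor_process_image_name in ("winword.exe", "excel.exe")
| alter child_process = lowercase(action_process_image_name)
| filter child_process in ("cmd.exe", "powershell.exe")

Because BIOC field values are evaluated as case insensitive, the lowercase alter is shown only to illustrate that the alter stage and non-aggregating functions are allowed.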
2. The XQL query field is where you define the parameters of your query for the BIOC rule. To help you create an effective XQL query, the search field
provides suggestions as you type. The XQL query must at a minimum filter on the event_type field in order for it to be a valid BIOC rule. In
addition, you can create BIOC rules using the xdr_data and cloud_audit_log datasets and presets for these datasets. Currently, you cannot
create a BIOC rule on custom datasets, and only the filter stage, alter stage, and functions without aggregations are supported in
XQL queries that define a BIOC. For BIOC rules, the field values in XQL are evaluated as case insensitive (config case_sensitive =
false). After you configure the XQL query for your BIOC rule and the syntax is valid, an indication is displayed and you can add
the BIOC rule.
3. Click Test BIOC. Rules that you do not refine enough can create thousands of alerts. It is highly recommended that you test the behavior of a new
or edited BIOC rule before you save it.
When you test the rule, Cortex XDR immediately searches for rule matches across all your Cortex XDR tenant data. The results are displayed in the
Query Results tab underneath the XQL query field. Adjust any rule definition as needed.
To demonstrate the expected behavior of the rule before you save it, Cortex XDR tests the BIOC on historical logs. After you save a BIOC rule, it
will operate on both historical logs (up to 10,000 hits) and new data received from your log sensors.
4. (Optional) Use the Schema tab to view schema information for every field found in the result set. This information includes the field name, data
type, descriptive text (if available), and the dataset that contains the field. In order for a field to appear in the Schema tab, it must contain a non-
NULL value at least once in the result set.
1. Select an entity icon. Define any relevant activity or characteristics for the entity type. Create a new BIOC rule in the same way that you create a
search with the Query Builder. You use XQL to define the rule. The XQL query must filter on an event_type in order for it to be a valid BIOC rule.
2. Test your BIOC rule. Rules that you do not refine enough can create thousands of alerts. It is highly recommended that you test the behavior of a
new or edited BIOC rule before you save it.
When you test the rule, Cortex XDR immediately searches for rule matches across all your Cortex XDR tenant data. Adjust any rule
definition as needed.
To demonstrate the expected behavior of the rule before you save it, Cortex XDR tests the BIOC on historical logs. After you save a BIOC rule, it
will operate on both historical logs (up to 10,000 hits) and new data received from your log sensors.
a. Name—Specify a description or leave the default name which is automatically populated using the format XQL-BIOC-<rule number>.
d. (Optional) Select the MITRE Technique and MITRE Tactic you want to associate with the alert. You can select up to 3 MITRE Techniques/Sub-
Techniques and MITRE Tactics.
e. (Optional) Select the +<number> more global exceptions to view the EXCEPTIONS associated with this BIOC rule.
f. (Optional) Comment—Specify any additional comments, such as why you created the BIOC.
g. Click OK.
To match multiple indicators, you can upload the criteria in a CSV file. You can upload BIOCs using REST APIs in either CSV or JSON format. Your file can be a
list of BIOCs from external feeds or a file that you previously exported from Cortex XDR. The export/import capability is useful for rapid copying of BIOCs
across different Cortex XDR instances.
Upload a file, one BIOC per line, that contains up to 20,000 BIOCs. For example, you can upload multiple file paths and MD5 hashes for a BIOC rule. To help
you format the upload file in the syntax that Cortex XDR accepts, you can download the example file.
You can only import files that were exported from Cortex XDR. You cannot edit an exported file.
3. Drag and drop the file on the import rules dialog or browse to a file.
4. Click Import.
Cortex XDR loads any BIOC rules. This process may take a few minutes depending on the size of the file.
5. Refresh the BIOC Rules page to view matches (# of Hits) in your historical data.
6. To investigate any matches, view the Alerts page and filter the Alert Name by the name of the BIOC rule.
Custom prevention rules are supported on Cortex XDR agent 7.2 and later versions and enable you to configure and apply user-defined BIOC rules to
Restriction profiles deployed on your Windows, Mac, and Linux endpoints.
By using the BIOC rules, you can configure custom prevention rules to terminate the causality chain of a malicious process according to the Action Mode
defined in the associated Restrictions Security Profile and trigger Cortex XDR Agent behavioral prevention type alerts in addition to the BIOC rule detection
alerts.
For example, if you configure a custom prevention rule for a BIOC Process event and apply it to the Restrictions profile with an action mode set to Block, the Cortex
XDR agent:
Blocks a process at the endpoint level according to the defined rule properties.
Triggers a behavioral prevention alert you can monitor and investigate in the Alerts table.
Before you configure a BIOC rule as a custom prevention rule, create a Restriction Profile for each type of operating system (OS) to which you want to deploy your
prevention rules.
1. In the BIOC Rule table, from the Source field, filter and locate a user-defined rule that you want to apply as a custom prevention rule. You can only apply a
BIOC rule that you created, either from scratch or from a Cortex XDR global rule template, and that meets the following criteria:
The user-defined BIOC rule does not include the following field configurations.
BIOC rules with OS scope definitions must align with the Restrictions profile OS.
When defining the Process criteria for a user-defined BIOC rule event type, you can select to run only on actor, causality, and OS actor on
Windows, and causality and OS actor on Linux and Mac.
If the rule is already referenced by one or more profiles, select See profiles to view the profile names.
Ensure the rule you selected is compatible with the type of endpoint operating system.
Select the Restriction Profile name you want to apply the BIOC rule to for each of the operating systems. BIOC event rules of type Event Log and
Registry are only supported by Windows OS.
You can only add to existing profiles that you created; Cortex XDR Default profiles will not appear as an option.
The BIOC rule is now configured as a custom prevention rule and applied to your Restriction profiles. After the Restriction profile is pushed to your
endpoints, the custom prevention rule can start triggering behavioral prevention-type alerts.
b. Locate the Restrictions Profile to which you applied the BIOC rule. In the Summary field, Custom Prevention Rules appears as Enabled.
d. In the Custom Prevention Rules section, you can review and modify the following:
Auto-disable—Select whether to auto-disable a BIOC prevention rule if it triggers a defined number of times during a defined duration.
Auto-disable turns off both the BIOC rule detection and the BIOC prevention rule.
Prevention BIOC Rules table—Filter and maintain the BIOC rules applied to this specific Restriction Profile. Right-click to Delete a rule or Go
to BIOC Rules table.
In the Description field, you can see the rule name that triggered the prevention alert.
Abstract
Update and copy BIOC rules, and add rule exceptions in Cortex XDR.
Global BIOC rules are detection rules created by Cortex and distributed to the tenants. Cortex XDR checks automatically for the latest update of global BIOC
rules and applies them. If there are no new global BIOC rules, Cortex XDR displays a content status of Content up to date next to the BIOC rules table
heading. A dot to the left of the rule name indicates a global BIOC rule.
To see which rules are pushed by Palo Alto Networks, display the optional Source field.
2. To view the content details, hover over the status Content up to date, to show the global rules version number and the date the global rules were
checked.
The content status displays the date when the content was last updated, either automatically or manually by an administrator.
3. If the status displays Could not check update, click the status to check for updates manually.
You cannot directly modify a global rule, but you can copy global rules as a template to create new rules.
1. Locate a Palo Alto Networks Source type rule, right-click and select Save as New.
The rule appears in the BIOC Rules table as a user-defined Source type rule that you can edit.
You cannot edit global rules, but you can add exceptions to the rule. For more information about rule exceptions, see Add a rule exception.
Abstract
Correlation rules help you analyze the correlations of multiple events from multiple sources by using the Cortex Query Language based engine for creating scheduled
rules.
Correlation rules help you analyze the correlations of multiple events from multiple sources by using the Cortex Query Language (XQL) based engine for creating
scheduled rules. Alerts can then be triggered based on these correlation rules with a defined time frame and set schedule, including every X minutes, once a
day, once a week, or a custom time.
After you configure your correlation rules, you can manage them in Detection Rules → Correlations, and view and analyze the generated alerts in Incidents and
the Alerts Table. In addition, alerts triggered by correlation rules are factored into the number of incidents displayed in the dashboards.
Abstract
In the Correlation Rules page, you can view all of your enabled rules in a table format and the various fields displayed.
There may be future changes to the Correlation Rules offerings, which can impact your licensing agreements. You will receive a notification ahead of time
before any changes are implemented.
If you are assigned a role that enables Investigation → Rules privileges, you can manage all user-defined Correlation Rules from Detection Rules →
Correlations.
By default, the Correlation Rules page displays all enabled rules. To search for a specific rule, use the filters above the results table to narrow the results. From
the Correlation Rules page, you can manage existing rules using the right-click pivot menu. You can also import and export rules in JSON format, which can
help you to transfer your configurations between environments for onboarding, migration, backup, and sharing. You can bulk export and import multiple rules
at a time.
In addition, the Correlation Rules page enables you to easily identify and resolve correlation rules errors. The number of errors is indicated at the top of the
page in red using the format <number> errors found. You can change the view to only display the correlation rules with errors by selecting Show Errors Only.
The LAST EXECUTION column in the table indicates a correlation rule with an error by displaying the last execution time in a red font and providing a
description of the correlation rule error when hovering over the field. The following error messages are displayed in the applicable scenarios.
Invalid query
Query timeout
Unknown error
Delayed rule—This rule is running past its scheduled time, which can cause delayed results.
Only an administrator can create and view queries built with an unknown dataset that currently does not exist in Cortex XDR.
A notification is also displayed in Cortex XDR to indicate these correlation rules errors.
The following table describes the fields that are available for each correlation rule, in alphabetical order.
Certain fields are exposed by default and others are hidden; an asterisk (*) marks every field that is exposed by default.
Field Description
Collection
Credential Access
Dropper
Evasion
Execution
Evasive
Exfiltration
Infiltration
Lateral Movement
Persistence
Privilege Escalation
Reconnaissance
Tampering
Other
DATASET* The text displayed here depends on the resulting action configured for the
correlation rule when the rule was created.
Alerts—When your resulting action for the rule was configured to Generate
alert.
Dataset name—When your resulting action for the rule was configured to
Save to dataset.
DESCRIPTION* The description for the Correlation Rule that was configured when the rule was
created.
DRILL-DOWN QUERY Displays the Drill-Down Query that you configured for additional information about
the alert for further investigation using Cortex Query Language (XQL) when you
created the rule. If you did not configure one, the field is left empty.
Once configured any alert generated for the Correlation Rule has a right-click
pivot menu Open Drilldown Query option, an Open drilldown query link after you
investigate any contributing events, and a quick action Open Drilldown Query
icon ( ) that is accessible in the Alerts page, which opens a new browser tab in
XQL Search to run this query. If you do not define a Drill-Down Query, no right-
click menu option, link, or icon is displayed.
Generated Alert—Uses the time frame of the alert that is triggered, which is
the first event and last event timestamps for the alert (default option).
XQL Search—Uses the time frame from when the Correlation Rule was run
in XQL Search.
FAILURE REASON For a Correlation Rule with an error, displays the error message, which can be one
of the following.
Invalid query
Query timeout
Unknown error
Delayed rule—This rule is running past its scheduled time, which can cause
delayed results.
Only an administrator can create and view queries built with an unknown
dataset that currently does not exist in Cortex XDR.
INSERTION DATE Date and time when the Correlation Rule was created.
LAST EXECUTION* Date and time when the correlation rule was last executed. Indicates a correlation
rule with an error by displaying the last execution time in a red font and providing
a description of the correlation rule Error when hovering over the field.
MITRE ATT&CK TACTIC* Displays the type of MITRE ATT&CK tactic the correlation rule is attempting to
trigger.
MITRE ATT&CK TECHNIQUE* Displays the type of MITRE ATT&CK technique and sub-technique the correlation
rule is attempting to trigger.
MODIFICATION DATE* Date and time when the correlation rule was last modified.
SCHEDULE* Displays the Time Schedule for the frequency of running the XQL Search
definition set for the correlation rule when the rule was created. The options
displayed are one of the following.
Every 10 Minutes
Every 20 Minutes
Every 30 Minutes
Hourly
Daily
SEVERITY* Correlation rule severity that was defined when the correlation rule was created.
Severity levels can be Informational, Low, Medium, High, Critical, and
Customized.
Whenever an alert is generated with a severity type of Medium and above based
on the Correlation Rule, a new incident is automatically opened.
SUPPRESSION DURATION* The duration for how long to ignore other events that match the alert suppression criteria, as configured when the rule was
created. This setting is required.
SUPPRESSION FIELDS* The fields that the alert suppression is based on, as configured when the rule was created. The fields listed are based on the
XQL query result set for the rule. This setting is optional.
SUPPRESSION STATUS* Displays the Suppression Status as either Enabled or Disabled as configured
when the rule was created.
TIME FRAME* Displays the time frame for running a query, which can be up to 7 days as
configured when the rule was created.
TIMEZONE Displays the Timezone when the Time Schedule for the frequency of running the
XQL Search definition set for the correlation rule is set to run daily or using a cron
expression. Otherwise, this field is left empty.
XQL SEARCH Displays the XQL definition for the correlation rule that was configured in XQL
Search when the rule was created.
Abstract
Create new correlation rules from either the Correlation Rules page or when building a query in XQL Search, or import many correlation rules from a file.
Correlation rules require a Cortex XDR Pro license. There may be future changes to the correlation rules offerings, which can impact your licensing
agreements. You will receive a notification ahead of time before any changes are implemented.
You can create a new correlation rule from either the Detection Rules → Correlation Rules page or when building a query in XQL Search. You can also import a
number of correlation rules.
Define whether alerts generated by the correlation rule are suppressed by a duration time and field.
Set the resulting action for the correlation rule, which includes any of the following:
Generate an alert: You can also define the alert settings, which include the Alerts Field Mapping for incident enrichment, Alert Severity, MITRE
ATT&CK Tactics and Techniques, and other alert settings.
Save data to a dataset: Use this option to test and fine-tune new rules before initiating alerts and applying correlation of correlation use cases.
To ensure your correlation rules raise alerts efficiently and do not overcrowd your Alerts table, Cortex XDR automatically disables correlation rules that reach
5000 or more hits over a 24-hour period.
2. In the XQL query field, define the parameters for your Correlation Rule.
The New Correlation Rule editor is displayed where the XQL Search section is populated with the query you already set in the XQL query
field.
Define the correlation rule in the XQL Search field. After writing at least one line in XQL, you can Open full query mode to display the query in XQL
Search. You can Test the XQL definition for the rule whenever you want.
When you open the New Correlation Rule editor from XQL Search, this XQL Search field is already populated with the XQL query that you defined.
An administrator can create and view queries built with an unknown dataset that does not currently exist in Cortex XDR. All other users can only
create and view queries built with an existing dataset.
When you finish writing the XQL for the Correlation Rule definition, select Continue editing rule to bring you back to the New Correlation Rule editor, and
the complete query you set is added to the XQL Search field.
The XQL features for call, top, config timeframe, and wildcards in datasets (dataset in (<dataset prefix>_*)) are currently not
supported in Correlation Rules. If you add them to the XQL definition, you will not be able to Create or Save the Correlation Rule.
The XQL transaction stage and wildcards in datasets (dataset in (<dataset prefix>_*)) are currently not supported in Real Time correlation
rules.
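For example, a definition that targets datasets by wildcard prefix, such as the following sketch, can be run in XQL Search but cannot be saved as a Correlation Rule (the dataset prefix and field name are illustrative only):
dataset in (firewall_*)
| filter action = "drop"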
Using the current_time() function in your XQL query for a correlation rule can yield unexpected results when there are lags or during
downtime. This happens if the correlation rule doesn’t run exactly at the time of the data inside the timeframe, for example when a rule is
dependent on another rule, or when a rule is stuck due to an error, and then runs in recovery mode. Instead, we recommend using the
time_frame_end() function, which returns the timestamp at the end of the time frame in which the rule is executed.
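For example, instead of deriving a reference timestamp from the wall clock, anchor it to the rule's query window. A minimal sketch, where evaluation_time is only an illustrative output column name:
Instead of:
| alter evaluation_time = current_time()
use:
| alter evaluation_time = time_frame_end()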
Every 10 Minutes: Runs at preset 10-minute intervals measured from the beginning of the hour, such as 10:10 AM, 10:20 AM,
and 10:30 AM.
Every 20 Minutes: Runs at preset 20-minute intervals measured from the beginning of the hour, such as 10:20 AM, 10:40 AM,
and 11:00 AM.
Every 30 Minutes: Runs at preset 30-minute intervals measured from the beginning of the hour, such as 10:30 AM, 11:00 AM,
and 11:30 AM.
Hourly: Runs at the beginning of the hour, such as 1:00 AM or 2:00 AM.
Custom: Displays the Time Schedule as Cron Expression fields, where you can set the cron expression in each time field to define the
schedule frequency for running the XQL Search (see the example after this list). The minimum query frequency is every 10 minutes and is
already configured. You can also set a particular Timezone.
Timezone (Optional): You can only set the Timezone when the Time Schedule is set to Daily or Custom. Otherwise, the option is disabled.
Query time frame: Set the time frame for running a query, which can be up to 7 days. Specify a number in the field and in the other field select
either Minute/s, Hour/s, or Day/s.
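For example, a standard five-field cron expression such as the following runs the query every 30 minutes between 08:00 and 18:00 on weekdays. This is only an illustration of the format; how the values are split across the Cron Expression time fields depends on the editor:
*/30 8-18 * * 1-5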
Define whether the alerts generated by the Correlation Rule are suppressed by a duration time, field, or both.
Enable alert suppression: Select this checkbox to Enable alert suppression. By default, this checkbox is cleared and the alerts of the Correlation Rule
are not suppressed.
Duration time: Set the Duration time for how long to ignore other events that match the alert suppression criteria, which are based on the Fields
listed. Specify a number in the field and in the other field select either Minute/s, Hour/s, or Day/s. By default, the generated alerts are configured to
be suppressed by 1 hour (1 Hour/s). The Duration time can be configured for a maximum of 1 day.
Fields (Optional): Select the fields that the alert suppression is based on. The fields listed are based on the XQL query result set. You can do any of
the following:
Select all to configure all the fields for suppression. This means that all the fields must match for the alerts to be suppressed, so this option can
generate multiple alerts during the suppression period.
Search for a particular field, which narrows the available options as you begin typing.
Leave the field empty so that no Fields are set; in this case, only one alert is generated during the suppression period. For example, with a
Duration time of 1 Hour/s and no Fields selected, the first matching alert is raised and further matches within that hour are suppressed.
You can select one of the following resulting actions to occur, where the configuration settings change depending on your selection:
Generate alert
Generates a Correlation type of alert according to the configured settings in the New Correlation Rule editor (default). When this option is selected, a
number of new sections open so that you can configure the alert.
Alert Settings
Severity: Select the severity type whenever an alert is generated for this Correlation Rule as one of the following:
Informational
Low
Medium
High
Critical
Whenever the severity type is Medium or above for the alert generated, an incident is automatically opened.
Alert Description (Optional): Specify a description of the behavior that will raise the alert. You can include dollar signs ($), which represent the
field names (that is, output columns) in XQL Search.
For example:
The user $user_name has made $count failed login requests to $dest in a 24 hours period
Output:
The user lab_admin has made 234 failed login requests to 10.10.32.44 in a 24 hours period
There is no validation or auto-complete for these parameters, and the values can be null or empty. In these scenarios, Cortex XDR does not display
the null or empty values, but adds the text NULL or EMPTY in the description. For example, if $dest is empty at alert time, the description reads: The user lab_admin has made 234 failed login requests to EMPTY in a 24 hours period.
Drill-Down Query (Optional): You can configure a Drill-Down Query for additional information about the alert for further investigation using XQL.
This XQL query can accept parameters from the alert output for the Correlation Rule. However, keep in mind that when you create the Correlation Rule,
Cortex XDR does not know in advance whether the parameters exist or contain the correct values. As a result, Cortex XDR enables you to save the query,
but the query can fail when you try to run it. You can also refer to field names using dollar signs ($) as explained in the Alert Description.
Once configured, any alert generated for the Correlation Rule has a right-click pivot menu Open Drilldown Query option, an Open drilldown query
link after you investigate a contributing event, and a quick action Open Drilldown Query icon ( ) that is accessible in the Alerts page, which opens
a new browser tab in XQL Search to run this query. If you do not define a Drill-Down Query, no right-click pivot menu option, link, or icon is
displayed.
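For example, a drill-down query for the failed-login description shown above might pivot on the same output columns. This is only an illustrative sketch; the dataset and field names (xdr_data, actor_effective_username, action_remote_ip) are placeholders that you should replace with the columns your own rule produces:
dataset = xdr_data
| filter actor_effective_username = "$user_name" and action_remote_ip = "$dest"
| fields _time, agent_hostname, actor_effective_username, action_remote_ip
| sort desc _time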
Drill-Down Query Time Frame: Select the time frame used to run the Drill-Down Query from one of the following options, which provides more
informative details about the alert generated by the Correlation Rule.
Generated Alert: Uses the time frame of the alert that is triggered, which is the first event and last event timestamps for the alert (default
option). If there is only one event, the event timestamp is the time frame used for the query.
XQL Search: Uses the time frame from when the Correlation Rule was run in XQL Search.
MITRE ATT&CK (Optional): Select the MITRE Tactics and MITRE Techniques you want to associate with the alert using the MITRE ATT&CK matrix.
1. You can access the matrix by selecting the MITRE ATT&CK bar or Open complete MITRE matrix link underneath the bar on the right.
2. Select the MITRE Tactics listed in the first row of the matrix and the applicable MITRE techniques and Sub-Techniques, which are listed in the
other rows in the table. You can select either MITRE Tactics only, MITRE techniques and Sub-Techniques only, or a combination of both.
3. Click Select and the matrix window closes and the MITRE ATT&CK section in the New Correlation Rule editor lists the number of Tactics and
Techniques configured, which is also listed in the bar. For example, in the following image, there are 3 Tactics and 4 Techniques configured.
The three MITRE Tactics are Resource Development with 2 Techniques configured, Credential Access with 1 Technique configured, and
Discovery with 1 Technique configured.
Alerts Fields Mappings
You can map the alert fields so that the mapped fields are displayed in the Alerts page to provide important information in analyzing your alerts. In
addition, mapping the fields helps to improve incident grouping logic and enables Cortex XDR to list the artifacts and assets based on the map fields in
the incident. The options available can change depending on your Correlation Rule definitions in XQL Search. Each preconfigured field that is
automatically mapped is clearly displayed. There are two ways to map the alert fields.
Use the Cortex XDR default incident enrichment: Select this option if you want Cortex XDR to automatically map the fields for you. This checkbox only
displays when your Correlation Rule can be configured to use Cortex XDR incident enrichment, and in that case it is set as the default option. We
recommend using this option whenever it is available to you.
Manually map the alert fields by selecting the fields that you want to map. When you create the Correlation Rule, Cortex XDR does not know
whether the alert fields that you mapped manually are valid. If the fields are invalid according to your mapping, null values are assigned to those
fields.
In a case where Use the Cortex XDR default incident enrichment is not selected and you have not mapped any alert fields, the alert is dispatched
into a new incident.
Save to dataset
Use to save the data generated from the Correlation Rule to a separate Target Dataset. This option is helpful when you are fine-tuning and testing a rule
before promoting the rule to production. You can also save a rule's results to a dataset as a building block for the next Correlation Rule, which is then based
on the results of the first Correlation Rule instead of an overly complex XQL query.
You can either create a new Target Dataset by specifying the name for the dataset in the field or select a preexisting Target Dataset that was created for
a different Correlation Rule. The list only displays the datasets configured when creating a Correlation Rule. Different Correlation Rules can be saved to
the same dataset and Cortex XDR will expand the dataset schema as needed. The dataset you configure for the Correlation Rule contains the following
additional fields, which you can reference in follow-up queries as shown in the example after this list:
_rule_id
_rule_name
_insert_time
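For example, if a rule saves its results to a Target Dataset named failed_login_bursts (a hypothetical name), a follow-up Correlation Rule could build on it and on the added fields along these lines (user_name is an illustrative column from the first rule's output):
dataset = failed_login_bursts
| filter _rule_name = "Failed login burst"
| comp count() as burst_count by user_name
| filter burst_count > 5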
Add to lookup
Use to add data to a specified lookup dataset. After selecting this option, perform the following:
1. In the Target Dataset field, select an existing lookup dataset to add the data.
After the dataset is chosen, a mapping table is displayed. A list of fields from the lookup schema are listed in the KEY column to allow you to map
fields from the query to an entry in the lookup.
2. In the VALUE column, map at least one field from the query to an entry in the lookup dataset (KEY column).
3. (optional) You can set a single field or multiple fields as unique by selecting the checkbox in the UNIQUE column. A unique field means these
fields are designated as a key to update existing entries as opposed to creating a new entry. If multiple fields are selected, these fields together
are used to identify existing entries. If several existing entries meet the condition, all these entries are updated. If no existing entries meet the
condition, the entry is added as a new one. If no field is marked as unique, records are added as new.
The maximum size of a lookup dataset is 50 MB. If the data exceeds this limit, the add to lookup action fails.
Remove from lookup
Use to remove data from a specified lookup dataset. Once this option is selected, perform the following:
1. In the Target Dataset field, select an existing lookup dataset to remove data.
After the dataset is chosen, a mapping table is displayed. A list of fields from the lookup schema are listed in the KEY column to allow you to map
fields from the query to an entry in the lookup.
2. In the VALUE column, map at least one field from the query to an entry in the lookup dataset (KEY column). All rows (lookup entries) matching
these field mapping values (filtering condition) will be deleted. If several existing entries meet the condition, all these entries are deleted. If no
existing entries meet the condition, no entries are deleted.
Select Disable → Create if you want to finish configuring your Correlation Rule at a different time, but do not want to lose your settings. The Create button
is only enabled when you have configured all the mandatory fields in the New Correlation Rule editor. Once configured, your Correlation Rule is listed in
the Correlation Rules page, but is disabled. You can edit or enable the rule at any time by right-clicking the rule and selecting Edit Rule or Enable.
The rule is added to the table in the Correlation Rules page as an active rule and a notification is displayed.
Import correlation rules from a file
You can import a number of correlation rules from a JSON file. This facilitates the sharing of correlation rules between tenants.
To import a file containing correlation rules, select Detection Rules → Correlations and click Import at the top right corner of the page.
Abstract
View and manage your correlation rules in Detection Rules → Correlation Rules. To manage a Correlation Rule, right-click the Correlation Rule and select an
action.
You can also monitor your correlation rule executions with the correlations_auditing data set. For more information, see Monitor correlation rules.
View related alerts: View the alerts generated by this correlation rule in the Alerts page. You can Show alerts in new tab or Show alerts in same tab.
Open in XQL: View the XQL results for the correlation rule in XQL Search. You can Show results in new tab or Show results in same tab.
Execute Rule: Run the rule now without waiting for the scheduled time.
Save as new: Duplicate the correlation rule and save it as a new correlation rule.
Disable: Disables the selected correlation rule. This option is only available on an active rule.
Enable: Enables the selected correlation rule. This option is only available on an inactive rule.
Edit Rule: Edit the rule parameters configured in the Edit Correlation Rule editor.
Copy entire row: Copies the text from all the fields in a row of a correlation rule.
Show rows with ‘<field value>’: Filters the correlation rules list to only display the correlation rules with a specific field value that you select in the table.
On certain fields that are null, this option does not display.
Hide rows with ‘<field value>’: Filters the correlation rules list to hide the correlation rules with a specific field value that you select in the table. On
certain fields that are null, this option does not display.
Abstract
You can monitor your correlation executions with the correlations_auditing dataset.
Cortex XDR audits all correlation executions in the correlations_auditing dataset. The dataset records the query initiation times, end times, retry
attempts, failure reasons, and other useful metrics.
The rule starts executing. This is audited with the status of Initiated or Initiated Manually.
In the dataset, the Query start time and Query end time indicate the time frame of the data that was queried. The actual start and end times of the correlation
rule execution are recorded in the _time field for the Initiated and Completed entries.
Field Description
_time For entries with an Initiated or Initiated Manually status, this is the start time of the correlation rule execution. For entries with a Completed or Error status, this is the end time of the rule execution.
Query start time The start time of the query time frame.
Query end time The end time of the query time frame.
Failure reason For correlation rules with errors, this field displays the error message.
Retry attempts Number of retry attempts before the query initiated or failed to run.
Rule creation time Date and time that the correlation rule was created.
Rule modification time Date and time that the correlation rule was last modified.
Suppression duration Duration for which to ignore additional events that match the alert suppression criteria.
MITRE ATT&CK Tactic MITRE ATT&CK tactic that the correlation rule attempted to trigger.
MITRE ATT&CK Technique MITRE ATT&CK technique that the correlation rule attempted to trigger.
Alert name Name of the alert that the correlation rule will trigger.
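For example, to review recent executions that ended in an error, you can start from a query along these lines; the raw column names in the dataset may differ from the display names in the table above, so adjust the field list after inspecting the dataset schema (execution_status, rule_name, failure_reason, and retry_attempts are illustrative names):
dataset = correlations_auditing
| filter execution_status = "Error"
| fields _time, rule_name, failure_reason, retry_attempts
| sort desc _time
| limit 100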
Abstract
Edit, export, copy, disable, or remove rules, and add rule exceptions for existing indicators in Cortex XDR.
After you create an indicator rule, you can take the following actions:
For Analytics BIOC rules, you can only disable and enable rules.
As your IOC and BIOC rules trigger alerts, Cortex XDR displays the total # OF ALERTS triggered by the rule in the BIOC or IOC rules page. For rules with a
high, medium, or low severity that have triggered one or more alerts, you can quickly pivot to a filtered view of the alerts triggered by the indicator:
1. Select Detection & Threat Intel → Detection Rules and the type of rule (BIOC or IOC).
You can view a filtered query of alerts associated with the Rule ID.
1. Select Detection & Threat Intel → Detection Rules and the type of rule (BIOC or IOC).
2. Right-click anywhere in the rule, and then select Open in query builder.
Cortex XDR populates a query using the criteria of the BIOC rule.
For more information, see Edit and rerun queries in Query Center.
Edit a rule
After you create a rule, it may be necessary to tweak or change the rule settings. You can open the rule configuration from the Rules page or from the pivot
menu of an alert triggered by the rule. To edit the rule from the Rules page:
1. Select Detection & Threat Intel → Detection Rules and the type of rule (BIOC or IOC).
If you make any changes, Test and then Save the rule.
The exported file is not editable; however, you can use it as a source to import rules at a later date.
You can use an existing rule as a template to create a new one. Global BIOC rules cannot be deleted or altered, but you can copy a global rule and edit the
copy.
1. Select Detection & Threat Intel → Detection Rules and then BIOC.
3. Right-click anywhere in the rule row and then select Save as New to create a duplicate rule.
If you no longer need a rule you can temporarily disable or permanently remove it.
1. Select Detection & Threat Intel → Detection Rules and the type of rule (BIOC or IOC).
3. Right-click anywhere in the rule row and then select Remove to permanently delete the rule, or Disable to temporarily stop the rule. If you disable a rule
you can later return to the rule page to Enable it.
You can disable one or more BIOC rules on the agent, on the server, or on both. This provides you more granularity for managing the prevention actions
triggered by the BIOC Rules.
3. Right-click any of the rules and select to disable the rules on the agent, on the server, or on both.
If you disable a rule only on the agent, detection on the server works as usual.
If you disable a rule only on the server, prevention on the agent works as usual.
When a BIOC rule is disabled automatically by Cortex XDR, for example due to the server anti-flooding mechanism, prevention on the agent works as before.
You can re-enable a rule granularly for detection, prevention, or both in the same way.
13.2 | Analytics
Abstract
Cortex XDR uses an Analytics engine to examine logs and data from your sensors.
Analytics uses the Analytics engine, sensors, and rules to keep your network safe.
Safeguarding your network requires a defense-in-depth strategy which utilizes current and patched software and hardware to keep unwanted users out of the
network. Most available strategies are designed to stop intrusion attempts at the network perimeter, defending only against known threats. For example,
systems scanning for malicious software rely on previously identified MD5 signature databases. However, attackers constantly modify virus signatures to
circumvent virus scanners. Your network defense-in-depth strategy must include software and processes designed to detect and respond to intruders that
may have already penetrated your systems.
Cortex XDR efficiently and automatically identifies abnormal activity on your network, while providing you with the exact information you need to rapidly
evaluate, isolate and remove potential threats.
The Analytics Engine examines traffic and data from a variety of sources, such as network activity from firewall logs, VPN logs (from Prisma Access through the
Panorama plugin), endpoint activity data (on Windows endpoints), Active Directory, or a combination of these sources, to identify the endpoints and users on
your network. After identifying the endpoints and the users, the Analytics Engine collects relevant details about each asset based on the information it obtains
from the logs to create profiles. The Analytics Engine can detect threats from only network data or only endpoint data, but for more context when investigating
an alert, we recommend using a combination of data sources.
The Analytics Engine creates and maintains profiles to view the activity of the endpoint or user in context by comparing it to similar endpoints or users. The
large number of profile types can generally be placed into one of three categories.
Peer Group profiles: A statistical analysis of an entity or an entity relation that compares activities from multiple entities in a peer group. For example, a
domain can have a cross-organization popularity profile or per peer group popularity profile.
Temporal profiles: A statistical analysis of an entity or an entity relation that compares the same entity to itself over time. For example, a host can have a
profile depending on the number of ports it accessed in the past.
Entity classification: A model detecting the role of an entity. For example, users can be classified as service accounts, and hosts as domain controllers.
To detect anomalous behavior, Cortex XDR can analyze logs and data from a variety of sensors.
Cortex XDR Pro per Endpoint agents without the XTH add-on can enable Analytics and Identity Analytics. Due to the limits and filters applied to the data
collected, results will differ from agents with the XTH add-on. See the Cortex XDR Analytics Alert Reference guide for a complete list of supported sensors.
Sensor Description
Firewall traffic logs Palo Alto Networks firewalls perform traditional and next-generation firewall
activities. The Cortex XDR Analytics engine can analyze Palo Alto Networks
firewall logs to obtain intelligence about the traffic on your network. A Palo Alto
Networks firewall can also enforce Security policies based on IP addresses and
domains associated with Analytics alerts with external dynamic lists.
Enhanced application logs (EAL) To provide greater coverage and accuracy, you can enable enhanced
application logging on your Palo Alto Networks firewalls. Enhanced Application
Logs (EAL) are collected by the firewall to increase visibility into network activity
for Palo Alto Networks apps and services, like Cortex XDR.
GlobalProtect and Prisma Access logs If you use GlobalProtect or Prisma Access to extend your firewall security
coverage to your mobile users, Cortex XDR can analyze VPN traffic to detect
anomalous behavior on mobile endpoints.
Firewall URL logs (part of firewall threat logs) Palo Alto Networks firewalls generate threat log entries when traffic matches one
of the Security Profiles attached to a security rule on the firewall. Cortex XDR
can analyze entries for threat logs relating to URLs and trigger alerts that
indicate malicious behavior such as command and control, and exfiltration.
Cortex XDR agent endpoint data With a Cortex XDR Pro per Endpoint license, you can deploy Cortex XDR agents
on your endpoints to protect them from malware and software exploits. The
Analytics engine can also analyze the EDR data collected by the agent to
trigger alerts. To collect EDR data, you must install Cortex XDR agent 6.0 or a
later release on your Windows endpoints (Windows 7 SP1 or later).
The Cortex XDR Analytics engine can analyze activity and traffic based solely
on endpoint activity data sent from Cortex XDR agents. For increased coverage
and greater insight during investigations, use a combination of Cortex XDR
agent data and firewalls to supply activity logs for analysis.
Pathfinder data collector In a firewall-only deployment where the Cortex XDR agent is not installed on
your endpoints, you can use Pathfinder to monitor endpoints. Pathfinder scans
unmanaged hosts, servers, and workstations for malicious activity. The Analytics
engine can also analyze the Pathfinder data collector in combination with other
data sources to increase coverage of your network and endpoints, and to
provide more context when investigating alerts.
Directory Sync logs If you use the Cloud Identity Engine to provide Cortex XDR with Active Directory
data, the Analytics engine can also trigger alerts on your Active Directory logs.
External sensors
Third-party firewall logs If you use non-Palo Alto Networks firewalls - Check Point, Fortinet, Cisco ASA -
in addition to or instead of Palo Alto Networks firewalls, you can set up a syslog
collector to facilitate log and alert ingestion. By sending your firewall logs to
Cortex XDR, you can increase detection coverage and take advantage of Cortex
XDR analysis capabilities. When Cortex XDR analyzes your firewall logs and
detects anomalous behavior, it triggers an alert.
Third-party authentication service logs If you use an authentication service—Microsoft Azure AD, Okta, or PingOne—
you can set up log collection to ingest authentication logs and data into
authentication stories.
Windows Event Collector logs The Windows Event Collector (WEC) runs on the Broker VM collecting event logs
from Domain Controllers (DCs). The Analytics engine can analyze these event
logs to trigger alerts such as for credential access and defense evasion.
Network attacks follow predictable patterns. If you interfere with any portion of this pattern, you can neutralize the attack. The adversarial behaviors making up
these patterns are collected in a universally accessible, continuously updated knowledge base called the MITRE ATT&CK™ knowledge base of tactics.
The Analytics Engine can trigger an alert for any of the following attack tactics as defined in the MITRE ATT&CK knowledge base.
Tactic Description
Execution After attackers gain a foothold in your network, they can use various
techniques to execute malicious code on a local or remote endpoint.
Persistence To carry out a malicious action, an attacker can try techniques that maintain
access in a network or on an endpoint. An attacker can initiate
configuration changes—such as a system restart or failure—that require the
endpoint to restart a remote access tool or open a back door that allows the
attacker to regain access on the endpoint.
Discovery When an attacker has access to a part of your network, they use discovery
techniques to explore and identify subnets, servers and services that are
hosted on those endpoints. They aim to identify vulnerabilities within your
network.
Cortex XDR detects these tactics by looking for indicators in your internal
network traffic such as changes in connectivity patterns, including
increased rates of connections, failed connections, and port scans.
Lateral Movement To expand the footprint inside your network, an attacker uses lateral
movement techniques to obtain credentials for additional access to more
data in the network.
Command and Control The command and control tactic allows an attacker to remotely issue
commands to an endpoint and receive information from it. The Analytics
Engine identifies intruders using this tactic by looking for anomalies in
outbound connections, DNS lookups, and endpoint processes with bound
ports. Cortex XDR detects unexplained changes in the periodicity of
connections and failed DNS lookups, changes in random DNS lookups,
and other indicators that suggest an attacker has gained initial control of a
system.
Exfiltration Exfiltration tactics are techniques used to retrieve data from a network, such
as valuable enterprise data. Cortex XDR identifies this type of attack by
examining outbound connections with a focus on the volume of data being
transferred. Increases in this volume are an important symptom of data
exfiltration.
The Cortex XDR Analytics Engine retrieves logs from the Cortex XDR tenant to create a baseline so that it can trigger alerts when abnormal activity occurs. This
analysis is highly sophisticated and performed on more than a thousand dimensions of data. Internally, Cortex XDR organizes its analytics activity into
algorithms called detectors. Each detector is responsible for triggering an alert when suspicious behavior is detected.
To trigger alerts, each detector compares the recent past behavior to the expected baseline by examining the data found in your logs. A certain amount of log
file time is required to establish a baseline and then a certain amount of recent log file time is required to identify what is currently happening in your
environment.
Activation period The shortest amount of log file time before the app can trigger an alert. This
is typically the period between the time a detector first starts running and
the time you see an alert. However, in some cases, detectors pause after
an upgrade as they enter a new activation period.
Most but not all detectors start running after the activation period ends. The
activation period provides the detector enough data to establish a baseline,
which in turn helps to avoid false positives.
The activation period is also called the profiling or waiting period and is
informally referred to as soak time.
Test period The amount of logging time that a detector uses to determine if unusual
activity is occurring on your network. The detector compares test period
data to the baseline created during the training period, and uses that
comparison to identify abnormal behavior.
Training period The amount of logging time that the detector requires to establish a
baseline, and to identify the behavioral limits beyond which an alert is
triggered. Because your network is not static in terms of its topology or
usage, detectors are constantly updating the baselines that they require for
their analytics. For this update process, the training period is how far back
in time the detector goes to update and tune the baseline.
Deduplication period The amount of time in which additional alerts for the same activity or
behavior are suppressed before Cortex XDR triggers another Analytics
alert.
These time periods are different for every Cortex XDR Analytics detector. The actual amount of logging data (measured in time) required to trigger any given
Cortex XDR Analytics alert is specified in the Cortex XDR Analytics Alert Reference Guide.
The Cortex XDR Analytics engine triggers an alert when it detects suspicious activity, composed of multiple events, that deviates from the behavior baseline it
establishes over time. To ensure the Analytics detectors raise alerts efficiently and do not overcrowd your Alerts table, Cortex XDR automatically disables alerts
from detectors that reach 5000 or more matches over a 24-hour period.
In addition to standard Analytics alerts, there is another category of alerts triggered by Analytics behavioral indicators of compromise (ABIOCs). In contrast to
standard Analytics alerts, Analytics BIOCs (ABIOCs) indicate a single event of suspicious behavior with an identified chain of causality. To identify the context
and chain of causality, ABIOCs leverage user, endpoint, and network profiles. The profile is generated by the Analytics Engine and can be based on a simple
statistical profile or a more complex machine-learning profile. Cortex XDR tailors each ABIOC to your specific environment after analyzing your logs and data
sources, and continually tunes and delivers new ABIOCs with content updates.
Cortex XDR enables you to investigate suspicious user activity using Identity Analytics. When enabled, Identity Analytics aggregates and displays
user profile information, activity, and alerts associated with a user-based Analytics type alert and Analytics BIOC rule.
After configuring your Cloud Identity Engine instance and Cortex XDR Analytics, select Settings ( ) → Configurations → Cortex XDR - Analytics, and in the
Featured in Analytics section, Enable Identity Analytics.
The Identity Threat module provides superior coverage for stealthy identity threat vectors, including compromised accounts and insider threats. The module is
available as an add-on and includes the following UI features.
Automated and customizable Asset Role classification based on constant analysis of the users and hosts in your network. You can edit and manage the
User Asset Roles and Host Asset Roles to meet the needs of your organization.
The Behavioral Analytics tab in the Alert Panel view that displays background information for quicker triaging and investigation. This enables you to
analyze the deviation that triggered the alert against the backdrop of baseline behavior.
Risk Management dashboard for reviewing the risk posture of the organization and enabling faster decision making. The dashboard contains a number
of Metrics widgets that present statistical risk information for your organization.
User Risk View and Host Risk View which provide additional information about the asset, including score trend timeline, notable events, peer comparison,
and additional asset-associated alerts and insights for easy uncovering of hidden threats.
Learn about forensics, how to create forensic investigations, how to create and manage data collections, and how to assess other forensic related settings.
Investigations are composed of one or more data collections from endpoints within an environment. Grouping the collections in a single location enables
you to focus on the endpoints relevant to your investigation. When searching for data, you can select two types of collections:
Hunt collections enable you to search for a specific activity across a large number of hosts. A hunt collection provides more details about where
something occurred. Examples of this type of collection are finding which endpoints ran a piece of malware, which users accessed a particular file, or
which endpoints were accessed by a specific user.
Triage collections enable you to collect detailed information about specific activities that occurred on an endpoint. The triage functionality is configurable
and supports the collection of all currently supported forensic artifacts, user-defined file paths, a full file listing for all of the connected drives, full event
logs, and registry hives. The amount of data collected during a triage can be large, so triages are limited to ten or fewer endpoints per collection.
Abstract
Manage an investigation by adding collections, managing alerts, adjusting the timeline, analyzing assets and artifacts.
Forensic investigations streamline your incident response, data collection, threat hunting, and analysis of your endpoints. By using a Forensic Investigation,
you can find the source and scope of the attack and determine what, if any, data was accessed. It provides a single location for grouping, tracking, and
analyzing all forensic data collections.
View any alerts triggered on data ingested as part of the investigation.
Set user permissions that can be assigned to investigations allowing you to restrict access to the Investigation page including the Investigation Timeline
and collection details.
Field Description
Open
Close pending: After selecting close, the investigation status changes to close pending. It takes 24 hours until the investigation is officially removed
from the investigations repository. This gives the users a chance to revert if necessary.
New alerts Total count of alerts for the collection where the Resolution Status=New.
Total alerts Total number of alerts for data collected in the investigation
You can click the link to open the investigation on the Alerts tab.
Abstract
Learn how to create a forensics investigation. This includes adding a collection, exporting the data collection, managing alerts and key assets & artifacts.
Create a forensics investigation that includes all the relevant forensics data. This includes adding collections (hunts and triages), exporting the data collections,
managing alerts and evaluating key assets & artifacts.
3. In the Create New Investigation wizard, enter a name and description (optional) for the investigation.
4. In the Permissions table, select the users to whom you want to grant access to the investigation data.
To set up user permissions, you must have Scope-Based Access Control (SBAC) enabled.
5. Click Save to save the investigation in the Forensic Investigations table or click Save & Start A Collection to start the process of adding collections.
8. Click UTC Timezone to configure the timezone and timestamp format. Refer to Configure server settings for information on setting up your timezone.
Abstract
From the list of active investigations, you can edit the name, description or update the user permissions for the investigation.
1. From the Forensic Investigations table, right-click one of the investigations and select Edit.
2. In the Edit Investigation widget, you can update the Investigation Name, Description, and Permissions. For more information, refer to User permissions.
Abstract
From the list of ongoing investigations, you can close an investigation. You might want to close an investigation if resolved, or if you want to cancel the
investigation.
When you close an investigation, Palo Alto Networks has a grace period of 24 hours before deleting any collections associated with the investigation. During
this timeframe, you have the option to cancel the close investigation action.
1. From the Forensic Investigations table, right-click an investigation and select Close.
2. In the Close Investigation widget, you can view all evidence collections exported for the investigation.
3. In the Forensic Investigation table, the status of the investigation changes to Close Pending, and the timestamp displays the time the investigation
expires and the investigation data is deleted.
Permanently delete: Delete the investigation and all associated data immediately. This action can't be canceled.
Abstract
You can assign users to the investigation for them to view and manage the investigation.
By default, investigation permissions use the role-based access control (RBAC) settings configured in the system. Users must have a role with the Forensics
permission set to View in order to view forensic investigations. To create investigations or collections, a user must have a role where the Forensics
permission is set to View/Edit. Without either role, a user cannot interact with the forensics interface.
If Scope-Based Access Control (SBAC) is enabled on your system, from the Permissions table, you can select the users from which to assign permissions to
the investigation.
Users with account administrator or instance administrator roles have access to investigations and can't be cleared from the Permissions table. They can view
and edit all investigations, including adding or removing users, creating or deleting collections, and closing the investigation. This prevents investigation lockout
in the event of a user leaving before the investigation is complete.
Even if a user does not have access to view an investigation via the Forensics Investigations page, they can still query the results of the collections using an
XQL query.
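For example, such a user could review collection results directly in XQL Search with a query along these lines; the dataset and field names here are purely illustrative, and the actual forensics dataset names are listed in the Dataset list on your tenant:
dataset = forensics_process_execution
| filter hostname = "LAB-WIN10-01"
| fields _time, executable_name, executable_path, user_name
| sort desc _time
| limit 100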
Field Description
User Name Name of the user as logged in the Settings → Access Management → Users.
Email The user's email as logged in the Settings → Access Management → Users.
User Type Indicates whether the user was defined in Cortex XDR using the CSP (Customer Support Portal), SSO (single sign-on) using your
organization’s IdP, or both CSP/SSO.
Role Name of the role assigned specifically to the user that is not inherited from somewhere else, such as a User Group. When the user does not
have any Cortex XDR access permissions that are assigned specifically to them, the field displays No-Role.
Abstract
The data collection section includes information related to each collection type.
13.3.2.1 | Hunting
Abstract
Hunting enables investigators to search for specific data across a large number of hosts. Hunt collections provide more details about where something
occurred. Hunting examples include finding which endpoints executed a piece of malware, which users accessed a particular file, or which endpoints were
accessed by a specific user.
Abstract
Hunt collections enable you to search endpoints for suspicious activity to contribute to helping resolve the investigation.
Select hunt collections when you want to search for a specific activity across a large number of hosts. Hunt Collections gather more details about where
something occurred. For example, use a hunt to find which endpoints executed a piece of malware, which users accessed a particular file, or which endpoints
a specific user authenticated to.
When adding a new hunt collection, you can select from various artifact types for Windows and macOS.
1. In the New Hunt Collection wizard, in the Hunt Collection Name, enter a name that will be easy to find in the collections table.
Repeat Collection Every: Run the hunt collection every x hours that you specify.
4. In Description, enter information that is relevant to the collection you are creating.
5. In Maximum Concurrent Endpoints, enter the maximum number of endpoints that will run the searches at the same time within the time range specified.
The default is 200 endpoints.
6. In the Configuration page, refer to the Configuration for hunt collection section for information about each of the artifacts.
You can save hunts in an incomplete state and edit them later. After a hunt has run, you cannot edit it. Instead, you can duplicate it to create a new hunt with the
same configuration that you can then edit.
When search fields are specified, the results of the search are limited based upon those filters. If there is more than one entry in a search filter field, the search
returns entries that match any of the provided entries. For example, a File Search with two specified paths ("C:\Test\*" and "C:\Windows\*") will return results
from both the Test folder and the Windows folder.
If you specify multiple search fields, the search returns entries that match all the selected criteria. For example, a File Search with one path ("C:\Test") and one
size filter (">= 100MB") will only return results from the Test folder that are greater than or equal to 100 megabytes.
Not all artifacts within an artifact category support the same search fields. If an artifact does not support one of the specified fields, that filter will not be
applied to the search results. For example, in a Windows Process Execution search with the search field User Name="jsmith", all results from the
CidSizeMRU, LastVisitedPidlMRU, and UserAssist artifacts will be filtered by that user name. Results from the Amcache, Prefetch, and Shimcache artifacts will
not be filtered by that user name because those artifacts do not have a User Name field.
In a hunt collection, you can create a search query adding any of the following artifacts.
Archive History (Windows only) - 60 minutes
Artifacts:
(Windows) 7-Zip Folder History: A registry key containing a list of archive files accessed using 7-Zip.
(Windows) WinRAR ArcHistory: A registry key containing a list of archive files accessed using WinRAR.
Search fields:
File Name: regular expression (case-insensitive). Example: [0-9A-F]{8}\.exe
File Path: path (wildcards ? * ** supported). Example: C:\Windows\Temp\**\*.exe
(Windows) Edge-Anaheim
(Windows) Edge-Spartan
(macOS) Quarantine
(macOS) Safari
Command History - 60 minutes
Artifacts:
(Windows) PSReadline: A record of commands typed into a PowerShell terminal by a user. The history file is only enabled by default starting with PowerShell 5 on Windows 10 or newer.
Search fields:
Search Regex: regular expression (case-insensitive). Example: [0-9A-F]{8}\.exe
Deleted Files (Windows only) - 180 minutes
Artifacts:
(Windows) Recycle Bin: Folder used by Windows as temporary storage for deleted files prior to permanent deletion.
Search fields:
File Name: regular expression (case-insensitive). Example: [0-9A-F]{8}\.exe
File Path: path (wildcards ? * ** supported). Example: C:\Windows\Temp\**\*.exe
Example: ACME\jsmith
File Access - 60 minutes
Artifacts:
(Windows) Jumplists: A feature of the Windows Task bar that provides shortcuts to users for recently accessed files or applications.
(Windows) OpenSavePidlMRU: A registry key containing a list of recently opened and saved files for a user’s account.
(Windows) TypedPaths: A registry key containing a list of paths that the user typed into the Windows Explorer path bar.
Search fields:
Target File Name: regular expression (case-insensitive). Example: [0-9A-F]{8}\.exe
Target File Path: path (wildcards ? * ** supported). Example: C:\Windows\Temp\**\*.exe
User Search: User SID or User Name selector. Example: ACME\jsmith
File Search - 180 minutes
Artifacts:
(Windows, macOS) File Search: Search for a file across endpoints by specifying a file path that can include wildcards, and then filter those results based on the file size, the file name (supports regular expressions), or file hash (MD5, SHA1, or SHA256).
Search fields:
File Path: path (wildcards ? * ** supported). Example: C:\Windows\Temp\**\*.exe
File Name: regular expression (case-insensitive). Example: [0-9A-F]{8}\.exe
File Hash: Supports MD5, SHA1, and SHA256. Example: f9d9b9ded9a67aa3cfdbd5002f3b524b265c4086c188e1be7c936ab25627bf01
Size
Log Search - 180 minutes
Artifacts:
(Windows) Event Log: A component of Microsoft Windows, where the user can view a record of events that occurred within a system or process.
Search fields:
Event Log Channel: Does not support wildcards. Example: Security
Event ID: Example: 4624
Example: [0-9A-F]{8}\.exe
Network Data - 60 minutes
Artifacts:
(Windows) ARP Cache: A cache of Address Resolution Protocol (ARP) records for resolved MAC and IP addresses.
Search fields:
IP Address: IPv4 or IPv6 addresses. Example: 10.0.0.5
Domain: regular expression (case-insensitive)
Persistence - 60 minutes
Artifacts:
(Windows) Drivers: Windows device drivers installed on each endpoint.
Search fields:
Registry Path: path (wildcards ? * ** supported). Example: HKEY_USERS\*\Software\Microsoft\Windows\CurrentVersion\Run\*
Process Execution - 60 minutes
Artifacts:
(Windows) Amcache: A registry hive used by the Application Compatibility Infrastructure to cache the details of executed or installed programs.
(Windows) Background Activity Monitor: Per-user registry keys created by the Background Activity Monitor (BAM) service to store the full paths of executable files and a timestamp indicating when they were last executed.
(Windows) CidSizeMRU: A registry key containing a list of recently launched applications.
(Windows) LastVisitedPidlMRU: A registry key containing a list of the applications and folder paths associated with recently opened files found in the user’s OpenSavePidlMRU key.
(Windows) Recentfilecache: A cache created by the Application Compatibility Infrastructure to store the details of executed or installed programs (Windows 7 only).
(Windows) Shimcache: A registry key used by the Application Compatibility Infrastructure to cache details about local executables.
(macOS) CoreAnalytics: A diagnostic log that contains details of files executed on the system.
Search fields:
Executable File Name: regular expression (case-insensitive). Example: [0-9A-F]{8}\.exe
Executable Path: path (wildcards ? * ** supported). Example: C:\Windows\Temp\**\test.exe
User Search: User SID or User Name selector. Example: ACME\jsmith
SHA256: Supports SHA256 hashes. Example: f9d9b9ded9a67aa3cfdbd5002f3b524b265c4086c188e1be7c936ab25627bf01
Registry Search (Windows only) - 180 minutes
Artifacts:
(Windows) Registry Search: Registry listings collected during the forensic investigation.
Search fields:
Path: path (wildcards ? * ** supported). Example: HKEY_USERS\*\Software\Microsoft\Windows\CurrentVersion\Run\*
Data: regular expression (case-insensitive). Example: [0-9A-F]{8}\.exe
System Statistics (Windows only) - 60-120 minutes
Artifacts:
(Windows) Application Resource Usage: A table in the System Resource Usage database that stores statistics pertaining to resource usage by running applications.
Search fields:
Application: path (wildcards ? * ** supported). Example: C:\Windows\Temp\**\test.exe
User Search: User SID or User Name selector. Example: ACME\jsmith
User Searches - 60 minutes
Artifacts:
(Windows) WordWheelQuery: Registry key containing a list of terms that a user searched for in Windows Explorer.
(macOS) Spotlight Shortcuts: A plist file that contains the Spotlight search terms entered by each user and the items that they selected from the search results.
Search fields:
User Search: User SID or User Name selector. Example: PANW\jsmith
Search Regex: regular expression (case-insensitive). Example: [0-9A-F]{8}\.exe
Abstract
The hunt results page consolidates information collected by the Cortex XDR agent enabling you to investigate and take action on your endpoints.
The hunt results page consolidates information collected by the Cortex XDR agent enabling you to investigate and take action on your endpoints.
Abstract
The Process Execution table displays a normalized table containing an overview of all of the different process execution artifacts collected from the endpoints.
Investigate the following detailed fields:
The grouping button ( ) shows the number of affected endpoints grouped by executable name. This enables you to perform hunting via frequency analysis
(referred to as stacking) and provides a bird's-eye view of potential malware files that require further analysis.
Field Description
Context Contextual details relating to the executed process such as files opened,
command line arguments, or process run count.
MD5 MD5 value of the executable file, if available on the file system.
SHA1 SHA1 value of the executable file, if available on the file system.
SHA256 SHA256 value of the executable file, if available on the file system.
Prefetch
Recentfilecache
Shimcache
UserAssist
Unknown
Benign
Malware
Grayware
Abstract
The File Access table displays a normalized table containing an overview of all of the different file access artifacts collected from the endpoints. Investigate the
following detailed fields:
Field Description
Hostname Name of the host where the file access artifact resided.
Abstract
The Persistence table displays a normalized table containing an overview of all of the application persistence artifacts collected from the endpoints. Investigate
the following detailed fields:
The grouping button ( ) shows the number of affected endpoints grouped by file path. This enables you to perform hunting via frequency analysis (referred to
as stacking) and provides a bird's-eye view of potential malware files that require further analysis.
Field Description
File Path Path of a secondary executable (often a dll) associated with this
persistence mechanism.
Image Path Path of the executable associated with this persistence mechanism.
Drivers
Registry
Scheduled Tasks
Services
Startup Folder
Unknown
Benign
Malware
Grayware
Abstract
Field Description
Abstract
The Remote Access table displays a normalized table containing an overview of all of the remote access artifacts collected from the endpoints. Investigate the
following detailed fields:
Field Description
Connection ID Unique Identifier associated with the particular remote access connection
found in this row.
Abstract
The Archive History table displays an overview of the different types of archive processes that were executed on an endpoint. Investigate the following detailed
fields:
Field Description
Hostname Name of the host on which the archive history was found.
WinRAR ArcHistory
Abstract
In the Actions table, you can scroll or use the filters to see the status of any search within a hunt across any of the targeted endpoints.
Hunts consist of searches across multiple endpoints and those searches can take time to return results from all of the targeted endpoints. To view the status of
all of the searches contained within a hunt, go to Incident Response → Investigation → Forensics. From the investigation table, click the investigation link. From
the Collections tab, select Hunt and from the Status column of the hunt, click Actions. This launches a new browser tab displaying the Actions table. Within the
Actions table, you can scroll or use the filters to see the status of any search within a hunt across any of the targeted endpoints.
Using this information, you can identify the successful and failed searches and take the necessary action.
Field Description
Pending
In progress
Completed successfully
Failed
Timeout
Field Description
Example: Amcache
Last updated Latest time results were received for this action.
13.3.2.2 | Triage
Abstract
Triage collection gathers a wide range of artifacts that can be used to help understand the event that occurred on an endpoint.
Triage enables you to do an in-depth analysis of a specific endpoint to fully understand the activities that occurred on that endpoint. The triage functionality is
configurable and supports the collection of all currently supported forensic artifacts, user-defined file paths, a full file listing for all of the connected drives, full
event logs, and registry hives. The amount of data collected during a triage can be large, so triages are limited to ten or fewer endpoints per collection.
Abstract
Triage collections enable you to obtain additional information for certain activities that have occurred on the endpoints. This supports the forensic
analysis of an investigation.
Use triage collections when a certain activity, group of activities, or the actions of a specific user on that endpoint have been identified, and additional
information is required. The triage functionality collects detailed system information, including a full file listing for all of the connected drives, full event logs, and
registry hives, to provide you with a complete, holistic picture of an endpoint.
Triage supports data collection from both online and offline hosts, on both Windows and macOS platforms.
1. In the Triage Collection Name field, enter a name that will be easy to find in the collections table.
3. In the Description field, enter information that is relevant to the collection you are creating.
5. Select Offline to upload archives containing forensic data collected by the Offline Collector. After the archive has been uploaded, the data is extracted
and ingested into the Forensics tables on the tenant. Import Offline Triage supports uploading packages created on both the Windows and macOS
platforms.
7. In the configuration page, select the options from the Artifacts, Volatiles and File Collection list.
You can click Add Custom to add your own file to the File Collections.
8. You can select a preset from Select Presets (Windows/macOS) to copy the options of artifacts, volatiles and file collections from another collection.
You can also click Save new preset to save the options of the current collection for prospective triage collections to use.
Abstract
Use the Upload Offline Triage to upload archives containing forensic data collected by the offline collector.
The Forensics Triage feature enables you to create a custom, standalone executable package that collects all of the forensic artifacts in the configuration.
Use the Upload Offline Triage to upload archives containing forensic data collected by the offline collector. After the archive has been uploaded, the data is
extracted and ingested into the forensics table on the tenant. Upload Offline Triage supports uploading packages created on both the Windows and macOS
platforms.
3. When in the Collections page, search for or select the triage and click the menu options button ( ) to select Upload Offline Package.
4. Drag and drop or use the browse link to search for the file. More than one offline triage package can be uploaded at a time.
Do not upload memory images captured by the Offline Triage Collector. These images are collected for analysis using third-party tools and are not
intended for upload.
5. Click Done.
Abstract
Offline triage collection is supported for endpoints with no network connection or no Cortex XDR agent currently installed.
The Forensics add-on provides a triage collection option for endpoints with no network connection or no Cortex XDR agent currently installed.
Windows
2. Click the investigation link and from the Collections tab, find the triage and click the menu options button ( ). Depending on the system type of the
endpoint, select Download 32-bit Collector or Download 64-bit Collector.
3. Copy the downloaded file to a destination that is accessible from the targeted endpoint.
4. From the endpoint, open the folder containing the offline triage collector and right-click on the executable file cortex-xdr-payload.exe and select Run as
administrator.
The cortex-xdr-payload.exe opens a command window that displays the status of each artifact collection.
After the collection is completed, a zip file with the hostname and a timestamp in the file name is created in the same directory as the executable.
5. From the Collections page, select the triage and click the menu options button ( ) and select Upload Offline Package.
6. In the Import Offline Triage dialog, browse for or drag and drop the zip file and click Done.
The triage file is ingested and the results are available for review.
Security software running on the endpoint (including the Cortex agent) can interfere with or block the execution of the offline triage collector. Disable any
security software on the endpoint while the collector is running or whitelist the collector in your security software before running the offline triage collector.
macOS
2. Click the investigation link and from the Collections tab, find the triage and click the menu options button ( ) and select Download Collector.
3. Open the folder containing the zip file and run the command xattr -c <triage_configuration_name>.zip, to remove any extended
attributes that macOS might have applied to the file.
4. Copy the downloaded zip file to a destination that is accessible from the targeted endpoint.
After the collection is completed, a zip file with the hostname and a timestamp in the file name is created in the same directory as the executable.
6. From the Collections page, select the triage and click the menu options button ( ) and select Upload Offline Package.
7. In the Import Offline Triage dialog, browse for or drag and drop the zip file and click Done.
The triage file is ingested and the results are available for review.
Security software running on the endpoint (including the Cortex agent) can interfere with or block the execution of the offline triage collector. Disable any
security software on the endpoint while the collector is running or whitelist the collector in your security software before running the offline triage collector.
Abstract
You can drill down from the triage collection to review the results.
The Triage collection results page displays an overview of the different types of triage collections that were initiated on an endpoint.
Alerts: Refer to Featured fields in Overview of the Alerts page for descriptions of the fields.
Artifacts: Display all of the artifact categories collected. You can select the item to add to a timeline.
Host Timeline: Displays a list of normalized, per-host timelines that include multiple forensic artifacts in a single table.
Abstract
From the Actions table, you can view the search status of all the artifacts for the triage.
You can drill down to the Actions table from the status link of the triage to view the search status of all the artifacts for the triage.
Field Description
Field Description
Abstract
Learn more about your investigation by reviewing the additional data for analysis and documentation purposes.
Forensic investigations include additional data for analysis and documentation purposes.
Alerts
Forensics Timeline
Abstract
The alerts table displays all the collections within the investigation that have identified suspicious or malicious activity within the forensics data sets.
The alerts table displays all the collections within the investigation that have identified suspicious or malicious activity within the forensics data sets.
Refer to Featured fields in Overview of the Alerts page for the descriptions of the table fields.
Change severity
Run playbook
Manage alerts
Abstract
The investigation timeline shows the forensic artifacts that were tagged. The tags display details of the forensic data collected from the endpoints.
The Timeline page enables you to view the list of forensic artifacts that were tagged. The tags display details of the forensic data collected from the endpoints.
Field Description
legitimate
malicious
suspicious
Mitre Att&ck Tactic Displays the type of MITRE ATT&CK tactic of the tagged item.
Mitre Att&ck Technique Displays the type of MITRE ATT&CK technique of the tagged item.
c. In Edit timeline entry, update the information as required and then click Save to update the changes.
You can remove a tag from the artifact in the Timeline table.
b. Right-click and select Clear timeline entry. The tag is removed from the artifact and the row is removed from the Timeline table.
Abstract
Displays the forensic investigation based on the tagged data and aligns it to the corresponding category.
Key assets & artifacts are automatically created based on the tagged data from the investigation timeline and are divided among the following
categories:
Data Access: Displays all the items that have been tagged in the File Access tables.
The following table for Endpoints displays the endpoints that have at least one tagged item:
Field Description
Mobile
Server
Workstation
Kubernetes Node
Connected
Connection Lost
Deleted
Disconnected
Uninstalled
Forensics Offline
Partial Registration
Earliest Activity Timestamp of the earliest tagged item in the incident timeline for the endpoint.
Latest Activity Timestamp of the last tagged item in the incident timeline for the endpoint.
Field Description
Pending Isolation
Isolated
Not Isolated
The following table for Malware shows all the items that have been tagged in the Process Execution or Persistence tables.
Field Description
Windows
macOS
Linux
Android
Field Description
Earliest Activity Timestamp of earliest tagged item in Incident Timeline for the user.
Latest Activity Timestamp of last tagged item in Incident Timeline for the user.
The following table for Network Indicators displays the event logs with the IP addresses that have been tagged.
Field Description
Type IP Address
Hostname
URL
The following table for Data Access displays all the items that have been tagged in the File Access tables.
Field Description
Field Description
13.3.4 | Export
Abstract
Select the export option to export data collection for long-term retention or offline analysis.
You can export the data collection for long-term retention or offline analysis.
From the collections page, choose a search item from a hunt collection or the endpoint from a triage collection and click the export icon ( ). For export of all
items, select the Export All option from the Exports button at the top of the Collections page.
The Investigation Exports table displays the status of the requested exports for the selected collection. The compressed export data expires from the bucket
after 30 days.
Field Description
Collection name Displays the name of the triage or hunt. For triage, the endpoint name of the triaged host is displayed.
Exported Displays the time when the exported package was created (compressed).
Exported by Displays the name of the user who requested the export.
Export expiration Displays the timestamp of when the bucket data (compressed data) will be deleted.
The timestamp changes to red after the expiration time has passed, and the last column shows Expired.
Status Indicates how many tables from the collections have been successfully exported to a bucket.
Download button Enables you to download the compressed (zip) export of the collection.
Abstract
Perform a vulnerability assessment of all endpoints in your network using Cortex XDR. This includes CVE, endpoint, and application analysis.
Cortex XDR vulnerability assessment enables you to identify and quantify the security vulnerabilities on an endpoint. After evaluating the risks to which each
endpoint is exposed and the vulnerability status of an installed application in your network, you can mitigate and patch these vulnerabilities on all the endpoints
in your organization.
The following are prerequisites for Cortex XDR to perform a vulnerability assessment of your endpoints.
Requirement Description
Windows
Cortex XDR lists only CVEs relating to the operating system, and not CVEs relating
to applications provided by other vendors.
Cortex XDR retrieves the latest data for each CVE from the NIST National
Vulnerability Database as well as from the Microsoft Security Response Center
(MSRC).
Cortex XDR collects KB and application information from the agents but calculates
CVE only for KBs based on the data collected from MSRC and other sources.
For endpoints running Windows Insider, Cortex XDR cannot guarantee an accurate
CVE assessment.
Cortex XDR does not display open CVEs for endpoints running Windows releases
for which Microsoft no longer fixes CVEs.
Linux
Cortex XDR collects all the information about the operating system and the installed
applications, and calculates CVE based on the latest data retrieved from the NIST.
MacOS
Cortex XDR collects only the applications list from MacOS without CVE calculation.
If Cortex XDR doesn't match any CVE to its corresponding application, an error message is
displayed, "No CVEs Found".
Setup and Permissions Ensure Host Inventory Data Collection is enabled for your Cortex XDR agent.
Limitations Cortex XDR calculates CVEs for applications according to the application version, and not
according to application build numbers.
The Enhanced Vulnerability Assessment mode uses an advanced algorithm to collect extensive details on CVEs from comprehensive databases and to
produce an in-depth analysis of the endpoint vulnerabilities. Turn on the Enhanced Vulnerability Assessment mode from Settings → Configurations →
Vulnerability Assessment. This option may be disabled for the first few days after updating Cortex XDR as the Enhanced Vulnerability Assessment engine is
initialized.
The following are prerequisites for Cortex XDR to perform an Enhanced Vulnerability Assessment of your endpoints.
Requirement Description
Windows
Cortex XDR collects all the information about the operating system and the installed
applications, and calculates CVE based on the latest data retrieved from the NIST.
CVEs that apply to applications that are installed by one user aren't detected when
another user without the application installed is logged in during the scan.
MacOS
Cortex XDR collects all the information about the operating system and the installed
applications, and calculates CVE based on the latest data retrieved from the NIST.
Setup and Permissions Ensure Host Inventory Data Collection is enabled for your Cortex XDR agent.
Limitations Some CVEs may be outdated if the Cortex XDR agent wasn't updated recently.
Application versions which have reached end-of-life (EOL) may have their version listed as
0. This doesn't affect the detection of the CVEs.
Some applications are listed twice. One of the instances may display an invalid version; however, this doesn't affect the functionality.
The scanning process may impact the performance of the Cortex XDR agent during scanning. The scan may take up to two minutes.
You can access the Vulnerability Assessment panel from Assets → Vulnerability Assessment.
Collecting the initial data from all endpoints in your network could take up to 6 hours. After that, Cortex XDR initiates periodic recalculations to rescan the
endpoints and retrieve the updated data. If at any point you want to force data recalculation, click Recalculate. The recalculation performed by any user on a
tenant updates the list displayed to every user on the same tenant.
CVE Analysis
To evaluate the extent and severity of each CVE across your endpoints, you can drill down into each CVE in Cortex XDR and view all the endpoints and
applications in your environment that are impacted by the CVE. Cortex XDR retrieves the latest information from the NIST public database. From Assets → Host
Insights → Vulnerability Assessment, select CVEs on the upper-right bar. This information is also available in the va_cves dataset, which you can use to build
queries in XQL Search.
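For example, a minimal XQL sketch against the va_cves dataset that lists the most widespread critical CVEs might look like the following. The field names cve_id, severity, severity_score, and affected_endpoints, as well as the severity value, are assumptions and may be named differently in your tenant.
dataset = va_cves
| filter severity = "CRITICAL"
| fields cve_id, severity_score, affected_endpoints
| sort desc affected_endpoints
| limit 20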
If you have the Identity Threat Module enabled, you can also view the CVE analysis in the Host Risk View. To do so, from Assets → Asset Scores, select
the Hosts tab, right click on any endpoint, and select Open Host Risk View.
For each vulnerability, Cortex XDR displays the following default and optional values.
Value Description
Affected endpoints The number of endpoints that are currently affected by this CVE. For
excluded CVEs, the affected endpoints are N/A.
You can click each individual CVE to view in-depth details about it on a
panel that appears on the right.
Value Description
Excluded Indicates whether this CVE is excluded from all endpoint and application
views and filters, and from all Host Insights widgets.
Platforms The name and version of the operating system affected by this CVE.
Severity The severity level (Critical, High, Medium, or Low) of the CVE as ranked in
the NIST database.
Severity score The CVE severity score is based on the NIST Common Vulnerability Scoring
System (CVSS). Click the score to see the full CVSS description.
You can perform the following actions from Cortex XDR as you analyze the existing vulnerabilities:
View CVE details—Left-click the CVE to view in-depth details about it on a panel that appears on the right. Use the in-panel links as needed.
View a complete list of all endpoints in your network that are impacted by a CVE—Right-click the CVE and then select View affected endpoints.
Learn more about the applications in your network that are impacted by a CVE—Right-click the CVE and then select View applications.
Exclude irrelevant CVEs from your endpoints and applications analysis—Right-click the CVE and then select Exclude. You can add a comment if
needed, as well as Report CVE as incorrect for further analysis and investigation by Palo Alto Networks. The CVE is grayed out and labeled Excluded
and no longer appears on the Endpoints and Applications views in Vulnerability Assessment, or in the Host Insights widgets. To restore the CVE, you can
right-click the CVE and Undo exclusion at any time.
The CVE is removed from, or reinstated to, all views, filters, and widgets after the next vulnerability recalculation.
Endpoint Analysis
To help you assess the vulnerability status of an endpoint, Cortex XDR provides a full list of all installed applications and existing CVEs per endpoint and also
assigns each endpoint a vulnerability severity score that reflects the highest NIST vulnerability score detected on the endpoint. This information helps you to
determine the best course of action for remediating each endpoint. From Assets → Vulnerability Assessment, select Endpoints on the upper-right bar. This
information is also available in the va_endpoints dataset. In addition, the host_inventory_endpoints preset lists all endpoints, CVE data, and additional
metadata regarding the endpoint information. You can use this dataset and preset to build queries in XQL Search.
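For example, a minimal XQL sketch against the va_endpoints dataset that lists endpoints exposed to critical CVEs might look like the following. The field names endpoint_name, severity, and cves, as well as the severity value, are assumptions and may differ in your tenant.
dataset = va_endpoints
| filter severity = "CRITICAL"
| fields endpoint_name, severity, cves
| limit 50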
For each vulnerability, Cortex XDR displays the following default and optional values.
Value Description
CVEs A list of all CVEs that exist on applications that are installed on the endpoint.
You can click each individual endpoint to view in-depth details about it on a
panel that appears on the right.
Last Reported Timestamp The date and time of the last time the Cortex XDR agent started the
process of reporting its application inventory to Cortex XDR.
Value Description
Severity The severity level (Critical, High, Medium, or Low) of the CVE as ranked in
the NIST database.
Severity score The CVE severity score based on the NIST Common Vulnerability Scoring
System (CVSS). Click the score to see the full CVSS description.
You can perform the following actions from Cortex XDR as you investigate and remediate your endpoints:
View endpoint details—Left-click the endpoint to view in-depth details about it on a panel that appears on the right. Use the in-panel links as needed.
View a complete list of all applications installed on an endpoint—Right-click the endpoint and then select View installed applications. This list
includes the name and version of each application on the endpoint. If an installed application has known vulnerabilities, Cortex XDR also
displays the list of CVEs and the highest Severity.
(Windows only) Isolate an endpoint from your network—Right-click the endpoint and then select Isolate the endpoint before or during your
remediation to allow the Cortex XDR agent to communicate only with Cortex XDR.
(Windows only) View a complete list of all KBs installed on an endpoint—Right-click the endpoint and then select View installed KBs. This list
includes all the Microsoft Windows patches that were installed on the endpoint and a link to the Microsoft official Knowledge Base (KB) support article.
This information is also available in the host_inventory_kbs preset, which you can use to build queries in XQL Search (see the sample query after this list).
Retrieve an updated list of applications installed on an endpoint—Right-click the endpoint and then select Rescan endpoint.
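As an example of querying the host_inventory_kbs preset mentioned above, the following XQL sketch lists the KBs reported for a single endpoint. The field names endpoint_name and kb_id, and the hostname WIN-SRV-01, are assumptions used for illustration and may differ in your environment.
preset = host_inventory_kbs
| filter endpoint_name = "WIN-SRV-01"
| fields endpoint_name, kb_id
| sort asc kb_id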
Application Analysis
You can assess the vulnerability status of applications in your network using the Host inventory. Cortex XDR compiles an application inventory of all the
applications installed in your network by collecting from each Cortex XDR agent the list of installed applications. For each application on the list, you can see
the existing CVEs and the vulnerability severity score that reflects the highest NIST vulnerability score detected for the application. Any new application
installed on the endpoint will appear in Cortex XDR within 24 hours. Alternatively, you can re-scan the endpoint to retrieve the most updated list.
Starting with macOS 10.15, Mac built-in system applications are not reported by the Cortex XDR agent and are not part of the Cortex XDR Application
Inventory.
To view the details of all the endpoints in your network on which an application is installed, right-click the application and select View endpoints.
To view in-depth details about the application, left-click the application name.
Abstract
Cortex XDR Network Configuration provides a representation of your network assets by collecting and analyzing your network resources.
Network asset visibility is a crucial investigative tool for discovering rogue devices and preventing malicious activity within your network. The number of
managed and unmanaged assets in your network provides vital information for assessing security exposure and tracking network communication effectively.
Cortex XDR Network Configuration accurately represents your network assets by collecting and analyzing the following network resources:
User-defined IP Address Ranges and Domain Names associated with your internal network.
ARP Cache
With the data aggregated by Cortex XDR Network Configuration, you can locate and manage your assets more effectively and reduce the amount of research
required to:
Monitor network data communications both within and outside your network.
Abstract
Define the IP address ranges and domain names used by Cortex XDR to identify your network assets.
Internal IP address ranges and domain names must be defined in order to track and identify assets in the network. This enables Cortex XDR to analyze, locate,
and display your network assets.
By default, Cortex XDR creates Private Network ranges that specify reserved industry-approved ranges. These ranges can only be renamed.
Create New.
1. In the Create IP Address Range dialog box, enter the IP address Name and IP Address, Range or CIDR values.
You can add a range that is fully contained in an existing range; however, you cannot add a new range that partially intersects with another
range.
2. Click Save.
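For example (illustrative values only), you could create a range named Branch-Office-LAN with the CIDR value 10.20.0.0/16. A nested range such as 10.20.5.0/24 can then be added because it is fully contained in the existing range, whereas a range that only partially overlaps 10.20.0.0/16 would be rejected.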
1. In the Upload IP Address Range dialog box, drag and drop or search for a CSV file listing the IP address ranges. Download example file
to view the correct format.
2. Click Add.
2. In the Internal Domain Suffixes section, +Add the domain suffix you want to include as part of your internal network. For example, acme.com.
FIELD DESCRIPTION
Active Assets Number of assets within the defined range that have reported Cortex Agent logs or appeared in your Network Firewall Logs.
FIELD DESCRIPTION
Active Managed Assets Number of assets within the defined range that reported Cortex XDR Agent logs.
Modification Time The timestamp shows when this range was last changed.
Abstract
Cloud Compliance performs the Center for Internet Security (CIS) benchmarking compliance checks on endpoint resources for Linux and Kubernetes agents.
Cloud Compliance is mainly designed for cloud-based Linux assets and Kubernetes hosts, but can also provide the same metric data for on-prem Linux
appliances. As a result, Cloud Compliance provides an overview of cloud security posture violations on your Linux machines, covering Linux and
container compliance, and also Kubernetes, when applicable.
To receive data in the Cloud Compliance page, configure your Linux agent settings profile to collect this data by selecting the enable cloud compliance
collection option in XDR Pro. The endpoints require this data collection option to be enabled for around 12 hours to set the benchmarks and display the results on
the Cloud Compliance page.
Abstract
Learn how to view and investigate User Scores and Host Scores using the Asset Scores page.
The Asset Scores page provides a central location from which you can view and investigate information relating to User Scores and Host Scores in your
network.
The Asset Scores page is available if the Identity Threat Module add-on is enabled.
Cortex XDR aggregates Workday and Active Directory data to create a list of user and host assets within your network. When alerts and incidents occur, they
are associated with a host or user asset and Cortex XDR calculates a score that represents the risk level of each asset. This score helps to identify high-risk
assets in your organization and detect compromised accounts and malicious activities.
To Include System Users in the table, select the Include System Users checkbox: system users are SYSTEM, administrators, NT authority, and others.
As new alerts are associated with incidents, the User and Host Scores are recalculated. You can view the latest User and Host Scores on the Asset Scores
page, or track the Score trend on the User Risk View and Host Risk View.
1. Select Assets → Asset Scores. Use the toggle in the page header to switch between the Users and Hosts tabs.
Field Description
Score Represents the Cortex XDR high-risk user score. The score is updated
continuously as new alerts are associated with incidents.
Field Description
Member of (Derived from AD) The security groups that the user is associated with.
Last login Last date and time the user accessed Cortex XDR.
Field Description
Has XDR agent Whether the endpoint has an XDR agent installed.
Agent installation date Date and time that the XDR agent was installed.
Field Description
3. To investigate further, right-click on a selected host or user and click Open User Risk View or Host Risk View. For more information, see Investigate a user
and Investigate a host.
Some User Associated Insights may not appear as part of the User Associated Incidents due to the insight generation mechanism. For example, when
an insight related to one of the assets in an incident is generated a few days after the associated incident, the insight may not be associated with the
incident.
Abstract
From the Cortex XDR management console, you can manage your different network assets.
Cortex XDR provides a central location from which you can view and investigate information relating to assets in your network. Using your defined internal
network configurations, Broker VM Network Mapper, Cortex XDR agent, EDR data collected from firewall logs, and logs from third-party vendors, Cortex XDR is
able to aggregate and display a list of all the assets located within your network. As soon as Cortex XDR begins receiving network assets, you can view the
data in Assets → Asset Inventory.
The following are some of the main features available on these pages.
When any row in the table is selected, a side panel on the right with greater details is displayed, where you can view additional data divided by sections.
The section heading names and data displayed change depending on the source of the assets.
Depending on the cell you’ve selected in the table, different right-click pivot menus are available, such as Open IP View and Open in Quick Launcher.
You can export the tables and respective asset views to a tab-separated values (TSV) file.
You can toggle between the Legacy View and Advanced View on the page. The Legacy View displays a list of all the assets located within your network
according to their IP address, while the Advanced View provides the following capabilities:
You can view the data in a table format by accessing the pages for All Assets and Specific Assets, including On-Prem Assets and Cloud Compute
Instances.
The table columns provide newly structured data with updated filtering capabilities to improve your asset visibility.
When any row in a table is selected, a side panel on the right with greater details is displayed, where you can view additional data divided by
sections. The section heading names and data displayed change depending on the source of the assets.
Depending on the cell you’ve selected in the table, different right-click pivot menus are available, such as Open IP View and Open in Quick
Launcher.
You can export the tables and respective asset views to a tab-separated values (TSV) file.
By default, the Assets table is filtered according to unmanaged assets over the last 7 days. The following table describes both the default and optional
fields in the table, and the network prerequisites required by Cortex XDR to retrieve the data.
MAC address The MAC address of the asset. The asset requires at least one of the following:
MAC address vendor Vendor name of the MAC address of the asset. The asset requires at least one of the following:
Host name Host name of the asset, if available. The asset requires at least one of the following.
Platform Platform running on the asset. The asset requires at least one of the following:
Abstract
Cortex XDR enables you to view all external assets from the various asset categories on the All Assets page.
Ingesting and Viewing Cloud Compute Instances for Cloud Inventory Assets requires a Cortex XDR Pro per GB license.
The All Assets page enables you to view all your assets from various asset categories. Each asset is available in Cortex XDR in different ways depending on
the asset category and Cortex XDR license as explained in the following table.
Cloud compute Requires configuring either a Cloud Inventory data collector or Agents that are installed on the Cortex XDR Pro TB
instance Cloud Compute Instances. license
By default, the All Assets page displays all assets according to the asset name. To search for specific assets, use the filters above the results table to narrow
the results. You can export the tables and respective asset views to a tab-separated values (TSV) file. From the All Assets page, you can also manage the
asset's output using the right-click pivot menu.
The All Assets table comprises a number of common fields that are available when viewing any of the Specific Assets pages. The TYPE field is only
available in the All Assets table as this field determines the Specific Assets categories, and can be used to filter the different types of assets from the entire list
of assets.
When any row in the table is selected, a side panel on the right with greater details is displayed, where you can view additional data divided by sections. The
section heading names and data displayed change depending on the source of the assets.
The following table describes the fields that are available when viewing All Assets in alphabetical order.
Field Description
Active external services types* An array column that displays all the active Service types observed for this asset.
ASM IDs The ASM identifiers for this asset, indicating that it is exposed to the Internet.
Business units* A Business Unit is a designation used to classify assets. Business units are tracked as a means to identify the owning organizations of these
assets. Business units become extremely important when an organization has subsidiaries and groups established through M&A
activities.
Cloud provider* The cloud provider used to collect these cloud assets is either GCP, AWS, or Azure.
First observed* When the asset was first observed via any of the sources.
Has active external services* A boolean value that displays whether the asset has any active external services. Use this filter to narrow down the asset inventory
to internet-facing assets, and get a clear view of the organization's attack surface.
Has XDR agent* Boolean value indicating if this asset has a Cortex XDR agent installed on it.
IP addresses* Array column specifying a list of IPs associated with this asset.
Last observed* When the asset was last observed via any of the sources.
Name* Displays the name that describes the asset as provided by the source, if provided.
Operating system* The operating system reported by the source for this asset.
Sources* An array column that displays all the sources that provided observations for this asset.
Field Description
On-Prem
XDR agent ID If there is an endpoint installed on this asset, this is the endpoint ID.
Abstract
Cortex XDR enables you to view specific external assets from a designated assets category in the Specific Assets page.
The Asset Categories listed are dependent on your Cortex XDR license. For more information, see All Assets.
Description: a brief description of the assets included on the specific asset page.
Unique Fields: the unique fields that are only available when viewing this specific asset page, and are displayed in addition to the common fields listed
for the All Assets page. These fields are exposed by default.
Cloud compute instance
Description: Includes assets managed by Agents, where the agent reported that the assets are in a cloud environment. In addition, the assets can be Cloud
Compute Instances that were reported by a Cloud integration (i.e. the Cloud Inventory data collector), with or without a Cortex agent. Cortex XDR attempts to
associate the data received from the agent with the data received from the Cloud Integration and tie them together into a single asset.
Unique Fields: No specific unique fields are displayed in addition to the common fields.
On-prem
Description: Includes devices that have an Agent, and also devices that were identified by various sources yet were not associated with an Agent, such as IoT
devices. Does not include devices that are in the cloud.
Unique Fields: The following attributes are relevant for IoT devices and indicate the category and subcategory to which an IoT device belongs. For example,
the category may identify network behaviors common to all security cameras, and the model identifies the model of the IoT device.
Device model
Device category
Device subcategory
Certificate
Description: Certificates (also known as digital or public key certificates) are used when establishing encrypted communication channels to identify and
authenticate a trusted party. The most common use of certificates is for SSL/TLS, HTTPS, FTPS, SSH, and VPN connections, with HTTPS-based websites
being the most common use case, because a certificate allows a web browser to validate that an HTTPS web server is an authentic website. Cortex XDR
tracks information for each certificate, such as Issuer, Public key, Public Key Algorithm, Subject, Subject Alternative Names, Subject Organization, Subject
Country, Subject State, and several “crypto health” checks.
Unique Fields:
Formatted issuer name
Certificate algorithm
Certificate classification
Domain
Description: A domain name attributed to an organization by Cortex XDR. Subdomains of attributed Domains are also tracked as Domains. When there are
too many (>1k) recent subdomains for one domain, Cortex XDR collapses them into the parent domain.
Unique Fields: Resolves: indicates whether the domain has a DNS resolution.
Responsive IP
Description: An IP that currently exposes, or has previously exposed, an External Service that was detected by Cortex XDR and associated with the
organization. Only Responsive IPs and certificates that have at least one active Service are displayed in the Asset Inventory. Externally detected Responsive
IPs are matched with existing assets using the asset’s IP addresses. If the Responsive IP was matched to an existing asset, its data is added to the asset. Any
externally detected Responsive IP that was not matched with an existing asset is considered an independent asset of type “Unassociated External
Responsive IP”.
Unique Fields: No specific unique fields are displayed in addition to the common fields.
Abstract
View asset roles and the number of assets that are associated with each role. Learn how to manage asset roles for users and endpoints.
Asset Roles are available only if the Identity Threat Module add-on is enabled.
Cortex XDR continuously analyzes your users and endpoints, and automatically classifies them based on their activities under asset roles, for example, Domain
Controller, Administrator, and Executive User. You can edit, add, and fine-tune the assets associated with each asset role at any time.
Fine-tuned asset roles aid Cortex XDR Analytics in the following areas.
Enhancement of the accuracy of the analytics that runs on assets, enabling better detection of uncommon activities by the asset based on the baseline
for the asset role.
Asset role visualization in the Incident view, the User view, and the Host view as background information for risk assessment.
Analysis of User and Host peer groups for score trend comparison over selected timelines.
You can add users and endpoints to any asset role manually or by importing a CSV file.
You can remove users from asset roles manually and override the automatically detected asset roles.
The tag family for asset roles provides the ability to slice and dice alerts and incidents. Automated and customizable asset role classification is based on
constant analysis of the users and hosts in your network. You can edit and manage the User Asset Roles and Host Asset Roles to meet the needs of your
organization.
The Assets → Asset Roles Configuration page displays the asset roles, their type, the number of assets that are associated with each asset role, and the last
modification date. On this page, you can refresh the data, filter it, and change the layout.
To edit an asset role, right-click and select Edit Asset Role. Depending on the type of asset, you can manage the user asset role list or the endpoint asset role
list for the asset role.
Abstract
User Role Management is available only if the Identity Threat Module add-on is enabled.
The Edit User Role page enables you to edit the user lists assigned to asset roles. You may want to exclude some users from certain asset roles even if Cortex
XDR automatically detected the user as having this asset role. For example, if a user's position in the organization is changed and you want their Analytics to
be adjusted accordingly.
The User list on the page displays the users classified under the asset role, if the asset role was assigned automatically or edited manually for the user, the last
modification date, and the modifier.
To access the Edit User Role page, from Assets → Asset Roles Configuration, right-click to select the user asset role and click Edit Asset Role.
Included Users displays all the users Cortex XDR automatically detects as having this asset role and the users you specify manually as having this asset role.
Excluded Users displays the users that were manually removed from an asset role. When you exclude a user from an asset role, it remains in the Excluded
Users list and even if it's detected automatically again in the future as having this asset role, it will not be included in the asset role list.
If you want to remove a user from the list of users with this asset role, right-click the user and select Exclude User. The user is then listed under Excluded Users
for this asset role. When you exclude a user from an asset role, by default Cortex XDR also removes the user from the parent asset roles of the current asset
role. To remove the user from the child asset role, but to leave it in any of its parent asset roles, click Advanced Exclusion Settings, and select Don't Exclude
next to the name of the parent asset role(s).
To include an excluded user back in the asset role, right-click the user in the Excluded Users list and select Delete User. If the user was automatically detected
as having this asset role, it will be added back to the Included Users list again. Otherwise, the next time Cortex XDR analyzes the assets and automatically
detects their asset roles, this user will be included in the asset role list.
To include users from your system manually in an asset role list, in the asset role page, click Add User.
To add one or more users manually, click Add New, and then type the user names one by one in the format Netbios\samAccount.
To add users from a CSV file, click Import from File. You can use the example file provided to structure your CSV file.
Manually added users are also analyzed by Analytics when it runs next, and are displayed in the Incident view and the User Risk view.
To delete a manually added user from the Included Users, right-click and Delete User.
Deleting a manually added user removes the user from the Included Users list. If this user is detected automatically as having this asset role in the future, it will
appear in the Included Users list again.
Excluding a manually added user ensures that even if in the future the user is detected as having this asset role, this detection is overridden and the user isn't
included in the asset role.
To change the name of a user, right-click the user name and Edit User.
Abstract
The honey user role is available only if the Identity Threat Module add-on is enabled.
A honey user is a decoy account designed to mimic a legitimate user within your environment. This kind of user looks attractive to potential attackers, with
access to many assets, and is used for triggering alerts if accessed.
One of the techniques used by an attacker trying to gain access to your network is attempting to use the credentials of accounts in your organization. By
setting up honey users, you can detect these access attempts as soon as they occur. Unlike genuine user accounts, honey users have no legitimate purpose
within the organization, making any activity involving them inherently suspicious. Cortex XDR uses its out-of-the-box Identity Threat Module to automatically
detect activity on the honey user role for identifying suspicious activities.
To use a honey user account for detection, you must configure it manually.
3. Select Add User → Add New and enter the honey user account details in the NetBIOS\SAM Account format.
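For example, if the hypothetical NetBIOS domain name is ACME and the decoy account's SAM account name is hq-backup-admin, you would enter ACME\hq-backup-admin.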
Abstract
Endpoint Role Management is available only if the Identity Threat Module add-on is enabled.
The Edit Endpoint Role page enables you to edit the host lists assigned to asset roles. You may want to exclude some endpoints from certain asset roles even
if Cortex XDR automatically detected the endpoint as having this asset role. For example, if an endpoint is reassigned to another user and you want their
Analytics to be adjusted accordingly.
The Endpoints list on the page displays the endpoints classified under the asset role, if the asset role was assigned automatically or edited manually for the
endpoint, the last modification date, and the modifier.
Included Endpoints displays all the endpoints Cortex XDR automatically detects as having this asset role and the endpoints you specify manually as having
this asset role. Excluded Endpoints displays the endpoints that were manually removed from an asset role. When you exclude an endpoint, it remains in the
Excluded Endpoints list and if detected automatically again in the future as having this role, will not be included in the role list.
If you want to remove an endpoint from the list of endpoints with this asset role, right-click the endpoint and select Exclude Endpoint. The endpoint is then
listed under Excluded Endpoints for this asset role. When you exclude an endpoint from an asset role, by default Cortex XDR also removes the endpoint from
the parent asset roles of the current asset role. To remove the endpoint from the child asset role, but to leave it in any of its parent asset roles, click Advanced
Exclusion Settings, and select Don't Exclude next to the name of the parent asset role(s).
To include an Excluded endpoint back in the asset role, in the Excluded Endpoints list, right-click the endpoint and select Delete Endpoint. If the endpoint was
automatically detected as having this asset role. it will be added back to the Included Endpoints list again. Otherwise, the next time Cortex XDR scans the
assets and automatically detects their asset roles, this endpoint will be included in the asset role list.
To include endpoints from your system manually in an asset role list, in the asset role page, click Add Endpoint. Select the endpoint from the displayed
endpoint list, which displays the endpoints managed by the tenant. You can only add endpoints that have the Cortex XDR agent installed on them.
Manually added endpoints are analyzed by Analytics when it runs next and are displayed in the Incident view and the Host Risk view.
To delete a manually added endpoint from the Included Endpoints list, right-click and Delete Endpoint.
Deleting a manually added endpoint removes the endpoint from the Included Endpoints list. If this endpoint is detected automatically as having this asset role
in the future, it will appear in the Included Endpoints list.
Excluding a manually added endpoint ensures that even if in the future the endpoint is detected as having this asset role, this detection is overridden and the
endpoint isn't included in the asset role.
To change the name of an endpoint, right-click the endpoint name and Edit Endpoint.
Abstract
Cortex XDR provides a unified, normalized asset inventory for cloud assets to provide deeper visibility and context for incident investigation.
Cortex XDR provides a unified, normalized asset inventory for cloud assets in Google Cloud Platform, Microsoft Azure, and Amazon Web Services. This
capability provides deeper visibility to all the assets and superior context for incident investigation. To receive cloud assets, you must first configure a Cloud
Inventory data collector for the vendor in Cortex XDR . As soon as Cortex XDR begins receiving cloud assets, you can view the data in Assets → Cloud
Inventory, where the All Cloud Assets and Specific Cloud Assets pages display the data in a table format.
The following are some of the main features available on these pages.
When any row in the table is selected, a side panel on the right with greater details is displayed, where you can view additional data divided by sections.
The following are some descriptions of the main sections.
Internet Exposure: when there are any open external ports, these ports and their corresponding details are displayed, so you can quickly identify
the source of the problem. You can also view the raw JSON text of the banner details obtained from Cortex Xpanse.
Asset Editors: displays the identities of the latest 5 editors listing the percentage of editing actions for a single identity. A link is provided to open a
predefined query in XQL Search on the cloud_audit_log dataset to view the edit operations by the identity selected for this asset in the last 7 days.
Asset Metadata: details the asset metadata collected for the selected row in the table.
Depending on the cell you’ve selected in the table, different right-click pivot menus are available, such as Open IP View and Open in Quick Launcher.
You can export the tables and respective asset views to a tab-separated values (TSV) file.
For more information on these sections in the side panel, see Manage Your Cloud Inventory Assets.
Abstract
Cortex XDR enables you to view all your cloud assets from the various cloud assets categories on the All Cloud Assets page.
The All Cloud Assets page enables you to view all your cloud assets from the various cloud assets categories that you configured for collection from Google
Cloud Platform, Microsoft Azure, and Amazon Web Services using the Cloud Inventory data collector.
To view the All Cloud Assets page, select Assets → Cloud Inventory → All Cloud Assets.
By default, the All Cloud Assets page displays all cloud assets according to the most recent time that the data was updated. To search for specific assets, use
the filters above the results table to narrow the results. You can export the tables and respective asset views to a tab-separated values (TSV) file. From the All
Cloud Assets page, you can also manage the asset's output using the right-click pivot menu. For more information, see Manage Your Cloud Inventory Assets.
When any row in the table is selected, a side panel on the right with greater details is displayed, where you can view additional data divided by sections, such
as Asset Metadata and Asset Editors. The Asset Editors section also provides a link to open a predefined query in XQL Search on the cloud_audit_log dataset
to view the edit operations by the identity selected for this asset in the last seven days.
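If you prefer to build a similar query yourself, a rough XQL sketch against the cloud_audit_log dataset might look like the following. The field names resource_id, identity_name, and operation_name are assumptions, <asset ID> is a placeholder, and the exact predefined query provided by the link may differ.
config timeframe = 7d
| dataset = cloud_audit_log
| filter resource_id = "<asset ID>"
| fields _time, identity_name, operation_name
| sort desc _time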
The following table describes the fields available when viewing All Cloud Assets.
Certain fields are exposed and hidden by default. An asterisk (*) is beside every field that is exposed by default.
Field Description
Availability zone* Displays the Availability zone according to the cloud provider.
Cloud tags* Displays any cloud tags or labels configured according to the cloud provider.
Creation time* Displays the time that the cloud asset was created. This information is not always available.
GEO region* Displays the normalized value indicating the geographic region, such as North America or the Middle East.
Hierarchy* Displays the hierarchy of the associated Project in the cloud provider separated by a forward slash (/) similar to a file path.
The Project is called something else in each cloud provider. For more information, see the Project description.
Internet exposure (ports)* Displays a list of ports, where the details regarding these ports are available to view in the side panel.
Last reported status* Last reported status of the asset, such as Available or Ready.
Name* Name that describes the asset as given in the cloud provider if provided.
Project* Displays the associated project name as provided by the Cloud provider. For each cloud provider, the project is called
something else.
AWS: account
GCP: project
Azure: subscription
Project ID Displays the associated project ID as provided by the Cloud provider, where the project is called something else in each cloud
provider. See Project description.
Provider* The cloud provider used to collect these cloud assets is either GCP, AWS, or Azure.
Field Description
Raw asset Internal Cortex XDR debug information that displays the raw data used to parse the data.
Resource group Displays the Resource group when using an Azure Provider.
Secondary asset ID Displays a Secondary asset ID provided by the cloud provider that is used in Cortex XDR to identify the asset if a Name is not
provided.
Subtype* The subtype of cloud asset based on the Type configured, which can be defined as one of the following.
VM Instance
Bucket
Disk
Image
Subnet
Security Group
Other
Type* Type of cloud asset, which can be defined as one of the following.
Compute
Cloud Function
Storage
Other
Update time* Displays the time that the cloud asset was updated. This information is not always available.
Due to a known AWS synchronization issue, where the creation time displayed in the AWS Console does not match the actual time when the AWS Bucket was
created, the Creation time in Cortex XDR does not always match the AWS Console as Cortex XDR displays the actual time.
Abstract
Cortex XDR enables you to view specific cloud assets from a designated cloud assets category in the Specific Cloud Asset pages.
Asset Type: the asset type that is automatically associated with this specific cloud asset page.
Asset Subtype: the asset subtype that is automatically associated with this specific cloud asset page.
Unique Fields: the unique fields that are only available when viewing this specific cloud asset page, and are displayed in addition to the common fields
listed for the All Cloud Assets page. These fields are exposed by default.
Disks (Asset Type: Compute, Asset Subtype: Disk): DISK SIZE displays the disk size as an integer in GB.
Storage Buckets (Asset Type: Storage, Asset Subtype: Bucket): BUCKET ACCESS displays the bucket access options as one of the following: Public, Private,
Fine Grained, Unknown.
Virtual Private Clouds (VPCs) (Asset Type: Compute, Asset Subtype: VPC): DEFAULT VPC displays a boolean value as either Yes or No to indicate whether
this asset is the default VPC.
Subnets (Asset Type: Compute, Asset Subtype: Subnet): No specific unique fields are displayed in addition to the common fields.
Security Groups (FW Rules) (Asset Type: Compute, Asset Subtype: Security Group): No specific unique fields are displayed in addition to the common fields.
Images (Asset Type: Compute, Asset Subtype: Image): No specific unique fields are displayed in addition to the common fields.
Network Interfaces (Asset Type: Compute, Asset Subtype: Network Interfaces): No specific unique fields are displayed in addition to the common fields.
Cloud Functions (Asset Type: Cloud Function, Asset Subtype: Cloud Function): No specific unique fields are displayed in addition to the common fields.
Abstract
Cortex XDR provides a central location to view and investigate information relating to inventory assets in the cloud.
Ingesting and viewing Cloud Inventory Assets requires a Cortex XDR Pro per GB license.
AWS—Account
GCP—Project
Microsoft Azure—Subscription
Internet Exposure When there are any open external ports, the open ports and their corresponding details are displayed.
Public
Private
Fine Grained
Unknown
Show rows with ‘<field name>’ to filter the column list to only display the rows with a specific field name selected in the table.
Hide rows with ‘<field name>’ to filter the column list to hide the rows with a specific field name selected in the table.
Copy text to clipboard to copy the text from a specific field in the row of an asset.
Copy entire row to copy the text from all the fields in a row of an asset.
Open IP View: for the External IPs and Internal IPs column fields in the assets table, you can open the IP Address View, which provides a powerful
way to investigate and take action on an IP address by reducing the number of steps it takes to collect, research, and threat hunt related incidents.
Open in Quick Launcher: for the External IPs and Internal IPs column fields in the assets tables, you can open the Quick Launcher shortcut to
search for information, perform common investigative tasks, or initiate response actions related to a specific IP address or CIDR.
Show rows 30 days prior to ‘<timestamp field>’: for all timestamp fields in the assets tables, you can filter the column list to only display the rows
30 days earlier than the selected timestamp field.
Show rows 30 days after ‘<timestamp field>’: for all timestamp fields in the assets tables, you can filter the column list to only display the rows
30 days after the selected timestamp field.
Gain additional verification on key artifacts by integrating Cortex XDR with other Palo Alto Networks and third-party security products.
You can integrate external threat intelligence services with Cortex XDR that provide additional verification sources for each key artifact in an incident. Cortex
XDR supports the following integrations:
Threat intelligence
Integration Description
WildFire Cortex XDR automatically includes WildFire threat intelligence in the incident and alert investigation.
WildFire detects known and unknown threats, such as malware. The WildFire verdict contains
detailed insights into the behavior of identified threats. The WildFire verdict is displayed next to
relevant Key Artifacts in the incidents details page. See Review WildFire analysis details for more
information.
VirusTotal VirusTotal provides aggregated results from over 70 antivirus scanners, domain services included in
the block list, and user contributions. The VirusTotal score is represented as a fraction. For
example, a score of 34/52 means out of 52 queried services, 34 services determined the artifact to
be malicious.
To view VirusTotal threat intelligence in Cortex XDR incidents, you must obtain the license key for
the service and add it to the Cortex XDR Configuration. When you add the service, the relevant
VirusTotal (VT) score is displayed in the incident details page under Key Artifacts.
Incident management
Integration Description
Third-party ticketing systems To manage incidents from the application of your choice, you can use the Cortex XDR API
Reference to send alerts and alert details to an external receiver. After you generate your API key
and set up the API to query Cortex XDR, external apps can receive incident updates, request
additional data about incidents, and make changes such as setting the status and changing the
severity or assigning an owner. To get started, see the Cortex XDR API Reference guide.
Prioritize and filter your incidents by using incident starring and incident scoring.
Cortex XDR provides incident starring and incident scoring to help you to prioritize your incidents.
Incident starring enables you to highlight and filter the incidents that you deem most important. Incident scoring assigns a numerical value to each incident to
reflect its severity. Using these functions, you can easily compare incident scores, filter starred incidents, and prioritize your most urgent incidents.
Abstract
Starring incidents can help you to prioritize and filter your incidents.
To help you focus on the most important incidents, you can star an incident. Starring incidents enables you to narrow down the scope of incidents on the
Incidents page and in the Incident management dashboard. Cortex XDR identifies starred incidents with a purple star.
You can star incidents manually, or create an incident starring configuration. An incident starring configuration automatically categorizes and stars incidents
that contain alerts with specific attributes. In a starring configuration you define attributes or assets for alerts that you want to star. If an alert matches the
attributes in the starring configuration, the alert and incident containing the alert are starred. You can manage all starring configurations under Incident
Response → Incident Configuration → Starred Alerts.
Incident starring supports Scope-Based Access Control (SBAC). The following parameters are considered when editing a starring configuration:
If Scoped Server Access is enabled and set to restrictive mode, you can edit a configuration if you are scoped to all tags in the configuration.
If Scoped Server Access is enabled and set to permissive mode, you can edit a configuration if you are scoped to at least one tag listed in the
configuration.
If a policy was added when set to restrictive mode, and then changed to permissive (or vice versa), you will only have view permissions.
You can proactively star alerts and incidents containing alerts by creating a starring configuration:
5. In the alert table, use the filters to define the alert attributes you want to include in the match criteria.
6. Click Create.
Abstract
An incident score is a numeric value that indicates the urgency of an incident. Incident scoring can help you to streamline the process of prioritizing and
investigating your incidents, and help you to identify the incidents that require immediate attention.
Types of scoring
Rule-based scoring: The score is determined by user-defined scoring rules that match the alerts triggered in the incident.
You create scoring rules that define scores for alerts with specific attributes or assets. You can base scoring rules on:
Hostnames
IP addresses
Users
When an alert is triggered, Cortex XDR searches for scoring rules that match the alert. An alert can match multiple rules or sub-rules. If a match is found,
Cortex XDR assigns the scores of the matching rules to the alert. If multiple rules match the alert, the alert score is an aggregation of the rule scores. By
default, a score is applied only to the first alert in the incident that matches the defined rule and sub-rule.
You can create a rule hierarchy by setting up sub-rules. If an alert matches one or more sub-rules, the sub-rule scores are also aggregated in the alert
score. However, a sub-rule score is only applied to an alert if the top-level rule was a match.
To determine the incident score, Cortex XDR calculates the combined alert scores total for all alerts in the incident. You can see a breakdown of the
score by clicking on the score in the details pane.
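For example (hypothetical values): if an alert matches a top-level rule with a score of 30 and one of its sub-rules with a score of 10, the alert score is 40. If the incident also contains a second alert that received a score of 20 from another rule, the incident score is 60.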
SmartScore relies on machine learning, statistical analysis, incident attributes, and cross-customer insights to identify high-risk incidents. When an alert is
triggered, Cortex XDR calculates the SmartScore according to the compiled data.
For Cortex XDR to provide effective rule-based scores, you must define accurate scoring rules that are suitable for your environment and workflows. In
addition, SmartScore requires sufficient data to calculate and display the score. On first activation, this can take up to 48 hours. If sufficient data is not
available, no score is assigned.
When an incident is created, Cortex XDR searches for a match between your scoring rules and the alerts in an incident. If a match is found, a rule-based score
is assigned. If no match is found and there is sufficient data available, Cortex XDR assigns a SmartScore. If Cortex XDR doesn't have sufficient data to assign
a score, you can manually assign a score.
To enable Cortex XDR to automatically assign a score to an incident, you must enable SmartScore and define scoring rules. For more information, see Set up
incident scoring.
You can see the assigned incident score on the Incidents page, under Incident Response → Incidents.
Abstract
To set up incident scoring you need to enable SmartScore, and enable and define scoring rules.
Enable SmartScore
2. Select Incident Response → Incident Configuration → Incident Scoring and enable SmartScore.
On the first activation, it can take up to 48 hours for SmartScore to calculate and display the score.
1. Select Incident Response → Incident Configuration → Scoring Rules and enable User Scoring Rules.
The Scoring Rules table displays the user-defined rules and sub-rules.
3. In the Create New Scoring Rule dialog, define the rule criteria:
2. Under Score, define the score that Cortex XDR should apply to alerts that match the rule criteria.
3. Under Base Rule, select whether to create a top-level rule (labeled Root) or a sub-rule (labeled Rule Name (ID:#)). By default, rules are defined at
the root level.
By selecting this option you choose to apply the score only to the first alert that matches the defined rule. Subsequent alerts of the same incident
will not receive a score from this rule. By default, a score is applied only to the first alert that matches the defined rule and sub-rule.
5. In the alert table, use the filters to define the alert attributes you want to include in the rule match criteria.
With this rule, Cortex XDR assigns a score of 30 to any XDR BIOC alerts with a severity level of Critical:
Score = 30
Filters: for example, Alert Source = XDR BIOC and Severity = Critical
4. Click Create.
5. In the Scoring Rules table, click Save to save your scoring rule.
For scoped users, a small lock icon indicates that you don't have permissions to edit a rule.
What to do next
After setting up your scoring rules, you can take the following actions:
You can see details about the scoring method and the assigned score.
2. On the Incidents page, click on the menu icon to switch to the detailed view.
If you are not satisfied with the score, you can change the scoring method, or overwrite the score by setting the score manually. If you see a discrepancy with
the assigned score, consider the following:
For SmartScores, you can help to improve the accuracy of SmartScore by giving feedback: hover over the displayed score to provide feedback.
You can change the default scoring method. In addition, if Cortex XDR was unable to assign a score, you can set the score manually.
If no score was assigned, in the incident pane click the more options icon and select Manage Score.
4. Select a different scoring method, or click Set score manually and define a new score.
Revise existing scoring rules
In the Scoring Rules table, take the following actions to review your rules and sub-rules:
Use the arrows to rearrange rule priorities. Make sure to click Save after any changes.
Select one or more rules and right-click to see the available actions.
Incident Scoring supports Scope-Based Access Control (SBAC). If you're a scoped user, a small lock icon indicates that you don't have permissions to edit a
rule. The following parameters are considered when editing a scoring rule:
If Scoped Server Access is enabled and set to restrictive mode, you can edit a rule if you are scoped to all tags in the rule.
If Scoped Server Access is enabled and set to permissive mode, you can edit a rule if you are scoped to at least one tag listed in the rule.
To change the order of a rule, you must have permissions for the other rules whose order you want to change.
If a rule was added when set to restrictive mode, and then changed to permissive (or vice versa), you will only have view permissions.
Automation rules enable you to create rules composed of alert conditions that trigger an action.
Cortex XDR provides an easy way to automate the day to day activities of SOC analysts. Automation rules enable you to define alert conditions that trigger the
action that you specify within the rule. As alerts are created, Cortex XDR checks if the alert matches any of the alert conditions from the automation rules, and if
there is a match, the corresponding action is triggered. The automation rules only apply to new alerts which will either create a new incident or be combined
with an existing one.
Automation rules only apply to alerts that are grouped into incidents by the system. For most alerts with low and informational severity, automation rules are
not executed automatically.
Automation rules run in the order they're created. You can drag the rules to change the order. If you select the Stop processing after this rule setting within
a rule, the rule itself is still processed, but if its alert conditions are met, the rules that follow it are not processed.
Automation rules support Scope-Based Access Control (SBAC). The following parameters are considered when editing a rule:
If Scoped Server Access is enabled and set to restrictive mode, you can edit a rule if you are scoped to all tags in the rule.
If Scoped Server Access is enabled and set to permissive mode, you can edit a rule if you are scoped to at least one tag listed in the rule.
To change the order of a rule, you must have permissions for the other rules whose order you want to change.
If a rule was added when set to restrictive mode, and then changed to permissive (or vice versa), you will only have view permissions.
The Automation Rules page displays a table of all the rules created. The following table describes the fields.
Read more...
Field Description
Action Action that is triggered when the alert matches the condition configured within the automation rule.
Communication
Send email
Syslog forwarding
Assign incident
Forensics
Forensic Triage
Endpoint Response
Isolate endpoint
Retrieve File
Action Parameters Required information for the action. For example, for the action Send email, you must enter the email of the person receiving the
notification.
Conditions Rule condition defined for the automation rule. For example Severity=Critical, where the rule triggers the action on all alerts where
Severity=Critical.
Stop Processing Indicates whether the Stop processing after this rule option is selected.
Modification Time Time when the automation rule was last modified.
Abstract
Before you begin creating automation rules, consider setting thresholds for the following endpoint actions:
Isolate endpoint on up to _ endpoints in _ hour/s: When an alert condition is triggered and the specified action is to isolate the endpoint, the
threshold limits the number of endpoints that can be isolated within the defined period of time. This prevents too many endpoints from being
isolated from the network at the same time.
If the setting is turned off, there is no threshold for isolating endpoints.
Run endpoint script on up to _ endpoints in _ hour/s: When an alert condition is triggered and the specified action is to run an endpoint script, the
threshold limits the number of endpoints that can run the script within the defined period of time. This prevents too many endpoints from running
scripts at the same time.
If the setting is turned off, there is no threshold for running scripts on endpoints.
Terminate Causality (CGO) on up to _ endpoints in _ hour/s: When an alert condition is triggered and the specified action is to terminate causality,
the threshold limits the number of endpoints on which the causality chain of processes can be terminated within the defined period of time. This
prevents too many endpoints from terminating causality chains at the same time.
If the setting is turned off, there is no threshold for terminating causality on endpoints.
Forensic Triage on up to _ endpoints in _ hour/s: When an alert condition is triggered and the specified action is Forensic Triage, the threshold
limits the number of endpoints that can be triaged within the defined period of time. This prevents too many endpoints from being triaged at the
same time.
If the setting is turned off, there is no threshold for triaging endpoints.
This option is only accessible to users who have the Forensics add-on license.
Abstract
Lists the actions that are taken when the alert condition of an automation rule is triggered in Cortex XDR.
When creating the automation rule, the action is triggered when an alert matches the condition of the automation rule.
Action Settings
Communication Choose one of the options to receive notifications to keep up with alerts.
Send email
Assign Condition
Always assign
Assign To—Select the person from the list to assign the incident.
Set alert status Alert Status—Select alert status to override the present status of the alert.
New
Under Investigation
Resolved
Set alert severity Alert Severity—Select alert severity to override the present severity of the
alert.
Critical
High
Medium
Low
Forensics
Endpoint Response
Action Settings
Alert initiator host—The host specified under Host of the alert from the
Alerts table.
Alert remote host—The host specified under Remote Host of the alert
in the Alerts table.
Script.
Script Library
Code Snippet
Alert initiator host—The host specified under Host of the alert from the
Alerts table.
Alert remote host—The host specified under Remote Host of the alert
in the Alerts table.
Alert initiator file—The file specified under Initiator Path of the alert
from the Alerts table.
Terminate Causality (CGO) Select this option to terminate the causality chain of processes associated
with the alerts of the automation rule.
Stop processing after this rule If triggered, the current rule is the last rule to be processed.
Abstract
Lists the fields included in the automation audit log for Cortex XDR.
The Automation Audit Log shows the records of all automation rule executions, including successful, failed, and paused actions.
Right-click a record and select View triggering alert to view the details of the alert in the Alerts table. If the record is an Endpoint Response action, you
can also select View in Action Center to view details of the action in the Action Center.
Field Description
Timestamp The date and time of the last time the automation rule was triggered.
Triggering Alert ID The ID of the alert that triggered the automation rule.
Automation Rule Version The version number that is updated every time the rule's conditions or actions are modified.
Investigate and respond to malicious activity in your network using different actions.
Learn about how incidents are created, the information contained in an incident, and how to prioritize and manage incidents.
In the following topics you will learn about how incidents are created, the information contained in an incident, and how to prioritize and manage incidents.
Abstract
Learn about how incidents are created, incident terminology, incident thresholds, and incident planning and response
An incident is a container object that groups related alerts, assets, and artifacts that originate from a single root cause. The root cause might be a self-contained
cyberattack that brings together multiple actors (such as attackers, tools, and processes), or it might be a combination of malware and exploits.
Artifacts: Attributes of attacking objects such as filenames, file signers, processes, domains, and IP addresses
Each incident is individually configured and requires its own independent investigation. To see a list of all Incidents, navigate to the Incidents page.
Incident thresholds
To keep incidents fresh and relevant, Cortex XDR implements the following thresholds. When the incident reaches a threshold, it stops accepting alerts and
groups subsequent related alerts in a new incident.
14 days since the last alert in the incident was detected (excludes backward scan alerts).
Abstract
Use the Incidents page to review incident details and take remedial action.
The Incidents page is the first stop for investigating incidents. On the Incidents page you can see information about all incidents in your environment. You can
track and manage your incidents, investigate incident details, and take remedial action.
By default, open incidents are displayed, but you can change the filters to browse through resolved incidents too. To make it easy for you to identify the most
critical incidents, Cortex XDR provides color coded icons that indicate severity, incident scores, and incident starring.
You can access the page from Incident Response → Incidents. The page is available in the following modes, and any changes that you make to the incident
fields will persist between modes.
Detailed view
Displays incidents in a split-pane format that provides key details of each incident and makes it easy to prioritize the most urgent incidents.
Table view
Displays incidents in a table format. For more information about the fields in the table view, see Incidents table view reference information.
From the Incidents page you can also access the Alerts Table to see a full list of alerts in the system.
The detailed view is a split-pane format consisting of the list pane and the details pane. The list pane consolidates key information for each incident based on
filtering options. From the list you can identify the most critical attacks and start prioritizing your incidents. Click an incident in the list pane to see its full details
in the details pane.
The details pane includes two views, Advanced view and Legacy view. Use the Legacy view to view incidents from earlier versions. To change the view, select
Page View from the more options icon.
The details pane is split into the following sections and tabs:
Incident header Displays detailed key information and provides administration actions for the incident, such as assigning an analyst and
setting the status. Hover and click on a field for more information, and edit where required.
Overview Displays the main incident information including the MITRE ATT&CK tactics and techniques identified in the incident, the
number of alerts triggered, automation information about playbooks, and key artifacts and assets involved in the incident.
You can click any of the widgets to start your investigation.
Key Assets & Artifacts Displays the incident asset and artifact information of the key artifacts, hosts, and users associated with the incident. Hover
over an icon for more information, or click the more options icon to see the available views and actions. For more
information about investigating key assets and artifacts, see Investigate artifacts and assets.
Alerts & Insights Displays a table of alerts and insights associated with the incident. Click on an alert or insight to see more information in the
Details panel, or use the pivot menu to see the options for further investigation.
Timeline Displays a chronological representation of alerts and actions relating to the incident. Each timeline entry represents a type
of action that was triggered in the alert.
Alerts that include the same artifacts are grouped into one timeline entry and display the common artifact in an interactive
link. Click on an entry to view additional details in the Details panel. You can also filter the timeline by action type.
Depending on the type of action, you can select the entry to further investigate and take action on the entry.
Executions Displays the alert causality chains associated with the incident. On this tab you can investigate a causality chain and take
actions on a host. For more information about investigating causality chains, see Causality view.
Abstract
The following table describes the fields in the Incidents page table view.
Read more...
Field Description
Alert Categories Alert categories that were triggered by the incident's alerts
Creation Time Date and time that the incident was created
Critical/High/Medium/Low Severity Alerts Number of critical, high, medium, or low severity alerts included in the incident
Incident Description Description generated from the alert name of the first alert that was added to the
incident, the host and user affected, or number of users and hosts affected
Incident Sources List of sources that raised high and medium severity alerts in the incident
Last Updated Last time that a user took an action on the incident, or an alert was added to the
incident
MITRE ATT&CK Tactic Types of MITRE ATT&CK tactics that were triggered by the alerts in the incident
MITRE ATT&CK Technique Types of MITRE ATT&CK techniques and sub-techniques that were triggered by
the alerts in the incident
Resolve Comment User-added comment when setting the incident status to Resolved
Resolved Timestamp Date and time that the incident status was set to Resolved
Severity Highest severity of the alerts in the incident, or the user-defined severity
Starred Incidents are automatically starred if they include alerts that match your incident
prioritization policy.
Status When incidents are generated they have the status set to New. To begin
investigating an incident, set the status to Under Investigation. When the incident
is resolved, set the status to Resolved and select a resolution reason. For a
description of each resolution reason, see Resolution reasons for incidents and
alerts.
Tags Tag family and the corresponding tags. If SBAC is enabled, you can view and
manage the incident according to your scope settings.
When you view incidents as a scoped user and the tenant is set to permissive
mode, you can view the incident but you do not have access to entities outside of
your scope.
When you view incidents as a scoped user and the tenant is set to restrictive
mode, the incident content is not visible. You can send the incident ID to your
administrator and request an updated user scope that enables you to view the
incident.
WildFire Hits Number of Malware, Phishing, and Grayware artifacts that are part of the incident
Abstract
On the Incident view you can track incidents, investigate incident details, and take remedial action. Navigate to Incident Response → Incidents and locate the
incident you want to investigate.
If you do not have permissions to access an asset of an incident (which is shown as grayed out and locked), check your scoping permissions in Manage
Users or Manage User Groups.
The incident list provides a short summary of each incident to help you to quickly assess and prioritize your incidents:
1. Review the incident severity, score, and assignee. Select whether to Star the incident.
2. Review the status of the incident and when it was last updated.
Review the user name associated with the incident. If there is more than one user, select the [+x] to display the additional user names.
Hover over the alert source icons to display the alert source type. Select the alert source icon to display the three most common alerts that were
triggered and how many alerts of each are associated with the incident.
Click on an incident to open the incident in the right panel. In the incident header you can update various data, such as the severity, incident name, and
score, and you can merge incidents.
The default severity is based on the highest severity alert in the incident. To manually change the severity, select the severity tag and choose the new severity.
Hover over the incident description and select the pencil icon to edit the incident description.
Click on the assigned score to investigate how the score was calculated.
The Manage Incident Score dialog displays all rules that contributed to the incident's total score, including rules that have been deleted. Deleted scores
appear as N/A.
You can override the Rule based score by selecting Set score manually or change the scoring method. For more information, see Incident scoring.
5. Assign an incident.
Select the assignee (or Unassigned) and begin typing the assignee’s email address for automated suggestions. Users must have logged in to the app to
appear in the auto-generated list.
Select the incident Status to update the status to either New, Under Investigation, or Resolved. By updating the status you can indicate which incidents
have been reviewed and filter by status in the incidents table.
When setting an incident to Resolved, select the reason the incident was resolved, add an optional comment, and select whether to Mark all alerts as
resolved. For more information, see Resolution reasons for incidents and alerts.
7. Merge incidents you think belong together. Click the more options icon and select Merge Incidents.
Read more...
If both incidents have been assigned, the merged incident takes the target incident assignee.
If the target incident is assigned and the source incident is unassigned, the merged incident takes the target assignee.
If the target incident is unassigned and the source incident is assigned, the merged incident takes the existing assignee.
In the merged incident, the source incident's context data is lost, whether or not the target incident contains context data. If the target incident contains
context data, that context data is preserved in the merged incident.
8. Create an exclusion.
Read more...
3. Filter the Alerts table to define the alerts that you want to include in the policy.
5. Click Create.
9. Review the remediation suggestions. Click the more options icon to open the Remediation Suggestions dialog.
Add notes or comments to track your investigative steps and any remedial actions taken.
Select the Incident Notepad to add and edit the incident notes. You can use notes to add code snippets to the incident or add a general
description of the threat.
Use the Incident Messenger to coordinate the investigation between analysts and track the progress of the investigation. Select the comments
to view or manage comments.
The incident Overview tab displays the MITRE tactics and techniques, summarized timeline, and interactive widgets that visualize the number of alerts, types of
sources, hosts, and users associated with the incident.
Cortex XDR displays the number of alerts associated with each tactic and technique. Select the centered arrow at the bottom of the widget to expand
the widget and display the sub-techniques. Hover over a number of alerts to display a link to the MITRE ATT&CK official site.
In some cases, the number of alerts associated with the techniques might not match the number for the parent tactic, because of missing tags or
because an alert belongs to several techniques.
2. Investigate information about the Alerts, Alert Sources, and Assets associated with the incident.
Read more...
Review the Total number of alerts and the colored line indicating the alert severity. Select the severity tag to pivot to the Alerts & Insights
table filtered according to the selected severity.
Select each of the alert source types to pivot to the Alerts & Insights table filtered according to the selected alert source.
Select See All to pivot to the Key Assets and Artifacts tab.
Select the host names to display the Details panel. The panel is only available for hosts with Cortex XDR agent installed and displays the
host name, whether it’s connected, along with the Endpoint Details, Agent Details, Network, and Policy information. Use the available actions
listed in the top right-hand corner to take remedial actions.
3. Review the artifacts and assets that are associated with the incident.
You can click the more options icon next to an asset or artifact to open an associated view, or you can see more details in the Key Assets & Artifacts tab.
The Key Assets & Artifacts tab displays all the incident asset and artifact information of hosts, users, and key artifacts associated with the incident.
1. Investigate artifacts.
In the Artifacts section, search for and review the artifacts associated with the incident. Each artifact displays, if available, the artifact information and
available actions according to the type of artifact: File, IP Address, or Domain.
2. Investigate hosts.
In the Hosts section, search for and review the hosts associated with the incident. Each host displays, if available, host information and available actions.
To further investigate the host, select the host name to display the Details panel. The panel is only available for hosts with the agent installed and
displays the host name, whether it’s connected, along with the Endpoint Details, Agent Details, Network, and Policy information details. If the Details
panel is not available, click the more options icon next to a host name to see the available options.
In the Users section, search for and review the users associated with the incident. Each user displays, if available, the user information and available
actions.
The Alerts & Insights tab displays a table of the alerts and insights associated with the incident.
1. Use the table tabs to switch between alerts and insights, and add filters to the table to refine the displayed entries.
Use the available actions listed in the top right-hand corner to take remedial actions.
The incident Timeline tab is a chronological representation of alerts and actions relating to the incident.
1. Navigate to the Timeline tab and filter the actions according to the action type.
Each timeline entry is a representation of a type of action that was triggered in the alert. Alerts that include the same artifacts are grouped into one
timeline entry and display the common artifact in an interactive link. Depending on the type of action, you can select the entry, host names, and artifacts
to further investigate the action:
For Response Actions and Incident Management Actions, you can add and view comments relating to the action.
For Alerts, click the action to open the Details panel. In the panel, navigate to the Alerts tab to view the Alerts table filtered according to the
alert ID, the Key Assets to view a list of Hosts and Users associated to the alert, and an option to add Comments.
Hash artifact: Displays the Verdict, File name, and Signature status of the hash value. Select the hash value to view the WildFire Analysis
Report, Add to Block list, Add to Allow list, and Search file.
Domain artifact: Displays the IP address and VT score of the domain. Select the domain name to Add to EDL.
IP address: Displays whether the IP address is Internal or External, the Whois findings, and the VT score. Expand Whois to view the findings
and Add to EDL.
In action entries that involved more artifacts, expand Additional artifacts found to further investigate.
Abstract
When you resolve an incident or alert you must also specify a resolution reason. The following table describes the resolution reasons available for selection.
Resolved - True Positive The incident was correctly identified by Cortex XDR as a real threat, and the incident was successfully handled and resolved.
Incidents resolved as True Positive and False Positive help Cortex XDR to identify real threats in your environment by comparing
future incidents and associated alerts to the resolved incidents. Therefore, the handling and scoring of future incidents is
affected by these resolutions.
Resolved - Security Testing The incident is related to security testing or simulation activity such as a BAS, pentest, or red team activity.
Resolved - Known Issue The incident is related to an existing issue or an issue that is already being handled.
You can investigate specific artifacts and assets on dedicated views related to IP address, Network Assets, and File and Process Hash information.
From the Incidents view, open the Key Assets & Artifacts tab to see the assets and artifacts that are associated with the incident, including hosts, IP addresses,
and users. Icons represent properties of the artifacts and assets. Hover over an icon for more information. Click the more options icon to drill down in
dedicated views, or take actions on the asset or artifact. The Key Assets & Artifacts tab shows the following information:
Artifacts
To aid you with threat investigation, Cortex XDR displays the WildFire-issued verdict for each key artifact in an incident. To provide additional verification
sources, you can integrate external threat intelligence services with Cortex XDR. For more information, see External integrations.
Assets
Displays Hosts and Users details. For hosts with a Cortex XDR agent installed, click on the host name to see more information in the Details panel.
Abstract
Investigate incidents, connections, and threat intelligence reports related to a specific IP address on the IP View.
Drill down on an IP address on the IP View. On this view you can investigate and take actions on IP addresses, and see detailed information about an IP
address over a defined 24-hour or 7-day time frame. In addition, to help you determine whether an IP address is malicious, the IP View displays an interactive
visual representation of the collected activity for a specific IP address.
Right-click the IP address that you want to investigate and select Open IP View.
The overview displays network operations, incidents, actions, and threat intelligence information relating to the selected IP address, and provides a
summary of the network operations and processes related to the IP address.
b. Review the location of the IP address. By default, Cortex XDR displays information on whether the IP address is an internal or external IP address.
External—Connection Type: Incoming, indicating that the IP address is located outside of your organization. Displays the country flag if the location
information is available.
Internal—Connection Type: Outgoing, indicating that the IP address is from within your organization. The XDR Agent icon is displayed if the endpoint
identified by the IP address had an agent installed at that point in time.
The IP address value is color-coded to indicate the IOC severity.
Depending on the threat intelligence sources that are integrated with Cortex XDR, the following threat intelligence might be available:
IOC Rule, if applicable, includes the IOC Severity, Number of hits, and Source.
Related Incidents lists the most recent incidents that contain the IP address as part of the incident’s key artifacts, according to the Last Updated
timestamp. If the IP address belongs to an endpoint with a Cortex XDR agent installed, the incidents are displayed according to the host name
rather than the IP address. To dive deeper into a specific incident, select the Incident ID.
3. In the right hand view, use the filter criteria to refine the scope of the IP address information that you want to visualize in the map.
In the Type field, select Host Insights to pivot to the Asset View of the host associated with the IP address, or select Network Connections to display the
IP View of the network connections made with the IP address.
Select Recent Outgoing Connections to view the most recent connections made by the IP address. Search all Outgoing Connections to run a
Network Connections query on all the connections made by the IP address (see the example query sketch below).
Depending on the current IOC and EDL status, the Actions button is displayed.
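The following is a minimal XQL sketch of the kind of query that Search all Outgoing Connections produces. It assumes the xdr_data dataset; the field names and the angle-bracket placeholder are assumptions and may differ from the actual prefilled query:
dataset = xdr_data
| filter event_type = ENUM.NETWORK and action_remote_ip = "<remote IP>"
| fields _time, agent_hostname, actor_process_image_name, action_local_ip, action_remote_port
| sort desc _time
| limit 100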
Abstract
Investigate host assets and view host insights on the Asset View.
Drill down on an asset on the Asset View. On this view you can investigate host assets, view host insights, and see a list of incidents related to a host.
The Asset view is available for hosts with a Cortex XDR agent installed.
Identify a host with a Cortex XDR agent installed and select Open Asset View.
The overview displays the host name and any related incidents.
Related Incidents lists the most recent incidents that contain the host as part of the incident’s key artifacts, according to the Last Updated
timestamp. To dive deeper into a specific incident, select the Incident ID.
3. In the right hand view, use the filter criteria to refine the scope of the host information that you want to display.
Network Connections: Pivot to the IP view displaying the IP addresses associated with the host.
Host Risk View: View insights and profiling information. Available with the Identity Threat Module.
Select Run insights collection to initiate a new collection. The next time the Cortex XDR agent connects, the insights are collected and displayed.
Abstract
The Host Risk View requires the Identity Threat Module add-on.
Drill down on a host on the Host Risk View. On this view you can see insights and profiling information about a host. When investigating alerts and incidents, you
can view anomalies in the context of the host that can help you to make better and faster decisions about risks. On the Host Risk View you can take the
following actions:
Analyze the host's behavior over time, and compare it to peer hosts with the same asset role.
1. Right-click the host that you want to investigate and select Open Host Risk View.
You can also see a list of all hosts under Assets → Asset Scores.
The overview displays network operations, incidents, actions, and threat intelligence information relating to the selected host. You can see the host
score, the metadata aggregated by Cortex XDR, and review the CVEs breakdown by severity. The displayed information and available actions are
context specific.
Common Vulnerabilities and Exposures (CVE) are grouped by severity. For more information on each of the CVEs, refer to Related CVEs.
The graph is based on new incidents created within the selected time frame, and updates on past incidents that are still active. The straight line
represents the host score, which is based on the scores of the incidents associated with the host.
The bubbles in the graph represent the number of alerts and insights generated on the selected day. Bigger bubbles indicate more alerts and insights,
and a possible risk.
4. Drill down on a score for a specific day by clicking a bubble. Alternatively, review the host information for the selected timeframe (Last 7D, 30D, or
custom timeframe). The widgets in the right panel reflect the selected timeframe.
5. Review the Related Incidents for the selected timeframe or score selected in the Score Trend graph. If you are drilling down on a score, you can see the
incidents that contributed to the total score on the selected day. Review the following data:
The Status column provides visibility into the reason for the score change. For example, if an incident is resolved, its score will decrease, bringing
down the host score.
The Points column displays the risk score that the incident contributed to the host score. The points are calculated according to SmartScore or
Incident Scoring Rules.
6. Review the Related Alerts and Insights for the selected timeframe or score selected in the Score Trend graph.
The timeline displays all detection activities associated with the host. The alerts are grouped into buckets according to MITRE ATT&CK tactics. Click on a
tactic to filter the alerts in the table. To further investigate an alert, click the alert to open the Alert Panel and click Investigate.
7. Review the Latest Logins to Host during the selected timeframe or on the day selected in the Score Trend graph.
You can see details of the related login attempts, and whether the attempts were successful. To further investigate login activity for the host, click View In
XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to refine your search (see the example query sketch after these steps).
8. Review the host's Latest Authentication Attempts during the selected timeframe or on the day selected in the Score Trend graph.
You can see details of the related authentication attempts, and whether the attempts were successful. To further investigate authentication attempts by
the host, click View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to refine your search.
9. Review the Related CVEs during the selected timeframe or on the day selected in the Score Trend graph.
You can see details of the specified CVEs. This information can help you to assess and prioritize security threats on each of the endpoints. To further
investigate related CVEs, click View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to
refine your search.
10. For hosts with associated asset roles, compare the data with other peer hosts with the same asset role. In the Score Trend graph click Compare To and
select an asset role to which you want to compare the data.
In the left panel, click Actions to see a list of available actions. Actions are context specific.
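As an example, the prefilled login-activity query from step 7 can be refined in the Query Builder. The following is a minimal XQL sketch; the xdr_data dataset, event type, field names, and the angle-bracket placeholder are assumptions and may differ from the actual prefilled query:
dataset = xdr_data
| filter event_type = ENUM.LOGIN and agent_hostname = "<host name>"
| fields _time, actor_effective_username, action_remote_ip, event_sub_type
| sort desc _time
| limit 100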
Abstract
Investigate incidents, actions, and threat intelligence reports related to a specific file or process hash on the Hash View.
Drill down on a file or process hash on the Hash View. On this view you can investigate and take actions on SHA256 hash processes and files, and see
information about a specific SHA256 hash over a defined 24-hour or 7-day time frame. In addition, you can drill down on each of the process executions, file
operations, incidents, actions, and threat intelligence reports relating to the hash.
Identify the file or process hash that you want to investigate and select Open Hash View.
The hash value is color-coded to indicate the WildFire report verdict:
Blue—Benign
Yellow—Grayware
Red—Malware
Depending on the threat intelligence sources that are integrated with Cortex XDR, the following threat intelligence might be available:
IOC Rule, if applicable, including the IOC Severity, Number of hits, and Source according to the color-coded values:
Quarantined, select the number of endpoints to open the Quarantine Details view.
f. Review the recent open incidents that contain the hash as part of the incident's Key Artifacts according to the Last Updated timestamp. To dive
deeper into specific incidents, select the Incident ID.
3. In the right-hand view, use the filter criteria to refine the scope of the hash information that you want to visualize.
Filter criteria
Filter Description
Event Type Main set of values that you want to display. The values depend on the
selected type of process or file.
Primary Set of values that you want to apply as the primary set of aggregations.
Values depend on the selected Event Type.
Secondary Set of values that you want to apply as the secondary set of
aggregations.
Timeframe Time period over which to display your defined set of values.
To view the most recent processes executed by the hash, select Recent Process Executions. To run a query on the hash, select Search all Process
Executions.
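The following is a minimal XQL sketch of the kind of query that Search all Process Executions produces for a hash. The xdr_data dataset, field names, and the angle-bracket placeholder are assumptions and may differ from the actual prefilled query:
dataset = xdr_data
| filter event_type = ENUM.PROCESS and action_process_image_sha256 = "<SHA256 hash>"
| fields _time, agent_hostname, action_process_image_name, action_process_image_command_line
| sort desc _time
| limit 100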
Abstract
Drill down on a user in the User Risk View or the User View. On this view Cortex XDR aggregates all of the data collected for a user, displays the information in
graphs and tables, and provides further drilldown options for easy investigation. Cortex XDR uses Identity Analytics to aggregate information on a user and
displays insights about the user.
If the Identity Threat module is enabled you can open the User Risk View. This view displays insights and profiling information to help you investigate alerts and
incidents. Viewing anomalies in the context of baseline behavior facilitates risk assessment and shortens the time you require for making verdicts.
If the Identity Threat module is not enabled you can open the User View. This view displays an overview of the user and information about the user's score and
activity.
(User Risk View only) Review the user's working hours and related alerts.
(User Risk View only) Analyze the user's behavior over time and compare to their peers with the same asset role.
1. Right-click a user name and select Open User Risk View or Open User Card.
You can also see a list of all users under Assets → Asset Scores.
Cortex XDR normalizes and displays incident and alert times in your time zone. If you're in a half-hour time zone, the activity in the Normal Activity and
the Actual Activity charts is displayed in the whole-hour time slot preceding it. For example, if you're in a UTC +4.5 time zone, the time displayed for the
activity will be UTC +4.5, however, the visualization in the Normal Activity and the Actual Activity charts will be in the UTC +4 slot.
Review the sections of the User Risk View. Depending on your permissions, some information might be limited by your scope.
Common Locations: Displays the countries from which the user connected most in the past few weeks.
Common UAs: Displays the user agents that the user used most in the past few weeks.
Regular Activity Hours: This data is based on the preceding several weeks and takes into account holidays and seasonality to present an
accurate picture. Cortex XDR leverages endpoint telemetry to provide the activity data.
The graph is based on new incidents created within the selected time frame, and updates on past incidents that are still active. The straight line
represents the user score, which is based on the scores of the incidents associated with the user.
The bubbles in the graph represent the number of alerts and insights generated on the selected day. Bigger bubbles indicate more alerts and
insights, and a possible risk.
3. Drill down on a score for a specific day by clicking a bubble. Alternatively, review the user information for the selected timeframe (Last 7D, 30D, or
custom timeframe). The widgets in the right panel reflect the selected timeframe.
4. Review the Related Incidents for the selected timeframe or score selected in the Score Trend graph. If you are drilling down on a score, you can
see the incidents that contributed to the total score on the selected day. Review the following data:
The Status column provides visibility into the reason for the score change. For example, if an incident is resolved, its score will decrease,
bringing down the user score.
The Points column displays the risk score that the incident contributed to the user score. The points are calculated according to SmartScore
or Incident Scoring Rules.
5. Review the Related Alerts and Insights for the selected timeframe or score selected in the Score Trend graph.
The timeline displays all detection activities associated with the user. The alerts are grouped into buckets according to MITRE ATT&CK tactics.
Click on a tactic to filter the alerts in the table. To further investigate an alert, click the alert to open the Alert Panel and click Investigate.
6. Review the user activity per day in the Actual Activity widget.
In this widget Cortex XDR compares the user's actual activity data with the Regular Activity Hours, and highlights any differences or anomalies in
the user's expected activity.
The cells are marked according to the activity that took place, and a dashed frame indicates that Cortex XDR detected uncommon activity in the
time slot.
A dashed ribbon highlights discrepancies between regular activity hours and actual activity.
A numbered ribbon indicates the number of alerts and insights that occurred on a specific day/hour.
7. Review the user's Login Attempts during the selected timeframe or on the day selected in the Score Trend graph.
You can see details of the related login attempts, and whether the attempts were successful. To further investigate login activity for the user, click
View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to refine your search.
8. Review the user's Latest Authentication Attempts during the selected timeframe or on the day selected in the Score Trend graph.
You can see details of the related authentication attempts, and whether the attempts were successful. To further investigate authentication
attempts by the user, click View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to
refine your search (see the example query sketch after these steps).
9. Review the user's SaaS log activity during the selected timeframe or on the day selected in the Score Trend graph. You can see details of the
SaaS logs that were ingested into the platform in the context of the user.
To further investigate SaaS log activity for the user, click View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language
you can refine your search.
10. For users with associated asset roles, compare the data with other peers with the same asset role. In the Score Trend graph click Compare To and
select an asset role to which you want to compare the data.
The dashed line presents the average score for peers with the same asset role as the user, over the same time period. Hover over a bubble on the
dashed line to see the Average score for the selected peer, and a breakdown of the score per endpoint. Click Show x Hosts to see a full
breakdown of the score on the Peer Score Breakdown, filtered by the selected asset role. From the Peer Score Breakdown you can select any user
name and pivot to additional views for further investigation.
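As an example, the prefilled authentication query from step 8 can be refined with additional stages, for instance to aggregate attempts per host. The following is a minimal XQL sketch; the xdr_data dataset, event type, field names, comp stage usage, and the angle-bracket placeholder are assumptions and may differ from the actual prefilled query:
dataset = xdr_data
| filter event_type = ENUM.LOGIN and actor_effective_username contains "<user name>"
| comp count(_time) as attempts by agent_hostname, event_sub_type
| sort desc attempts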
User View
Review the sections of the User View. Depending on your permissions, some information might be limited by your scope.
The User Score displays the score that is currently assigned to the user and is updated continuously as new alerts are associated with incidents.
The graph is based on new incidents created within the selected time frame, and updates on past incidents that are still active. The straight line
represents the user score, which is based on the scores of the incidents associated with the user.
Select a score to display in the Incidents table the incidents that contributed to the total user score on a specific day.
3. Click a score to drill down on the score for a specific day. Alternatively, review the user information for the selected timeframe (Last 7D, 30D, or
custom timeframe).
4. Review the Related Incidents for the selected timeframe or score selected in the Score Trend graph. If you are drilling down on a score, you can
see the incidents that contributed to the total score on the selected day. Review the following data:
The Status column provides visibility into the reason for the score change. For example, if an incident is resolved, its score will decrease,
bringing down the user score.
The Points column displays the risk score that the incident contributed to the user score. The points are calculated according to SmartScore
or Incident Scoring Rules.
Recent Login
Recent Authentications
Cortex XDR generates alerts to bring your attention to security risks in your framework.
Alerts help you to monitor and control the security of your system framework by alerting you to security risks in your framework. Cortex XDR generates alerts
from the following:
Rules that you set up, such as BIOC, IOC, correlation rules, etc.
Agents
Firewalls
Analytics
Integrations
Integrations enable you to ingest events, such as phishing emails and SIEM events, from third-party security and management vendors. You might need to
configure the integrations to determine how events are classified. For example, for email integrations, you might want to classify items based
on the subject field, but for SIEM events, you might want to classify by event type.
Abstract
The Alerts page consolidates all non-informational alerts from your detection sources, and helps you to analyze and triage the alerts on your system.
The Alerts page consolidates all non-informational alerts from your detection sources. This helps you efficiently triage the events you see each day. By
analyzing an alert, you can better understand the cause of the alert, and take actions where required. The default alert retention period in Cortex XDR is 186
days.
By default, the Alerts page displays the security alerts received over the last seven days. Every 12 hours, the system enforces a cleanup policy to remove the
oldest alerts once the maximum limit is exceeded.
Cortex XDR processes and displays the names of users in the following standardized format, also termed “normalized user”.
<company domain>\<username>
As a result, any alert triggered based on network, authentication, or login events displays the User Name in the standardized format in the Alerts and Incidents
pages. This impacts every alert for Cortex XDR Analytics and Cortex XDR Analytics BIOC, including BIOC, and IOC alerts triggered on one of these event
types.
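For example (hypothetical domain and user name), a login event for the user jdoe in the acme.local domain is displayed as:
acme.local\jdoe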
Featured fields
You can highlight alerts that are important to you by tagging specific alert attributes, such as host names, user names, IP addresses, and Active Directory, as
featured fields. This can help you track alerts in the Alerts table. For more information, see Create a featured alert field.
The following table describes both the default fields and additional optional fields that you can add to the alerts table using the column manager.
Field Description
Status Indicator Identifies whether there is enough endpoint data to analyze an alert.
Check box to select one or more alerts on which to perform actions. Select
multiple alerts to assign all selected alerts to an analyst, or to change the status
or severity of all selected alerts.
ACTION Action taken by the alert sensor, either Detected or Prevented, with the action
status displayed in parentheses.
AGENT OS SUB TYPE Operating system subtype of the agent from which the alert was triggered.
ALERT ARRIVAL TIMESTAMP Time that the alert was stored in Cortex XDR.
ALERT NAME Module that triggered the alert. Alerts that match an alert starring policy also
display a purple star.
APP-ID Related App-ID for an alert. App-ID is a traffic classification system that
determines what an application is irrespective of port, protocol, encryption (SSH
or SSL) or any other evasive tactic used by the application. When known, you
can also pivot to the Palo Alto Networks Applipedia entry that describes the
detected application.
CATEGORY Alert category based on the alert source. An example of a Cortex XDR agent
alert category is Exploit Modules.
CGO MD5 MD5 value of the CGO that initiated the alert.
CGO NAME Name of the process that started the causality chain, based on Cortex XDR
causality logic.
CGO SHA256 SHA256 value of the CGO that initiated the alert.
CGO SIGNER Name of the software publishing vendor that signed the file in the causality chain
that led up to the alert.
Cortex XDR can display both the O (Organization) value and the CN (Common
Name).
CLOUD IDENTITY TYPE Classification used to map the identity type that initiated the operation that
triggered the alert. For example, Service, Application, and Temporary
Credentials.
CLOUD IDENTITY SUB-TYPE Specific classification of the identity that initiated the operation. For example, for
Identity Type: Temporary Credentials the subtype could be Assumed
Role.
CLOUD OPERATION TYPE Represents what has happened because of the identity operation. For example,
Create, Delete, and Modify.
CLOUD PROJECT Represents the cloud provider folders or projects. For example, AWS Accounts
and Azure Subscriptions.
CLOUD PROVIDER Name of the cloud provider where the alert occurred.
CLOUD REFERENCED RESOURCE Represents the resources that are referenced in the alert log. In most cases, the
referenced resource is the resource on which the operation was initiated.
CLOUD RESOURCE TYPE Classifications are used to map similar types of resources across different cloud
providers. For example, EC2, Google Compute Engine, and Microsoft
Compute are all mapped to Compute.
CLOUD RESOURCE SUB-TYPE Specific classification used to map the types of resources. For example,
DISK, VPC, and Subnet are all mapped to Compute.
CONTAINS FEATURED HOST Whether the alert includes a host name that has been flagged as a Featured
Alert Field.
CONTAINS FEATURED USER Whether the alert includes a user name that has been flagged as a Featured
Alert Field.
CONTAINS FEATURED IP ADDRESS Whether the alert includes an IP address that has been flagged as a
Featured Alert Field.
DESCRIPTION Text summary of the event including the alert source, alert name, severity, and
file path.
DESTINATION ZONE NAME Destination zone of the connection for firewall alerts.
EMAIL RECIPIENT Email recipient value of a firewall alert triggered on the content of a malicious
email.
EMAIL SENDER Email sender value of a firewall alert triggered on the content of a malicious
email.
EMAIL SUBJECT Email subject value of a firewall alert triggered on the content of a malicious
email.
EXTERNAL ID Alert ID as recorded in the detector from which this alert was sent.
FILE PATH Path to the file on the endpoint, for alerts that are triggered on a file (the Event
Type is File).
FILE MACRO SHA256 SHA256 hash value of a Microsoft Office file macro.
FW RULE NAME Firewall rule name that matches the network traffic that triggered the firewall
alert.
FW SERIAL NUMBER Serial number of the firewall that raised the firewall alert.
HOST Hostname of the endpoint or server on which this alert was triggered. The
hostname is generally available for XDR agent alerts or alerts that are stitched
with EDR data. When the hostname is unknown, this field is blank.
HOST FQDN Fully qualified domain name (FQDN) of the Windows endpoint or server on
which this alert was triggered.
HOST IP IP address of the endpoint or server on which this alert was triggered.
HOST IPv6 IPv6 address of the endpoint or server on which this alert was triggered.
HOST MAC ADDRESS MAC address of the endpoint or server on which this alert was triggered.
HOST OS Operating system of the endpoint or server on which this alert was triggered.
INITIATED BY Name of the process that initiated an activity such as a network connection or
registry change.
INITIATOR MD5 MD5 value of the process which initiated the alert.
INITIATOR CMD Command-line used to initiate the process including any arguments.
INITIATOR SIGNATURE Signing status of the process that initiated the activity.
Cortex XDR can display both the O (Organization) value and the CN (Common
Name).
LOCAL IP IP address of the host that triggered the alert, for alerts that are triggered on
network activity (the Event Type is Network Connection).
LOCAL PORT Port on the endpoint that triggered the alert, for alerts that are triggered on
network activity (the Event Type is Network Connection).
MITRE ATT&CK TACTIC Type of MITRE ATT&CK tactic on which the alert was triggered.
MITRE ATT&CK TECHNIQUE Type of MITRE ATT&CK technique and sub‑technique on which the alert was
triggered.
MODULE For Cortex XDR agent alerts, this field identifies the protection module that
triggered the alert.
NGFW VSYS NAME Name of the virtual system for the Palo Alto Networks firewall that triggered an
alert.
OS PARENT CREATED BY Name of the parent operating system that created the alert.
OS PARENT CMD Command line used by the parent operating system to initiate the process
including any arguments.
Cortex XDR can display both the O (Organization) value and the CN (Common
Name).
OS PARENT USER NAME Name of the user associated with the parent operating system.
PHONE NUMBER Shows the phone number that triggered the alert. This is the number that sent a
malicious URL/spam or was blocked.
PROCESS EXECUTION SIGNATURE Signature status of the process that triggered the alert.
PROCESS EXECUTION SIGNER Signer of the process that triggered the alert.
Cortex XDR can display both the O (Organization) value and the CN (Common
Name).
REGISTRY DATA Registry data that triggered the alert, for alerts that are triggered on registry
modifications (the Event Type is Registry).
REGISTRY FULL KEY Full registry key that triggered the alert, for alerts that are triggered on registry
modifications (the Event Type is Registry).
REMOTE HOST Remote host name that triggered the alert, for alerts that are triggered on
network activity (the Event Type is Network Connection).
REMOTE PORT Remote port of a network operation that triggered the alert.
RESOLUTION STATUS Status that was assigned to this alert when it was triggered (or modified). Right-
click an alert to change the status. If you set the status to Resolved, select a
resolution reason.
Any update made to an alert impacts the associated incident. An incident with
all its associated alerts marked as resolved is automatically set to Auto-
Resolved. Cortex XDR continues to group alerts to an Auto-Resolved Incident
for up to six hours. In the case where an alert is triggered during this duration,
Cortex XDR re-opens the incident.
SEVERITY Severity that was assigned to this alert when it was triggered (or modified).
SOURCE ZONE NAME Source zone name of the connection for firewall alerts.
TAGS Displays one or more of the following categories, which are used to filter the
results according to the selected tag:
Asset Roles
Data Sources
Detector Tags
Displays the tag family and the corresponding tags. If SBAC is enabled,
the user can view and manage the alerts table according the user's scope
settings.
TARGET FILE SHA256 SHA256 hash value of an external DLL file that triggered the alert.
TARGET PROCESS CMD Command line of the process whose creation triggered the alert.
TARGET PROCESS NAME Name of the process whose creation triggered the alert.
TARGET PROCESS SHA256 SHA256 value of the process whose creation triggered the alert.
TIMESTAMP Date and time when the alert occurred in the source origin. For example, when
the alert occurred in the XDR agent.
Right-click to show rows 30 days prior or 30 days after the selected timestamp
field value.
URL URL destination address of the domain triggering the firewall alert.
USER NAME Name of the user that initiated the behavior that triggered the alert. If the user is
a domain user account, this field also identifies the domain.
<company domain>\<username>
XFF X-Forwarded-For value from the HTTP header, identifying the original IP address of a client connecting through a proxy.
Abstract
You can triage, investigate, and take actions on alerts from the Alerts page.
1. Review the data shown in the alert such as the command-line arguments (CMD), process info, etc.
When the app correlates an alert with additional endpoint data, the Alerts table displays a green dot to the left of the alert row to indicate the alert is
eligible for analysis in the causality view. If the alert has a gray dot, the alert is not eligible for analysis in the causality view. This can occur when there is
no data collected for an event, or the app has not yet finished processing the EDR data. To view the reason analysis is not available, hover over the gray
dot.
3. If deemed malicious, consider responding by isolating the endpoint from the network.
Abstract
You can copy alert text into memory and paste it into an email. This is helpful if you need to share or discuss a specific alert with someone. If you copy a field value, you can also paste it into a search or begin a query.
1. From the Alerts page, right-click the alert you want to send.
3. Paste the URL into an email or use it as needed to share the information.
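If the value you copied is, for example, a file hash, one way to continue is to pivot into an XQL query. The following is a minimal sketch, assuming the xdr_data dataset and illustrative field names (action_file_sha256, agent_hostname, actor_process_image_name); replace the placeholder with the value you copied.
dataset = xdr_data
| filter action_file_sha256 = "<copied SHA256 value>"
| fields _time, agent_hostname, actor_process_image_name, actor_process_command_line
| sort desc _time
| limit 100
The same pattern applies to other copied values, such as a user name or remote IP address.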
Abstract
Learn more about analyzing alerts in the alert side panel and the causality view.
To help you understand the full context of an alert, Cortex XDR provides the alert side panel and the causality view that enable you to quickly make a thorough
analysis.
The causality view is available for XDR agent alerts that are based on endpoint data and for alerts raised on network traffic logs that have been stitched with
endpoint data.
1. From the Alerts page, locate the alert you want to analyze.
2. Click the alert and review the information in the alert side panel. If you want to see more information about the alert, click Investigate to open the alert
investigation panel.
You can also view the causality chain over time using the Timeline view.
4. Review the chain of execution and available data for the process and, if available, navigate through the process tree.
Abstract
For Cortex XDR agent alerts, you can create profile exceptions for Windows processes, BTP, and Java deserialization alerts directly from the Alerts table.
1. Identify an XDR Agent alert that has a category of Exploit, right-click, and select Create alert exception.
Profile: Apply the exception to an existing profile or click and enter a Profile Name to create a new profile.
b. In the Profiles table, locate the OS in which you created your global or profile exception and right-click to view or edit the exception properties.
Abstract
During investigation, if you deem a file path to be safe you can add the file path to an existing malware profile allow list directly from the Alerts table.
1. In the Alerts table, select the Initiator Path, CGO path, and/or File Path field values you want to add to your malware profile allow list.
2. Right-click and select Add <path type> to malware profile allow list.
3. In the Add <path type> to malware profile allow list dialog, select from your existing Profiles and Modules to which you want to add the file path to the
allow list.
a. Go to Endpoints → Policy Management → Prevention → Profiles and locate the malware profile you selected.
b. Right-click, select Edit Profile and locate in the Files / Folders in Allow List section the path file you added.
For more information about malware prevention profiles, see Set up malware prevention profiles.
Abstract
To help you track alerts involving specific hosts, users, and IP addresses, you can label specific alert attributes as featured fields. Alerts that contain a matching featured field value are identified with a flag in the Alert Name field of the Alerts table. After setting up featured fields, you can use them to filter the Alerts table and to create incident scoring rules.
Featured Active Directory values are displayed in the User and Host fields accordingly.
1. Go to Incident Response → Incident Configuration → Featured Fields and select a type of featured field.
2. Click Add featured <field-type> and select one of the following options:
Create New
To create a new featured alert field from scratch, enter one or more field-type values and click Add.
To upload field values from a CSV file, upload your file and click Import. Click Download example file to ensure you are using the correct format.
4. (Optional) Create an incident scoring rule using the Contains Featured fields to further highlight and prioritize alerts containing the Host, User, and IP
address attributes. For more information, see Set up incident scoring.
Abstract
You can view the BIOC or IOC rules that generated alerts directly from the Alerts table.
You can easily view and edit the BIOC and IOC rules that generated alerts directly from the Alerts table:
1. From the Alerts page, locate alerts with Alert Sources: XDR BIOC and XDR IOC.
2. Right-click the row, and select Manage Alert → View generating rule.
Cortex XDR opens the BIOC rule that generated the alert in the BIOC Rules page. If the rule has been deleted, an empty table is displayed.
Abstract
Access additional information relating to an alert, including related files and memory content analysis.
To help you with alert analysis, Cortex XDR can provide related files and memory content analysis.
1. From the Alerts page, locate the alert for which you want to retrieve information.
2. Right-click anywhere in the alert, and select one of the following options:
Retrieve Additional Data: Cortex XDR can provide related files and additional analysis of the memory contents when an exploit protection module
raises an alert.
For tenants without XTH, select Get Causality Data to analyze additional data.
Select Retrieve alert data and analyze to retrieve alert data consisting of the memory contents at the time the alert was raised. You can also
enable Cortex XDR to automatically retrieve alert data for every relevant alert. After Cortex XDR receives the data and performs the analysis,
it issues a verdict for the alert. You can monitor the retrieval and analysis progress from the Action Center (pivot to view Additional data).
When the analysis is complete, it displays the verdict in the Advanced Analysis field.
Retrieve related files: To further examine files that are involved in an alert, you can request the agent send them to the Cortex XDR tenant. If
multiple files are involved, the tenant supports up to 20 files and 200MB in total size. The agent collects all requested files into one archive
and includes a log in JSON format containing additional status information. When the files are successfully uploaded, you can download
them from the Action Center for up to one week.
(For PAN NGFW source type alerts) Download triggering packet: Download the session PCAP containing the first 100 bytes of the triggering
packet directly from Cortex XDR. To access the PCAP, you can download the file from the Alerts table, Incident, or Causality view.
In the Action Center, wait for the data retrieval action to complete successfully. Then, right-click the action row and select Additional Data. From the
Detailed Results view, right-click the row and select Download Files. A ZIP folder with the retrieved data is downloaded locally.
If you require assistance from Palo Alto Networks support to investigate the alert, make sure to provide the downloaded ZIP file.
Abstract
You can review alert details offline by exporting alerts to a TSV file.
To archive, continue investigation offline, or parse alert details, you can export alerts to a tab-separated values (TSV) file:
1. From the Alerts page, adjust the filters to identify the alerts you want to export.
2. When you are satisfied with the results, click the download icon ( ).
Cortex XDR exports the filtered result set to the TSV file.
Abstract
During the process of triaging and investigating alerts, you might determine that an alert does not indicate a threat. You can choose to exclude the alert, which hides the alert, excludes it from incidents, and excludes it from search query results.
You can also set up alert exclusion rules that automatically exclude alerts that match certain criteria. For more information, see Alert exclusions.
1. From the Alerts page, locate the alert you want to exclude.
Abstract
When investigating an alert generated by a correlation rule, you can view all of the events created for the alert. You can have up to 1000 events per correlation
rule.
In addition, if the correlation rule includes a drilldown query you can run the query in the Query Builder. The drilldown query provides additional information
about an alert for further investigation.
2. Right-click the row, and select Manage Alert → Investigate Contributing Events.
Right-click the row and select Manage Alert → Open Drilldown Query.
The drilldown query can accept parameters from the alert output for the correlation rule. In addition, the alert time frame used to run the drilldown query
provides more details about the alert generated by the correlation rule. The alert time frame is the minimum and maximum timestamps of the events for
the alert. If there is only one event, the event timestamp is the time frame used for the query.
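As a rough illustration of how a query can be constrained to an alert time frame, the following XQL sketch uses the config timeframe stage; the timestamps, host name, process name, and field names are illustrative assumptions, and the actual drilldown query and its parameters are defined in the correlation rule itself.
config timeframe between "2024-01-01 10:00:00" and "2024-01-01 10:45:00"
| dataset = xdr_data
| filter agent_hostname = "host-123" and actor_process_image_name = "powershell.exe"
| fields _time, agent_hostname, actor_process_image_name, actor_process_command_line
| sort asc _time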
Abstract
You can run queries on incident and alert data with the incidents and alerts datasets.
You can query incident and alert data in the incidents and alerts datasets. When using the alerts dataset, keep in mind the following:
Alert fields are limited to certain fields available in the API. For the full list, see Get Alerts Multi-Events v2 API.
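As a minimal sketch of querying these datasets, the following XQL example filters the alerts dataset on severity and returns a few columns; the field names shown (alert_id, alert_name, alert_source, severity) are illustrative assumptions, so consult the Get Alerts Multi-Events v2 API documentation for the exact fields available.
dataset = alerts
| filter severity = "high"
| fields alert_id, alert_name, alert_source, severity
| limit 100
The incidents dataset can be queried in the same way.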
Abstract
Procedure for managing Cortex XDR automation rules as needed, including how to edit, save as new, disable, delete, or copy a rule.
Before you create or manage automation rules, go to Settings → Configuration → Automation Settings and configure the settings for Endpoint Action Limit
Thresholds and Automation Rules Notifications.
2. Click Add Automation Rule, or to edit an existing rule, hover over the rule and select the edit icon.
b. From the Alerts table, use the filter to retrieve the criteria to define the condition of the automation rule.
c. Click Next.
4. From the list, select the relevant action to initiate when the alert condition is triggered.
5. In the Exclude Endpoints page, select the endpoints to exclude and click Next.
7. Manage the automation rule, as needed. Right-click a rule to see the available actions.
Abstract
From the Alerts page, you can pivot on an alert to open the alert investigation views.
On the Alerts page, click on an alert to open the alert panel, or right-click to pivot to an investigation view.
Abstract
The alert side panel provides detailed information about alerts at a glance and in the context of the incident.
The alert side panel provides detailed information about alerts at a glance and in the context of the incident. To open the alerts panel, on the Alerts page click
on any alert.
In this view, you can change the severity of an alert, star it, investigate it in the causality view, and exclude it from the Analytics. The panel displays the name
and description of the alert, the source that triggered the alert, and the following details where applicable:
Read more...
Number of suppressed alerts: (for IOC, BIOC, and Analytics alerts) Number of alerts that were suppressed because they were detected as
duplicates of the alert
Last suppressed alert timestamp: (for IOC, BIOC, and Analytics alerts) The last time Cortex XDR suppressed an alert because it was detected as
a duplicate of the alert
Behavioral analytics: Displays graphs that visualize the anomalies that were observed by the detector.
Read more...
The Behavioral Analytics section is available only when the Identity Threat Module add-on is enabled. Cortex XDR displays Behavioral Analytics widgets
for selected alerts and is continuously adding widgets to more alerts.
You can use this section to evaluate the deviation in the context of the baseline behavior. As you navigate between the different factors that triggered the
alert, the event and the baseline information are displayed in tabular format or in timeline format, depending on the type of event.
The tabular view displays the baseline behavior in a table, with the anomaly highlighted and in a separate line.
The timeline view displays the highlighted atypical value, and if applicable, the minimum, maximum, and average values, for the selected period.
Host: Displays the Host platform, Host name, Host IP, Host MAC address, Host FQDN.
Rule: Displays details about the alert that triggered the rule.
Connection details: Displays information about network connections, login, process execution, RPC calls, system calls, or registry events.
Cloud audit log: Displays the audit log details for alerts generated on cloud hosts.
Abstract
See the causality of an alert—the entire process execution chain that led up to the alert in the Cortex XDR app.
The causality view provides a powerful way to analyze and respond to alerts. The scope of the causality view is the Causality Instance (CI) to which this alert
pertains. The causality view presents the alert (generated by Cortex XDR or sent to Cortex XDR from a supported alert source such as the XDR agent) and
includes the entire process execution chain that led up to the alert. On each node in the CI chain, Cortex XDR provides information to help you understand
what happened around the alert.
Information overview
Summarizes information about the alert you are analyzing, including the host name, the process name on which the alert was raised, and the host IP and MAC address. For alerts raised on endpoint data or activity, this section also displays the endpoint connectivity status and operating system.
Includes the graphical representation of the Causality Instance (CI) along with other information and capabilities to enable you to conduct your analysis.
The causality view presents a single CI chain. The CI chain is built from process nodes, events, and alerts. The chain presents the process execution and
might also include events that these processes caused and alerts that were triggered by the events or processes. The Causality Group Owner (CGO) is
displayed on the left side of the chain. The CGO is the process that is responsible for all the other processes, events, and alerts in the chain. You need the
entire CI to fully understand why the alert occurred.
There are no CGOs in the cloud causality view, when investigating cloud Cortex XDR alerts and cloud audit logs, or in the SaaS causality view, when investigating SaaS-related alerts for audit stories, such as Office 365 audit logs and normalized logs.
Display up to nine additional process branches that reveal alerts related to the alert/event. Branches containing alerts with the nearest timestamp to the
original alert/event are displayed first.
Causality cards that contain more causality data display a Showing Partial Causality flag. You can manually add additional child or parent processes
branches by right-clicking on the process nodes displayed in the graph.
The causality view provides an interactive way to view the CI chain for an alert. You can move it, extend it, and modify it. To adjust the appearance of the CI
chain, you can enlarge/shrink the chain for easy viewing using the size controls on the right. You can also move the chain around by selecting and dragging it.
To return the chain to its original position and size, click in the lower-right of the CI graph.
Click an alert to display its name, source, timestamp, severity, the action taken, the tags assigned to it, and its description.
When the Identity Threat Module is enabled, Cortex XDR displays the anomaly that triggered the alert against the backdrop of baseline behavior for some
alerts. To see the profiles that are generated by the detector, Open Alert Visualization. Each tab displays the factors that triggered the alert, the event and the
baseline information in tabular format or in timeline format, depending on the type of event. The graphs display the information in full mode, covering 30 days.
The tabular view displays the baseline behavior in a table, with the anomaly highlighted and in a separate line.
The timeline view displays the highlighted atypical value, and if applicable, the minimum, maximum, and average values, for the selected period.
The process node displays icons to indicate when an RPC protocol or code injection event was executed on another process from either a local or remote
host.
Icon legend: Injected Node, Remote IP address.
Hover over a process node to display a Process Information pop-up listing useful information about the process. From any process node, you can also right-
click to display additional actions that you can perform during your investigation:
Show parents and children: If the parent is not presented by default, you can display it. If the process has children, Cortex XDR opens a dialog
displaying the Children Process Start Time, Name, CMD, and Username details.
Add to block list or allow list, terminate, or quarantine a process: If after investigating the activity in the CI chain, you want to take action on the
process, you can select the desired action to allow or block the process across your organization.
In the causality view of a Detection (Post Detected) type alert, you can also Terminate process by hash.
Entity data
Provides additional information about the entity that you selected. The data varies by the type of entity but typically identifies information about the entity related
to the cause of the alert and the circumstances under which the alert occurred.
When you investigate command-line arguments, click {***} to obfuscate or decode the base64-encoded string.
For continued investigation, you can copy the entire entity data summary to the clipboard.
Actions
You can choose to isolate the host, on which the alert was triggered, from the network or initiate a live terminal session to the host to continue investigation and
remediation.
The All Events table displays up to 100,000 related events for the process node which matches the alert criteria that were not triggered in the alert table but are
informational. The Prevention Actions tab displays the actions Cortex XDR takes on the endpoint based on the threat type discovered by the agent.
To continue the investigation, you can perform the following actions from the right-click pivot menu:
For the behavioral threat protection results, you can take action on the initiator to add it to an allow list or block list, terminate it, or quarantine it.
Revise the event results to see possible related events near the time of an event by using an updated timestamp value (Show rows 30 days prior or 30 days after).
To view statistics for files on VirusTotal, you can pivot from the Initiator MD5 or SHA256 value of the file on the Files tab.
Abstract
The network causality view shows a chain of individual network processes that together and in a particular sequence of operation triggered an alert.
The network causality view provides a powerful way to analyze and respond to the stitched firewall and endpoint alerts. The scope of the network causality
view is the Causality Instance (CI) to which this alert pertains. The network causality view presents the network processes that triggered the alert, generated by
Cortex XDR, Palo Alto Networks next-generation firewalls, and supported alert sources such as the Cortex XDR agent.
The network causality view includes the entire process execution chain that led up to the alert. On each node in the CI chain, Cortex XDR provides information
to help you understand what happened around the alert. The CI chain visualizes the firewall logs, endpoint files, and network connections that triggered alerts
connected to a security event.
The network causality view displays only the information it collects from the detectors. It is possible that the CI may not show some of the firewall or agent
processes.
Context
Summarizes information about the alert you are analyzing, including the host name, the process name on which the alert was raised, and the host IP address.
For alerts raised on endpoint data or activity, this section also displays the endpoint connectivity status and operating system.
Host isolation
You can choose to isolate the host, on which the alert was triggered, from the network or initiate a live terminal session to the host to continue investigation and
remediation.
Includes the graphical representation of the Causality Instance (CI) along with other information and capabilities to enable you to conduct your analysis.
The Causality View presents a CI chain for each of the processes and the network connection. The CI chain is built from process nodes, events, and alerts.
The chain presents the process execution and might also include events that these processes caused and alerts that were triggered by the events or
processes. The Causality Group Owner (CGO) is displayed on the left side of the chain. The CGO is the process that is responsible for all the other processes,
events, and alerts in the chain. You need the entire CI to fully understand why the alert occurred.
The Causality View provides an interactive way to view the CI chain for an alert. You can move it, extend it, and modify it. To adjust the appearance of the CI
chain, you can enlarge/shrink the chain for easy viewing using the size controls on the right. You can also move the chain around by selecting and dragging it.
To return the chain to its original position and size, click in the lower-right of the CI graph.
From any process node, you can also right-click to display additional actions that you can perform during your investigation:
Show parents and children: If the parent is not presented by default, you can display it. If the process has children, Cortex XDR displays the number of
children beneath the process name and allows you to display them for additional information.
Add to block list or allow list, terminate, or quarantine a process: If after investigating the activity in the CI chain, you want to take action on the
process, you can select the desired action on the process across your organization.
In the causality view of a Detection (Post Detected) type alert, you can also Terminate process by hash.
When selecting the Network Appliance node in the Network Causality View, the event timestamp is now displayed in the Entity Data section of the card.
Verdict colors: yellow indicates grayware and red indicates malware. You can view and download the WildFire report in the Entity Data section.
Entity data
Provides additional information about the entity that you selected. The data varies by the type of entity but typically identifies information about the entity related
to the cause of the alert and the circumstances under which the alert occurred.
Displays all related events for the process node which match the alert criteria that were not triggered in the alert table but are informational. You can also
export the table results to a tab-separated values (TSV) file.
For the Behavioral Threat Protection table, right-click to add to allow list or block list, terminate, and quarantine a process.
To view statistics for files on VirusTotal, you can pivot from the Initiator MD5 or SHA256 value of the file on the Files tab.
Abstract
See the causality of a cloud-type alert—the entire process execution chain that led up to the alert in the Cortex XDR app.
The cloud causality view provides a powerful way to analyze and respond to Cortex XDR alerts and cloud audit logs. The scope of the cloud causality view is
the Causality Instance (CI) of an event to which this alert pertains. The cloud causality view presents the event identity and/or IP address and the actions
performed by the identity on the cloud resource. On each node in the CI chain, Cortex XDR provides information to help you understand what happened
around the event.
Context
Summarizes information about the alert you are analyzing, including the type of Cloud Provider, Project, and Region on which the event occurred. Select View
Raw Log to view the raw log as provided by the Cloud Provider in JSON format.
Includes the graphical representation of the Causality Instance (CI) along with other information and capabilities to enable you to conduct your analysis.
The view presents a single event CI chain. The CI chain is built from Identity and Resource nodes. The Identity node represents, for example, keys, service accounts, and users, while the Resource node represents, for example, network interfaces, storage buckets, or disks. When available, the chain might also include an IP address and alerts that were triggered on the Identity and Cloud Resource.
The causality view provides an interactive way to view the CI chain for an alert. You can extend the CI chain, modify it, and move the chain around by selecting
and dragging it. You can also enlarge or shrink the chain by using the size controls. To return the chain to its original position and size, click in the lower-right
of the CI graph.
1. Hover over an Identity node to display, if available, the identity Analytics Profiles.
2. Select the Identity node to display in the Entity Data section additional information about the Identity entity.
3. Select the Alert icon to display in the Entity Data section additional information about the alert.
Operations: Lists the type of operations performed by the identity on the cloud resources. Hover over the operation to display the original operation name
as provided by the cloud Provider.
Cloud resource node: Displays the referenced resource on which the operation was performed. Cortex XDR displays information on the following
resources:
Read more...
Disk resource
General resource
Image resource
1. Hover over a resource node to display, if available, the resource Analytics Profiles and Resource Editors statistics.
2. Select the resource node to display in the Entity Data section additional information about the resource entity.
Entity data
Provides additional information about the entity that you selected. The data varies by the type of entity but typically identifies information about the entity related
to the cause of the alert and the circumstances under which the alert occurred.
Displays up to 100,000 related events and up to 1,000 related alerts. In the All Events table, Cortex XDR displays detailed information about each of the related
events. To simplify your investigation, Cortex XDR scans your Cortex XDR data aggregating the events that have the same Identity or Resource and displays
the entry with an aggregated icon. Right-click and select Show Grouped Events to view the aggregated entries.
Entries highlighted in red indicate that the specific event triggered an alert. To continue the investigation, right-click to View in XQL.
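To continue such an investigation in XQL, a query along the following lines can summarize what a given identity did; the dataset name cloud_audit_logs and the field names identity_name and operation_name are assumptions for illustration, so adjust them to match the fields you see when you pivot with View in XQL.
dataset = cloud_audit_logs
| filter identity_name = "example-service-account"
| comp count() as operation_count by operation_name
| sort desc operation_count
| limit 25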
Abstract
Learn more about the SaaS causality view used to identify and investigate SaaS-specific data associated with SaaS-related alerts and SaaS audit logs.
The SaaS causality view provides a powerful way to analyze and investigate software-as-a-service (SaaS) related alerts for audit stories, such as Office 365
audit logs and normalized logs, by highlighting the most relevant events and alerts associated with a SaaS-related alert. To help you identify and investigate
SaaS-specific data associated with SaaS-related alerts and SaaS audit logs, Cortex XDR displays a SaaS causality view, which enables you to swiftly
investigate a SaaS alert by displaying the series of events and artifacts that are shared with the alert.
A SaaS causality view is only available when Cortex XDR is configured to collect SaaS audit logs and data. For example, this is possible by configuring an
Office 365 data collector or Google Workspace data collector with the applicable SaaS audit logs. This enables you to investigate any Cortex XDR alerts
generated from any IOC, BIOC, or correlation rules, including SaaS events. The SaaS causality view is available from the Alerts table, or from the Query Results
after running a query on the SaaS-related data. From both of these places, you can pivot (right-click) to the SaaS causality view from any row in the table by selecting Investigate Causality Chain → Open Card in new tab or Investigate Causality Chain → Open Card in same tab.
The scope of the SaaS causality view is the Causality Instance (CI) of an event to which this alert pertains. The SaaS causality view presents the event identity
and/or IP address and the actions performed by the identity on the SaaS resource. On each node in the CI chain, Cortex XDR provides information to help you
understand what happened around the event.
Context
Summarizes information about the alert you are analyzing, including the type of SaaS provider, project, and region on which the event occurred. Select View
Raw Log to view the raw log as provided by the SaaS provider in JSON format.
Includes the graphical representation of the SaaS Causality Instance (CI) along with other information and capabilities to enable you to conduct your analysis.
The SaaS causality view presents a single event CI chain. The CI chain is built from Identity and Resource nodes. The Identity node represents, for example, keys, service accounts, and users, while the Resource node represents, for example, network interfaces, storage buckets, or disks. When available, the chain can also include an IP address and alerts that were triggered on the Identity and SaaS resource.
The SaaS causality view provides an interactive way to view the CI chain for an alert. You can move it, extend it, and modify it. To adjust the appearance of the
CI chain, you can enlarge/shrink the chain for easy viewing using the size controls on the right. You can also move the chain around by selecting and dragging
it. To return the chain to its original position and size, click in the lower-right of the CI graph.
1. Hover over an Identity node to display, if available, the identity Analytics Profiles.
2. Select the Identity node to display in the Entity Data section additional information about the Identity entity.
3. Select the Alert icon to display in the Entity Data section additional information about the alert.
Resource node: Displays the referenced resource on which the operation was performed. Cortex XDR displays information on the following resources.
1. Hover over a Resource node to display, if available, the resource Analytics Profiles and Resource Editors statistics.
2. Select the Resource node to display in the Entity Data section additional information about the Resource entity.
Types of resource
Entity data
Provides additional information about the entity that you selected. The data varies by the type of entity but typically identifies information about the entity related
to the cause of the alert and the circumstances under which the alert occurred.
Displays up to 100,000 related events and up to 1,000 related alerts. In the All Events table, Cortex XDR displays detailed information about each of the related
events. To simplify your investigation, Cortex XDR scans your Cortex XDR data aggregating the events that have the same Identity or Resource and displays
the entry with an aggregated icon. Right-click and select Show Grouped Events to view the aggregated entries.
Entries highlighted in red indicate that the specific event triggered an alert. To continue the investigation, right-click to View in XQL.
To continue the investigation, in the Alerts table, right-click an alert to see the available actions.
Abstract
From the Cortex XDR management console, you can view a detailed summary of the behavior that triggered analytics alerts.
The analytics alert view provides a detailed summary of the behavior that triggered an Analytics or Analytics BIOC alert. This view also provides a visual
depiction of the behavior and additional information you can use to assess the alert. This includes the endpoint on which the activity was initiated, the user that
performed the action, the technique the analytics engine observed, and activity and interactions with other hosts inside or outside of your network.
When Identity Analytics is enabled, alerts associated with suspicious user activity, such as stolen or misused credentials, lateral movement, credential harvesting, or brute-force attempts, are displayed with a User node.
Context
For Analytics alerts, the analytics view indicates the endpoint for which the alert was raised.
For Analytics BIOC alerts, the Analytics view summarizes information about the alert, including the source host name, IP address, the process name on which
the alert was raised, and the corresponding process ID.
Alert summary
(Analytics alerts only) Describes the behavior that triggered the alert and activity impact.
Graphic summary
Similar to the Causality View, the analytics view provides a graphic representation of the activity that triggered the alert and an interactive way to view the chain
of behavior for an Analytics alert. You can move the graphic, extend it, and modify it. To adjust the appearance, you can enlarge/shrink the chain for easy
viewing using the size controls on the right. You can also move the chain around by selecting and dragging it. To return the chain to its original position and
size, click in the lower-right of the graph.
The activity depicted in the graphic varies depending on the type of alert:
Analytics alerts: You can view a summary of the aggregated activity including the source host, the anomalous activity, connection count, and the
destination host. You can also select the host to view any relevant profile information.
Analytics BIOC alerts: You can view the specific event behavior including the causality group owner that initiated the activity and related process nodes.
To view the summary of the specific event, you can select the icon above the process node.
The following nodes display information unique to the analytics alert view:
User node: Hover over to display the User Information and user Analytics Profile data.
Multi-Event: Displays all the event types associated with the alert in the All Events table.
Alert description
The alert description provides details and statistics related to the activity. Beneath the description, you can also view the alert name, severity assigned to the
alert, time of the activity, alert tactic (category) and type, and links to the MITRE summary of the attack tactic.
When selecting a User node, Identity User Details, such as Active Directory Group, Organizational Unit, and Role associated with the user are displayed. If
available, Login Details also appear.
User node: Displays the logins, hosts, alerts, and process executions associated with the user, aggregated by Identity Analytics for the 7 days before and after the analytics alert timestamp. Right-click to Investigate Causality Chain or View in XQL for the associated events.
Multi-Event: Displays the events associated with the alert according to the event type. Right-click to View in XQL and further investigate the event details.
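As a hedged sketch of the kind of follow-up you might run after View in XQL, the following query counts a user's events by event type over a 7-day window; the window, the user name value, and the field names actor_effective_username and event_type are illustrative assumptions.
config timeframe = 7d
| dataset = xdr_data
| filter actor_effective_username contains "jsmith"
| comp count() as event_count by event_type
| sort desc event_count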
Actions
Actions you can take in response to an Analytics alert. These actions can include isolating a host from the network, initiating a live terminal session, and adding
an IP address or domain name to an external dynamic list (EDL) that is enforceable in your Palo Alto Networks firewall security policy.
Abstract
From the Cortex XDR tenant you can view the sequence (or timeline) of events and alerts that are involved in any particular threat.
The Timeline provides a forensic timeline of the sequence of events, alerts, informational BIOCs, and correlation rules involved in an attack. While the causality view of an alert surfaces related events and processes that Cortex XDR identifies as important or interesting, the Timeline displays all related events, alerts, informational BIOCs, and correlation rules over time.
The Timeline view is not available when investigating cloud Cortex XDR alerts and cloud audit logs, or SaaS-related alerts for audit stories, such as Office 365 audit logs and normalized logs. Only the applicable cloud causality view or SaaS causality view is available for this data.
Cortex XDR displays the Causality Group Owner (CGO) and the host on which the CGO ran in the top left of the Timeline. The CGO is the parent process in the execution chain that Cortex XDR identified as being responsible for initiating the process tree. For example, wscript.exe could be the CGO, running on the host HOST488497. You can also click the blue corner of the CGO to view and filter related processes from the Timeline. This adds or removes the process and related events or alerts associated with the process from the Timeline.
Timespan
By default, Cortex XDR displays a 24-hour period from the start of the investigation and displays the start and end time of the CGO at either end of the
timescale. You can move the slide bar to the left or right to focus on any time-gap within the timescale. You can also use the time filters above the table to
focus on set time periods.
Activity
Depending on the type of activities involved in the CI chain of events, the activity section can present any of the following three lanes across the page:
BIOCs and correlation rules: The category of the alert is displayed on the left (for example tampering or lateral movement). Each BIOC event also
indicates a color associated with the alert severity. An informational severity can indicate something interesting has happened but there were not any
triggered alerts. These events are likely benign but are byproducts of the actual issue.
Event Information: The event types include process execution, outgoing or incoming connections, failed connections, data upload, and data download.
Process execution and connections are indicated by a dot. One dot indicates one connection, while many dots indicate multiple connections. Uploads
and Downloads are indicated by a bar graph that shows the size of the upload and download.
The lanes depict when the activity occurred and provide additional statistics that can help you investigate. For BIOC, correlation rules, and alerts, the lanes
also depict activity nodes, highlighted with their severity color: high (red), medium (yellow), low (blue), or informational (gray), and provide additional
information about the activity when you hover over the node.
Cortex XDR displays up to 100,000 alerts, BIOCs and Correlation Rules (triggered and informational), and events. Click on a node in the activity area of the
Timeline to filter the results. You also can create filters to search for specific events.
You can investigate and take actions on your endpoints in the Action Center.
Endpoint investigation requires either a Cortex XDR Prevent or a Cortex XDR Pro per Endpoint license.
Abstract
From the Action Center, you can track the progress of all investigation, response, and maintenance actions performed on your endpoints.
The Action Center is a central location from which you can track the progress of all investigation, response, and maintenance actions performed on your Cortex
XDR protected endpoints. The main All Actions tab displays the most recent actions initiated in your deployment. To narrow down the results, use the table
filters.
File Quarantine: View details about quarantined files on your endpoints. You can also switch to an Aggregated by SHA256 view that collapses results per
file and lists the affected endpoints in the Scope field.
Block List and Allow List: View files that are permitted and blocked from running on your endpoints regardless of file verdict.
Blocking files on endpoints is enforced by the endpoint malware profile. To block a hash value, ensure the hash value is configured in the Malware
security profile.
Select Override Report mode to allow the agent to block hashes, even if the Malware Profile is set to Report.
Endpoint Isolation: View the endpoints in your organization that have been isolated from the network. For more information, see Isolate an endpoint.
Endpoint Blocked IP Addresses: View remote IP addresses that the Cortex XDR agent has automatically blocked from communicating with endpoints in
your network.
For actions that can take a while to complete, the Action Center tracks the action progress and displays the action status and current progress description for
each stage. For example, after initiating an agent upgrade action, Cortex XDR monitors all stages from the Pending request until the action status is
Completed. Throughout the action lifetime, you can view the number of endpoints on which the action was successful and the number of endpoints on which
the action failed. After a period of 90 days since the action creation, the action is removed from Cortex XDR and is no longer displayed in the Action Center.
You cannot delete actions manually.
Abstract
In the Action Center you can initiate and monitor actions on your endpoints. In addition, you can initiate endpoint actions when viewing details about an
endpoint on the All Endpoints page.
2. Select the action you want to initiate and follow the required steps and parameters you need to define for each action.
Cortex XDR displays only the endpoints eligible for the action you want to perform.
Cortex XDR will inform you if any of the agents in your action scope will be skipped.
Track the new action in the Action Center. The action status is updated according to the action progress.
2. Select the relevant view from the left-side menu on the Action Center page.
4. Take further actions. Right-click the action to see the available options:
Additional data: Display additional details for the action, such as file paths for quarantined files or operating systems for agent upgrades. For
actions with Status, Failed or Completed with partial success, you can create an upgrade action to rerun the action on endpoints that have not
been completed successfully.
Archive: Archive the action for future reference. You can select multiple actions to archive at the same time.
Cancel for Pending endpoints: Cancel the original action for agents that are still in Pending status.
Download output: Download a zip file with the files received from the endpoint for actions such as file and data retrieval.
Rerun: Launch the Define an Action wizard populated with the same details as the original action.
Run on additional agents: Launch the action wizard populated with the details as the original action except for the agents which you have to fill
in.
Abstract
The following table describes both the default and additional optional fields that you can view from the All Actions tab of the Action Center and lists the fields in
alphabetical order.
Read more...
Field Description
Statuses:
In progress: The action was initiated, but no start indication was received from the agent after the stop.
Failed: The agent reports a failure back to the Cortex XDR server if the action started more than 10 minutes after the restart was initiated.
Description Action scope of affected endpoints and additional data relevant to each of the
specific actions, such as agent version, file path, and file hash.
Expiration Date Time the action will expire. To set an expiration date, the action must apply to
one or more endpoints.
By default, Cortex XDR assigns a 7-day expiration limit to the following actions:
Agent Uninstall
Agent Upgrade
Files Retrieval
Isolate
After the expiration limit, the status for any remaining Pending actions on endpoints changes to Expired, and these endpoints do not perform the action.
Additional data: If additional details are available for an action or for specific endpoints, you can pivot to the Additional data view. You can also export the
additional data to a TSV file. The page can include details in the following fields but varies depending on the type of action.
Field Description
Endpoint Name Target host name of each endpoint for which an action was initiated.
Status Status of the action for the specific endpoint. (Linux) Completed with Partial Success is displayed for a single endpoint that did not complete the action successfully.
Action Last Update Time at which the last status update occurred for the action.
Advanced Analysis For Retrieve alert data requests related to Cortex XDR alerts triggered by
exploit protection modules, Cortex XDR can analyze the memory state for
additional verdict verification. This field displays the analysis progress and
resulting verdict.
Action Parameters Summary of the action including the alert name and alert ID.
Additional Data | Malicious Files Additional data, if any is available, for the action. For malware scans, this field
is titled Malicious Files and indicates the number of malicious files identified
during the scan.
Abstract
You can view and take actions on endpoints on the All Endpoints page.
The All Endpoints page provides a central location from which you can view and manage the endpoints on which the agent is installed.
To ensure the All Endpoints table is displaying the most accurate list of endpoints, you can perform a one-time or periodic cleanup of duplicated entities. After
the cleanup, duplicated entities are removed leaving only one endpoint entry, which is the last endpoint to connect with the server. Deleted endpoint data is
retained for 90 days from the last connection timestamp. If a deleted endpoint reconnects, Cortex XDR recovers and redisplays the endpoint’s existing data.
Go to Settings → Configurations → General → Agent Configurations → Endpoint Administration Cleanup. Enable the Periodic duplicate cleanup and select
either One-time cleanup or define a periodic cleanup to run according to the Host Name, Host IP Address, and/or MAC Address fields at a specific time
interval.
Endpoint actions
The right-click pivot menu displays the actions you can perform on your endpoints. For more information about these actions, see the topics in this section, and
the topics under Manage endpoint protection.
For the Include endpoints from auto upgrade action, you cannot enable auto upgrade for Mobile, VDI, and TS installations.
The following table describes both the default and additional optional fields that you can view in the All Endpoints table, listed in alphabetical order. Clicking a row in the All Endpoints table opens a detailed view of the endpoint.
Field Description
Active Directory Active Directory Groups and Organizational Units to which the user belongs.
Assigned Extensions Policy Policy related to extensions and devices connected to the endpoint.
Auto Upgrade Status When Cortex XDR agent auto upgrades are enabled, this field indicates the action status.
If you exclude the endpoint from auto upgrade while the auto upgrade action is In progress, the ongoing upgrade
will still take place.
Cloud Info IBM and Alibaba Cloud metadata reported by the endpoint.
Content Auto Update Whether automatic content updates are Enabled or Disabled for the endpoint in the agent settings profile.
Content Release Timestamp Time and date of when the current content version was released.
Content Rollout Delay (days) If you configured delayed content rollout, the number of days for delay is displayed here.
Content Status Status of the content version on the relevant endpoint. The Cortex XDR tenant attempts to contact an endpoint and
check the content version over a 7-day period. After this period the tenant displays one of the following statuses:
Waiting for Update: Cortex XDR is in the process of updating the new content version. Depending on your
bandwidth and network connection, updating the content version may take time.
Content Status is calculated every 30 minutes. Therefore, there might be a delay of up to 30 minutes in displaying
the data.
Disabled Capabilities List of capabilities that were disabled on the endpoint. Options are Live Terminal, Script Execution, and File
Retrieval.
You can disable these capabilities during agent installation on the endpoint or through Endpoint Administration.
Disabling any of these actions is irreversible. If you later want to enable the action on the endpoint, you must
uninstall the agent and install a new package on the endpoint.
Endpoint Alias If you assigned an alias to represent the endpoint in Cortex XDR, the alias is displayed here. To set an endpoint
alias, right-click in the endpoint row, select Endpoint Control → Change Endpoint Alias. The alias can contain any of
the following characters:
Isolated: The endpoint has been isolated from the network with communication permitted to only Cortex XDR
and to any IP addresses and processes included in the allow list.
Pending Isolation: The isolation action has reached the server and is pending contact with the endpoint.
Pending Isolation Cancellation: The cancel isolation action has reached the server and is pending contact
with the endpoint.
Endpoint Name Hostname of the endpoint. If the agent enables Pro features, this field also includes a PRO badge. For Android
endpoints, the hostname comprises the <firstname>—<lastname> of the registered user, with a separating
dash.
Connected: The agent has checked in within 10 minutes for standard endpoints, and within 3 hours for
mobile endpoints.
Connection Lost: The agent has not checked in within 30 to 180 days for standard endpoints, and between
90 minutes and 6 hours for VDI and temporary sessions.
Disconnected: The agent has not checked in within the defined inactivity window: between 10 minutes and
30 days for standard and mobile endpoints, and between 10 minutes and 90 minutes for VDI and temporary
sessions.
VDI Pending Log-on: (Windows only) Indicates a non-persistent VDI endpoint is waiting for user logon, after
which the agent consumes a license and starts enforcing protection.
First Seen Date and time the agent first checked in (registered) with Cortex XDR.
Golden Image ID For endpoints with a System Type of Golden Image, the image ID is a unique identifier for the golden image.
Agent Incompatible: The agent is incompatible with the environment and cannot recover.
When agents are compatible with the operating system and environment, this field is blank.
Isolation Date Date and time of when the endpoint was Isolated. Displayed only for endpoints in Isolated or Pending Isolation
Cancellation status.
Install Date Date and time at which the agent was first installed on the endpoint.
Last Certificate Enforcement Fallback (For Windows and macOS endpoints.) If Certificate Enforcement is Enabled, this column shows the date and time of use of a fallback certificate from the local store. If no fallback is used, this field remains empty.
Last Content Update Time Time and date when the agent last deployed a content update.
Last Origin IP Last IPv4 address from which the XDR agent connected.
Last Origin IPv6 Last IPv6 address from which the XDR agent connected.
Last Scan Date and time of the last malware scan on endpoint.
Last Seen Date and time of the last change in an agent's status. This can occur when Cortex XDR receives a periodic status
report from the agent (once an hour), a user performed a manual Check In, or a security event occurred.
Changes to the agent status can take up to ten minutes to display on Cortex XDR.
Last Used Proxy IP address and port number of proxy that was last used for communication between the agent and Cortex XDR.
Linux Operation Mode (Agent 7.7 and later for Linux) The operation mode in which the agent is running on your Linux endpoint.
Last Upgrade Status Time Date and time of the last upgrade.
MAC Address Endpoint MAC address that corresponds to the IP address. Currently, this information is available only for IPv4
addresses.
Network Interface Relationship between the MAC address and the IP address for agents that can report the network interfaces
information. Information is displayed in JSON format, and searches can be performed on attributes in JSON.
Network Location (Agent v7.1 and later for Windows and agent v7.2 and later for macOS and Linux) Endpoint location is reported by the agent when you enable this capability in the Agent Settings profile.
Protected: The agent is running as configured and did not report any exceptions.
Partially protected: The agent reported one or more exceptions. Clicking on the row shows in the detailed
view why an endpoint may be partially protected.
Managed Device Whether an iOS device has a corporate profile installed on it and is to some extent controlled and managed by the
corporation.
User User that was last logged into the endpoint. On Android endpoints, the Cortex XDR tenant identifies the user from
the email prefix specified during app activation.
Abstract
You can retrieve files from one or more endpoints by initiating a files retrieval request.
During an investigation, you can retrieve files from one or more endpoints by initiating a files retrieval request. For each file retrieval request, Cortex XDR
supports up to:
10 different endpoints
The request instructs the agent to locate the files on the endpoint and upload them to Cortex XDR. The agent collects all requested files into one archive and
includes a log in JSON format containing additional status information. When the files are successfully uploaded, you can download them from the Action
Center.
3. Select the operating system and enter the paths for the files you want to retrieve. Press ADD after each completed path.
You cannot define a path using environment variables on Mac and Linux endpoints.
4. Click Next.
5. Select the target endpoints (up to 10) from which you want to retrieve files and click Next.
To track the status of a file retrieval action, return to the Action Center. Cortex XDR retains retrieved files for up to 30 days.
If at any time you need to cancel the action, right-click, and select Cancel for pending endpoint. You can cancel the retrieval action only if the endpoint is
still in Pending status and no files have been retrieved from it yet. The cancellation does not affect endpoints that are already in the process of retrieving
files.
7. To view additional data and download the retrieved files, right-click the action and select Additional data.
This view displays all endpoints from which files are being retrieved, including their IP Address, Status, and Additional Data, such as error messages or names of files that were not retrieved.
8. When the action status is Completed Successfully, right-click the action and download the retrieved files logs.
If the Password Protection (for downloaded files) setting under Settings → Configuration → General → Server Settings is enabled, enter the password
'suspicious' to download the file.
If you want to prevent Cortex XDR from retrieving files from an endpoint running the agent, you can disable this capability during agent installation or later on from the All Endpoints page. Disabling file retrieval is irreversible. If you later want to re-enable this capability on the endpoint, you must reinstall the agent. See the XDR agent administrator's guide for more information.
Disabling File Retrieval does not take effect on file retrieval actions that are in progress.
Abstract
Retrieve support logs from an endpoint when additional forensic data is needed.
When you need to investigate or share additional forensic data, you can initiate a request to retrieve all the support logs and alert data dump files from an
endpoint. After Cortex XDR receives the logs, you can download the log files or generate a secured link to access them on the Cortex XDR server.
3. Select the target endpoints (up to 10) from which you want to retrieve logs and click Next.
In the next heartbeat, the agent will retrieve the request to package and send all logs to Cortex XDR.
You can also retrieve support files from the All Endpoints table by right-clicking and selecting Endpoint Control → Retrieve Support File.
2. In the Action Center, locate your Support File Retrieval action type and wait for the Status field to display Completed Successfully.
3. When the status is Completed Successfully, right-click and select Additional data.
In the Actions table, you can see the endpoints from which support files were retrieved.
4. Select an endpoint, right-click and select either Download files or Generate support file link.
The secured link is valid for only 7 days. After the 7-day period, you must generate a new support file link to access the files.
To open the file you will need the support file password. For more information, see Retrieve support file password.
Abstract
Learn how to retrieve the password to access files from the Tech Support File (TSF), which is generated in a zip format protected by an encrypted password.
From Cortex XDR agent version 7.8 and later, the Tech Support File (TSF) is generated in a zip format protected by an encrypted password. The TSF file is
archived inside another file which also includes a metadata file that contains a token. The token is used to retrieve the password to unzip the TSF file.
To retrieve the password for a TSF file that was generated on the endpoint (by running the cytool log collect command), use the Tokens and Passwords option on the Cortex XDR server:
a. At the top of the page, click Tokens and Passwords and select Retrieve Support File Password.
b. In the Retrieve Support File Password dialog box, in the Encrypted Password field, paste the token that you copied from the metadata file located in the file that was saved when you ran the cytool log collect command.
c. Click the copy button to copy the password displayed and then click Ok. Use the password to unzip the TSF file.
To retrieve the password for a TSF file that was retrieved through the server, go to the Action Center:
a. Right-click the relevant action of action type Support File Retrieval and select Additional Data.
c. In the Retrieve Support File Password dialog box, in the Encrypted Password field, paste the token that you copied from the metadata file located in the downloaded file.
d. Click the copy button to copy the password displayed and then click Ok. Use the password to unzip the TSF file.
Abstract
The agent can scan your Windows, Mac, and Linux endpoints and attached removable drives for dormant malware that is not actively attempting to run.
In addition to blocking the execution of malware, the Cortex XDR agent can scan your Windows, Mac and Linux endpoints and attached removable drives for
dormant malware that is not actively attempting to run. The agent examines the files on the endpoint according to the Malware Security Profile that is in effect
on the endpoint (quarantine settings, unknown file upload, and so on). When a malicious file is detected during the scan, the agent reports the malware to Cortex
XDR so you can manually take action to remove the malware before it is triggered and attempts to harm the endpoint.
System scan: Initiate a full system scan on demand from Endpoints Administration for an endpoint, as explained in the following procedure.
Periodic scan: Configure periodic full scans that run on the endpoint as part of the malware security profile. To configure periodic scans, see Set up
malware prevention profiles.
Custom scan: (Windows, requires agent v7.1 or later) The end user can initiate a scan on demand to examine a specific file or folder. For more
information, see the Cortex XDR Agent Administrator's Guide for Windows.
You can initiate full scans of one or more endpoints from the All Endpoints table or the Action Center. After initiating a scan, you can monitor the scan progress
in the Action Center. Scan time varies depending on the number of endpoints, connectivity to those endpoints, and the number of files that must be examined.
3. Click Next.
4. Select the target endpoints (up to 100) on which you want to scan for malware.
Scanning is available on Windows, Mac and Linux endpoints. Cortex XDR automatically filters out any endpoints for which scanning is not supported.
Scanning is also not available for inactive endpoints.
5. Click Next.
6. Review the action summary and click Done. Cortex XDR initiates the action at the next heartbeat and sends the request to the agent to initiate a malware
scan.
When the status is Completed Successfully, you can view the scan results.
After an agent completes a scan, it reports the results to Cortex XDR. To view the scan results for an endpoint:
a. In the Action Center, right-click the scan action and select Additional data.
b. Right-click the endpoint for which you want to view the scan results and select View related security events.
Cortex XDR displays a filtered list of malware alerts for files that were detected on the endpoint during the scan.
Manage file execution on your endpoints by adding file hashes to your allow and block lists.
Quarantine files and manage the files automatically quarantined by Cortex XDR.
Review the file verdict and the WildFire Analysis Report for a file.
Import hashes from the Endpoint Security Manager or from external feeds.
Abstract
Set rules for the execution (or running) of particular files on your endpoints in Cortex XDR.
You can manage file execution on your endpoints by adding file hashes to your allow and block lists. If you trust a certain file and know it to be benign, you can
add the file hash to the allow list. This allows the file to be executed on all your endpoints regardless of the WildFire or local analysis verdict. Similarly, if you
want to always block a file from running on your endpoints, you can add the associated hash to the block list.
Adding files to the allow and block lists takes precedence over any other policy rules that are applied to these files. In the Action Center, you can monitor the
allow and block list actions performed in your network, and add or remove files from these lists.
PS1
Linux ELF
You can add up to 100 file hashes at one time. If you add a comment, it is added to all the hashes you added in this action.
4. Click Next.
In the next heartbeat, the agent retrieves the updated lists from Cortex XDR.
6. You are automatically redirected to the Block List or Allow List that corresponds to the action in the Action Center.
7. To manage the file hashes on the Block List or the Allow List, right-click a file to see the available actions.
Abstract
You can review and manage all files that have been quarantined by the agent due to a security incident.
When the agent detects malware on a Windows endpoint, you can take additional precautions to quarantine the file. When the agent quarantines malware, it
moves the file from the location on a local or removable drive to a local quarantine folder (%PROGRAMDATA%\Cyvera\Quarantine) where it isolates the file.
This prevents the file from attempting to run again from the same path or causing any harm to your endpoints.
To evaluate whether an executable file is considered malicious, the agent calculates a verdict using information from the following sources in order of priority:
Local analysis
Enable the agent to automatically quarantine malicious executables by configuring quarantine settings in a Malware prevention profile. For more
information, see Set up malware prevention profiles.
Right-click a specific file from the causality view and select Quarantine. For more information, see Causality view.
1. To view the quarantined files in your network, go to Incident Response → Response → Action Center → File Quarantine.
Toggle between the Detailed and Aggregated By SHA256 tabs to see information on your quarantined files.
In the Detailed view, filter and review the Endpoint Name, Domain, File Path, Quarantine Source, and Quarantine Date of all the quarantined files. You
can take the following actions:
Reinstate a quarantined file: Right-click one or more rows and select Restore all files by SHA256.
This will restore all files with the same hash on all of your endpoints.
Review the quarantined file inspection results on VirusTotal: Right-click the Hash field and select Open in VirusTotal.
Drill down on the hash value: Right-click the Hash field and select Open Hash View. You can see each of the process executions, file operations,
incidents, actions, and threat intelligence reports relating to the hash value.
Search for where the hash value appears in Cortex XDR: Right-click the Hash field and select Open in Quick Launcher.
Export to file: Click the icon on the top right corner to download a detailed list of the quarantined hashes in a TSV format.
Open the Quarantine Details page: Right-click a row and select Additional Data to open the page detailing the Endpoint Name, Domain, File Path,
Quarantine Source, and Quarantine Date of a specific file hash.
Permanently delete quarantined files on the endpoint: Right-click and select Delete all files by SHA256.
Abstract
For each file, Cortex XDR receives a file verdict and the WildFire Analysis Report detailing additional information you can use to assess the nature of a file.
For each file, Cortex XDR receives a file verdict and the WildFire Analysis Report. This report contains detailed sample information and behavior analysis in
different sandbox environments, leading to the WildFire verdict. You can use the report to assess whether the file poses a real threat on an endpoint. The
details in the WildFire analysis report for each event vary depending on the file type and the behavior of the file.
WildFire analysis details are available for files that receive a WildFire verdict. The Analysis Reports section includes the WildFire analysis for each testing
environment based on the observed behavior for the file.
If you are analyzing an incident in the incident detail view, you can see artifact details on the Key Assets & Artifacts tab. Under Artifacts, identify a file with
a WildFire verdict and click WildFire Analysis Report. If you are analyzing an alert, hover over the alert and select Investigate. You can open the WildFire
report of any file included in the alert Causality Chain.
Cortex XDR displays a preview of WildFire reports that were generated within the last two years. To view a report that was generated more than
two years ago, you can download the report.
On the left side of the report, you can see all the environments in which the WildFire service tested the sample. If a file is low risk and WildFire can easily
determine that it is safe, only static analysis is performed on the file. Select the testing environment to review the summary and additional details. To learn
more about the behavior summary, see WildFire Analysis Reports—Close Up.
If you want to download the WildFire report as it was generated by the WildFire service, click the download icon. The report is downloaded in PDF format.
If you know the WildFire verdict is incorrect, for example, WildFire assigned a Malware verdict to a file you wrote and know to be Benign, you can report an
incorrect verdict to Cortex XDR to request the verdict change.
1. Open the WildFire report and verify the verdict that you are reporting.
4. Under Comment, enter any details that can help us to better understand why you disagree with the verdict.
6. Click OK.
The threat team will perform further analysis of the sample to determine whether it should be reclassified. If a malware sample is determined to be safe,
the signature for the file is disabled in an upcoming antivirus signature update. If a benign file is determined to be malicious, a new signature is
generated. After the investigation is complete, you will receive an email describing the action that was taken.
Abstract
You can import file hash exceptions from the Endpoint Security Manager or from external feeds.
The Action Center displays information on files that are quarantined, or included in the allow list and block list. To import hashes from the Endpoint Security
Manager or from external feeds, take the following steps:
Files must be in CSV format, for example Verdict_Override_Exports.csv. If necessary, resolve any conflicts encountered during the upload and retry.
4. Click Next.
Cortex XDR imports your hashes. Depending on the assigned verdict, Cortex XDR then distributes them to the allow list or block list.
To assist you with your investigation, Cortex XDR provides response actions for investigating and remediating endpoints. For example, if you detect a
compromised endpoint, you can isolate it from your network. This action prevents the endpoint from communicating with other internal or external devices,
thereby reducing an attacker’s mobility on your network.
For response actions that rely on the Cortex XDR agent, the following table describes the supported platforms and minimum agent version. A dash (—)
indicates that the setting is not supported.
Isolate an Endpoint: Halts all network access on the endpoint except for traffic to Cortex XDR. This prevents a compromised endpoint from communicating with other internal or external devices. Windows: Agent 6.0 and later. macOS: Agent 7.3 and later on macOS 10.15.4 and later. Linux: Agent 7.7 and later.
Search and Destroy Malicious Files: Searches for the presence of known and suspected malicious files on endpoints, and destroys the file on endpoints where it exists. Windows: Agent 7.2 and later. macOS: Agent 7.3 and later on macOS 10.15.4 and later. Linux: —
Abstract
Initiate a Live Terminal session from the Cortex XDR management console to control the endpoint remotely.
To investigate and respond to security events on endpoints, you can use the Live Terminal to initiate a remote connection to an endpoint. The remote
connection is facilitated by the Cortex XDR agent by using a remote procedure call. With the Live Terminal you can manage remote endpoints, and perform
investigation and response actions on endpoints. Actions include:
Live Terminal is supported for endpoints that meet the following requirements:
Windows: Windows update patch for WinCRT (KB 2999226). To verify the hotfixes that are installed on the endpoint, run the systeminfo command from a command prompt (a quick check is sketched after this list). Endpoint activity reported within the last 90 minutes (as identified by the Last Seen time stamp in the endpoint details).
macOS: Endpoint activity reported within the last 90 minutes (as identified by the Last Seen time stamp in the endpoint details).
Linux: Any supported Linux version as listed in Where Can I Install the Cortex XDR Agent? in the Palo Alto Networks Compatibility Matrix. Endpoint activity reported within the last 90 minutes (as identified by the Last Seen time stamp in the endpoint details).
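A quick way to confirm that the WinCRT patch is present is to filter the systeminfo output for the KB number. This is a generic Windows command-line sketch rather than a command from the product documentation:
systeminfo | findstr /C:"KB2999226"
If the command returns no output, the hotfix is not installed on the endpoint.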
You can run PowerShell 5.0 or a later release in a Live Terminal session on Windows endpoints.
1. You can initiate a Live Terminal session from the All Endpoints page. Right-click an endpoint and select Security Operations → Initiate Live Terminal. It
might take the Cortex XDR agent a few minutes to facilitate the connection.
You can also initiate a Live Terminal as a response action to a security event. If the endpoint is inactive or does not meet the requirements, the option is
disabled.
2. Use the Live Terminal to investigate and take action on the endpoint.
You can fine-tune the Live Terminal session visibility on the endpoint by adjusting the User Interface options in your Agent Settings Profile.
After you terminate the Live Terminal session, you can save a session report that logs all actions from the Live Terminal session. The report is available for
download as a text file report when you close the live terminal session.
Example 19.
Jun 27th 2019 13:56:13 Live Terminal session has started [success]
Jun 27th 2019 14:00:45 Kill process calc.exe (4920) [success]
Jun 27th 2019 14:11:46 Live Terminal session end request [success]
Jun 27th 2019 14:11:47 Live Terminal session has ended [success]
From the Live Terminal you can monitor processes running on the endpoint. The Task Manager displays the task attributes, owner, and resources used. If you
discover an anomalous process while investigating the cause of a security event, you can take immediate action to terminate the process or the whole process
tree, and block processes from running.
1. From the Live Terminal session, open the Task Manager to navigate the active processes on the endpoint.
You can toggle between a sorted list of processes and the default process tree view. You can also export the list of processes and process details
to a comma-separated values file. If the process is known malware, the row displays a red indicator and identifies the file using a malware attribute.
Suspend process: To stop an attack while investigating the cause, you can suspend a process or process tree without killing it entirely.
Open in VirusTotal: VirusTotal aggregates known malware from antivirus products and online scan engines. You can scan a file using the VirusTotal
scan service to check for false positives or verify suspected malware.
Get WildFire verdict: WildFire evaluates the file hash signature to compare it against known threats.
Get file hash: Obtain the SHA256 hash value of the process.
Download Binary: Download the file binary to your local host for further investigation and analysis. You can download files up to 200MB in size.
Mark as Interesting: Add an Interesting tag to a process so that you can easily locate the process in the session report.
Remove from Interesting: If no threats are found, you can remove the Interesting tag.
Choose whether to save the session report including files and tasks marked as interesting. Administrator actions are not saved to the endpoint.
The File Explorer enables you to navigate the file system on the remote endpoint and take the following actions:
Create, move, delete, or download files, folders, and drives, including connected external drives and devices such as USB drives and CD-ROM.
View file attributes, creation and last modified dates, and the file owner.
2. Navigate through the file directory on the endpoint and manage your files. To locate a specific file, you can search for any file name shown on the screen
using the search bar, or you can double-click a folder to explore its contents.
4. Investigate files for malware. Right-click a file to see the available actions:
Open in VirusTotal: VirusTotal aggregates known malware from antivirus products and online scan engines. You can scan a file using the VirusTotal
scan service to check for false positives or verify suspected malware.
Get WildFire verdict: WildFire evaluates the file hash signature to compare it against known threats.
Get file hash: Obtain the SHA256 hash value of the file.
Download Binary: Download the file binary to your local host for further investigation and analysis. You can download files up to 200MB in size.
Mark as Interesting: Add an Interesting tag to a file or directory so that you can easily locate the file in the session report.
Remove from Interesting: If no threats are found, you can remove the Interesting tag.
Choose whether to save the live terminal session report including files and tasks marked as interesting. Administrator actions are not saved to the
endpoint.
The Live Terminal provides a command line interface for running operating system commands on a remote endpoint. Each command runs independently and
is not persistent.
2. Run commands to manage the endpoint. For example, you can manage files or launch batch files.
You can enter or paste the commands into the command line interface, or you can upload a script. To chain multiple commands together use &&, as
shown in the following example:
Example 20.
cd c:\windows\temp\ && <command1> && <command2>
Choose whether to save the live terminal session report including files and tasks marked as interesting. Administrator actions are not saved to the
endpoint.
The Live Terminal provides a Python command line interface for running Python commands and scripts. The Python command interpreter uses Unix command
syntax and supports Python 3 with standard Python libraries.
1. From the Live Terminal session, select Python to start the python command interpreter on the remote endpoint.
You can enter or paste the commands into the command line interface, or you can upload a script (a small illustrative snippet follows this procedure).
Choose whether to save the live terminal session report including files and tasks marked as interesting. Administrator actions are not saved to the
endpoint.
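For example, you can paste a few lines of standard-library Python into the interpreter to inspect the endpoint. This is a minimal illustrative snippet that assumes only the standard Python 3 libraries; the directory shown is a placeholder:
import os
import platform

# Print basic host details and list the contents of a directory of interest
print(platform.platform())
for name in os.listdir("/var/log"):  # placeholder path; adjust for the endpoint you are investigating
    print(name)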
If you want to prevent Cortex XDR from initiating Live Terminal remote sessions on an endpoint that is running the Cortex XDR agent, you can disable this
capability during agent installation or through Cortex XDR Endpoint Administration. Disabling Live Terminal is irreversible. If you later want to re-enable this
capability on the endpoint, you must re-install the Cortex XDR agent.
Disabling Live Terminal does not take effect on sessions that are in progress.
Abstract
In the event that an endpoint is compromised, you can immediately isolate it to reduce an attacker’s mobility.
When you isolate an endpoint, you halt all network access on the endpoint except for traffic to Cortex XDR. This can prevent a compromised endpoint from
communicating with other endpoints, thereby reducing an attacker’s mobility on your network. After the agent receives the instruction to isolate the endpoint
and carries out the action, Cortex XDR shows an Isolated status. To ensure an endpoint remains in isolation, agent upgrades are not available for isolated
endpoints.
IP-based file storage protocol traffic will also be blocked. This might affect endpoint functionality if the endpoint uses such mounts.
Network isolation is supported for endpoints that meet the following requirements:
(VDI) The network isolation allow list in the agent settings profile is configured to ensure VDI sessions remain uninterrupted. For more information, see Set up agent settings profiles.
(Mac) Network isolation on Mac endpoints does not terminate active connections that were initiated before the agent was installed on the endpoint.
(Linux) The kernel includes the following configuration options:
CONFIG_NETFILTER
CONFIG_IP_NF_IPTABLES
CONFIG_IP_NF_MATCH_OWNER
Network isolation on Linux endpoints is based on the defined IP addresses and ports.
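To confirm that a Linux endpoint's kernel exposes these options, you can inspect the kernel configuration file. This is a generic sketch that assumes a distribution shipping its kernel configuration under /boot; it is not a command from the product documentation:
grep -E 'CONFIG_NETFILTER|CONFIG_IP_NF_IPTABLES|CONFIG_IP_NF_MATCH_OWNER' /boot/config-$(uname -r)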
1. Go to Incident Response → Response → Action Center → New Action and select Isolate.
You can also initiate the action (for one or more endpoints) from the Isolation page of the Action Center or from Endpoints → Endpoint Management →
Endpoint Administration.
2. Enter a Comment to provide additional background or other information that explains why you isolated the endpoint.
After you isolate an endpoint, Cortex XDR displays the Isolation Comment under Action Center → Isolation. If needed, you can edit the comment from
the right-click pivot menu.
3. Click Next.
4. Select the target endpoint that you want to isolate from your network.
5. Click Next.
In the next heartbeat, the agent will receive the isolation request from Cortex XDR.
7. To track the status of an isolation action, go to Incident Response → Response → Action Center → Currently Applied Actions → Endpoint Isolation.
After initiating an isolation action, you can cancel it by right-clicking the action and selecting Cancel for pending endpoint. You can cancel the
isolation action only if the endpoint is still in Pending status and has not been isolated yet.
8. After you remediate the endpoint, cancel endpoint isolation to resume normal communication.
You can cancel isolation from Actions Center → Isolation or from Endpoints → Endpoint Management → Endpoint Administration. From either place
right-click the endpoint and select Endpoint Control → Cancel Endpoint Isolation.
If file system operations become unresponsive during isolation, such as being unable to list folder content, unmount the mounted network shares.
Abstract
From agent version 7.7 and later, you can pause the agent protection capabilities on one or more endpoints while maintaining connectivity with Cortex XDR. By
pausing only the protection and retaining connectivity, the agent runs with all profiles disabled but continues to send data and to carry out actions received from
the server. When you are ready, you can resume the endpoint protection.
2. In the All Endpoints page, select the endpoints on which you want to pause protection, right-click and select Endpoint Control → Pause Endpoint
Protection.
3. Verify the endpoints, add an optional comment that appears in the Management Audit log, and Pause the protection.
Paused endpoints display a pause icon in the Endpoint Name field, and one of the following action statuses in the Manual Protection Pause field:
Protection Active
Pending Pause
Protection Paused
Pending Activation
4. When you are ready to resume protection, select the paused endpoints, right-click and select Endpoint Control → Resume Endpoint Protection and
Resume protection on the listed endpoints.
Go to Incident Response → Response → Action Center and locate Action Type Pause Endpoint Protection or Resume Endpoint Protection.
Abstract
You can obtain action remediation suggestions from Cortex XDR about malicious causality chains that have been detected.
When investigating suspicious incidents and causality chains, you might need to restore and revert changes made to your endpoints as a result of malicious
activity. To avoid manually searching for the affected files and registry keys on your endpoints, you can request remediation suggestions.
To initiate remediation suggestions, you must have the following system requirements:
An App Administrator, Privileged Responder, or Privileged Security Admin role, or other role permissions that include the remediation permissions.
1. You can initiate a remediation suggestions analysis from the following places:
In the Incidents view, click the more options icon in the incident panel and select Remediation Suggestions.
Endpoints that are part of the Incident view and do not meet the required criteria are excluded from the remediation analysis.
Right-click any process node involved in the causality chain and select Remediation Suggestion.
Analysis can take a few minutes. You can minimize the analysis pop-up if desired while navigating to other pages.
Field descriptions
Original Event Description: Summary of the initial event that triggered the malicious causality chain.
Original Event Timestamp: Timestamp of the initial event that triggered the malicious causality chain.
Suggested Remediation: Action suggested by the remediation scan for you to apply to the causality chain process:
Delete File
Restore File
Rename File
Terminate Process
Terminate Causality: Terminate the entire causality chain of processes that have been executed under the process tree of the listed Causality Group Owner (CGO) process name.
Manual Remediation
Suggested Remediation Description: Summary of the remediation suggestion to apply to the file or registry.
Remediation Date: Timestamp of when all of the endpoint artifacts were remediated. If the remediation was not fully successful, the field does not display a timestamp.
Go to Response → Action Center → All Actions and locate your remediation process in the Action Type field. Right-click and select Additional data to open the
Detailed Results window.
Abstract
For enhanced endpoint remediation and endpoint management, you can run Python 3.7 scripts on your endpoints directly from Cortex XDR. For commonly
used actions, Cortex XDR provides out-of-the-box scripts. You can also write and upload your own Python scripts and code snippets into Cortex XDR for
custom actions. Cortex XDR enables you to manage, run, and track the script execution on the endpoints, and store and display the execution results per
endpoint.
To run scripts on an endpoint, you must have the following system requirements:
Endpoints running the Agent v7.1 and later. Since the agent uses its built-in capabilities and many available Python modules to execute the scripts, no
additional setup is required on the endpoint.
Role in the hub with the following permissions to run and configure scripts:
Script configuration (required to upload a new script, run a snippet, and edit an existing script)
Scripts (required to view the Scripts Library and the script execution results)
Running snippets requires both Run High-risk scripts and Script configuration permissions. Additionally, all scripts are executed as System User on the
endpoint.
Your scripts are available in the Action Center → Scripts Library, including out-of-the-box scripts and custom scripts. From the Scripts Library, you can view the
script code and metadata, and perform the following actions from the right-click pivot menu:
Run: Run the selected script. Cortex XDR redirects you to the Action Center where the details of this script are populated in the new action fields.
Edit: Edit the script code or metadata. This option is not available for the out-of-the-box scripts.
The following table describes the default and optional fields that you can view in the Scripts Library. The fields are in alphabetical order.
Created By: User who created the script. For out-of-the-box scripts, the user name is Palo Alto Networks.
Description: Script description is an optional field that can be completed when creating, uploading, or editing a script.
Modification Date: Date and time in which the script or its attributes were last edited.
Name: Script name is a mandatory field that can be completed when creating, uploading, or editing a script.
Out-of-the-box scripts
Palo Alto Networks provides out-of-the-box scripts. You can view the scripts, download the script code and metadata, and duplicate the scripts; however, you
cannot edit the code or definitions of out-of-the-box scripts.
The following table lists the out-of-the-box scripts provided by Palo Alto Networks, in alphabetical order. New scripts are continuously uploaded into Cortex
XDR through content updates, and are labeled New for a period of three days.
file_exists: Search for a specific file on the endpoint according to the full path.
get_process_list: List CPU and memory usage for all processes running on the endpoint.
list_directories: List all directories under a specific path on the endpoint. You can limit the number of levels you want to list.
process_kill_cpu: Set a minimum CPU value and kill all processes on the endpoint that are using higher CPU.
process_kill_mem: Set a minimum RAM usage in bytes and kill all processes on the endpoint that are using higher private memory.
Since all scripts run under the System context, you cannot perform any registry operations on user-specific hives (HKEY_CURRENT_USER of a specific user).
Drag your script file into the window, or browse and select it. During upload, Cortex XDR parses the script to ensure you are using only supported Python
modules. Click supported modules to view the supported modules list. If your script is using unsupported Python modules, or if your script is not using
proper indentation, you will be required to fix it. You can use the editor to update your script directly in Cortex XDR.
General: Specify the general script definitions including name and description, risk categorization, supported operating systems, and timeout in
seconds.
Input: Set the starting execution point of your script code. To execute the script line by line, select Just run. Alternatively, to set a specific function
in the code as the entry point, select Run by entry point. Select the function from the list, and specify for each function parameter its type.
Output: If your script returns an output, specify the output type. Cortex XDR displays this information in the script results table.
Single parameter: If the script returns a single parameter, select the output type from the list and the output will be displayed as is. To
detect the type automatically, select Auto Detect.
Dictionary: If the script returns multiple values, select Dictionary. By default, Cortex XDR displays the dictionary value as is in the script
results table.
To improve the display of the script results table and enable filtering, you can assign user-friendly names and types to your dictionary keys.
To retrieve files from the endpoint, add the files_to_get key to the dictionary. This key includes an array of paths from which files will be
retrieved from the endpoint (see the sketch after this procedure).
3. When you are finished, create the new script. The script is uploaded to the Scripts Library.
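The following minimal Python sketch shows what a script entry point that returns a dictionary might look like, including the files_to_get key described above. The function name, parameter, and paths are placeholders for illustration and are not taken from the product documentation:
def run(path):
    # Build a simple result set for the script results table
    result = {
        "checked_path": path,
        "status": "done",
        # Paths listed under files_to_get are retrieved from the endpoint
        "files_to_get": ["C:\\Temp\\example.log"],
    }
    return result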
Create a script manifest
You can create a script manifest to automatically enter file definitions for a script. For more information see Step 2 in Upload your scripts.
The script manifest file that you upload into Cortex XDR has to be a single-line textual file, in the exact format explained below. If your file is structured
differently, the manifest validation will fail and you will be required to fix the file.
Example 21.
In this example, each parameter is shown on a new line. However, when you create your file, you must remove any \n or \t characters.
{
"name":"script name",
"description":"script description",
"outcome":"High Risk|Standard",
"platform":"Windows,macOS,Linux",
"timeout":600,
"entry_point":"entry_point_name",
"entry_point_definition":{
name: You can use letters and digits. Avoid the use of special characters.
outcome: If a script is potentially harmful, set it as High Risk to limit the user roles that can run it. Otherwise, set it as Standard.
platform: Enter the name of the operating system this script supports. The options are Windows, macOS, and Linux. If you need to define more than one, use a comma as a separator.
timeout: Enter the number of seconds after which the Cortex XDR agent halts the script execution on the endpoint.
entry_point: To Run by entry point, you must specify the entry point name, and all input and output definitions. The supported parameter and output types are:
auto_detect
boolean
number
string
ip
number_list
string_list
ip_list
To set the script to Just run, leave both the Entry_point and Entry_point_definitions empty:
Example 22.
{
"name":"script name",
"description":"script description",
"outcome":"High Risk|Standard",
"platform":"Windows,macOS,Linux",
"timeout":600,
"entry_point":"",
"entry_point_definition":{}
}
When you run a script, you can see the script execution in the Action Center and track the script execution status. The Status indicates the action's progress,
which includes the general action status and the breakdown by endpoints included in the action. The following table lists the possible status of a script
execution action for each endpoint, in alphabetical order:
Aborted: The script execution action was aborted after it was already in progress on the endpoint.
Canceled: The script execution action was canceled before the agent pulled the request from the server.
Completed Successfully: The script was executed successfully on the endpoint with no exceptions.
Expired: The script execution actions expire after four days. After an action expires, the status of any remaining pending actions on endpoints changes to Expired and these endpoints will not receive the action.
Pending: The agent has not yet pulled the script execution request from the server.
Pending Abort: The agent is in the process of executing the script, and has not pulled the abort request from the server yet.
Timeout: The script execution reached its configured time out and the agent stopped the execution on the endpoint.
You can use Interactive Mode to dynamically track the script execution progress on all target endpoints and view the results as they are being received in real-
time. Additionally, you can start executing more scripts on the same scope of target endpoints.
To initiate Interactive Mode for a script that is already running, in the Action Center, right-click the execution action of the relevant script and select Open in
interactive mode.
Cancel or abort script execution
You can cancel or abort a script execution action for Pending and In Progress actions:
When the script execution action is Pending, the agent has not yet pulled the request from the Cortex XDR server. When you cancel a pending action,
the server pulls back the request and updates the action status to Canceled. To cancel the action for all pending endpoints, go to the Action Center,
right-click the action and Cancel for pending endpoints. Alternatively, to cancel a pending action for specific endpoints, go to Action Center → Additional
data → Detailed Results, right-click the endpoint(s) and Cancel pending action.
When the script execution action is In Progress, the agent has begun running the script on the endpoint. When you abort an action that is in progress,
the agent halts the script execution on the endpoint and updates the action status to Aborted. To abort the action for all In Progress endpoints and
cancel the action for any Pending endpoints, go to the Action Center, right-click the action and Abort and cancel execution. Alternatively, to abort an in
progress action for specific endpoints, go to Action Center → Additional data → Detailed Results, right-click the endpoints and Abort for endpoint in
progress.
Cortex XDR logs all script execution actions, including the script results and the parameters specified when running the script. To view full details about the
run, including returned values, right-click the script and select Additional data.
The script results are divided into the upper bar and the main view. The upper bar displays the script metadata, including the script name and entry point, the
script execution action status, the parameter values used in this run, and the target endpoints scope. You can also download the exact code used in this run as
a .py file.
Main results view: Displays a table listing all target endpoints and their details.
In addition to the endpoint details (name, IP, domain, etc.), the following table describes the default and additional optional fields that you can view per
endpoint. The fields are in alphabetical order.
Returned values: If your script returned values, the values are also listed in the additional data table according to your script output definitions.
Execution timestamp: Date and time the agent started the script execution on the endpoint. If the execution has not started yet, this field is empty.
Failed files: Number of files the agent failed to retrieve from the endpoint.
Retention date: Date after which the retrieved file will no longer be available for download. The value is 90 days from the execution date.
Retrieved files: Number of files that were successfully retrieved from the endpoint.
Status: See the list of statuses and their descriptions in Track script execution.
For each endpoint, you can right-click to download the script stdout, download retrieved files, and view returned exceptions. You can also Export to file
to download the detailed results table in TSV format.
Aggregated results: A visualization of the script results. Cortex XDR automatically aggregates only results that have a small variety of values. To see
how many of the script results were aggregated successfully, see the counts on the toggle (for example, aggregated results 4/5). You can filter the
results to adjust the endpoints considered in the aggregation. You can also generate a PDF report of the aggregated results view.
Rerun a script
You can select a script execution action in the Action Center and rerun it. When you rerun a script, the same parameter values, target endpoints, and defined
timeout are used, as defined in the previous run. However, you can make changes to the script before rerunning it. In addition, if the target endpoints in the
original run were defined using a filter, the filter will be recalculated when you rerun the script.
Cortex XDR uses the current version of the script. If the script has been deleted, or the supported operating system definition has been modified since the
previous run, you will not be able to rerun the script.
1. From the Action Center, right-click the script you want to rerun and select Rerun.
You are redirected to the final summary stage of the script execution action.
To run the script with the same parameters and on the same target endpoints as the previous run, click Done. To change any of the previous run
definitions, navigate through the wizard and make the necessary changes. Then, click Done. The script execution action is added to the Action Center.
To understand why a script returned Failed execution status, you can take the following actions:
1. Check script exceptions: If the script generated exceptions, you can view them to learn why the script execution failed. From the Action Center, right-
click the Failed script and select Additional data. In the Script Results table, right-click an endpoint for which the script execution failed and select View
exceptions. The agent executes scripts on Windows endpoints as a SYSTEM user, and on Mac and Linux endpoints as a root user. These context
differences could cause differences in behavior, for instance when using environment variables.
2. Validate custom scripts: If a custom script that you uploaded failed, and the reason the script failed is still unclear from the exceptions or if the script
did not generate any exceptions, try to identify whether it failed due to an error in Cortex XDR or an error in the script. To identify the error source,
execute the script without the agent on the same endpoint with a regular Python 3.7 installation. If the script execution is unsuccessful, you should fix your
script. Otherwise, if the script was executed successfully with no errors, contact Customer Support.
If you want to prevent Cortex XDR from running scripts on an agent, you can disable this capability during agent installation, or through Endpoint
Administration. Disabling script execution is irreversible. If you want to re-enable this capability on the endpoint, you must reinstall the agent. For more
information, see the Cortex XDR Agent Administrator’s Guide.
Disabling Script Execution does not take effect on scripts that are in progress.
Abstract
Cortex XDR enables you to effectively hunt down any identified malicious file that may exist on any of your endpoints.
To take immediate action on known and suspected malicious files, you can search and destroy the files. After identifying the presence of a malicious file, you
can immediately destroy the file from any or all endpoints on which the file exists.
The agent builds a local database on the endpoint with a list of all the files, including their path, hash, and additional metadata. Depending on the number of
files and the disk size of each endpoint, it can take a few days for Cortex XDR to complete the initial endpoint scan and populate the files database. You
cannot search an endpoint until the initial scan is complete and all file hashes are calculated.
After the initial scan is complete, the agent retains a snapshot of the endpoint files inventory. The agent maintains the files database by initiating periodic scans
and closely monitoring all actions performed on the files.
You can search for specific files according to the file hash, the file full path, or a partial path using regex parameters from the Action Center or the Query
Builder. When you find the file, you can select it in the search results and destroy the file by hash or by path. If you already know the path or hash, you can also
destroy a file from the Action Center without performing a search. When you destroy a file by hash, all the file instances on the endpoint are removed.
You can validate a hash against VirusTotal and WildFire to provide additional context before initiating the File Destroy action.
The Cortex XDR agent does not include the following information in the local files inventory:
Information about files that existed on the endpoint and were deleted before the Cortex XDR agent was installed.
Information about files whose size exceeds the maximum file size for hash calculations that is preconfigured in Cortex XDR.
If the Agent Settings Profile on the endpoint is configured to monitor common file types only, then the local files inventory includes information about
these file types only. You cannot search or destroy file types that are not included in the list of common file types.
The following are prerequisites to enable Cortex XDR to search and destroy files on your endpoints:
Supported platforms:
Windows: Cortex XDR agent version 7.2 or a later release. If you plan to enable Search and Destroy on VDI sessions, you must perform the initial scan on
the Golden Image.
Mac: Cortex XDR agent version 7.3 or a later release running on macOS version 10.15.4 or later.
Ensure File Search and Destroy is enabled for your Cortex XDR agent.
Search a file
You can search for files on the endpoint by file hash or file path. The search returns all instances of this file on the endpoint. You can then immediately destroy
all of the file instances on the endpoint, or upload the file to Cortex XDR for further investigation.
You can search for a file using the Query Builder, or use the Action Center wizard as described in the following workflow.
To search by path, enter the specific path for the file on the endpoint or specify the path using wildcards. When you provide a partial path or partial
file name using *, the search will return all the results that match the partial expression. Note the following limitations:
The file path must begin with a drive name, for example: c:\.
You must specify the exact path folder hierarchy, for example c:\users\user\file.exe. You must specify the exact path folder
hierarchy also when you replace folder names with wildcards, by using a wildcard for each folder in the hierarchy. For example,
c:\*\*\file.exe.
Click Next.
3. Select the target endpoints on which you want to search for the file. Cortex XDR displays only endpoints eligible for file search. Click Next.
Cortex XDR displays the summary of the file search action. If you need to change your settings, go Back. If all the details are correct, click Run. The File
search action is added to the Action Center.
In the Action Center, you can monitor the action progress in real-time and view the search results for all target endpoints. For a detailed view of the
results, right-click the action and select Additional data. Cortex XDR displays the search criteria, timestamp, and real-time status of the action on the
target endpoints. You can:
View results by file (default view): Cortex XDR displays the first 100 instances of the file from every endpoint. Each search result includes details
about the endpoint (such as endpoint status, name, IP address, and operating system) and details about the file instance (such as full file name
and path, hash values, and creation and modification dates).
View the results by endpoint: For each endpoint in the search results, Cortex XDR displays details about the endpoint (such as endpoint status,
name, IP address, and operating system), the search action status, and details about the file (whether it exists on the endpoint or not, how many
instances of the file exist on the endpoint, and the last time the action was updated).
If not all endpoints in the query scope are connected or the search has not completed, the search action remains in Pending status.
After you locate the malicious file instances on all your endpoints, proceed to destroy all the file instances on the endpoint. From the search results
Additional data, right-click the file to immediately Destroy by path, Destroy by hash, or Get file to upload it to Cortex XDR for further examination.
Destroy a file
When you know a file is malicious, you can destroy all of its instances on your endpoints, directly from Cortex XDR. You can destroy a file immediately from the
File search action result, or initiate a new action from the Action Center. When you destroy a file, the Cortex XDR agent deletes all the file instances on the
endpoint. To destroy a file from the file search results, see Search a file.
2. To destroy by hash, provide the SHA256 of the file. To destroy by path, specify the exact file path and file name. Click Next.
3. Select the target endpoints from which you want to remove the file. Cortex XDR displays only endpoints eligible for file destroy. When you’re done, click
Next.
Cortex XDR displays the summary of the file destroy action. If you need to change your settings, go Back. If all the details are correct, click Run. The File
destroy action is added to the Action Center.
Abstract
An External Dynamic List (EDL) is a text file hosted on an external web server that your Palo Alto Networks firewall uses to control user access to
IP addresses and domains that Cortex XDR has found to be associated with an alert.
Cortex XDR hosts two external dynamic lists that you can configure and manage:
IP Addresses EDL
Domains EDL
An App Administrator, Privileged Investigator, or Privileged Security Admin role which includes EDL permissions
1. Enable EDL.
b. Enable External Dynamic List and enter the Username and Password that the Palo Alto Networks firewall should use to access the EDL.
Testing is currently only available using the following curl and Windows PowerShell commands:
For Linux/OS/Windows
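The exact test commands are not reproduced here. As a rough sketch only, assuming a placeholder EDL URL and the credentials configured in the previous step, a basic authenticated request with curl would look similar to the following:
curl -u <username>:<password> "<IP Addresses EDL URL>"
A successful request should return the current contents of the list as plain text.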
3. Record the IP Addresses EDL URL and the Domains EDL URL. You will need these URLs in the coming steps to point the firewall to these lists.
b. On the firewall, select Device → Certificate Management → Certificates and Import the certificate. Make sure to give the device certificate a
descriptive name, and select OK to save the certificate.
c. Select Device → Certificate Management → Certificate Profile and Add a new certificate profile.
d. Give the profile a descriptive name and Add the certificate to the profile.
6. Set the Cortex XDR EDL as the source for a firewall EDL.
For more detailed information about how Palo Alto Networks firewall EDLs work, how you can use EDLs, and how to configure them, review how to Use
an External Dynamic List in Policy.
a. On the firewall, select Objects → External Dynamic Lists and Add a new list.
c. Enter the IP Addresses Block List URL or the Domains Block List URL that you recorded in the last step as the list Source.
d. Select the Certificate Profile that you created in the last step.
e. Select Client Authentication and enter the username and password that the firewall must use to access the EDL.
f. Use the Repeat field to define how frequently the firewall retrieves the latest list from Cortex XDR .
7. Select Policies → Security and Add or edit a security policy rule to add the Cortex XDR EDL as match criteria to a security policy rule.
Review the different ways you can Enforce Policy on an External Dynamic List; this topic describes the complete workflow to add an EDL as match
criteria to a security policy rule.
b. In the Destination tab, select Destination Zone and select the external dynamic list as the Destination Address.
c. Click OK to save the security policy rule and Commit your changes.
You do not need to perform an additional commit or make any subsequent configuration changes for the firewall to enforce the EDL as part of your
security policy; even as you update the Cortex XDR EDL, the firewall will enforce the list most recently retrieved from Cortex XDR .
You can also use the IP list and URL lists as part of a URL Filtering policy, or the domain list as part of a custom Anti-Spyware profile.
You can add to your IP address or Domain lists as you triage alerts from the Action Center or throughout Cortex XDR .
To add an IP address or Domain from the Action Center, select Add to EDL. You can choose to enter the IP address or Domain you want to add Manually
or choose to Upload File.
During investigation, you can also Add to EDL from the Actions menu that is available from investigation pages such as the Incidents View, Causality
View, IP View, or Quick Launcher.
9. At any time, you can view and make changes to the IP addresses and domain name lists.
a. Navigate to Incident Response → Response → Action Center → Currently Applied Actions → External Dynamic List.
c. If desired, select New Action to add additional IP addresses and domain names.
d. If desired, select one or more IP addresses or domain names, right-click and Delete any entries that you no longer want included on the lists.
Abstract
Certain forensic artifacts exist only in the computer’s memory, such as volatile data created by running processes. The Memory Collection option enables
Cortex XDR to capture the memory of a Windows endpoint. After the memory image has been captured from the Cortex XDR endpoint, the image is available
to download. Use the image to perform a full analysis using industry-standard tools.
2. Select the target Windows endpoint from which you want to collect the memory image (only one endpoint at a time). Click Next.
A summary of the memory collection action is displayed. If you need to change your settings, click Back. If all the details are correct, click Done. The
Memory Collection action is added to the Action Center.
In the Action Center, you can monitor the action progress in real-time and view the status for the target endpoint. For a detailed view of the results, right-
click the action and select Additional data. Cortex XDR displays the action, timestamp, and real-time status of the action on the target endpoint.
In the Detailed Results - Memory Collection screen, right-click the action and select Download files.
Learn more about how to build Cortex Query Language (XQL) queries using the Query Builder.
To support investigation and analysis, you can search your data by creating queries in the Query Builder. You can create queries with the Cortex Query
Language (XQL) or by using the predefined queries for different types of entities.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The Query Builder aids in the detection of threats by allowing you to search for indicators of compromise and suspicious patterns within data sources. It assists
in expanding incident investigations by identifying related events and entities, such as activities associated with specific user accounts or network lateral
movement. In addition, the Query Builder enables data analytics on suspected threats, helping organizations analyze large volumes of data to identify trends,
anomalies, and correlations that may indicate potential security issues.
To support investigation and analysis, you can search all of the data ingested by Cortex XDR by creating queries in the Query Builder. You can create queries
that investigate leads, expose the root cause of an alert, perform damage assessment, and hunt for threats from your data sources.
Cortex XDR provides different options in the Query Builder for creating queries:
You can use the Cortex Query Language (XQL) to build complex and flexible queries that search specific datasets or presets, or the entire xdr_data
dataset. With XQL Search you create queries based on stages, functions, and operators. To help you build your queries, Cortex XDR provides tools in
the interface that provide suggestions as you type, or you can look up predefined queries, common stages and examples.
Abstract
Learn more about how to build XQL queries in the Query Builder.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The Cortex Query Language (XQL) enables you to query data ingested into Cortex XDR for rigorous endpoint and network event analysis returning up to 1M
results. To help you create an effective XQL query with the proper syntax, the query field in the user interface provides suggestions and definitions as you type.
XQL forms queries in stages. Each stage performs a specific query operation and is separated by a pipe character (|). Queries require a dataset, or data
source, to run against. Unless otherwise specified, the query runs against the xdr_data dataset, which contains all log information that Cortex XDR collects
from all Cortex product agents, including EDR data, and PAN NGFW data. In XDM queries, you must specify the dataset mapped to the XDM that you want to
run your query against.
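For example, the following query sketch chains several stages with the pipe character. The field names and enum value are illustrative examples of common xdr_data fields rather than values taken from this guide; adjust them to the fields available in your environment:
dataset = xdr_data
| filter event_type = ENUM.PROCESS
| fields agent_hostname, action_process_image_name
| limit 100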
Forensic datasets are not included by default in XQL query results, unless the dataset query is explicitly defined to use a forensic dataset.
In a dataset query, if you are running your query against a dataset that has been set as default, there is no need to specify a dataset. Otherwise, specify a
dataset in your query. The Dataset Queries page lists the available datasets, depending on system configuration.
Users with different dataset permissions can receive different results for the same XQL query.
An administrator or a user with a predefined user role can create and view queries built with an unknown dataset that currently does not exist in Cortex
XDR. All other users can only create and view queries built with an existing dataset.
When you have more than one dataset or lookup, you can change your default dataset by navigating to Settings → Configurations → Data Management
→ Dataset Management, right-click on the appropriate dataset, and select Set as default. For more information about setting default datasets, see
Dataset management.
The basic syntax structure for querying datasets that are not mapped to the XDM is dataset = <dataset name> or, for data in cold storage, cold_dataset = <dataset name>.
You can specify a dataset using one of the following formats, based on the data retention offerings available in Cortex XDR.
Example 23.
dataset = xdr_data
Example 24.
cold_dataset = xdr_data
You can build a query that investigates data in both a cold dataset and a hot dataset in the same query. In addition, as the hot storage dataset format is
the default option and represents the fully searchable storage, this format is used throughout this guide for investigation and threat hunting. For more
information on hot and cold storage, see Dataset management.
When using the hot storage default format, this returns every xdr_data record contained in your Cortex XDR instance over the time range that you provide to
the Query Builder user interface. This can be a large amount of data, which may take a long time to retrieve. You can use a limit stage to specify how many
records you want to retrieve.
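For example, the following sketch caps retrieval at the first 100 records:
dataset = xdr_data
| limit 100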
There is no practical limit to the number of stages that you can specify. See Stages for information on all the supported stages.
In the xdr_data dataset, every user field included in the raw data for network, authentication, and login events has an equivalent normalized user field
associated with it that displays the user information in the following standardized format:
<company domain>\<username>
For example, the login_data field has the login_data_dst_normalized_user field to display the content in the standardized format. To ensure the most
accurate results, we recommend that you use these normalized_user fields when building your queries.
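For example, a sketch that returns only the normalized destination user for records where that field is populated (the field name is taken from the example above; adjust it to the event fields you are investigating):
dataset = xdr_data
| filter login_data_dst_normalized_user != null
| fields login_data_dst_normalized_user, _time
| limit 50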
Additional components
XQL queries can contain different components, such as functions and stages, depending on the type of query you want to build.
Abstract
Learn more about some important information before getting started with XQL queries.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Before you begin running XQL queries, consider the following information:
Cortex XDR offers features in the XQL search interface to help you to build queries. For more information see Useful XQL user interface features.
Before you run a query, review this list to better understand query behavior and results. For more information, see Expected results when querying fields.
If you have existing Splunk queries, you can translate them to XQL. For more information, see Translate to XQL.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The user interface contains several useful features for querying data, and for viewing results:
Translate to XQL: Converts your existing Splunk queries to the XQL syntax. When you enable Translate to XQL, both an SPL query field and an XQL query field are displayed. You can easily add a Splunk query, which is converted automatically into XQL in the XQL query field. This option is disabled by default.
Query Results: After you create and run an XQL query, you can view, filter, and visualize your Query Results.
XQL Helper: Describes common stage commands and provides examples that you can use to build a query.
Query Library: Contains common, predefined queries that you can use or modify to your liking. In addition, there is a personal query library for saving
and managing your own queries so that you can share with others, and queries can be shared with you. For more information, see Manage your personal
query library.
Schema: Contains schema information for every field found in the result set. This information includes the field name, data type, descriptive text (if
available), and the dataset that contains the field. Contains the list of all the fields of all the datasets that were involved in the query.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Cortex XDR includes built-in mechanisms for mitigating long-running queries, such as default limits on the maximum number of returned results. The following suggestions can help you streamline your queries (a combined example follows this list):
By default, a query returns a maximum of 1,000,000 results when no limit is explicitly stated in the query. Queries based on XQL query entities are limited to 10,000 results. Adding a smaller limit can greatly reduce the response time.
Example 25.
dataset = microsoft_windows_raw
| fields *host*
| limit 100
Use a small time frame for queries by specifying the specific date and time in the custom option, instead of picking the nearest larger option available.
Use filters that exclude data, along with other possible filters.
Select the specific fields that you would like to see in the query results.
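Putting these suggestions together, a narrowed query might look like the following sketch (the filter, field, and limit choices are illustrative; pair it with a small custom time frame selected in the user interface):
dataset = xdr_data
| filter agent_os_type = ENUM.AGENT_OS_MAC and action_file_size != null
| fields agent_hostname, action_file_size, _time
| limit 100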
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
If specific fields are stated in the fields stage, those exact fields will be returned.
The _time system field will not be added to queries that contain the comp stage.
All current system fields will be returned, even if they are not stated in the query.
Each new column in the result set created by the alter stage will be added as the last column. You can specify a different column order by modifying the
field order in the fields stage of the query.
Each new column in the result set created by the comp stage will be added as the last column. Other fields that are not in the group by or calculated columns will be removed from the result set, including the core fields and the _time system field (see the sketch after this list).
When no limit is explicitly stated in a datamodel query, a maximum of 1,000,000 results are returned (default). When this limit is applied to results using
the limit stage, it will be indicated in the user interface.
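For example, in the following sketch only the grouping column and the computed count remain in the result set; all other fields, including the _time system field, are removed (the grouping field is illustrative):
dataset = xdr_data
| comp count(_product) by agent_hostname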
Abstract
Learn how to create queries using the Cortex Query Language (XQL).
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Build Cortex Query Language (XQL) queries to analyze raw log data stored in Cortex XDR. You can query the Cortex Data Model (XDM) or datasets using
specific syntax.
1. From Cortex XDR, select Incident Response → Investigation → Query Builder → XQL Search.
2. (Optional) To change the default time period against which to run your query, at the top right of the window, select the required time period, or create a
customized one.
Whenever the time period is changed in the query window, the config timeframe is automatically set to the time period defined, but this won't be
visible as part of the query. Only if you manually type in the config timeframe will this be seen in the query.
3. (Optional) To translate Splunk queries to XQL queries, enable Translate to XQL. If you choose to use this feature, enter your Splunk query in the Splunk
field, click the arrow icon ( ) to convert to XQL, and then go to Step 5.
4. Create your query by typing in the query field. Relevant commands, their definitions, and operators are suggested as you type. When multiple
suggestions are displayed, use the arrow keys to select a suggestion and to view an explanation for each one.
a. Specify the dataset that you want to query.
You only need to specify a dataset if you are running your query against a dataset that you have not set as default. Otherwise, the query runs against the xdr_data dataset. For more information, see How to build XQL queries.
Example 26.
dataset = xdr_data
b. Press Enter, and then type the pipe character (|). Select a command, and complete the command using the suggested options.
Example 27.
dataset = xdr_data
| filter agent_os_type = ENUM.AGENT_OS_MAC
| limit 250
5. Select the calendar icon ( ) to schedule the query to run on or before a specific date and time, or select Run to run the query immediately.
6. (Optional) The Save As options save your query for future use:
BIOC Rule: When compatible, saves the query as a BIOC rule. The XQL query must contain a filter for the event_type field; a minimal sketch follows this list.
Correlation Rule: When compatible, saves the query as a Correlation Rule. For more information, see What's a correlation rule?.
Query to Library: Saves the query to your personal query library. For more information, see Manage your personal query library.
Widget to Library: For more information, see Create custom XQL widgets.
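For example, a query intended to be saved as a BIOC rule must filter on the event_type field; a minimal sketch, assuming ENUM.PROCESS is a valid event type value in your schema, could be:
dataset = xdr_data
| filter event_type = ENUM.PROCESS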
While the query is running, you can navigate away from the page. A notification is sent when the query has finished. You can also Cancel the query or run a
new query, where you have the option to Run only new query (cancel previous) or Run both queries.
Abstract
Learn more about reviewing the results returned from an XQL query.
The results of a Cortex Query Language (XQL) query are displayed in a tab called Query Results.
It's also possible to graph the results displayed. For more information, see Graph query results.
Use the following options in the Query Results tab to investigate your query results:
Table tab: Displays results in rows and columns according to the entity fields. Columns can be filtered using their filter icons.
More options (kebab icon ) displays table layout options, which are divided into different sections:
In the Appearance section, you can Show line breaks for any text field in the Query Results. By default, the text in these fields is wrapped unless the Show line breaks option is selected. In addition, you can change the way rows and columns are displayed.
In the Log Format section, you can change the way that logs are displayed:
JSON: Condensed JSON format with key value distinctions. NULL values are not displayed.
TREE: Dynamic view of the JSON hierarchy with the option to collapse and expand the different hierarchies.
In the Search column section, you can find a specific column and enable or disable the display of columns using the checkboxes.
Show and hide rows according to a specific field in a specific event: select a cell, right-click it, and then select either Show rows with … or Hide rows with …
Graph tab: Use the Chart Editor to visualize the query results.
Advanced tab: Displays results in a table format that aggregates the entity fields into one column. You can change the layout, decide whether to Show line breaks for any text field in the results table, and change the log format from the menu.
Select Show more to pivot to an Expanded View of the event results that includes NULL values. You can toggle between the JSON and Tree views, search, and Copy to clipboard.
More options ( ) works in a similar way to how it works on the Table tab.
Show more in the bottom left corner of each row opens the Expanded View of the event results that also includes NULL values. Here, you can toggle between the JSON and Tree views, search, and Copy to clipboard.
Log format options change the way that logs are displayed:
JSON: Condensed JSON format with key value distinctions. NULL values are not displayed.
TREE: Dynamic view of the JSON hierarchy with the option to collapse and expand the different hierarchies.
Free text search: Searches the query results for text that you specify. Click the Free text search icon to reveal or hide the free text search field.
Filter: Enables you to filter a particular field in the interface that is displayed to specify your filter criteria.
For integer, boolean, and timestamp (such as _time) fields, we recommend that you use the Filter instead of the Free text search, in order to retrieve the most accurate query results.
Fields menu: Filters query results. To quickly set a filter, Cortex XDR displays the top ten results from which you can choose to build your filter. This option is only available in the Table and Advanced tabs.
From within the Fields menu, click on any field (excluding JSON and array fields) to see a histogram of all the values found in the result set for that field. This histogram includes:
A count of the total number of times a value was found in the result set.
The value's frequency as a percentage of the total number of values found for the field.
In order for Cortex XDR to provide a histogram for a field, the field must not contain an array or a JSON object.
BIOC Rule: When compatible, saves the query as a BIOC rule. The XQL query must contain a filter for the event_type field.
Correlation Rule: When compatible, saves the query as a Correlation Rule. For more information, see What's a correlation rule?.
Query to Library: Saves the query to your personal query library. For more information, see Manage your personal query library.
Widget to Library: For more information, see Create custom XQL widgets.
You can continue investigating the query results in the Causality View or Timeline by right-clicking the event and selecting the desired view. This option is
available for the following types of events:
Network
File
Registry
Injection
Load image
System calls
For network stories, you can pivot to the Causality View only. For cloud Cortex XDR events and Cloud Audit Logs, you can pivot only to the Cloud Causality View, while for software-as-a-service (SaaS)-related alerts for audit stories, such as Office 365 audit logs and normalized logs, you can pivot only to the SaaS Causality View.
Add a file path to your existing Malware Profile allowed list by right-clicking a <path> field, such as target_process_path, and selecting Add <path type> to malware profile allow list.
Abstract
Learn how to translate your Splunk queries to XQL queries in Cortex XDR.
To help you easily convert your existing Splunk queries to the Cortex Query Language (XQL) syntax, Cortex XDR includes a toggle called Translate to XQL in
the query field in the user interface. When building your XQL query and this option is selected, both a SPL query field and XQL query field are displayed, so
you can easily add a Splunk query, which is converted to XQL in the XQL query field. This option is disabled by default, so only the XQL query field is
displayed.
This feature is still in a Beta state, and not all Splunk queries can be converted to XQL. This feature will be improved in upcoming releases to support translation of more Splunk queries to XQL.
The following table details the supported functions in Splunk that can be converted to XQL in Cortex XDR with an example of a Splunk query and the resulting
XQL query. In each of these examples, the xdr_data dataset is used.
bin
Splunk: index = xdr_data | bin _time span=5m
XQL: dataset in (xdr_data) | bin _time span=5m
count
Splunk: index=xdr_data | stats count(_product) BY _time
XQL: dataset in (xdr_data) | comp count(_product) by _time
ctime
Splunk: index=xdr_data | convert ctime(field) as field
XQL: dataset in (xdr_data) | alter field = format_timestamp(
earliest
Splunk: index = xdr_data earliest=24d
XQL: dataset in (xdr_data) | filter _time >= to_timestamp(ad
eval
Splunk: index=xdr_data | eval field = "test"
XQL: dataset in (xdr_data) | alter field = "test"
floor
Splunk: index=xdr_data | eval floor_test = floor(1.9)
XQL: dataset in (xdr_data) | alter floor_test = floor(1.9)
json_extract
Splunk: index= xdr_data | eval London=json_extract(dfe_labels,"dfe_labels{0}")
XQL: dataset in (xdr_data) | alter London = dfe_labels -> df
join
Splunk: join agent_hostname [index = xdr_data]
XQL: join type=left conflict_strategy=right (dataset in (xdr
len
Splunk: index = xdr_data | where uri != null | eval length = len(agent_ip_address)
XQL: dataset in (xdr_data) | filter agent_ip_addresses != nu … len(agent_ip_addresses)
lower
Splunk: index = xdr_data | eval field = lower("TEST")
XQL: dataset in (xdr_data) | alter field = lowercase("TEST")
md5
Splunk: index=xdr_data | eval md5_test = md5("test")
XQL: dataset in (xdr_data) | alter md5_test = md5("test")
mvcount
Splunk: index = xdr_data | where http_data != null | eval http_data_array_length = mvcount(http_data)
XQL: dataset in (xdr_data) | filter http_data != null | alte
mvexpand
Splunk: index = xdr_data | mvexpand dfe_labels limit = 100
XQL: dataset in (xdr_data) | arrayexpand dfe_labels limit 10
pow
Splunk: index=xdr_data | eval pow_test = pow(2, 3)
XQL: dataset in (xdr_data) | alter pow_test = pow(2, 3)
relative_time(X,Y)
Splunk: index ="xdr_data" | where _time > relative_time(now(),"-7d@d")
XQL: dataset in (xdr_data) | filter _time > to_timestamp(add(to_epoch(date_floor(current_time(
Splunk: index ="xdr_data" | where _time > relative_time(now(),"+7d@d")
XQL: dataset in (xdr_data) | filter _time > to_timestamp(add(to_epoch(date_floor(current_time(
replace
Splunk: index= xdr_data | eval description = replace(agent_hostname,"\("."NEW")
XQL: dataset in (xdr_data) | alter description = replace(age
round
Splunk: index=xdr_data | eval round_num = round(3.5)
XQL: dataset in (xdr_data) | alter round_num = round(3.5)
search
Splunk: index = xdr_data | eval ip="192.0.2.56" | search ip="192.0.2.0/24"
XQL: dataset in (xdr_data) | alter ip = "192.0.2.56" | filte
sha256
Splunk: index = xdr_data | eval sha256_test = sha256("test")
XQL: dataset in (xdr_data) | alter sha256_test = sha256("tes
sort (ascending order)
Splunk: index = xdr_data | sort action_file_size
XQL: dataset in (xdr_data) | sort asc action_file_size | lim
sort (descending order)
Splunk: index = xdr_data | sort -action_file_size
XQL: dataset in (xdr_data) | sort desc action_file_size | li
spath
Splunk: index = xdr_data | spath output=myfield input=action_network_http path=headers.User-Agent
XQL: dataset in (xdr_data) | alter myfield = json_extract(ac
split
Splunk: index = xdr_data | where mac != null | eval split_mac_address = split(mac, ":")
XQL: dataset in (xdr_data) | filter mac != null | alter
stats dc
Splunk: index = xdr_data | stats dc(_product) BY _time
XQL: dataset in (xdr_data) | comp count_distinct(_product) b
sum
Splunk: index=xdr_data | where action_file_size != null | stats sum(action_file_size) by _time
XQL: dataset in (xdr_data) | filter action_file_size != null
table
Splunk: index = xdr_data | table _time, agent_hostname, agent_ip_addresses, _product
XQL: dataset in (xdr_data) | fields _time, agent_hostname, a
upper
Splunk: index=xdr_data | eval field = upper("test")
XQL: dataset in (xdr_data) | alter field = uppercase("test")
var
Splunk: index=xdr_data | stats var (event_type) by _time
XQL: dataset in (xdr_data) | comp var(event_type) by _time
2. Toggle to Translate to XQL, where both a SPL query field and XQL query field are displayed.
The XQL query field displays the equivalent Splunk query using the XQL syntax.
You can now decide what to do with this query based on the instructions explained in Create XQL query.
Abstract
Cortex XDR enables you to generate helpful visualizations of your XQL query results.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Example 28.
dataset = xdr_data
| fields action_total_upload, _time
| limit 10
The query returns the action_total_upload, a number field, and _time, a string field, for up to 10 results.
Navigate to Query Results → Chart Editor ( ) to manually build and view the graph using the selected graph parameters:
Main
Graph Type: Type of graphs and output options available: Area, Bubble, Column, Funnel, Gauge, Line, Map, Pie, Scatter, Single Value, or
Word Cloud.
Subtype and Layout: Depending on the selected type of graph, choose from the available display options.
Data
Depending on the selected type of graph, customize the Color, Font, and Legend.
You can express any chart preferences in XQL. This is helpful when you want to save your chart preferences in a query and generate a chart every time that you run it. To define the parameters, either type a view stage directly in the query, as in the following example, or build the chart in the Chart Editor:
Example 29.
view graph type = column subtype = grouped header = "Test 1" xaxis = _time yaxis = _product,action_total_upload
Select ADD TO QUERY to insert your chart preferences into the query itself.
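Combining the query from Example 28 with the view stage from Example 29, the saved query might look like the following sketch (the header label and axes reuse the values from those examples):
dataset = xdr_data
| fields action_total_upload, _time
| limit 10
| view graph type = column subtype = grouped header = "Test 1" xaxis = _time yaxis = action_total_upload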
To easily track your query results, you can create custom widgets based on the query results. The custom widgets you create can be used in your
custom dashboards and reports. For more information, see Create custom XQL widgets.
Select Save to Widget Library to pivot to the Widget Library and generate a custom widget based on the query results.
Abstract
Learn more about the Cortex Query Language (XQL) entities available in the Query Builder.
With Query Builder, you can build complex queries for entities and entity attributes so that you can surface and identify connections between them. Cortex XDR provides Cortex Query Language (XQL) queries for different types of entities in the Query Builder that search predefined datasets. The Query Builder searches the raw data and logs stored in your Cortex XDR tenant for the entities and attributes you specify, and returns up to 1,000,000 results.
The Query Builder provides queries for the following types of entities:
File: Search on file creation and modification activity by file name and path. See Create file query.
Network: Search network activity by IP address, port, host name, protocol, and more. See Create network query.
Image Load: Search on module load into process events by module IDs and more. See Create image load query.
Registry: Search on registry creation and modification activity by key, key value, path, and data. See Create registry query.
Event Log: Search Windows event logs and Linux system authentication logs by username, log event ID (Windows only), log level, and message. See
Create event log query.
Network Connections: Search security event logs from firewall logs and endpoint raw data over your network. See Create network connections query.
Authentications: Search on authentication events by identity, target outcome, and more. See Create authentication query.
All Actions: Search across all network, registry, file, and process activity by endpoint or process. See Query across all entities.
The Query Builder also provides flexibility for both on-demand query generation and scheduled queries.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder, you can investigate authentication activity across all ingested authentication logs and data.
2. Select AUTHENTICATION.
By default, Cortex XDR will return the activity that matches all the criteria you specify. To exclude a value, toggle the = option to !=.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
5. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate Windows and Linux event log attributes and investigate event logs across endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder you can search Windows and Linux event log attributes and investigate event logs across endpoints with a Cortex XDR agent installed.
3. Enter the search criteria for your Windows or Linux event log query.
Define any event attributes for which you want to search. By default, Cortex XDR will return the events that match the attribute you specify. To exclude an
attribute value, toggle the = option to !=. Attributes are:
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
5. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
7. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate the connections between file activity and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder you can investigate connections between file activity and endpoints. The Query Builder searches your logs and endpoint data for the
file activity that you specify. To search for files on endpoints instead of file-related activity, build an XQL query. For more information, see How to build XQL
queries.
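If you prefer XQL for this kind of investigation, a rough equivalent could filter on file-related fields; in the following sketch, the host name literal is hypothetical and the field choices are assumptions reusing fields shown elsewhere in this guide:
dataset = xdr_data
| filter action_file_size != null and agent_hostname = "example-host"
| fields agent_hostname, action_file_size, _time
| limit 100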
2. Select FILE.
File attributes: Define any additional file attributes for which you want to search. Use a pipe (|) to separate multiple values (for example notepad.exe|chrome.exe). By default, Cortex XDR will return the events that match the attribute you specify. To exclude an attribute value, toggle the = option to !=. Attributes are:
ACTION_IS_VFS: Denotes if the file is on a virtual file system on the disk. This is relevant only for files that are written to disk.
DEVICE TYPE: Type of device used to run the file: Unknown, Fixed, Removable Media, CD-ROM.
DEVICE SERIAL NUMBER: Serial number of the device type used to run the file.
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
Select +PROCESS and specify one or more of the following attributes for the acting (parent) process.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for process, Causality, and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating processes. To configure different attributes for the parent or initiating process, clear this option.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate the connections between image load activity, acting processes, and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder, you can investigate connections between image load activity, acting processes, and endpoints.
3. Enter the search criteria for the image load activity query.
Identifying information about the image module: Full Module Path, Module MD5, or Module SHA256.
By default, Cortex XDR will return the activity that matches all the criteria you specify. To exclude a value, toggle the = option to !=.
4. (Optional) To limit the scope to a specific acting (source) process, specify one or more of the following attributes for the process.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for both the process and the Causality actor: The causality actor—also referred to as the causality group owner (CGO)—is the parent
process in the execution chain that the app identified as being responsible for initiating the process tree. Select this option if you want to apply the same
search criteria to the causality actor. If you clear this option, you can then configure different attributes for the causality actor.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate the connections between firewall logs, endpoints, and network activity.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder, you can investigate network events stitched across endpoints and the Palo Alto Networks next-generation firewall logs.
Network attributes: Define any additional network attributes for which you want to search. Use a pipe (|) to separate multiple values (for example 80|8080). By default, Cortex XDR will return the events that match the attribute you specify. To exclude an attribute value, toggle the = option to !=. Options are:
PROTOCOL: Network transport protocol over which the traffic was sent.
SESSION STATUS
PRODUCT
VENDOR
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
4. (Optional) To limit the scope to a specific acting (source) process, specify one or more of the following attributes for the process.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for both the process and the Causality actor: The causality actor—also referred to as the causality group owner (CGO)—is the parent
process in the execution chain that the app identified as being responsible for initiating the process tree. Select this option if you want to apply the
same search criteria to the causality actor. If you clear this option, you can then configure different attributes for the causality actor.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
Destination: TARGET HOST, NAME, PORT, HOST NAME, PROCESS USER NAME, HOST IP, CMD, HOST OS, MD5, PROCESS PATH, USER ID, SHA256, SIGNATURE, or PID
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate the connections between network activity, acting processes, and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder, you can investigate connections between network activity, acting processes, and endpoints.
2. Select NETWORK.
Network traffic type: Select the type or types of network traffic alerts you want to search: Incoming, Outgoing, or Failed.
Network attributes: Define any additional network attributes for which you want to search. Use a pipe (|) to separate multiple values (for example 80|8080). By default, Cortex XDR will return the events that match the attribute you specify. To exclude an attribute value, toggle the = option to !=. Options are:
LOCAL IP: Local IP address related to the communication. Matches can return additional data if a machine has more than one NIC.
PROTOCOL: Network transport protocol over which the traffic was sent.
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
4. (Optional) To limit the scope to a specific acting (source) process, specify one or more of the following attributes for the process.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for process, Causality, and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating processes. To configure different attributes for the parent or initiating process, clear this option.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate connections between processes, child processes, and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder you can investigate connections between processes, child processes, and endpoints.
For example, you can create a process query to search for processes executed on a specific endpoint.
2. Select PROCESS.
Process action: Select the type of process action you want to search: On process Execution or Injection into another process.
Process attributes—Define any additional process attributes for which you want to search.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
By default, Cortex XDR will return results that match the attribute you specify. To exclude an attribute value, toggle the operator from = to !=.
Attributes are:
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
PROCESS_FILE_INFO: Metadata of the process file, including file property details, file entropy, company name, encryption status, and
version number.
PROCESS_SCHEDULED_TASK_NAME: Name of the task scheduled by the process to run in the Task Scheduler.
DEVICE TYPE: Type of device used to run the process: Unknown, Fixed, Removable Media, CD-ROM.
DEVICE SERIAL NUMBER: Serial number of the device type used to run the process.
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
Select +PROCESS and specify one or more of the following attributes for the acting (parent) process.
CMD: Command-line used to initiate the parent process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signed, Unsigned, N/A, Invalid Signature, Weak Hash
Run search on process, Causality and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process
in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process
that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating
processes. To configure different attributes for the parent or initiating process, clear this option.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
INSTALLATION TYPE can be either Cortex XDR agent or Data Collector. For more information about the data collector applet, see Activate Pathfinder.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate connections between registry activity, processes, and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder you can investigate connections between registry activity, processes, and endpoints.
2. Select REGISTRY.
Registry attributes: Define any additional registry attributes for which you want to search. By default, Cortex XDR will return the events that match
the attribute you specify. To exclude an attribute value, toggle the = option to !=. Attributes are:
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
4. (Optional) To limit the scope to a specific acting (source) process, specify one or more of the following attributes for the process.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for process, Causality, and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating processes. To configure different attributes for the parent or initiating process, clear this option.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
From the Cortex XDR management console, you can search for endpoints and processes across all endpoint activity.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Some examples of queries you can run across all entities include:
Select Add Process to your search, and specify one or more of the following attributes for the acting (parent) process. Use a pipe (|) to separate multiple
values. Use an asterisk (*) to match any string of characters.
CMD: Command line used to initiate the parent process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signed, Unsigned, N/A, Invalid Signature, Weak Hash.
Run search on process, Causality and OS actors: The causality actor, also referred to as the causality group owner (CGO), is the parent process in the execution chain that the agent identified as being responsible for initiating the process tree. The OS actor is the parent process that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating processes. To configure different attributes for the parent or initiating process, clear this option.
Select Add Host to your search and specify one or more of the following attributes:
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
5. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run the query immediately and view the results in the Query Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
Abstract
Learn more about viewing the results of a query, modifying a query, and rerunning queries from Query Center.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The Query Center displays information about all queries that were run in the Query Builder. From the Query Center you can manage your queries, view query
results, and adjust and rerun queries. Right-click a query to see the available options.
The Query Description column displays the parameters that were defined for a query. If necessary, use the Filter to reduce the number of queries that
Cortex XDR displays.
Queries that were created from a Query Builder template are prefixed with the template name.
4. (Optional) Export to file to export the results to a tab-separated values (TSV) file.
Right-click a value in the results table to see the options for further investigation.
Modify a query
After you run a query, you might need to change your search parameters to refine the search results or correct a search parameter. You can modify a query
from the Results page:
For queries created in XQL, the Results page includes the XQL query builder with the defined parameters. Modify the query and Run, schedule, or save
the query.
For queries created with a Query Builder template, the defined parameters are shown at the top of the Results page. Select Back to edit to modify the
query with the template format or Continue in XQL to open the query in XQL.
If you want to rerun a query, you can either schedule it to run on or before a specific date, or you can rerun it immediately. Cortex XDR creates a new query in
the Query Center, and when the query completes, it displays a notification in the notification bar.
To rerun a query immediately, right-click anywhere in the query and then select Rerun Query.
1. In the Query Center, right-click anywhere in the query and then select Schedule.
2. Choose a schedule option and the date and time that the query should run:
Cortex XDR creates a new query and schedules it to run on or by the selected date and time.
4. View the status of the scheduled query on the Scheduled Queries page.
You can also make changes to the query, edit the frequency, view when the query will next run, or disable the query. For more information, see Manage
scheduled queries.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The table below lists the common fields in the Query Center.
Some fields are exposed by default and others are hidden. An asterisk (*) is beside every field that is exposed by default.
Native search has been deprecated; this field allows you to view data for queries performed before deprecation.
COMPUTE UNIT USAGE: Number of query units that were used to execute the API query and Cold Storage query.
EXECUTION ID: Unique identifier of Cortex Query Language (XQL) queries in the tenant. The identifier is generated for queries executed in Cortex XDR and the XQL query API.
PUBLIC API: Whether the source executing the query was an XQL query API.
QUERY NAME*: For saved queries, the Query Name identifies the query specified by the administrator. For scheduled queries, the Query Name identifies the auto-generated name of the parent query. Scheduled queries also display an icon to the left of the name to indicate that the query is recurring.
STATUS: Query execution status:
Queued: The query is queued and will run when there is an available slot.
Running
Failed
Partially completed: The query was stopped after exceeding the maximum number of permitted results. By default, a query returns a maximum of 1,000,000 results when no limit is explicitly stated in the query. Queries based on XQL query entities are limited to 10,000 results. To reduce the number of results returned, you can adjust the query settings and rerun.
Completed
SIMULATED COMPUTE UNITS: Number of query units that were used to execute the Hot Storage query.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The Scheduled Queries page displays information about your scheduled and recurring queries. From this page, you can edit scheduled query parameters,
view previous executions, disable, and remove scheduled queries. Right-click a query to see the available options.
2. Locate the scheduled query for which you want to view previous executions.
3. Right-click anywhere in the query row, and select Show executed queries.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The table below lists the common fields on the Scheduled Queries page.
Some fields are exposed by default and others are hidden. An asterisk (*) is beside every field that is exposed by default.
Native search has been deprecated; this field allows you to view data for queries performed before deprecation.
NEXT EXECUTION: For queries that are scheduled to run at a specific frequency, this displays the next execution time. For queries that were scheduled to run at a specific time and date, this field shows None.
PUBLIC API: Whether the source executing the query was an XQL query API.
QUERY NAME: For saved queries, the Query Name identifies the query specified by the administrator. For scheduled queries, the Query Name identifies the auto-generated name of the parent query. Scheduled queries also display an icon to the left of the name to indicate that the query is recurring.
SCHEDULE TIME: Frequency or time at which the query was scheduled to run.
Abstract
Cortex XDR provides as part of the Query Library a personal library for saving and managing your own queries.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Cortex XDR provides as part of the Query Library a personal query library for saving and managing your own queries. When creating a query in XQL Search or managing your queries from the Query Center, you can save queries to your personal library. You can also decide whether a query is kept private or shared with other users on the same tenant.
The queries listed in your Query Library have different icons to help you identify the different states of the queries:
The Query Library contains a powerful search mechanism that enables you to search in any field related to the query, such as the query name, description,
creator, query text, and labels. In addition, adding a label to your query enables you to search for these queries using these labels in the Query Library.
2. Locate the query that you want to save to your personal query library.
3. Right-click anywhere in the query row, and select Save query to library.
Query Name: Specify a unique name for the query. Query names must be unique in both private and shared lists, which includes other people’s
queries.
Labels (Optional): Specify a label that is associated with your query. You can select a label from the list of predefined labels or add your label and
then select Create Label. Adding a label to your query enables you to search for queries using this label in the Query Library.
Share with others: You can either set the query to be private and only accessible by you (default) or move the toggle to Share with others the
query, so that other users using the same tenant can access the query in their Query Library.
4. Click Save.
A notification appears confirming that the query was saved successfully to the library, and closes on its own after a few seconds.
The query that you added is now listed as the first entry in the Query Library. The query editor is opened to the right of the query.
As needed, you can return to your queries in the Query Library to manage your queries. Here are the actions available to you.
Search query data and metadata: Use the Query Library’s powerful search mechanism that enables you to search in any field related to the query,
such as the query name, description, creator, query text, and label. The Search query data and metadata field is available at the top of your list of
queries in the Query Library.
Show: Filter the list of queries from the Show menu. You can filter by the Palo Alto Networks queries provided with Cortex XDR, filter by the queries Created by Me, or filter by the queries Created by Others. To view the entire list, Select all (default).
Save as new: Duplicate the query and save it as a new query. This action is available from the query menu by selecting the 3 vertical dots.
Share with others: If your query is currently unshared, you can share it with other users on the same tenant, making it available in their Query Library. This action is only available from the query menu by selecting the 3 vertical dots when your query is unshared.
Unshare: If your query is currently shared with other users, you can Unshare the query and remove it from their Query Library. This action is only
available from the query menu by selecting the 3 vertical dots when your query is shared with others. You can only Unshare a query that you
created. If another user created the query, this option is disabled in the query menu.
Delete: Delete the query. You can only delete queries that you created. If another user created the query, this option is disabled in the query menu when selecting the 3 vertical dots.
15.8 | Dashboards
Abstract
Cortex XDR dashboards help you to monitor system activity in your environment. You can use any of the predefined dashboards that are provided in Cortex
XDR, or you can create your own custom dashboards. You can also save any dashboard as a report template.
Abstract
Dashboards help you to monitor system activity in your environment. Select a dashboard from the drop-down menu, or take actions on your dashboards from
the Dashboard Manager.
Dashboards offer graphical overviews of your tenant's activities, enabling you to effectively monitor incidents and overall activity in your environment. Each
dashboard comprises widgets that summarize information about your endpoint in graphical or tabular format.
When you sign in to Cortex XDR your default dashboard is displayed. To change the displayed dashboard, in the dashboard header choose from the list of
predefined and custom dashboards. You can also manage all of your dashboards from the Dashboard Manager.
On each dashboard, you can see the selected Time Range on the right side of the header. An indicator shows the time that the dashboard was last
updated, and if the data is not up-to-date you can click Refresh to update all widget data. You can also select widget specific time frames from the menu on an
XQL widget. If you select a different time frame for a widget, a clock icon is displayed.
Click the dashboard menu to see additional actions, including the option to save the dashboard as a report template, set it as your default dashboard, and
disable the background animation.
Widget specific time frames are only supported in Cortex XDR Pro and Cortex XSIAM.
Types of dashboards
Predefined dashboards
Predefined dashboards are configured for different system set-ups and use cases, and to assist SOC analysts in their investigations. You can create
reports and custom dashboards that are based on predefined dashboards. For more information, see Predefined dashboards.
Custom dashboards
Custom dashboards provide the flexibility to design dashboards that are built to your own specifications. You can base custom dashboards on the
predefined dashboards or create a new dashboard from scratch, and save your custom dashboards as reports. For more information, see Custom
dashboards.
You can see all of your predefined and custom dashboards in the Dashboard Manager, and take the following actions:
You cannot edit the predefined dashboards but you can create a new dashboard that is based on a dashboard template.
You can import and export dashboards in a JSON format, which enables you to transfer your configurations between environments for onboarding,
migration, backup, and sharing. You can also bulk export and import multiple dashboards at a time.
If you import a dashboard template that already exists in the system, the imported template will overwrite the existing template. If you do not want
to overwrite the existing template, duplicate and rename the existing template before importing the new template.
Abstract
Predefined dashboards are set up to help you monitor different aspects of your environment.
Cortex XDR provides predefined dashboards that display widgets tailored to the dashboard type. The dashboards can help you to monitor different aspects of
your environment. To access your default dashboard, select Dashboards & Reports → Dashboard. From the dashboard header, a drop-down menu lists all
available predefined and custom dashboards. The available dashboards depend on your license type.
To change your default dashboard, go to Dashboards & Reports → Dashboard Manager. In the Dashboard Manager you can also create custom dashboards
based on existing dashboards, and save dashboards as report templates.
Agent management: Provides an overview of the deployed agents in your organization, their statuses and content versions, and a breakdown by OS type. Requires a Cortex XDR Prevent or Cortex XDR Pro per Endpoint license.
Cloud inventory: Provides an overview of your cloud-based assets. Requires a Cortex XDR Pro per GB license.
Data ingestion: Provides an overview of data ingestion by product and vendor, the daily quota consumption, and your data ingestion rate. Requires a Cortex XDR Pro license. Due to a calculation change in NGFW log ingestion and improvements to data ingestion metrics, you cannot view data earlier than July 2023 on this dashboard. However, you can still view this data by running Cortex Query Language (XQL) queries on the metrics_center dataset.
Incident management: Provides a breakdown of the top incidents and hosts in your environment, and an overview of the top incident assignees. Select the star in the right corner of a widget to filter the data for incidents that match incident starring policies. A purple star indicates that the widget is displaying only starred incidents. The starring filter is persistent and continues to show the filtered results until you clear the star.
Network traffic analysis (NTA): Provides an overview of network traffic analysis, and highlights key pieces of information. Requires a Cortex XDR Pro license.
NGFW ingestion: Provides an overview of ingestion status for all log types, the daily quota consumption for NGFW, and a breakdown by log type. Requires a Cortex XDR Pro per GB license.
Risk management: Provides alert and incident information to aid in risk assessment by highlighting information about compromised accounts and insider threats. Requires a Cortex XDR Pro license and the Identity Threat Module add-on to be enabled.
Security manager: Provides general information about Cortex XDR incidents and agents. Requires a Cortex XDR Pro per Endpoint license.
Abstract
Custom dashboards can support your day-to-day operations by providing options that are tailored to your unique workflow.
You can create custom dashboards that are tailored to your unique workflow and support your day-to-day operations. Cortex XDR provides dashboard
templates that are based on the predefined dashboards, or you can build dashboards from scratch by selecting widgets and choosing their placement on the
dashboard.
You can see all predefined and custom XQL widgets in the Widget Library. Custom XQL widgets are built on Cortex Query Language (XQL) queries and
provide the flexibility to query specific data, and select the graphical format you require (such as table, line graph, or pie chart).
To enhance the functionality of your dashboards, you can also configure fixed filters and dashboard drilldowns that enable dashboard users to filter and
manipulate the displayed data as follows:
Fixed filters enable dashboard users to alter the scope of the dashboard by selecting predefined or dynamic input values from the filters in the
dashboard header.
Dashboard drilldowns provide dashboard users with interactive data insights when clicking on data points in widgets. Drilldowns can trigger contextual
changes on the dashboard, or they can link to an XQL search, a custom URL, another dashboard, or a report. Users can hover over a widget to see
details about the drilldown, and click a value to trigger the drilldown.
Custom XQL widgets, fixed filters, and dashboard drilldowns are supported in Cortex XDR Pro and Cortex XSIAM only.
Abstract
Build customized dashboards to display and filter the information that is most relevant to you.
You can build custom dashboards based on predefined dashboard templates, or you can build a new dashboard from scratch.
2. In the Dashboard Builder, under Dashboard Name enter a unique name for the dashboard.
3. Under Dashboard Type, choose a built-in dashboard template or a blank template, and click Next.
To get a feel for how the data will look, Cortex XDR provides mock data. To see how the dashboard would look with real data in your environment,
change the toggle to Real Data.
5. Add widgets to the dashboard. From the widget library, drag widgets onto the dashboard.
For agent-related widgets, you can limit the results to only the endpoints that belong to the group by applying an endpoint scope.
Select the menu on the top right corner of the widget, select Groups, and select one or more endpoint groups.
For incident-related widgets, you can limit your incidents to only those that match an incident starring configuration on your dashboard. A purple
star indicates that the widget is displaying only starred incidents. For more information, see Incident starring.
Add fixed filters to your dashboard to provide dashboard users with useful dashboard filters that are based on predefined or dynamic input values. Any
defined filters are displayed in the dashboard header.
Fixed filters are based on XQL widgets with dynamic parameters. If a dashboard contains these widgets, the Add Filters & Inputs option is displayed. For
more information, see Configure fixed dashboard filters.
Add drilldowns to your dashboard to provide interactive data insights when clicking on data points, table rows, or other visualization elements.
Dashboard drilldowns are based on XQL widgets. To add a drilldown to an XQL widget, click on the widget menu, and select Add drilldown. For more
information, see Configure dashboard drilldowns.
By default, the widgets use the dashboard time frame. You can change the widget time frame from the widget menu.
10. To set the custom dashboard as your default dashboard, select Define as default dashboard.
11. To keep this dashboard visible only for you, select Private.
Otherwise, the dashboard is public and visible to all Cortex XDR users with the appropriate roles to view dashboards.
Abstract
Create, search, and view custom widgets in Cortex XDR, or use predefined widgets.
From the Widget Library you can take the following actions:
Create custom widgets based on XQL search queries. For more information see Create custom XQL widgets.
Search for custom and predefined widgets. Widgets are grouped by category.
Any dashboards or reports that include the widget are affected by the changes.
Abstract
You can create custom XQL widgets based on a Cortex Query Language (XQL) query, and add parameters that you can configure as fixed filters or
dashboard drilldowns.
Custom XQL widgets are supported in Cortex XDR Pro and Cortex XSIAM only.
With custom XQL widgets you can personalize the information that you display on your custom dashboards and reports. You can build widgets that query
specific information that is unique to your workflow, and define the graphical format you require (such as table, line graph, or pie chart).
All of your predefined and custom XQL widgets are available in the Widget Library under Dashboards & Reports → Customize → Widget Library. From the
Widget Library, you can browse all widgets by category, create new XQL widgets, and edit and delete existing XQL widgets.
3. Define an XQL query that searches for the data you require. Select XQL Helper to view XQL search and schema examples. For more information, see
How to build XQL queries.
Cortex Query Language (XQL) queries generated from the Widget Library do not appear in the Query Center. The results are used only for creating the
custom widget.
5. In the Widget section, define how you want to visualize the results.
You can use parameters to filter widget data on a dashboard or report, and create drilldowns on dashboards. Base your filters on fields and values in the
query results.
To specify parameters with a single predefined value, use the = operator. To specify parameters with multiple values (predefined or dynamic), use
the IN operator.
The following query defines the $domain parameter for filtering dashboard data by domain, based on the domain field in the agent_auditing
dataset.
Single value parameters are based on static predefined values. In this example, the dashboard user will be able to select a domain from a list of
predefined domains.
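An illustrative sketch of such a query, based on the domain field in the agent_auditing dataset, might look like this (the widget visualization stage is omitted):
dataset = agent_auditing
| filter domain = $domain
| comp count_distinct(endpoint_name) as endpoints by domain
Because the = operator is used here, the $domain parameter accepts a single value.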
The following query defines the $endpointname parameter for filtering dashboard data by one or more endpoint names, based on the
endpoint_name field in the agent_auditing dataset.
You can configure this parameter with static predefined values, or dynamic values that are pulled from an XQL query.
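Similarly, an illustrative sketch of a multi-value query, using the IN operator against the endpoint_name field in agent_auditing, might be:
dataset = agent_auditing
| filter endpoint_name in ($endpointname)
| fields endpoint_name, domain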
3. (Optional) Under Assign Parameters (default values), define default values for the parameters. When you add the widget to a dashboard or report,
the data will be automatically populated. Alternatively, you can configure all input values when you set up a dashboard or report.
Abstract
Configure fixed filters that enable dashboard users to alter the scope of the dashboard by selecting predefined and dynamic values.
Fixed dashboard filters are supported in Cortex XDR Pro and Cortex XSIAM only.
Define fixed filters on your dashboards to enable dashboard users to alter the scope of the dashboard by selecting from predefined or dynamic values. You
can define filters with free text, single select, and multiple select input values. After configuration, anyone who views your dashboard can use the fixed filters in
the dashboard header.
Fixed filters are based on parameters that are defined in custom XQL widgets. Before you can configure fixed filters, take the following steps:
1. Create custom XQL widgets with parameters. For more information, see Create custom XQL widgets.
2. Add the widgets to a Custom dashboard. For more information, see Custom dashboards.
This option only appears if the dashboard contains custom XQL widgets with defined parameters.
4. On the FILTERS & INPUTS panel, click +Add an input and select one of the following options:
Guidelines
Select an option that corresponds with the parameter configured in the XQL widget. Parameters with single predefined or free text values use the =
operator, and parameters with multiple values use the IN operator.
Predefined values are most suitable for filtering fields that have static values, such as status fields with a limited number of available options.
Dynamic values help you to filter with values that change often. You can configure an XQL query that extracts all of the values that are available for
that field. For example, in the endpoints dataset, the endpoint_name field values can change frequently.
5. Click Parameter and select the parameter that you want to configure.
The parameters are extracted from the XQL queries of the widgets on the dashboard. You can define up to four parameter filters on a report or
dashboard.
6. If you selected Single Select or Multi Select values, click Dropdown Options and specify input values. When you generate the dashboard, these input
values appear in a dropdown list for selection.
Guidelines
The values must support the parameter type. For example, for $name specify characters and for $num specify numbers.
If you uploaded numbers as strings, specify each number in quotes, for example "500".
To configure Dynamic inputs for Multi Select values, click XQL Query to fetch dynamic values.
Guidelines
In the XQL Query Builder, configure a query that includes the field stage and the name of the column from which to take the dropdown values.
All values in the specified field will be available for selection, and the values are dynamically updated.
In the example below, the endpoint_name field is configured. The dashboard user will be able to filter by one or more values from the endpoint_name
field.
If you specify more than one field, only the first field value is used.
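An illustrative sketch of such a dropdown query, assuming the endpoints dataset mentioned above, is:
dataset = endpoints
| fields endpoint_name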
7. Under Default Value, select a value from the list of defined values. Specifying a default value ensures that the widget is automatically populated when
you open the dashboard.
After the initial setup, when you access your dashboard the filters and inputs might need further refinement. You can make changes to the configured
parameters in the XQL widgets, and update the Filters & Inputs on your dashboard until you are satisfied with the results.
Abstract
Configure drilldowns on custom dashboards to provide users with interactive data insights when clicking on data points in a widget
Dashboard drilldowns are supported in Cortex XDR Pro and Cortex XSIAM only.
Dashboard drilldowns can trigger contextual changes on the dashboard, or they can link to an XQL search, a custom URL, another dashboard, or a report.
You configure drilldowns on individual widgets. After a drilldown is configured, clicking the widget triggers the drilldown.
To configure drilldowns your dashboard must contain custom XQL widgets. In addition, if you want to configure in-dashboard drilldowns your custom XQL
widget must contain one or more parameters. For more information about configuring parameters in custom XQL widgets, see Create custom XQL widgets.
2. Identify the widget to which you want to apply a drilldown, click on the widget menu, and select Add drilldown.
Parameters: Select the parameter by which to filter. You can choose any parameter that is defined in the XQL query of the widget. If the selected parameter is configured in other XQL widgets on the dashboard, these widgets are also affected by the drilldown.
Value: When a user clicks the widget, the dashboard is filtered by this value. Select a variable from which to capture the clicked value, for example, the $y-axis.value in a chart. For more information, see Variables in drilldowns.
Parameter: (Optional) Select parameters by which to filter the data on the target dashboard. Parameters are only available if there are parameters defined in the widgets on the target dashboard.
Value: (Optional) When a user clicks the widget, this value is configured as a parameter on the target dashboard. Select a variable from which to capture the clicked value in the source dashboard, for example, the $y-axis.value in a chart. For more information, see Variables in drilldowns.
Open XQL Search: Runs an XQL query based on the clicked value.
In the following example, two parameters are passed from a table widget to an XQL query: the first parameter contains the cell value that the user
clicked, and the second parameter contains the cell value in the request_url column of the clicked row.
dataset=xdr_data
|filter event_type=$y_axis.value and requestUri=$row.request_url
|fields action_download, action_remote_ip as remote_ip,
actor_process_image_name as process_name
|comp count_distinct(action_download) as total_download by process_name,
remote_ip, remote_hostname
|sort desc total_download
|limit 10
|view graph type=single subtype=standard xaxis=remote_ip yaxis=total_download
In the following URL, the $x_axis.value parameter represents Cortex product names. On drilldown, the $x_axis.value is replaced with the product
name clicked in the pie chart.
https://www.paloaltonetworks.com/cortex/cortex-$x_axis.value
Abstract
Learn about the widget variable values that you can use in dashboard drilldowns.
The following list describes the widget variables that are available in drilldowns, according to widget type. The variable defines the value to capture in the
drilldown, according to the element that is clicked. The captured value is then configured as a parameter by which to filter data on drilldown.
$y_axis.name: Selects the y-axis name that the single value represents.
Table
$row.<field_name>: Selects the field (column) from the clicked table row.
15.8.4 | Reports
Abstract
Create, edit, and customize reports in Cortex XDR. Schedule reports with Cron expressions.
Reports contain statistical data in the form of widgets, which enable you to analyze data from inside or outside Cortex XDR in different formats, such as graphs,
pie charts, or text. After you generate a report, it also appears in the Reports tab, so you can use the report again.
Abstract
On the Report Templates page, you can view, delete, import, export, create, and modify report templates. You can also select and generate multiple reports.
Abstract
You can run reports that are based on dashboard templates, or you can create reports from scratch.
You can generate reports using pre-designed dashboard templates, or create custom reports from scratch with widgets from the Widget Library. You can also
schedule your reports to run regularly or just once. All reports are saved under Dashboards & Reports → Reports.
To take actions on existing report templates, go to Dashboards & Reports → Customize → Report Templates. On this page you can also import and export
report templates in a JSON format, which enables you to transfer your configurations between environments for onboarding, migration, backup, and sharing.
You can bulk export and import multiple report templates at a time.
If you import a report template that already exists in the system, the imported template will overwrite the existing template. If you do not want to overwrite
the existing template, duplicate and rename the existing template before importing the new template.
2. Right-click the dashboard from which you want to generate a report, and select Save as report template.
3. Enter a unique name for the report and an optional description, and click Save.
To run the report without making any modifications, hover over the report name, and select Generate Report.
To modify or schedule the report, hover over the report name, and select Edit.
6. After your report completes, you can download it from the Dashboards & Reports → Reports page.
You can base your report on an existing template, or you can start with a blank template.
3. Under Data Timeframe, select the time frame from which to run the report. Custom time frames are limited to one month.
Cortex XDR offers mock data to help you visualize the data's appearance. To see how the report would look with real data in your environment, switch to
Real Data. Select Preview in A4 to see how the report is displayed in an A4 format.
6. Add or remove widgets. From the widget library, drag widgets onto the report.
For agent-related widgets, you can limit the results to only the endpoints that belong to the group by applying an endpoint scope. Select the menu
on the top right corner of the widget, select Groups, and select one or more endpoint groups.
For incident-related widgets, you can limit your incidents to only those that match an incident starring configuration on your dashboard. A purple
star indicates that the widget is displaying only starred incidents.
Filters are supported only in Cortex XDR Pro and Cortex XSIAM.
For reports that include Custom XQL widgets with predefined parameters, the FILTERS & INPUTS option is displayed. Defining filters and inputs for the
report gives you the flexibility to filter the report data based on default values that you define.
2. On the FILTERS & INPUTS panel, click +Add an input and select one of the following options:
The parameters are extracted from the XQL queries of the widgets on the dashboard. You can define up to four parameter filters on a report or
dashboard.
5. Under Default Value, specify a value for the selected parameters. This value overwrites any predefined default values in the XQL query.
The values must support the parameter type. For example, for $name specify characters and for $num specify numbers.
8. When you have finished customizing your report template, click Next.
9. If you are ready to run the report select Generate now, or define options for scheduling the report.
10. (Optional) Under Email Distribution and Slack workspace add the recipients that you want to receive a PDF version of your report.
Select Add password used to access report sent by email and Slack to set password encryption. Password encryption is only available in PDF format.
11. (Optional) Select Attach CSV to attach CSV files of your XQL query widgets to the report.
From the menu, select one or more of your custom widgets to attach to the report. The CSV files of the widgets are attached to the report along with the
report PDF. Depending on how you selected to send the report, the CSV file is attached as follows:
Email: Sent as separate attachments for each widget. The total size of the attachment in the email cannot exceed 20 MB.
Slack: Sent within a ZIP file that includes the PDF file.
13. After your report completes, you can download it from the Dashboards & Reports → Reports page.
In the Name field, icons indicate the number of attached files for each report. Reports with multiple PDF and CSV files are marked with a zip icon.
Reports with a single PDF are marked with a PDF icon.
You can receive an email alert if a report fails to run due to a timeout or fails to upload to the GCP bucket.
2. Enter a name and a description for your rule, and under Log Type, select Management Audit Logs.
4. Under Distribution List, select the email address to send the notification to.
5. Click Done.
The Quick Launcher provides a quick, in-context shortcut that you can use to search for information, perform common investigation tasks, or initiate actions.
The Quick Launcher provides a quick, in-context shortcut that you can use to search for information, perform common investigation tasks, or initiate response
actions from any place in Cortex XDR. The tasks that you can perform with the Quick Launcher include:
Search for a host, username, IP address, domain, filename, filepath, or timestamp to easily launch the artifact and asset views.
For hosts, Cortex XDR displays results for exact matches, but supports the use of a wildcard (*), which changes the search to return matches that contain
the specified text. For example, a search for compy-7* returns any hosts beginning with compy-7, such as compy-7000, compy-7abc, and so forth.
Search the Asset Inventory for a specific asset name or IP address. In addition, the following actions are available when searching for Asset Inventory
data.
Change search to <host name of asset> to display additional actions related to that host. This option is only relevant when searching for an IP
address that is connected to an asset.
Open in Asset Inventory is a pivot available when the host name of an asset is selected.
Begin Go To mode. Enter a forward slash (/) followed by your search string to filter and navigate to Cortex XDR pages. For example, / rules searches
for all pages that include rules and allows you to navigate to those pages. Press Esc to exit Go To mode.
Isolate an endpoint
You can open the Quick Launcher by clicking the Quick Launcher icon located in the top navigation bar, or from the application menus, or by using the default
keyboard shortcut: Ctrl-Shift+X on Windows or CMD+Shift+X on macOS. To change the default keyboard shortcut, select Settings → Configurations →
General → Server Settings → Keyboard Shortcuts. The shortcut value must be a keyboard letter, A through Z, and cannot be the same as the Artifact and
Asset Views defined shortcut.
You can also prepopulate searches in Quick Launcher by selecting text in the app or selecting a node in the Causality or Timeline Views.
Cortex XDR enables you to investigate any threat, also referred to as a lead, which has been detected.
This topic describes the steps you can take to investigate a lead. A lead can be:
An alert from a non-Palo Alto Networks system with information relevant to endpoints or firewalls.
Information from online articles or other external threat intelligence that provides well-defined characteristics of the threat.
1. Use threat intelligence to build a Cortex Query Language (XQL) query using the Query Builder.
For example, if external threat intelligence indicates a confirmed threat involving specific files or behaviors, search for those characteristics.
2. Review and refine the query results by using filters and running follow-up queries to find the information you are looking for.
4. Open the Timeline to view the sequence of events over time. If deemed malicious, take action using one or more of the response actions. For more
information, see Timeline.
5. Inspect the information again, and identify any characteristics you can use to create a BIOC or correlation rule.
If you can create a BIOC or correlation rule, test and tune it as needed. For more information, see Create a correlation rule and Create a BIOC rule.
16 | Data management
16.1 | Broker VM
Abstract
Set up a Broker VM to establish a secure connection in which you can route your endpoints, and collect and forward logs and files for analysis.
Set up and configure the Broker VM to create a secure connection for routing endpoints, collecting logs, and forwarding logs and files for analysis. Learn how
to manage the Broker VM, and implement it within a high availability (HA) cluster setup.
Abstract
Learn about the Cortex XDR Broker virtual machine (VM) and why use it in your network configuration.
The Palo Alto Networks Broker VM is a secured virtual machine, integrated with Cortex XDR, that bridges your network and Cortex XDR. By setting up the
Broker VM, you establish a secure connection in which you can route your endpoints, collect logs, and forward logs and files for analysis.
Cortex XDR can leverage the Broker VM to run different services separately using the same Palo Alto Networks authentication. After you complete the initial
setup, the Broker VM automatically receives updates and enhancements from Cortex XDR, providing you with new capabilities without having to install a new
VM or manually update the existing VM.
Depending on your Cortex XDR license, the following figure illustrates the different Broker VM features that may be available to your organization:
Abstract
Learn more about how to set up and configure a Broker VM as a standalone broker or add the broker to a high availability (HA) cluster.
You can set up a standalone Broker VM or add a Broker VM to a High Availability (HA) cluster to prevent a single point of failure. For more information, see
Broker VM High Availability Cluster.
Setup
To set up the Broker virtual machine (VM), you need to deploy an image created by Palo Alto Networks on your network or supported cloud infrastructure and
activate the available applications. You can set up several Broker VMs for the same tenant to support larger environments. Ensure each environment matches
the necessary requirements.
Requirements
Before you set up the Broker VM, verify you meet the following requirements:
Hardware
For standard installation, use a minimum of a 4-core processor, 8 GB RAM, and 512 GB disk.
If you only intend to use the Broker VM for agent proxy, you can use a 2-core processor. If you intend to use the Broker VM for agent installer and content
caching, you must use an 8-core processor.
The Broker VM comes with a 512 GB disk. Therefore, deploy the Broker VM with thin provisioning, meaning the hard disk can grow up to 512 GB but will do so
only if needed.
Bandwidth
When the Broker VM is collecting data with a Cortex XDR Pro license, the optimal outgoing bandwidth to the Cortex XDR server should be about 25% of the
incoming data traffic into the Broker VM applets.
In some cases, the Broker VM requires up to 50% of the incoming bandwidth as outgoing, for example, when there is network instability between the
Broker VM and Cortex XDR, or when the collected data does not compress well.
Ensure that your virtual machine (VM) is compatible with one of the following options and install the applicable broker image according to the installation steps
provided:
Amazon Web Services (AWS): VMDK image. See Set up Broker VM on Amazon Web Services.
Enable communication between the Broker Service and other Palo Alto Networks services and applications:
time.google.com, pool.ntp.org: (Default) Broker's NTP servers, used for broker registration and communication encryption. The Broker VM provides default servers you can use, or you can define an NTP server of your choice.
br-<XDR tenant>.xdr.<region>.paloaltonetworks.com (HTTPS over TCP port 443): Broker Service server, depending on the region of your deployment, such as us or eu.
distributions.traps.paloaltonetworks.com (HTTPS over TCP port 443): Information needed to communicate with your Cortex XDR tenant. Used by tenants deployed in all regions.
Enable access to Cortex XDR from the Broker VM to allow communication between agents and collectors and Cortex XDR.
Collectors are only supported with a Cortex XDR Pro per GB license.
If you use SSL decryption in your firewalls, you need to add a trusted self-signed certificate authority on the Broker VM to prevent any difficulties with SSL
decryption. If adding a CA certificate to the broker is not possible, ensure that you’ve added the Broker Service FQDNs to the SSL Decryption Exclusion list on
your firewalls. For more information on adding a trusted self-signed certificate authority, see Update the Trusted CA Certificate for the Broker VM in Task 1.
Configure the Broker VM settings.
Initial Setup
2. Click Add Broker → Generate Token, and copy to your clipboard. The token is valid for 24 hours. A new token is generated each time you select
Generate Token.
You'll paste this token after configuring settings and the Broker VM is registered in Task 2. Register your Broker VM.
When DHCP is not enabled in your network and there isn't an IP address for your Broker VM, configure the Broker VM with a static IP using the serial console
menu.
Log in with the default password !nitialPassw0rd, and then define your own unique password. The password must contain a minimum of eight characters,
including letters and numbers, at least one capital letter, and one special character.
Network Interfaces
Review the pre-configured Name, IP address, and MAC Address, and select the Address Allocation: DHCP (default) or Static. If you choose Static,
define the static IP address, Netmask, Default Gateway, and DNS Server settings, and then save your configurations.
When configuring more than one network interface, ensure that only one Default Gateway is defined. The rest must be set to 0.0.0.0, which configures
them as undefined.
You can also specify which of the network interfaces is designated as the Admin and can be used to access the Broker VM web interface. Only one
interface can be assigned for this purpose from all of the available network interfaces on the Broker VM, and the rest should be set to Disable.
2. (Optional) Set the internal network settings (requires Broker VM 14.0.42 and later).
Internal Network
Specify a network subnet to prevent the Broker VM Docker network from colliding with your internal network. By default, the Network Subnet is set to 172.17.0.1/16.
For Broker VM version 9.0 and earlier, Cortex XDR only accepts 172.17.0.0/16.
3. (Optional) Configure a proxy server address and other related details to route Broker VM communication.
Proxy Server
You can configure another Broker VM as a proxy server for this Broker VM by selecting the HTTP type. When selecting HTTP to route Broker VM
communication, you need to add the IP Address and Port number (set when activating the Agent Proxy) for another Broker VM registered in your
tenant. This designates the other Broker VM as a proxy for this Broker VM.
2. Specify the proxy Address (IP or FQDN), Port, and an optional User and Password. Select the pencil icon to specify the password. Avoid using
special characters in the proxy username and password.
4. (Optional) Configure your NTP servers (requires Broker VM 8.0 and later).
Specify the required server addresses using the FQDN or IP address of the server.
5. (Optional) Allow SSH connections to the Broker VM (Requires Broker VM 8.0 and later).
SSH Access
Enable or disable SSH connections to the Broker VM. SSH access is authenticated using a public key that you provide. Using a public key grants
remote access to colleagues and Cortex XDR support who hold the corresponding private key. You must have Instance Administrator role permissions to configure
SSH access.
To enable connection, generate an RSA Key Pair, and enter the public key in the SSH Public Key section. Once one SSH public key is added, you can
Add Another. When you are finished, Save your configuration.
When using PuTTYgen to create your public and private key pairs, you need to copy the public key generated in the Public key for pasting into
OpenSSH authorized_keys file box, and paste it in the Broker VM SSH Public Key section as explained above. This public key is only available when the
PuTTYgen console is open after the public key is generated. If you close the PuTTYgen console before pasting the public key, you will need to generate
a new public key.
When you SSH into the Broker VM using PuTTY or a command prompt, you need to use the admin username. For example:
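ssh admin@<broker_vm_address>
where <broker_vm_address> is a placeholder for the IP address or FQDN of your Broker VM.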
We strongly recommend disabling SSH connectivity when it's not being used. Therefore, activate SSH connectivity when it's needed and disable it right
afterwards.
6. (Optional) Update the SSL Server certificates for the Broker VM (Requires Broker VM 10.1.9 and later).
Upload your signed server certificate and key to establish a validated secure SSL connection between your endpoints and the Broker VM. When you
configure the server certificate and the key files in the Broker VM, Cortex XDR automatically updates them in the tenant UI. Cortex XDR validates that the
certificate and key match, but does not validate the Certificate Authority (CA).
The Palo Alto Networks Broker VM supports only strong cipher SHA256-based certificates. MD5/SHA1-based certificates are not supported.
Trusted CA Certificate
Upload your signed Certificate Authority (CA) certificate or Certificate Authority chain file in a PEM format with the associated key, and click Save. If you
use SSL decryption in your firewalls, you need to add a trusted self-signed CA certificate on the Broker VM to prevent any difficulties with SSL
decryption. For example, when configuring Palo Alto Networks NGFW to decrypt SSL using a self-signed certificate, you need to ensure the Broker VM
can validate a self-signed CA by uploading the cert_ssl-decrypt.crt file on the Broker VM.
If adding a CA certificate to the Broker VM is not possible, ensure that you’ve added the Broker Service FQDNs to the SSL Decryption Exclusion list on
your firewalls. See Enable Access to Cortex XDR.
8. (Optional) Collect and Generate New Logs (Requires Broker VM 8.0 and later). Your Cortex XDR logs will download automatically after approximately 30
seconds.
Register and enter your unique Token, created in the Broker VMs page. This can take up to 30 seconds.
You are directed to Settings → Configurations → Data Broker → Broker VMs. The Broker VMs page displays your Broker VM details and allows you to edit the
defined configurations.
Abstract
Learn more about the Broker VM image types available that are compatible with your virtual machine (VM).
Amazon Web Services (AWS): VMDK image. See Set up Broker VM on Amazon Web Services.
Abstract
Learn how to set up your Cortex XDR Broker virtual machine (VM) on Alibaba Cloud.
After you download your Cortex XDR Broker virtual machine (VM) QCOW2 image, you need to upload it to Alibaba Cloud. Because the image file is larger than
5 GB, you need to download the ossutil utility file provided by Alibaba Cloud to upload the image.
Download a Cortex XDR Broker VM QCOW2 image. For more information, see the virtual machine compatibility requirements in Set up and configure Broker
VM.
The download is dependent on the operating system and infrastructure you are using.
Alibaba Cloud supports using the following operating systems for the utility file: Windows, Linux, and macOS.
Supported architectures: x86 (32-bit and 64-bit) and ARM (32-bit and 64-bit)
For more information on downloading the utility, see the Alibaba Cloud documentation.
Task 2. Upload the image file to Alibaba Cloud using the utility file you downloaded
The command is dependent on the operating system and architecture you are using. Below are a few examples of the commands to use based on the different
operating systems and architectures, which you may need to modify based on your system requirements.
./ossutil64 cp Downloads/<name of Broker VM QCOW2 image> oss://<directory name>/<file name for uploaded image>
Example
Format
./ossutilmac64 cp Downloads/<name of Broker VM QCOW2 image> oss://<directory name>/<file name for uploaded image>
Example
D:\ossutil>ossutil64.exe cp Downloads\<name of Broker VM QCOW2 image> oss://<directory name>/<file name for uploaded image>
For Linux and Windows uploads, you can use Alibaba Cloud’s graphical management tool called ossbrowser.
2. Select Hamburger menu → Object Storage Service → <directory name>, where the <directory name> is the directory you configured when uploading
the image. For example, in the step above the <directory name> used in the examples provided is kvm-images-qcow2.
The Object Storage Service must be created in the same Region as the image of the virtual machine.
3. From the list of images displayed, find the row for the Broker VM QCOW2 image that you uploaded, and click View Details.
4. In the URL field of the View Details right-pane displayed, copy the internal link for the image in Alibaba Cloud. The URL that you copy ends with .com;
do not include any of the text displayed after this.
5. Select Hamburger menu → Elastic Compute Service → Instances & Images → Images.
6. In the Import Images area on the Images page, click Import Images.
OSS Object Address: This field is a combination of the internal link that you copied for the Broker VM image and the file name for the uploaded
image, using this format <internal link>/<file name for uploaded image>. Paste the internal link for the Broker VM QCOW2 image in Alibaba Cloud
that you copied, and add the following text after the .com: /<file name for uploaded image>.
Leave the rest of the fields as defined by the default or change them according to your system requirements.
8. Click OK.
A notification is displayed indicating that the image was imported successfully. Once the Status for the imported image on the Images page changes to
Available, the process is complete. This can take a few minutes.
1. Select Hamburger menu → Elastic Compute Service → Instances & Images → Instances.
Region: Ensure the Region selected is the same as the OSS Object Address.
Selected Instance Type Quantity: Set these settings according to your system requirements.
Image: Select Custom Image, and in the field select the image that you imported to Alibaba Cloud.
4. Click Next.
Network Type: Select the applicable Network Type and update the field according to your system configuration.
Public IP Address (Optional): Enable the instance to access the public network.
Security Group: You must select a Security Group for setting network access controls for the instance. Ensure that port 22 and port 443 are
allowed in the security group rules to access the Broker VM.
Elastic Network Interface (Optional): Add an ENI according to your system requirements.
6. Click Next.
Instance Name: You can either leave the default instance name or specify a new name for the VM instance.
8. Click Next.
11. Review the Preview screen settings, select ECS Terms of Service and Product Terms of Service, and click Create Instance.
A dialog box is displayed indicating that the VM instance has been created. Click Console to return to the Instances page, where you can see
the IP Address listed to connect to the VM instance.
Abstract
Learn how to set up your Cortex XDR Broker virtual machine (VM) on AWS.
After you download your Cortex XDR Broker VMDK image, you can convert the image to an Amazon Web Services (AWS) Amazon Machine Image (AMI) using
the AWS CLI. The task below explains how to do this on Ubuntu Linux.
Download a Cortex XDR Broker VM VMDK image. For more information, see the virtual machine compatibility requirements in Set up and configure
Broker VM.
You need to set up an AWS VM Import role (vmimport) before you continue with the steps to convert the image as it is required for the import-image
CLI command. You can use a different role, if the role vmimport doesn't exist or doesn't have the required permissions. For more information on setting
up an AWS VM Import role and the permissions required, see Required service role.
To convert the image to AWS, perform the following procedures in the order listed below.
You need to log in using an AWS Identity and Access Management (IAM) user whose permissions are defined in the IAM policy to use VM Import/Export.
1. Log in to the AWS IAM Console, and in the navigation pane, select Access Management → Users → Add Users.
2. Select Access key - Programmatic access as the AWS credential type, and click Next: Permissions.
4. In the JSON tab, copy and paste the following syntax to define the policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:PutObject"
],
"Resource": ["arn:aws:s3:::mys3bucket","arn:aws:s3:::mys3bucket/*"]
},
{
"Effect": "Allow",
"Action": [
"ec2:CancelConversionTask",
"ec2:CancelExportTask",
"ec2:CreateImage",
"ec2:CreateInstanceExportTask",
"ec2:CreateTags",
"ec2:DescribeConversionTasks",
"ec2:DescribeExportTasks",
"ec2:DescribeExportImageTasks",
"ec2:DescribeImages",
"ec2:DescribeInstanceStatus",
"ec2:DescribeInstances",
"ec2:DescribeSnapshots",
"ec2:DescribeTags",
"ec2:ExportImage",
"ec2:ImportInstance",
"ec2:ImportVolume",
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:TerminateInstances",
"ec2:ImportImage",
"ec2:ImportSnapshot",
"ec2:DescribeImportImageTasks",
"ec2:DescribeImportSnapshotTasks",
"ec2:CancelImportTask"
],
"Resource": "*"
}
]
}
5. Click Next until you can specify the Policy name, and then click Create Policy.
6. Select the policy that you created above based on the syntax you added.
8. After confirmation that the user is created, record the following user information, which you will need later.
User name
Access key ID
Install the AWS CLI and configure it with the IAM user that you created.
1. Log in to the server with admin privileges and install the AWS CLI.
# sudo bash
# apt install awscli
# aws configure
AWS Access Key ID: The Access key ID for the IAM user you created.
AWS Secret Access Key: The Secret access key for the IAM user you created.
Default region name: The Region where you've defined the IAM user you created.
To create an AMI image, you need to download the Broker VM VMDK file from the Cortex XDR Web Console, import this file to your S3 bucket, and then convert
the VMDK file in the S3 bucket into an AMI image.
1. In the Cortex XDR Web Console , select Settings → Configurations → Data Broker → Broker VMs → Add Broker → VMDK.
5. In the S3 buckets page, + Create bucket to upload your Broker VM image to this bucket.
Specify a unique name for the S3 bucket and use the default configurations.
6. Upload the Broker VM VMDK you downloaded from Cortex XDR to the AWS S3 bucket.
Run
# vi configuration.json
2. Copy and paste the following syntax into the json file.
In S3Bucket, replace <your_bucket> with the bucket name (not its ARN). In S3Key, replace <broker-vm-version.vmdk> with the VMDK
filename.
[
{
"Description":"Cortex XDR Broker VM <version>",
"Format":"vmdk",
"UserBucket":{
"S3Bucket":"<your_bucket>",
"S3Key":"<broker-vm-version.vmdk>"
}
}
]
trust-policy.json
# vi trust-policy.json
2. Copy and paste the following syntax into the json file.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "vmie.amazonaws.com" },
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals":{
"sts:Externalid": "vmimport"
}
}
}
]
}
role-policy.json
# vi role-policy.json
2. Copy and paste the following syntax into the json file. Replace the <disk-image-file-bucket> and <export-bucket> with the correct bucket
name. You can specify * to configure access to all your S3 buckets.
{
"Version":"2012-10-17",
"Statement":[
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::<disk-image-file-bucket>",
"arn:aws:s3:::<disk-image-file-bucket>/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject",
"s3:GetBucketAcl"
],
"Resource": [
"arn:aws:s3:::<export-bucket>",
"arn:aws:s3:::<export-bucket>/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
],
"Resource": "*"
}
]
}
8. Use the create-role command to create a role named vmimport and grant VM import and export access to the trust-policy.json file.
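An illustrative form of this command, using the trust-policy.json file created above, is:
# aws iam create-role --role-name vmimport --assume-role-policy-document "file://trust-policy.json"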
9. Use the put-role-policy command to attach the policy to the vmimport role created above.
# aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document "file://role-policy.json"
Run
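An illustrative form of the import command, using the configuration.json file created above (the description text is a placeholder), is:
# aws ec2 import-image --description "Cortex XDR Broker VM" --disk-containers "file://configuration.json"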
To track the progress, use the task id value from the output and run:
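For example, where <import-task-id> is a placeholder for the task id returned by the import command:
# aws ec2 describe-import-image-tasks --import-task-ids <import-task-id>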
Once the task is complete, the AMI Image is ready for use.
11. (Optional) After the AMI image has been created, you can define a new name for the image.
Select Services → EC2 → IMAGES → AMIs and locate your AMI image using the task ID. Select the pencil icon to specify a new name.
You can launch a Broker VM instance in AWS EC2 using the AMI image created.
A t2.medium (4 GB RAM) is the smallest instance type that can be used. The smallest instance type is usually sufficient for the Local Agent
Settings applet, but when you enable more applets, 8 GB of RAM is required.
1. To view the AMI image that you added, select Services → EC2 → Images → AMIs.
2. Select EC2 → Instances, and click Launch instances to create an instance of the AMI image.
3. In the Launch Instance Wizard define the instance according to your company requirements and Launch.
4. (Optional) In the Instances page, locate your instance and use the pencil icon to rename the instance Name.
In the Change Security Groups pop-up, select HTTPS to be able to access the Broker VM Web UI, and SSH to allow for remote access when
troubleshooting. Make sure to allow these connections to the Broker VM from secure networks only.
Locate your instance, right-click, and select Instance Settings → Get Instance Screenshot.
You are directed to your Broker VM console listing your Broker details.
Registration of the Broker VM to Cortex XDR is performed from the Broker VM Web Console.
2. Determine the IP Address of the EC2 instance and use it to open the Broker VM Web Console, such as https://<ip_address>.
Abstract
Learn more about how to set up your Cortex XDR Broker VM on Google Cloud Platform.
You can deploy the Broker VM on Google Cloud Platform. The Broker VM allows communication with external services through the installation and setup of
applets such as the Syslog collector applet.
To set up the Broker VM on the Google Cloud Platform, install the VMDK image provided in Cortex XDR.
Download a Cortex XDR Broker VM VMDK image. For more information, see the virtual machine compatibility requirements in Set up and configure
Broker VM.
To complete the setup, you must have the Google Cloud CLI (gcloud) installed and have an authenticated user account.
In Google Cloud, create a Google Cloud Storage bucket to store the Broker VM image.
1. Create a project in GCP and enable Google Cloud Storage, for example, brokers-project. Make sure you have defined a default network.
Task 3. Upload the VMDK image to the Google Cloud Storage bucket
You can import the GCP image using either G Cloud CLI or Google Cloud console.
The import tool uses the Cloud Build API, which must be enabled in your project. For the import to work, the Cloud Build service account must have the compute.admin
and iam.serviceAccountUser roles. When using the Google Cloud console to import the image, you will be prompted to add these permissions
automatically.
gcloud CLI
Before importing a GCP image using the gcloud CLI, ensure that you update the Google Cloud components to version 371.0.0 and above using the following
command:
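gcloud components update
You can confirm the installed component versions afterward with gcloud version.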
The following command uses the minimum required parameters. For more information on permissions and available parameters, refer to the Google Cloud
SDK.
gcloud compute images import <VMDK image> --data-disk --source-file="gs://<image path>" --network=<network_name> --subnet=<subnet_name> --zone=<region> --async
When Google Compute Engine completes the image creation, create a new instance.
3. In the Boot disk option, choose Custom images and select the image you created.
If you are using the Broker VM to facilitate only Agent Proxy, use e2-standard-2. If you are using the Broker VM for multiple applets, use e2-standard-4.
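As an alternative to the console steps, the instance can also be created with gcloud; the following is a sketch in which the instance name, image name, and zone are placeholders:
gcloud compute instances create <instance_name> \
    --image=<imported_image_name> \
    --machine-type=e2-standard-4 \
    --zone=<zone>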
Task 6. Allow the 4443 port in your firewall configuration by creating a firewall rule
1. From the Google Cloud menu, select VPC network → Firewall, and click CREATE FIREWALL RULE.
Source IPv4 ranges: Enter the IP network of computers that will be connecting to the Broker VM. To include all machines, enter 0.0.0.0/0.
3. Click CREATE.
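If you prefer the CLI, an equivalent firewall rule can be sketched with gcloud (the rule name and network are placeholders):
gcloud compute firewall-rules create <rule_name> \
    --network=<network_name> \
    --allow=tcp:4443 \
    --source-ranges=0.0.0.0/0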
2. For the specific Broker VM containing the rule, select the ellipsis to display More actions, and select View network details.
3. In the Firewall and routes details section, select the FIREWALLS tab.
You can now connect to the Broker VM web console using the Broker VM IP address. Connect with https over port 4443 using the format https://<ip
address>:4443.
Abstract
Learn how to set up your Cortex XDR Broker virtual machine (VM) on a KVM using Ubuntu.
After you download your Cortex XDR Broker virtual machine (VM) QCOW2 image, you need to upload it to a kernel-based Virtual Machine (KVM). The
instructions below provide an example of doing this using Ubuntu 18.04 LTS.
Download a Cortex XDR Broker VM QCOW2 image. For more information, see the virtual machine compatibility requirements in Set up and configure Broker
VM.
2. Click the New VM icon ( ) to open the Create a new virtual machine wizard.
3. In the Step 1 screen of the wizard, select Import existing disk image, and click Forward.
2. Click Browse Local, select the QCOW2 image file that you downloaded, and click Open.
OS type
Version
5. Click Forward.
Specify 2 CPUs.
7. Click Forward.
8. In the Step 4 screen of the wizard, set a Name for your new VM.
9. Click Finish.
Abstract
Learn how to set up your Cortex XDR Broker virtual machine (VM) on Microsoft Azure.
After you download your Cortex XDR Broker VHD (Azure) image, you need to upload it to Azure as a storage blob.
Download a Cortex XDR Broker VM VHD (Azure) image. For more information, see the virtual machine compatibility requirements in Set up and configure
Broker VM.
Make sure you extract the zipped hard disk file on a server that has more than 512 GB of free space.
Task 2. Create a new storage blob on your Azure account by uploading the VHD file
Microsoft Windows
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Connect-AzAccount
az storage blob upload -f <vhd to upload> -n <vhd name> -c <container name> --account-name <account name>
Ubuntu 18.04
2. Connect to Azure.
az login
1. In the Azure home page, navigate to Azure services → Disks and Add a new disk.
2. Navigate to the Create a managed disk → Basics page, and define the following information:
Disk details:
Disk name: Enter a name for the disk object.
Source blob:
2. From the navigation panel, select the bucket and then container to which you uploaded the Cortex XDR VHD image.
1. Create your Broker VM disk, and after deployment is complete, click Go to resource.
Instance details:
(Optional) Virtual machine name: Enter the same name as the disk name you defined.
Configure network security group: Select HTTPS to be able to access the Broker VM Web UI, and SSH to allow for remote access
when troubleshooting. Make sure to allow these connections to the Broker VM from secure networks only.
After deployment is complete, click Go to resource. You are directed to your VM page.
Creating the VM can take up to 15 minutes. The Broker VM Web UI is not accessible during this time.
6. Ensure that the VM you created contains an Outbound port rule that allows the broker to reach the Azure Instance Metadata Service using the IP
address 169.254.169.254 and port 80. For more information about the Azure Instance Metadata Service, see the Azure Documentation.
To configure an outbound rule on your VM, select Networking → Network settings, and under the Rules → Outbound port rules section, you can either:
For more information on creating a rule in an Azure VM, see Create a Security Rule in the Azure Documentation.
Configure a new outbound port rule by selecting Create port rule → Outbound port rule and setting the following settings in the Add outbound
security rule dialog box:
Name: Enter a unique name for this new outbound port rule, such as AzureInstanceMetadataService.
Edit an existing outbound port rule and ensure that the settings provided above for creating a new outbound port rule match what is already
configured in the rule.
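If you manage the VM with the Azure CLI rather than the portal, an outbound rule of this kind can be sketched as follows (the resource group, NSG name, and priority are placeholders):
az network nsg rule create \
    --resource-group <resource_group> \
    --nsg-name <nsg_name> \
    --name AzureInstanceMetadataService \
    --priority 100 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --destination-address-prefixes 169.254.169.254 \
    --destination-port-ranges 80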
Abstract
Learn how to set up your Cortex XDR Broker virtual machine (VM) on Microsoft Hyper-V.
To set up a Broker virtual machine (VM) image on Microsoft Hyper-V, you need to download a Cortex XDR Broker VM VHD image, and then upload it to your
newly created Microsoft Hyper-V VM. Microsoft Hyper-V 2012 or later is supported.
Download a Cortex XDR Broker VM VHD image. For more information, see the virtual machine compatibility requirements in Set up and configure Broker VM.
Task 1. Create a new VM in the Hyper-V Manager and upload the VHD image
1. In the Hyper-V Manager, select New → Virtual Machine to open the New Virtual Machine Wizard.
2. In the Specify Name and Location screen, specify a Name for your VM, and click Next.
4. In the Assign Memory screen, set the Startup memory to 8192 MB, and click Next.
5. In the Configuring Networking screen, select the network adapter for the Connection, and click Next.
6. In the Connect Virtual Hard Disk screen, select Use an existing virtual hard disk, Browse to the downloaded VHD image file, and click Next.
7. In the Completing the New Virtual Machine Wizard screen, click Finish.
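As an alternative to the wizard, the same VM can be sketched in PowerShell (the VM name, VHD path, and switch name are placeholders; 8192 MB matches the startup memory set above):
New-VM -Name "Cortex-Broker-VM" -MemoryStartupBytes 8192MB -VHDPath "<path to downloaded VHD>" -SwitchName "<virtual switch name>"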
1. From the Virtual Machines list, right-click the VM that you created, and select Start.
2. When the State of the VM updates to Running, right-click the VM, and select Connect.
Abstract
Learn how to set up your Cortex XDR Broker virtual machine (VM) on Nutanix Hypervisor.
Download a Cortex XDR Broker VM QCOW2 image. For more information, see the virtual machine compatibility requirements in Set up and configure Broker
VM.
2. In the Add Images page, ensure the Image Source is set to Image File, and click Add File.
3. Select the downloaded QCOW2 file and click Open. Additional fields related to the QCOW2 file are automatically displayed in the Add Image page,
where the Name and Type of file are automatically populated.
4. (Optional) Define the rest of the fields displayed for the QCOW2 file.
5. Click Next.
6. Select the location by defining the Placement Method and Select Clusters settings.
7. Click Save.
Saving the image to Nutanix hypervisor can take time as it’s a large file.
1. Select Hamburger menu → Compute & Storage → VMs, and click Create VM.
Number of VMs: Select the number of VMs you want to create. The default is set to 1.
VM Properties
Cores per CPU: Select the number of cores to create for each CPU. The default number is 1.
3. Click Next.
Disks
Networks
5. Click Next.
6. Set the Management fields, where you can leave the default settings for the various fields.
7. Click Next.
Creating the VM can take up to 15 minutes. The Broker VM Web user interface is not accessible during this time.
Select Summary and you can use the IP Addresses and Host IP listed to connect to the VM.
Abstract
Learn more about how to set up your Cortex XDR Broker VM on VMware ESXi.
To set up the Broker VM on VMware ESXi, you deploy the OVA image provided in Cortex XDR. VMware ESXi 6.5 or later is supported. The instructions below
provide an example of doing this using vSphere Client 7.0.3.01400.
Ensure you have a virtualization platform installed that is compatible with an OVA image, and have an authenticated user account.
Download a Cortex XDR Broker VM OVA image. For more information, see the virtual machine compatibility requirements in Set up and configure Broker
VM.
1. From vSphere Client, right-click an inventory object for the virtual machine of your broker, and select Deploy OVF Template.
2. In the Select an OVF template page of the wizard, select Local file, click UPLOAD FILES to select the OVA image file that you downloaded, and click
NEXT.
3. In the Select a name and folder page, enter a unique name for the virtual machine, select a deployment location, and click NEXT.
4. In the Select a compute resource page, select a resource where to run the deployed VM template, and click NEXT.
5. In the Review details page, verify the OVA template details, and click NEXT.
6. In the Select storage page, define where and how to store the files for the deployed OVA template, and click NEXT. For more information on the options
available, see the VMware vSphere documentation.
7. In the Select networks page, select a source network and map it to a destination network, and click NEXT.
The Source Network column lists all networks that are defined in the OVA template.
8. In the Ready to complete page, review the details and click FINISH.
A new task for creating the virtual machine is displayed in the Recent Tasks pane. When the Status of the task reaches 100%, the task is complete, and
the new virtual machine is created on the selected resource.
9. Navigate to the resource where the new virtual machine is created, right-click the resource, and select Power → Power On.
Abstract
Learn more about the different Broker VM data collector applets available to configure.
Data collector applets, except for the Local Agent Settings applet, require a Cortex XDR Pro per GB license. Pathfinder requires a Cortex XDR Pro per
Endpoint or Cortex XDR Pro per GB license.
The Broker VM has a number of data collector applets that you can configure to ingest different types of data. These data collector applets are in addition to
the others that are available in the Settings → Configurations → Data Collection → Collection Integrations page with a Cortex XDR Pro license.
Abstract
Learn more about activating the Broker VM with an Apache Kafka Collector applet.
Ingesting logs and data from external sources requires a Cortex XDR Pro per GB license.
Apache Kafka is an open-source distributed event streaming platform for high-performance data pipelines, streaming analytics and data integration. Kafka
records are organized into Topics. The partitions for each Topic are spread across the bootstrap servers in the Kafka cluster. The bootstrap servers are
responsible for transferring data from Producers to Consumer Groups, which enable the Kafka server to save offsets of each partition in the Topic consumed
by each group.
The Broker VM provides a Kafka Collector applet that enables you to monitor and collect events from Topics on self-managed on-prem Kafka clusters directly
to your log repository for query and visualization purposes. The applet supports Kafka setups with no authentication, with SSL authentication, and SASL SSL
authentication.
After you activate the Kafka Collector applet, you can collect events as datasets (<Vendor>_<Product>_raw) by defining the following.
Kafka connection details including the Bootstrap Server List and Authentication Method.
Topics Collection configuration for the Kafka topics that you want to collect.
Before activating the Kafka Collector applet, review and perform the following:
Kafka cluster set up on premises, from which the data will be ingested.
Create a user in the Kafka cluster with the necessary permissions and the following authentication details:
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → Kafka Collector.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → Kafka Collector.
a. Specify the Bootstrap Server List, which is the <hostname/ip>:<port> of the bootstrap server (or servers). You can specify multiple servers,
separated with a comma. For example, hostname1:9092,1.1.1.1:9092.
No Authentication
Default connection method for a new Kafka setup, which doesn’t require authentication. With a standard Kafka setup, any user or application can
write messages to any topic, as well as read data from any topic.
SSL Authentication
Authenticate your connection to Kafka using an SSL certificate. Use this authentication method when the connection to the Kafka server is a secure
TCP, and upload the following:
Broker Certificate: Signed certificate used for the applet to authenticate to the Kafka server.
Private Key: Private key for the applet used for decrypting the SSL messages coming from the Kafka server.
(Optional) CA Certificate: CA certificate that was used to sign the server and private certificates. This CA certificate is also used to
authenticate the Kafka server identity.
SASL SSL Authentication
Authenticate your connection to the Kafka server with your Username, Password, and optionally, your CA Certificate.
c. Test Connection to verify that you can connect to the Kafka server. An error message is displayed for each server connection test that fails.
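If you want to confirm end to end that records published to a topic actually reach the applet, you can publish a test record from any host that can reach the Kafka cluster. This is an illustrative sketch only, not part of the product: it assumes the third-party kafka-python package, and the bootstrap server and topic name are placeholders for your own values.

from kafka import KafkaProducer  # third-party package, assumed for this sketch

# Use the same value you entered in the Bootstrap Server List.
producer = KafkaProducer(bootstrap_servers="hostname1:9092")
# Publish one test record to a topic the collector is configured to read.
producer.send("xdr-test-topic", b'{"message": "broker vm kafka collector test"}')
producer.flush()   # block until the record is acknowledged
producer.close()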
Select the Topic Subscription Method for subscribing to Kafka topics. Use List Topics to specify a list of topics. Use Regex Pattern Matching to specify a
regular expression to search available topics.
Topic(s)
Specify Topic(s) from the Kafka server. For the List Topics subscription method, use a comma separated list of topics to subscribe to. For the Regex
Pattern Matching subscription method, use a regular expression to match the Topic(s) to subscribe to.
Specify a Consumer Group, a unique string or label that identifies the consumer group this log source belongs to. Each record that is published to a Kafka topic is delivered to one consumer instance within each subscribing consumer group. Kafka uses these labels to load balance the records over all consumer instances in a group. When specified, the Kafka Collector uses the given consumer group. When not specified, Cortex XDR assigns the Kafka Collector applet to a consumer group that is automatically generated for this log source with the name PAN-<Broker VM device name>-<topic name>.
Log Format
Select the Log Format from the list as either RAW (default), JSON, CEF, LEEF, CISCO, or CORELIGHT. This setting defines the parser used to parse all the processed event types defined in the Topics field, regardless of the file names and extensions. For example, if the Topics field is set to * and the Log Format is JSON, all files (even those named file.log) in the cluster are processed by the collector as JSON, and any entry that does not comply with the JSON format is dropped.
Specify the Vendor and Product which will be associated with each entry in the dataset. The vendor and product are used to define the name of your
Cortex Query Language (XQL) dataset (<Vendor>_<Product>_raw).
For CEF and LEEF logs, Cortex XDR takes the vendor and product names from the log itself, regardless of what you configure on this page.
Click Add Query to create another Topic Collection. Each topic can be added for a server only once.
As needed, you can manage your Topic Collection settings. Here are the actions available to you.
Disable/Enable a Topics Collection by hovering over the top area of the Topics Collection section, on the opposite side of the Topics Collection
name, and selecting the applicable button.
Rename a Topics Collection by hovering over the top area of the Topics Collection section, on the opposite side of the Topics Collection name, and
selecting the pen icon.
Delete a Topics Collection by hovering over the top area of the Topics Collection section, on the opposite side of the Topics Collection name, and
selecting the delete icon.
5. (Optional) Click Add Connection to create another Kafka Connection for collecting data.
As needed, you can return to your Kafka Collector settings to manage your connections.
Rename a connection by hovering over the default Collection name, and selecting the edit icon to edit the text.
Delete a connection by hovering over the top area of the connection section, on the opposite side of the connection name, and selecting the
delete icon. You can only delete a connection when you have more than one connection configured. Otherwise, this icon is not displayed.
7. Activate the Kafka Collector applet. The Activate button is enabled when all the mandatory fields are filled in.
After a successful activation, the APPS field displays Kafka with a green dot indicating a successful connection.
8. (Optional) To view metrics about the Kafka Collector, in the Broker VMs page, left-click the Kafka connection displayed in the APPS field for your Broker
VM.
Cortex XDR displays Resources, including the amount of CPU, Memory, and Disk space the applet is using.
Ensure that you Save your changes, which is enabled when all mandatory fields are filled in.
Abstract
Learn more about activating the Broker VM with a CSV Collector applet.
Ingesting logs and data from external sources requires a Cortex XDR Pro per GB license.
The Broker VM provides a CSV Collector applet that enables you to monitor and collect CSV (comma-separated values) log files from a shared Windows
directory directly to your log repository for query and visualization purposes. After you activate the CSV Collector applet on a Broker VM in your network, you
can ingest CSV files as datasets by defining the list of folders mounted to the Broker VM and setting the list of CSV files to monitor and upload to Cortex XDR
using a username and password.
Before activating the CSV Collector applet, review and perform the following:
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → CSV Collector.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → CSV Collector.
3. Configure your CSV Collector by defining the list of folders mounted to the Broker VM and specifying the list of CSV files to monitor and upload to Cortex
XDR. You must also specify a username and password.
Mounted Folders
Field Description
Folder Path: Specify the complete file path to the Windows directory containing the shared CSV files using the format //host/<folder_path>. For example, //testenv1pc10/CSVFiles.
After you configure the mounted folder details, click Add to add the details to the applet.
Field Description
Folder Path + Name: Select the monitored Windows directory and specify the name of the CSV file. Use a wildcard file search using these characters in the name of the directory, CSV file name, and Path Exclusion (a sketch of how these wildcards resolve follows this table):
*: Matches either multiple characters, such as 2021-report*.csv, or all CSV files with *.csv.
**: Searches all directories and subdirectories. For example, if you want to include all the CSV files in the directory and any subdirectories, use the syntax //host/<folder_path>/**/*.csv.
When you implement a wildcard file search, ensure that the CSV files share the same columns and header rows as all other logs that are collected from the CSV files to create a single dataset.
Path Exclusion (Optional): Specify the complete file path for any files from the Windows directory that you do not want included. The same wildcard file search characters are allowed in this field as explained above for the Folder Path + Name field. For example, if you want to exclude any CSV file prefixed with 'exclude_' in the directory and subdirectories of //host/<folder_path>, use the syntax //host/<folder_path>/**/exclude_*.csv.
Tags (Optional): To easily query the CSV data in the database, you can add a tag to the collected CSV data. This tag is appended to the data using the format <data>_<tag>.
Target Dataset: Either select the target dataset for the CSV data or create a new dataset by specifying the name for the new dataset.
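To get a feel for which files the * and ** wildcards would pick up before you save the configuration, you can list the matches locally. This is an illustrative sketch only; the share path and patterns are placeholders, and it assumes the share is reachable from the machine where you run it.

from pathlib import Path

share = Path("//testenv1pc10/CSVFiles")          # mounted folder path
included = set(share.glob("**/*.csv"))           # ** walks the directory and all subdirectories
excluded = set(share.glob("**/exclude_*.csv"))   # files a Path Exclusion such as exclude_*.csv would drop
for csv_file in sorted(included - excluded):
    print(csv_file)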
After a successful activation, the APPS field displays CSV with a green dot indicating a successful connection.
The CSV Collector checks for new CSV files every 10 minutes.
5. (Optional) To view metrics about the CSV Collector, left-click the CSV connection in the APPS field for your Broker VM.
Cortex XDR displays Resources, including the amount of CPU, Memory, and Disk space the applet is using.
After you activate the CSV Collector, you can make additional changes as needed. To modify a configuration, left-click the CSV connection in the APPS
column to display the CSV settings, and select:
Abstract
Ingesting logs and data from external sources requires a Cortex XDR Pro per GB license.
The Broker VM provides a Database Collector applet that enables you to collect data from a client relational database directly to your log repository for query
and visualization purposes. After you activate the Database Collector applet on a Broker VM in your network, you can collect records as datasets
(<Vendor>_<Product>_raw) by defining the following.
Database connection details, where the connection type can be MySQL, PostgreSQL, MSSQL, and Oracle. Cortex XDR uses Open Database
Connectivity (ODBC) to access the databases.
Settings related to the query details for collecting the data from the database to monitor and upload to Cortex XDR.
Before activating the Database Collector applet, review and perform the following:
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → DB Collector.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → DB Collector.
Database Connection
Field Description
Connection Type: Select the type of database connection: MySQL, PostgreSQL, MSSQL, or Oracle.
Database: Specify the database name for the type of database configured. This field is relevant when configuring a Connection Type for MySQL, PostgreSQL, and MSSQL.
When configuring an Oracle connection, this field is called Service Name, so you can specify the name of the service.
Enable SSL: Select whether to Enable SSL (default) to encrypt the data while in transit between the database and the Broker VM.
Database Query
Field Description
Rising Column: Specify a column for the Database Collector applet to keep track of new rows from one input execution to the next. This column must be included in the query results.
Retrieval Value: Specify a Retrieval Value for the Database Collector applet to determine which rows are new from one input execution to the next. Cortex XDR supports configuring this value as an integer or a string that contains a timestamp. The following string timestamp formats are supported: ISO 8601 format, RFC 2822 format, date strings with month names spelled out, such as "January 1, 2022", date strings with abbreviated month names, such as "Jan 1, 2022", and date strings with two-digit years, such as MM/DD/YY.
The first time the input is run, the Database Collector applet only selects those rows that contain a value higher than the value you specified in this field. Each time the input finishes running, the Database Collector applet updates the input's Retrieval Value with the value in the last row of the Rising Column.
Unique IDs (Optional): Specify the column name(s) to match against when multiple records have the same value in the Rising Column. These columns must be included in the query results. This is a comma-separated field that supports multiple values. In addition, when specifying Unique IDs, the query should use the greater than or equal to sign (>=) in relation to the Retrieval Value. If Unique IDs is left empty, the query should use the greater than sign (>).
Field Description
Collect Every: Specify the execution frequency of collection by designating a number and then selecting the unit as either Seconds, Minutes, Hours, or Days.
Vendor and Product: Specify the Vendor and Product for the type of data being collected. The vendor and product are used to define the name of your Cortex Query Language (XQL) dataset (<Vendor>_<Product>_raw).
SQL Query: Specify the SQL Query to run and collect data from the database by replacing the example query provided in the editor box. The question mark (?) in the query is a checkpoint placeholder for the Retrieval Value. Every time the input is run, the Database Collector applet replaces the question mark with the latest checkpoint value (that is, the start value) for the Retrieval Value. A minimal sketch of this checkpointing pattern follows this table.
Generate Preview: Select Generate Preview to display up to 10 rows from the SQL Query and Preview the results. The Preview works based on the Database Collector settings, which means that if running the query returns no results, the Preview returns no records.
Add Query (Optional): To define another Query for data collection on the configured database connection, select Add Query. Another Query section is displayed for you to configure.
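The following sketch illustrates the Rising Column and Retrieval Value checkpointing pattern described above. It is illustrative only: it uses the standard sqlite3 module so the example is self-contained, whereas the applet itself connects to your database over ODBC, and the table and column names are placeholders.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_id INTEGER, message TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, "first"), (2, "second"), (3, "third")])

checkpoint = 1  # initial Retrieval Value
# The ? placeholder plays the same role as the checkpoint placeholder in the SQL Query field.
rows = conn.execute(
    "SELECT event_id, message FROM events WHERE event_id > ? ORDER BY event_id",
    (checkpoint,),
).fetchall()
if rows:
    checkpoint = rows[-1][0]  # the next run starts from the last Rising Column value
print(rows, checkpoint)       # [(2, 'second'), (3, 'third')] 3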
4. (Optional) Click Add Connection to define another database connection to collect data from another client relational database.
As needed, you can return to your Database Collector settings to manage your connections. Here are the actions available to you:
Edit the connection name by hovering over the default Collection name, and selecting the edit icon to edit the text.
Edit the query name by hovering over the default Query name, and selecting the edit icon to edit the text.
Disable/Enable a query by hovering over the top area of the query section, on the opposite side of the query name, and selecting the applicable
button.
Delete a connection by hovering over the top area of the connection section, on the opposite side of the connection name, and selecting the
delete icon. You can only delete a connection when you have more than one connection configured. Otherwise, this icon is not displayed.
Delete a query by hovering over the top area of the query section, on the opposite side of the query name, and selecting the delete icon. You can
only delete a query when you have more than one query configured. Otherwise, this icon is not displayed.
After a successful activation, the APPS field displays DB with a green dot indicating a successful connection.
7. (Optional) To view metrics about the Database Collector, left-click the DB connection in the APPS field for your Broker VM.
Cortex XDR displays Resources, including the amount of CPU, Memory, and Disk space the applet is using.
After you activate the Database Collector, you can make additional changes as needed. To modify a configuration, left-click the DB connection in the
APPS column to display the Database Collector settings, and select:
Abstract
Learn more about activating a Broker VM with a Files and Folders Collector applet.
Ingesting logs and data from external sources requires a Cortex XDR Pro per GB license.
The Broker VM provides a Files and Folders Collector applet that enables you to monitor and collect logs from files and folders in a network share for a
Windows or Linux directory, directly to your log repository for query and visualization purposes. The Files and Folders Collector applet only collects files larger than 256 bytes and is only supported with Network File System version 4 (NFSv4). After you activate the Files and Folders Collector applet, you
can collect files as datasets (<Vendor>_<Product>_raw) by defining the following.
Settings related to the list of files to monitor and upload to Cortex XDR, where the log format is either Raw (default), JSON, CSV, TSV, PSV, CEF, LEEF,
Corelight, or Cisco.
Before activating the Files and Folders Collector applet, review and perform the following:
Know the complete path to the files and folders that you want Cortex XDR to monitor.
Ensure that the user permissions for the network share include the ability to rename and delete files in the folder that you want to configure collection.
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → Files and Folder Collector.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → Files and Folder Collector.
Field Description
Folder Path: Specify the path to the files and folders that you want Cortex XDR to monitor continuously to collect the files. The following formats are available based on the type of machine you are using:
When using the Linux file share, including the Linux share with NFS, a Username and Password are not required, so these fields are grayed out in the screen.
Recursive: Select this checkbox to configure the Files and Folders Collector applet to recursively examine any subfolders for new files as long as the folders are readable. This is not configured by default.
Username: Specify the username to access the shared resource using a User Principal Name (UPN) format.
Field Description
Mode: Select the mode to use for collecting data. The settings displayed change depending on your selection.
Tail: Continuously monitors the files for new data (default). The collector adds the new data from the files to the dataset.
Batch: Reads the files automatically at user-determined intervals, updates the lookup datasets, and then renames or deletes the uploaded source files. Renaming or deleting the read source files ensures that the collector always reads the most up-to-date file. Depending on the Storage Method, the collector can Append the new data from the files to the dataset or completely Replace the data in the dataset.
In Batch mode, the Files and Folders Collector supports collecting logs from a network share for a maximum file size of 500 MB.
Collect Every: This option is only displayed in Batch mode. Specify the execution frequency of collection by designating a number and then selecting the unit as either Minutes, Hours, or Days.
After Files Uploaded: This option is only displayed in Batch mode. Select what to do with the files after they are uploaded to the Cortex XDR server. You can Rename files with a suffix (default) or you can Delete files. When renaming, the suffix is added to the end of the original file name using the format <file name>.<suffix>, which becomes the new name of the file.
Include: Specify the files and folders that must match to be monitored by Cortex XDR. Multiple values are allowed with commas separating the values, and the values are case-sensitive.
Allowed wildcard:
Example 33.
Exclude (Optional): Specify the files and folders that must match to not be monitored by Cortex XDR. Multiple values are allowed with commas separating the values.
Allowed wildcard:
Example 34.
Log Format: Select the Log Format from the list as either Raw (default), JSON, CSV, TSV, PSV, CEF, LEEF, Corelight, or Cisco. This setting defines the parser used to parse all the processed files as defined in the Include and Exclude fields, regardless of the file names and extensions. For example, if the Include field is set to * and the Log Format is JSON, all files (even those named file.log) in the specified folder are processed by the Files and Folders Collector as JSON, and any entry that does not comply with the JSON format is dropped.
When uploading JSON files, Cortex XDR only parses the first level of nesting and only supports single-line JSON format, such that every new line means a separate entry (see the single-line JSON sketch after the Generate Preview description below).
# of Lines to Skip (Optional): Specify the number of lines to skip at the beginning of the file. This is set to 0 by default.
Use this option only in cases where your files contain some sort of "header" lines, such as a general description, an introduction, a disclaimer, or similar, and you want to skip ingesting them. The Lines to Skip are not part of the file format. For example, in CSV files, there is no need to skip lines.
Field Description
Storage Method: This option is only displayed in Batch mode. Specify whether to Append the read data to the dataset, or to Replace all the data in the dataset with the newly read data.
Append: This mode is useful for log files where you want to keep all the previously collected log data.
Replace: This mode is useful for adding inventory data from CSV and JSON files which include properties, for example, a list of machines, a list of users, or a mapping of endpoints to users, to create a lookup dataset. In each data collection cycle, the new data completely replaces the existing data in the dataset. You can use the records from the lookup datasets for correlation and enrichment through parsing rules, correlation rules, and queries.
When the storage method is Replace, the maximum size for the total data to be imported into a lookup dataset is 30 MB each time the data is fetched.
The inventory data ingested using the Files and Folders Collector is counted towards license utilization.
When you use a join with a lookup table in a query or correlation rule, make sure you configure the conflict strategy to point to the raw dataset. This ensures that the system fields are taken from the raw dataset and not from the lookup table.
Target Dataset: This option is only displayed in Batch mode when the storage method is Replace. Select the name of an existing lookup dataset or create a new lookup dataset by specifying the name.
When you create a new target dataset name, specify a name that will be more meaningful for your users when they query the dataset. For example, if the original file name is accssusr.csv, you can save the dataset as access_per_users.
Dataset names can contain special characters from different languages, numbers (0-9), and underscores (_). You can create dataset names using uppercase characters, but in queries, dataset names are always treated as if they are lowercase.
You can't specify a file name that's the same as a system file name.
The name of a dataset created from a TSV file must always include the extension. For example, if the original file name is mrkdptusrsnov23.tsv, you can save the dataset with the name marketing_dept_users_Nov_2023.tsv.
Vendor and Product: Specify the Vendor and Product for the type of data being collected. The vendor and product are used to define the name of your Cortex Query Language (XQL) dataset (<Vendor>_<Product>_raw).
The Vendor and Product defaults to Auto-Detect when the Log Format is set to CEF or LEEF.
Generate Preview
Select Generate Preview to display up to 10 rows from the first file and Preview the results. The Preview works based on the Files and Folders Collector
settings, which means that if all the files that were configured to be monitored were already processed, then the Preview returns no records.
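Because only single-line JSON is supported and only the first level of nesting is parsed, files you stage for collection should contain one flat JSON object per line. The sketch below is illustrative only; the share path, field names, and values are placeholders.

import json

records = [
    {"host": "srv01", "event": "login", "user": "jdoe"},
    {"host": "srv02", "event": "logout", "user": "asmith"},
]
# One JSON object per line, no pretty-printing and no nested structures,
# so the collector treats every line as a separate entry.
with open("//host/shared/app_events.json", "w") as fh:
    for record in records:
        fh.write(json.dumps(record) + "\n")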
4. (Optional) Click Add Connection to define another Files and Folders connection for collecting logs from files and folders in a shared resource.
As needed, you can return to your Files and Folders Collector settings to manage your connections. Here are the actions available to you:
Edit the connection name by hovering over the default Collection name, and selecting the edit icon to edit the text.
Disable/Enable a connection by hovering over the top area of the connection section, on the opposite side of the connection name, and selecting
the applicable button.
Delete a connection by hovering over the top area of the connection section, on the opposite side of the connection name, and selecting the
delete icon. You can only delete a connection when you have more than one connection configured. Otherwise, this icon is not displayed.
After a successful activation, the APPS field displays File with a green dot indicating a successful connection.
7. (Optional) To view metrics about the Files and Folders, left-click the File connection in the APPS field for your Broker VM.
Cortex XDR displays Resources, including the amount of CPU, Memory, and Disk space the applet is using.
After you activate the Files and Folders Collector, you can make additional changes as needed. To modify a configuration, left-click the File connection in
the APPS column to display the Files and Folder Collector settings, and select:
Abstract
Ingesting logs and data from external sources requires a Cortex XDR Pro per GB license.
The Broker VM provides an FTP Collector applet that enables you to monitor and collect logs from files and folders via FTP, FTPS, and SFTP directly to your log repository for query and visualization purposes. A maximum file size of 500 MB is supported. After you activate the FTP Collector applet on a Broker VM in your network, you can collect files as datasets (<Vendor>_<Product>_raw) by defining the following.
FTP, FTPS, or SFTP (default) connection details with the path to the folder containing the files that you want to monitor and upload to Cortex XDR.
Settings related to the list of files to monitor and upload to Cortex XDR, where the log format is either Raw (default), JSON, CSV, TSV, PSV, CEF, LEEF, Corelight, or Cisco. Once the files are uploaded to Cortex XDR, you can define whether the files in the source directory are renamed or deleted.
Before activating the FTP Collector applet, review and perform the following:
Ensure that the user permissions for the FTP, SFTP, or FTPS include the ability to rename and delete files in the folder that you want to configure
collection.
When setting up an FTPS Collector with a server using a Self-signed certificate, you must upload the certificate first to the Broker VM as a Trusted CA
certificate.
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → FTP Collector.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → FTP Collector.
FTP Connection
Field Description
Host: Enter the hostname, IP address, or FQDN of the FTP server. When configuring an FTPS Collector, you must specify the FQDN.
Field Description
SSH Key-Based Authentication: This checkbox is only displayed when setting up an SFTP Collector, which works with both Username and Password authentication and SSH Key-Based Authentication. You can either leave this checkbox clear and set a Username and Password (default) or select SSH Key-Based Authentication to Browse to a Private Key. When this connection is established with a server using a self-signed certificate, you must upload it first to the Broker VM as a Trusted CA Certificate.
When configuring an SFTP connection, Cortex XDR expects the private key to be in the RSA format that begins with the -----BEGIN RSA PRIVATE KEY----- tag. Cortex XDR does not support providing the private key in the OpenSSH format that begins with the -----BEGIN OPENSSH PRIVATE KEY----- tag.
When using ssh-keygen on a Mac, you get the OpenSSH format by default. The command for getting the RSA format is:
Folder Path: Specify the path to the folder on the FTP site where the files that you want to collect are located.
Recursive: Select this checkbox to configure the FTP Collector applet to recursively examine any subfolders for new files as long as the folders are readable. This is not configured by default.
FTP Settings
Field Description
Collect Every: Specify the execution frequency of collection by designating a number and then selecting the unit as either Minutes, Hours, or Days.
After Files Uploaded: Select what to do with the files after they are uploaded to the Cortex XDR server. You can either select Rename files with a suffix (default), and then you must specify the Suffix, or Delete files. When adding a suffix, the suffix is added at the end of the original file name using the format <file name>.<suffix>, which becomes the new name of the file.
Include: Specify the files and folders that must match to be monitored by Cortex XDR. Multiple values are allowed with commas separating the values.
Allowed wildcard:
Example 35.
Exclude (Optional): Specify the files and folders that must match to not be monitored by Cortex XDR. Multiple values are allowed with commas separating the values.
Allowed wildcard:
Example 36.
Field Description
Log Format: Select the Log Format from the list as either Raw (default), JSON, CSV, TSV, PSV, CEF, LEEF, Corelight, or Cisco, which indicates to Cortex XDR how to parse the data in the file. This setting defines the parser used to parse all the processed files as defined in the Include and Exclude fields, regardless of the file names and extensions. For example, if the Include field is set to * and the Log Format is JSON, all files (even those named file.log) in the specified folder are processed by the FTP Collector as JSON, and any entry that does not comply with the JSON format is dropped.
When uploading JSON files, Cortex XDR only parses the first level of nesting and only supports single-line JSON format, such that every new line means a separate entry.
# of Lines to Skip (Optional): Enter the number of lines to skip at the beginning of the file. This is set to 0 by default.
Use this option only in cases where your files contain some sort of "header" lines, such as a general description, an introduction, a disclaimer, or similar, and you want to skip ingesting them. The Lines to Skip are not part of the file format. For example, in CSV files, there is no need to skip lines.
Specify the Vendor and Product for the type of data being collected. The vendor and product are used to define the name of your Cortex Query
Language (XQL) dataset (<Vendor>_<Product>_raw).
The Vendor and Product defaults to Auto-Detect when the Log Format is set to CEF or LEEF.
Preview
Select Generate Preview to display up to 10 rows from the first file and Preview the results. The Preview works based on the FTP Collector settings, which
means that if all the files that were configured to be monitored were already processed, then the Preview returns no records.
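Before activating the applet, it can help to confirm that the account it will use can reach the server and list the configured folder path. This is an illustrative sketch only, using the standard ftplib module; the host, credentials, port, and path are placeholders, and for plain FTP or SFTP you would use a different client.

from ftplib import FTP_TLS

ftps = FTP_TLS()
ftps.connect("ftp.example.com", 21)          # host and port of the FTPS server
ftps.login("collector-user", "collector-password")
ftps.prot_p()                                # switch the data channel to TLS
print(ftps.nlst("/logs/export"))             # folder path the applet will monitor
ftps.quit()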
4. (Optional) Click Add Connection to define another FTP connection for collecting logs from files and folders via FTP, FTPS, or SFTP.
As needed, you can return to your FTP Collector settings to manage your connections. Here are the actions available to you:
Edit the connection name by hovering over the default Collection name, and selecting the edit icon to edit the text.
Disable/Enable a connection by hovering over the top area of the connection section, on the opposite side of the connection name, and selecting
the applicable button.
Delete a connection by hovering over the top area of the connection section, on the opposite side of the connection name, and selecting the
delete icon. You can only delete a connection when you have more than one connection configured. Otherwise, this icon is not displayed.
After a successful activation, the APPS field displays FTP with a green dot indicating a successful connection.
7. (Optional) To view metrics about the FTP Collector, left-click the FTP connection in the APPS field for your Broker VM.
Cortex XDR displays Resources, including the amount of CPU, Memory, and Disk space the applet is using.
After you activate the FTP Collector, you can make additional changes as needed. To modify a configuration, left-click the FTP connection in the APPS
column to display the FTP Collector settings, and select:
Abstract
Learn more about activating a Local Agent Settings applet on a Broker VM.
The Local Agent Settings applet on the Palo Alto Networks Broker VM enables you to route agent traffic through the Broker VM as a proxy and to cache agent installers and content updates locally.
To deploy Cortex XDR in restricted networks where endpoints do not have a direct connection to the internet, set up the Broker VM to act as a proxy that routes all the traffic between the Cortex XDR management server and XDR agents/XDR Collectors via a centralized and controlled access point.
To reduce your external network bandwidth loads, you can cache XDR agent installations, upgrades, and content updates on your Cortex XDR Broker VM. The Broker VM retrieves the latest installers and content files from Cortex XDR every 15 minutes and stores them for a 30-day retention period from the time an agent last requested them. If the files are not available on the Broker VM at the time of the request, the agent downloads the files directly from the Cortex XDR server. If requested by an agent, the Broker VM can also cache a specific installer that is not on the list of latest installers.
Requirements
Before you activate the Local Agent Settings applet, verify the following prerequisites and limitations listed by the main features.
General
Agent Proxy
Supported with Traps agent version 5.0.9 and Traps agent version 6.1.2 and later releases.
The Broker VM supports forwarding XDR Collector request URLs on all Broker VM versions.
Broker VMs can act as a proxy for routing XDR Collector traffic to the Cortex XDR tenant. The Broker VM does not cache XDR Collector installers.
Agent Installer and Content Caching
Supported with XDR agent version 7.4 and later releases and Broker VM 12.0 and later.
Requires a Broker VM with an 8-core processor to support caching for 10K endpoints.
For the agent installer and content caching to work properly, you must configure different settings where the instructions differ depending on whether you
are configuring a standalone Broker VM or High Availability (HA) cluster:
Standalone broker
FQDN: A FQDN must be configured for the standalone broker as configured in your local DNS server. This is to ensure that XDR agents know who
to access to receive agent installer and content caching data.
SSL certificates: Ensure you upload strong cipher SHA256-based SSL certificates when you set up the Broker VM. For more information, see Set up and configure Broker VM.
Download source: Requires adding the Broker VM as a download source in your Agent Settings Profile.
HA cluster
FQDN: A FQDN must be configured in the cluster settings as configured in your local DNS server, which points to a Load Balancer. This ensures
that the XDR agents turn to the load balancer to route the requests for the agent installer and content caching data to the correct broker. For more
information on configuring the Load Balancer FQDN in a HA cluster, see Configure High Availability Cluster.
SSL certificates: In each broker in the cluster, ensure you upload strong cipher SHA256-based SSL certificates when you set up the Broker VM. For more information, see Set up and configure Broker VM.
Download source: Requires adding the cluster as a download source in your Agent Settings Profile.
Agents communicate with the Broker VM using Hypertext Transfer Protocol Secure (https) over port 443. You must ensure this port is open so that the Broker
VM is accessible to all agents that are configured to use its cache.
The broker needs to communicate with the same URLs that the agents communicate with to avoid receiving any inaccessible URLs errors. For a complete list
of the URLs that you need to allow access, see Enable access to required PANW resources.
After you configure and register your Palo Alto Networks Broker VM, proceed to set up your Local Agent Settings applet.
2. In either the Brokers tab or the Clusters tab, locate your Broker VM.
Ensure your proxy server is configured. If not, proceed to add it as described in Set up and configure Broker VM.
c. In the Activate Local Agent configuration, enable Agent Proxy by setting the Proxy to Enabled, and specify the Port. You can also configure the
Listening Interface, where the default is set to All.
When you install your XDR agents, you must configure the IP address of the Broker VM and a port number during the installation. You can use the
default 8888 port or set a custom port. You are not permitted to configure port numbers between 0-1024 and 63000-65000, or port numbers 4369,
5671, 5672, 5986, 6379, 8000, 9100, 15672, 25672. Additionally, you are not permitted to reuse port numbers you already assigned to the Syslog
Collector applet.
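A quick way to sanity-check a candidate proxy port against these restrictions before you install the agents is sketched below. It is illustrative only and simply encodes the ranges and values listed above.

RESERVED_PORTS = {4369, 5671, 5672, 5986, 6379, 8000, 9100, 15672, 25672}

def is_allowed_proxy_port(port: int) -> bool:
    # Ports 0-1024 and 63000-65000, plus the reserved values, are not permitted.
    if 0 <= port <= 1024 or 63000 <= port <= 65000:
        return False
    return port not in RESERVED_PORTS

print(is_allowed_proxy_port(8888))   # True - the default proxy port
print(is_allowed_proxy_port(6379))   # False - reserved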
If not, upload them as described in Set up and configure Broker VM and Save.
Right-click the Broker VM, select Configure. Under Device Name, enter your Broker VM FQDN. This FQDN record must be configured in your local
DNS server.
A FQDN must be configured for Agent Installer and Content Caching to function properly.
You can either right-click the Broker VM and select Add App → Local Agent Settings, or in the APPS column, select Add → Local Agent Settings.
In the Activate Local Agent configuration, enable Agent Installer and Content Caching by setting Caching to Enabled.
You can only enable Agent Installer and Content Caching, when in the Broker VM Configuration, you've uploaded your signed SSL Server
Certificate and key and set the FQDN. For more information, see the Agent Installer and Content Caching requirements explained above.
e. To enable agents to start using Broker VM caching, you must add the Broker VM as a download source in your Agent Settings profile and select
which Broker VMs to use. Then, ensure the profile is associated with a policy for your target agents.
5. After a successful activation, the APPS field displays Local Agent Settings with a green dot indicating a successful connection. Left-click the Local Agent
Settings connection to view the applet status and resource usage.
To help you easily troubleshoot connectivity issues for a Local Agent Settings applet on the Palo Alto Networks Broker VM, Cortex XDR displays a list of
Denied URLs. These URLs are displayed when you left-click the Local Agent Settings applet to view the Connectivity Status. As a result, in a situation
where the Local Agent Settings applet is reported as activated with a failed connection, you can easily determine the URLs that need to be allowed in
your network environment.
6. Manage the local agent settings. After the local agent settings have been activated, left-click the Local Agent Settings connection in the APPS column to
display the settings, and select:
Abstract
Ingesting records from external sources requires a Cortex XDR Pro per GB license.
To receive NetFlow flow records from an external source, you must first set up the NetFlow Collector applet on a Broker VM within your network. NetFlow
versions 5, 9, and IPFIX are supported.
To increase the log ingestion rate, you can add additional CPUs to the Broker VM. The NetFlow Collector listens for flow records on specific ports either from
any, or from specific IP addresses.
After the NetFlow Collector is activated, the NetFlow Exporter sends flow records to the NetFlow Collector, which receives, stores, and pre-processes that data
for later analysis.
Performance Requirements
Since multiple network devices can send data to a single NetFlow Collector, we recommend that you configure a maximum of 50 NetFlow Collectors per Broker
VM applet, with a maximum aggregated rate of approximately 50K flows per second (FPS) to maintain system performance.
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → NetFlow Collector.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → NetFlow Collector.
General Settings
Specify the number of the UDP Port on which the NetFlow Collector listens for flow records (default 2055).
This port number must match the UDP port number in the NetFlow exporter device. The rules for each port are evaluated, line by line, on a first match
basis. Cortex XDR discards logs for non-configured flow records without an “Any” rule.
Since Cortex XDR reserves some port numbers, it is best to select a port number that is not in the range of 0-1024 (except for 514) or 63000-65000, and that is not one of the following values: 4369, 5671, 5672, 5986, 6379, 8000, 8888, 9100, 15672, or 28672.
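If you want to verify that the UDP port is reachable before pointing production exporters at it, you can send a single minimal NetFlow v5 datagram from a host that matches your Source Network rules. This is an illustrative sketch only: the field layout follows the public NetFlow v5 record format, and the addresses, ports, and Broker VM hostname are placeholders.

import socket, struct, time

now = int(time.time())
# NetFlow v5 header: version, record count, sysuptime, unix secs/nsecs, sequence, engine, sampling.
header = struct.pack("!HHIIIIBBH", 5, 1, 0, now, 0, 1, 0, 0, 0)
# One 48-byte v5 flow record with placeholder values.
record = struct.pack(
    "!IIIHHIIIIHHBBBBHHBBH",
    int.from_bytes(socket.inet_aton("10.0.0.10"), "big"),   # source address
    int.from_bytes(socket.inet_aton("10.0.0.20"), "big"),   # destination address
    0, 0, 0,        # next hop, input/output interface indexes
    10, 1200,       # packet and byte counts
    0, 0,           # flow start/end (sysuptime, ms)
    40000, 443,     # source and destination ports
    0, 0, 6, 0,     # padding, TCP flags, protocol (6 = TCP), ToS
    0, 0, 0, 0, 0,  # AS numbers, masks, padding
)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(header + record, ("broker-vm.example.com", 2055))  # UDP port from General Settings
sock.close()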
Custom Settings
Field Description
Source Network: Specify the IP address or a Classless Inter-Domain Routing (CIDR) of the source network device that sends the flow records to Cortex XDR. Leave the field empty to receive data from any device on the specified port (default). If you do not specify an IP address or a CIDR, Cortex XDR can receive data from any source IP address or CIDR that transmits via the specified port. If IP addresses overlap in multiple rows in the Source Network field, such as 10.0.0.10 in the first row and 10.0.0.0/24 in the second row, the NetFlow Collector captures the IP address in the first row.
Vendor and Product: Specify a particular vendor and product to be associated with each dataset entry or leave the default IP Flow setting.
The Vendor and Product values are used to define the name of your Cortex Query Language (XQL) dataset <Vendor>_<Product>_raw. If you do not define a vendor or product, Cortex XDR uses the default values with the resulting dataset name ip_flow_ip_flow_raw. Consider changing the default values in order to uniquely identify the source network device.
After each configuration, save your changes and then select Done to update the NetFlow Collector with your settings.
You can make additional changes to the Source Network by right-clicking on the Source Network value.
The options available change according to the Source Network value that is set.
Option Description
Edit: To change the UDP Port, Source Network, Vendor, or Product defined.
Copy entire row: To copy the Source Network, Product, and Vendor information.
Open IP View: To view network operations and any open incidents on this IP within a defined period. This option is only available when the Source Network value is a specific IP address or CIDR.
Open in Quick Launcher: To search for information using the Quick Launcher shortcut. This option is only available when the Source Network value is a specific IP address or CIDR.
To prioritize the order of the NetFlow formats listed for the configured data source, drag and drop the rows to change their order.
After successful activation, the APPS field displays NetFlow with a green dot indicating a successful connection.
7. (Optional) To view NetFlow Collector metrics, left-click the NetFlow connection in the APPS field for your Broker VM.
Option Description
Logs Received and Logs Sent: Number of logs that the applet received and sent per second over the last 24 hours. If there are more logs received than sent, this can indicate a connectivity issue.
Resources: Displays the amount of CPU, Memory, and Disk space the applet uses.
After you activate the NetFlow Collector, you can make additional changes. To modify a configuration, left-click the NetFlow connection in the APPS
column to display the NetFlow Collector settings, and select:
Abstract
Learn more about activating the Network Mapper to scan your network.
Activating the Network Mapper requires a Cortex XDR Pro per Endpoint or Cortex XDR Pro per GB license.
The Network Mapper allows you to scan your network to detect and identify unmanaged hosts in your environment according to defined IP address ranges.
The Network Mapper configurations are used to locate unmanaged assets that appear in the Assets table. For more information, see Asset Inventory.
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → Network Mapper.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → Network Mapper.
Field Description
Scan Method: Select either the ICMP echo or TCP SYN scan method to identify your network hosts. When selecting TCP SYN, you can enter single ports and ranges together, for example 80-83, 443 (see the reachability sketch after this table).
Scan Requests per Second: Define the maximum number of scan requests you want to send on your network per second. By default, the number of scan requests is defined as 1000.
Each IP address range can receive multiple scan requests based on its availability.
Scanning Scheduler: Define when you want to run the Network Mapper scan. You can select either daily, weekly, or monthly at a specific time.
Scanned Ranges: Select from the list of existing IP address ranges to scan. Make sure to save after each selection.
IP address ranges are displayed according to what you defined as your Network Parameters.
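If you want a rough sense of which hosts in a range respond before scheduling the scan, the sketch below performs a plain TCP connect check. It is illustrative only and is not the applet's method: a full TCP connection is not a half-open SYN scan, and the range and ports are placeholders.

import socket
from ipaddress import ip_network

for host in ip_network("10.1.1.0/30").hosts():        # placeholder range from Scanned Ranges
    for port in (80, 443):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1)
            if s.connect_ex((str(host), port)) == 0:   # 0 means the connection succeeded
                print(f"{host}:{port} reachable")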
After a successful activation, the APPS field displays Network Mapper with a green dot indicating a successful connection.
5. In the APPS field, left-click the Network Mapper connection to view the following scan and applet metrics:
Scan Details
Field Description
Scan Duration: Period of time in minutes and seconds the scan has been running.
Scan Progress: How much of the scan has been completed, as a percentage and an IP address ratio.
Detected Hosts: Number of hosts identified from within the IP address ranges.
Applet Metrics
After the network mapper has been activated, left-click the Network Mapper connection in the APPS column to display the Network Mapper settings, and
select:
Abstract
Learn how to activate Pathfinder, an applet that deploys a non-persistent data collector on endpoints that are not managed by a Cortex XDR agent.
Pathfinder requires a Cortex XDR Pro per Endpoint or Cortex XDR Pro per GB license.
The Pathfinder applet isn't supported when configuring Broker VMs in high availability (HA) clusters.
Pathfinder is a highly recommended, but optional component integrated with the Broker VM that deploys a non-persistent data collector on network hosts,
servers, and workstations that are not managed by a Cortex XDR agent. The collector is automatically triggered by analytics-type alerts with a severity of high
and medium and provides insights into assets that you couldn't scan previously. For more information about analytics alerts, see Cortex XDR Analytics Alert
Reference.
When an alert is triggered, the data collector can run for up to two weeks gathering EDR data from unmanaged hosts. You can track and manage the collector
directly from Cortex XDR, and investigate the EDR data by running a query from the Query Center.
Except for Vanilla Windows 7, Cortex XDR supports activating Pathfinder on Windows operating systems with PowerShell version 3 and later. Verify these
requirements wherever you want to activate Pathfinder.
The Pathfinder configuration must contain at least one IP address range to run. Make sure that your internal IP address ranges are defined on your
network. To avoid a collision, IP address ranges can only be associated with one Pathfinder applet. For more information, see Configure Cortex XDR
network parameters.
When using Kerberos as the authentication method for the Pathfinder credentials, confirm that you have a reverse DNS zone and reverse DNS records on your DNS server, and that the Broker VM has access to domain controllers over port 88 so that it can acquire the authentication ticket. It is recommended to use Kerberos for better security (see the reverse DNS check after this list).
The Broker VM requires a Service Account (SA) that has administrator privileges on all Windows workstations and servers in your environment. Cortex
XDR recommends that you limit the number of users granted access to the SA account as it poses a credential compromise security threat.
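Because Kerberos authentication depends on reverse DNS, it can be useful to confirm that PTR records resolve for the hosts Pathfinder will target. This is an illustrative sketch only; the IP address range is a placeholder for one of your scanned ranges.

import socket
from ipaddress import ip_network

for host in ip_network("10.1.1.0/29").hosts():         # placeholder range
    try:
        name, _, _ = socket.gethostbyaddr(str(host))   # reverse (PTR) lookup
        print(f"{host} -> {name}")
    except socket.herror:
        print(f"{host} has no PTR record")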
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → Pathfinder.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → Pathfinder.
Pathfinder isn't supported when configuring Broker VMs in high availability (HA) clusters.
Define the domain access credentials. Make sure to enter the user name and password using the Service Account with Local Admin privileges on
the remote endpoint.
(Broker VM version 9.0 and later) Define Pathfinder to access target hosts using credentials stored in your CyberArk vault. Credentials are not
stored on the Broker VM; Pathfinder queries CyberArk each time according to the defined parameters.
4. Click Test to run a test on the credentials and Pathfinder permissions. Testing may take a few minutes to complete but ensures that Pathfinder can
deploy a data collector.
6. Click Next, and select the IP address ranges to scan from your defined network configurations.
By default, every IP address range will use the Pathfinder credentials and settings you defined in the Credentials section and is labeled as an Applet
Configuration.
If you want to configure other credentials for a specific range, override the settings in the right pane. IP address ranges you edit are labeled as Custom
Configuration. Make sure to test the credentials for this specific range.
7. Activate Pathfinder. After the activation is complete, Pathfinder is displayed in the APPS column with a green dot indicating a successful connection.
Hovering over the Pathfinder connection shows details such as the connectivity status, handled and failed tasks, and the resources the applet is using.
Abstract
Learn how to set up and activate the Syslog Collector applet on a Broker VM within your network.
Ingesting logs and data from external sources requires a Cortex XDR Pro per GB license.
To receive Syslog data from an external source, you must first set up the Syslog Collector applet on a Broker VM within your network. The Syslog Collector
supports a log ingestion rate of 90,000 logs per second (lps) with the recommended Broker VM setup.
To increase the log ingestion rate, you can add additional CPUs to the Broker VM. The Syslog Collector listens for logs on specific ports and from any or
specific IP addresses.
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → Syslog Collector.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → Syslog Collector.
Cortex XDR supports multiple sources over a single port on a single Syslog Collector. The following options are available:
Edit the Optional Settings of the default PORT/PROTOCOL: 514/UDP. See Task 3.
Once configured, you cannot change the Port/Protocol. If you don't want to use a data source, be sure to remove the data source from the list as explained in Task 5.
Field Description
Format: Select the Syslog format you want to send to the UDP 514 protocol and port on the Syslog Collector: Auto-Detect (default), CEF, LEEF, CISCO, CORELIGHT, or RAW.
The Vendor and Product defaults to Auto-Detect when the Log Format is set to CEF or LEEF.
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the logs. When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the Vendor and Product fields in the Syslog Collector settings. However, when the values are blank in the event log row, Cortex XDR uses the Vendor and Product that you specified in the Syslog Collector settings. If you did not specify a Vendor or Product in the Syslog Collector settings and the values are blank in the event log row, the values for both fields are set to unknown.
Vendor and Product: Specify a particular vendor and product for the Syslog format defined or leave the default Auto-Detect setting.
Source Network: Specify the IP address or Classless Inter-Domain Routing (CIDR). If you leave this blank, Cortex XDR will allow receipt of logs from any source IP address or CIDR that transmits over the specified protocol and port. When you specify overlapping addresses in the Source Network field in multiple rows, such as 10.0.0.10 in the first row and 10.0.0.0/24 in the second row, the order of the addresses matters. In this example, the IP address 10.0.0.10 is only captured from the first row definition. For more information on prioritizing the order of the syslog formats, see Task 5.
After each configuration, save the changes and then select Done to update the Syslog Collector with your settings.
Protocol
Choose a protocol over which the Syslog will be sent: UDP, TCP, or Secure TCP.
When configuring the Protocol as Secure TCP, these additional General Settings are available:
Private Key: Browse to your private key for the server certificate.
The log forwarder (for example, a firewall) authenticates the Broker VM by default. The Broker VM does not authenticate the log forwarder by default, but you can use this option to set up such authentication. If you use this option, ensure that you have a client certificate on the log forwarding side that matches the CA certificate on the Broker VM side.
Minimal TLS Version: Select either 1.0 or 1.2 (default) as the minimum TLS version allowed.
The server certificate and private key pair is expected in a PEM format.
Cortex XDR will notify you when your certificates are about to expire.
Port
Choose a port on which the Syslog Collector will listen for logs.
Because some port numbers are reserved by Cortex XDR, you must choose a port number that is not:
In the range of 0-1024 (except for 514) or in the range of 63000-65000
One of the following values: 4369, 5671, 5672, 5986, 6379, 8000, 8888, 9100, 15672, or 28672
Field Description
Format: Select the Syslog format you want to send to the UDP/514 protocol and port on the Syslog Collector: Auto-Detect (default), CEF, LEEF, CISCO, CORELIGHT, or RAW.
Vendor and Product: Enter a particular vendor and product for the Syslog format defined or leave the default Auto-Detect setting.
Source Network: Specify the IP address or Classless Inter-Domain Routing (CIDR). If you leave this blank, Cortex XDR will allow receipt of logs from any source IP address or CIDR that transmits over the specified protocol and port. When you specify overlapping addresses in the Source Network field in multiple rows, such as 10.0.0.10 in the first row and 10.0.0.0/24 in the second row, the order of the addresses matters. In this example, the IP address 10.0.0.10 is only captured from the first row definition. For more information on prioritizing the order of the syslog formats, see Task 5.
After each configuration, save the changes and then select Done to update the Syslog Collector with your settings.
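To confirm that the collector is listening before you point production log sources at it, you can send a single test message over UDP 514 from a host allowed by your Source Network settings. This is an illustrative sketch only, using the standard logging module; the Broker VM hostname and the message text are placeholders.

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("broker-vm-test")
logger.setLevel(logging.INFO)
# SysLogHandler sends over UDP by default; use the port configured for the data source.
logger.addHandler(SysLogHandler(address=("broker-vm.example.com", 514)))
logger.info("syslog collector connectivity test")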
Task 5. Make additional changes to the Syslog Collector data sources configured
To remove a Syslog Collector data source, right-click the row after the Port/Protocol entry, and select Remove.
To prioritize the order of the Syslog formats listed for the protocols and ports configured, drag and drop the rows to the order you require.
Click Save. After a successful activation, the APPS field displays Syslog with a green dot indicating a successful connection.
To view metrics about the Syslog Collector, left-click the Syslog connection in the APPS field for your Broker VM. Cortex XDR displays the following information:
Metric Description
Logs Received and Logs Sent: Number of logs received and sent by the applet per second over the last 24 hours. If the number of incoming logs received is larger than the number of logs sent, it could indicate a connectivity issue.
Resources: Displays the amount of CPU, Memory, and Disk space the applet is using.
After the Syslog Collector has been activated, you can make additional changes to your configuration if needed. To modify a configuration, left-click the Syslog
connection in the APPS column to display the Syslog Collector settings, and select:
Abstract
Set up your Windows Event Collector to connect with the Cortex XDR Broker VM and collect events.
Ingesting logs and data from external sources requires a Cortex XDR Pro per GB license.
After you have configured and registered your Broker VM, activate your Windows Event Collector application.
To enable the collection of the event logs, you need to configure and establish trust between the Windows Event Forwarding (WEF) collectors and the WEC.
Establishing trust between the WEFs and the WEC is achieved by mutual authentication over TLS using server and client certificates. The WEF, a WinRM
plugin, runs under the Network Service account. Therefore, you need to provide the WEFs with the relevant certificates and grant the account access
permissions to the private key used for client authentication, for example, authenticate with WEC.
You can also activate the Windows Event Collector on Windows Core. For more information, see Activate Windows Event Collector on Windows Core.
Ensure you meet the following prerequisites before activating the Windows Event Collector applet:
You must configure different settings related to the FQDN where the instructions differ depending on whether you are configuring a standalone Broker
VM or High Availability (HA) cluster.
Standalone broker
An FQDN must be configured for the standalone broker as configured in your local DNS server. Therefore, the Broker VM is registered in the DNS, its FQDN is resolvable from the events forwarder (Windows server), and the Broker VM FQDN is configured. For more information, see Edit Broker VM Configuration.
HA cluster
An FQDN must be configured in the cluster settings as configured in your local DNS server, which points to a Load Balancer. For more information, see Configure High Availability Cluster.
After ingestion, Cortex XDR normalizes and saves the Windows event logs in the dataset xdr_data. The normalized logs are also saved in a unified format in
microsoft_windows_raw. This enables you to search the data using Cortex Query Language (XQL) queries, build correlation rules, and generate
dashboards based on the data.
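Once events are flowing, you can spot-check the normalized data with an XQL query along these lines (the same style of query used later in this guide to verify Windows event ingestion):
dataset = xdr_data
| filter _product = "Windows"
| fields _vendor, _product, action_evtlog_level, action_evtlog_event_id
| sort desc _time
| limit 20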
On the Brokers tab, find the Broker VM, and in the APPS column, left-click Add → Windows Event Collector.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → Windows Event Collector.
3. In the Activate Windows Event Collector window, define the Collected Events to configure the events collected by the applet. This lists event sources
from which you want to collect events.
Field Description
Source: Select from the pre-populated list with the most common event sources on Windows Servers. The event source is the name of the software that logs the events.
A source provider can only appear once in your list. When selecting event sources, depending on the type of event you want to forward, ensure the event source is enabled, for example auditing security events. If the source is not enabled, the source configuration in the given row will fail.
Minimal TLS Version: Select either 1.0 or 1.2 (default) as the minimum TLS version allowed. Verify that all Windows event forwarders support the minimum TLS version you define.
Example 37.
To forward all the Windows Event Collector events to the Broker VM, define as follows:
Source: ForwardedEvents
By default, Cortex XDR collects Palo Alto Networks predefined Security events that are used by the Cortex XDR detectors. Removing the Security
collector interferes with the Cortex XDR detection functionality. Restore to Default to reinstate the Security event collection.
4. Click Activate. After a successful activation, the APPS field displays WEC with a green dot indicating a successful connection.
1. In the APPS column, left-click the WEC connection to display the Windows Event Collector settings, and select Configure.
2. In the Windows Event Forwarder Configuration window, perform the following tasks:
a. In the Subscription Manager URL field, click (copy). This will be used when you configure the subscription manager in the GPO (Group Policy Object) on your domain controller.
b. Enter a password in the Define Client Certificate Export Password field to be used to secure the downloaded WEF certificate that establishes the
connection between your DC/WEF and the WEC. You will need this password when the certificate is imported to the events forwarder.
To view your Windows Event Forwarding configuration details at any time, select your Broker VM, right-click and navigate to Windows Event
Collector → Configure.
Cortex XDR monitors the certificate and triggers a Certificate Expiration notification 30 days prior to the expiration date. The notification is sent daily
specifying the number of days left on the certificate, or if the certificate has already expired.
You must install the WEF certificate on every Windows Server, whether DC or not, for the WEFs that are supposed to forward logs to the Windows Event
Collector applet on the Broker VM.
1. Locate the PFX file you downloaded from the Cortex XDR console and double-click to open the Certificate Import Wizard.
b. Verify the File name field displays the PFX certificate file you downloaded and click Next.
c. In the Passwords field, specify the Client Certificate Export Password you defined in the Cortex XDR console followed by Next.
d. Select Automatically select the certificate store based on the type of certificate, and then click Next and Finish.
4. In the file explorer, navigate to Certificates and verify the following for each of the folders:
In the Trusted Root Certification Authorities → Certificates folder, ensure the CA ca.wec.paloaltonetworks.com is displayed.
6. Right-click the certificate and navigate to All tasks → Manage Private Keys.
7. In the Permissions window, select Add, and in the Enter the object name section, enter NETWORK SERVICE, and then click Check Names to verify the object name. The object name is displayed with an underline when valid. Then click OK.
8. Click OK, verify the Group or user names that are displayed, and then click Apply Permissions for private keys.
Task 4. Add the Network Service account to the domain controller Event Log Readers group.
1. To enable events forwarders to forward events, the Network Service account must be a member of the Active Directory Event Log Readers group. In
PowerShell, execute the following command on the domain controller that is acting as the event forwarder:
PS C:\> net localgroup "Event Log Readers" "NT Authority\Network Service" /add
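If you want to confirm the membership afterwards, you can list the group members with the same standard Windows command (shown here only as a suggestion):
PS C:\> net localgroup "Event Log Readers"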
Example 38.
PS C:\Users\Administrator> wevtutil gl security
name: security
enabled: true
type: Admin
owningPublisher:
isolation: Custom
channelAccess: O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x5;;;BA)(A;;0x1;;;S-1-5-32-573)
logging:
logFileName: %SystemRoot%\System32\Winevt\Logs\security.evtx
retention: false
autoBackup: false
maxSize: 134217728
Example 39.
PS C:\Users\Administrator> wevtutil sl security "/ca:O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x5;;;BA)(A;;0x1;;;S-1-5-32-573)(A;;0x1;;;S-1-5-20)"
Make sure you grant access on each of your domain controller hosts.
Task 5. Create a WEF Group Policy that applies to every Windows server you want to configure as a WEF
2. In the Group Policy Management window, navigate to Domains → your domain name → Group Policy Object, right-click and select New.
3. In the New GPO window, enter your group policy Name: as Windows Event Forwarding, and click OK.
4. Navigate to Domains → your domain name → Group Policy Objects → Windows Event Forwarding, right-click and select Edit.
1. Select Computer Configuration → Policies → Windows Settings → Security Settings → System Services, and in the view panel locate and
double-click Windows Remote Management (WS-Management).
2. Mark the Define this policy setting checkbox, select Automatic, and then click Apply and OK.
At a minimum for your WEC configuration, you must enable logging of the same events that you have configured to be collected in your WEC
configuration on your domain controller. Otherwise, you will not be able to view these events as the WEC only controls querying not logging. For
example, if you have configured authentication events to be collected by your WEC using an authentication protocol, such as Kerberos, you
should ensure all relevant audit events for authentication are configured on your domain controller. In addition, you should ensure that all relevant
audit events that you want collected, such as the success and failure of account logins for Windows Event ID 4625, are properly configured,
particularly those to which you want Cortex XDR to apply grouping and analytics inspection.
Example 40.
Here is an example of how to configure the WEC to collect authentication events using Kerberos as the authentication protocol to enable the
collection of Broker VM supported Kerberos events, Kerberos pre-authentication, authentication, request, and renewal tickets.
1. Select Computer Configuration → Policies → Windows Settings → Security Settings → Advanced Audit Policy Configuration → Audit Policies
→ Account Logon.
2. In the view pane, right-click Audit Kerberos Authentication Service and select Properties. In the Audit Kerberos Authentication Service
window, mark Configure the following audit events:, and click Success and Failure followed by Apply and OK.
Navigate to Computer Configuration → Policies → Administrative Templates: Policy definitions → Windows Components → Event Forwarding, right-click
Configure target Subscription Manager and select Edit.
b. In the Options section, select Show and in the Show Contents window, paste the Subscription Manager URL you copied from the Cortex XDR console, and then click OK.
Select Computer Configuration → Preferences → Control Panel Settings → Local Users and Groups, right-click and select New → Local Group.
b. In the Members section, click Add, enter Network Service in the Name field, and then click OK.
You must type out the name; do not select it using the Browse button.
c. Click Apply and OK to save your changes, and close the Group Policy Management Editor window.
If Windows Firewall is enabled on your event forwarders, you will have to define an outbound rule to enable the WEF to reach port 5986 on the WEC.
In the Group Policy Management window, select Computer Configuration → Policies → Windows Settings → Security Settings → Windows Firewall with
Advanced Security → Outbound Rules, right-click and select New Rule.
b. Protocols and Ports: Select TCP and in the Specific Remote Ports field enter 5986 followed by Next.
d. Profile: Select Domain and disable Private and Public followed by Next.
Link the policy to the OU or the group of Windows servers you would like to configure as event forwarders. In the following flow, the domain controllers are configured as event forwarders.
1. Select Group Policy Management → <your domain name> → Domain Controllers, right-click and select Link an existing GPO....
2. In the Select GPO window, click Windows Event Forwarding followed by OK.
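The confirmation message in the next step is produced by a Group Policy refresh. One way to trigger a refresh on an event forwarder is the standard gpupdate command (shown here only as a suggestion):
PS C:\> gpupdate /force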
Verify that the "Computer Policy update has completed successfully. User Policy update has completed successfully." confirmation message is displayed.
After the Windows Event Collector has been activated in the Cortex XDR Management Console, left-click the WEC connection in the APPS column to display
the Windows Event Collector settings, and select:
To view metrics about the Windows Event Collector, left-click the WEC connection in the APPS field for your Broker VM, and you'll see the following metrics:
Logs Received and Logs Sent: Number of logs received and sent by the applet per second over the last 24 hours. If the number of incoming logs
received is larger than the number of logs sent, it could indicate a connectivity issue.
Resources: Displays the amount of CPU, Memory, and Disk space the applet is using.
Abstract
Learn more about activating the Windows Event Collector on Windows Core OS to connect with the Broker VM.
After you have configured and registered your Broker VM, you can activate your Windows Event Collector application on Windows Core OS (WCOS). WCOS is
a stripped-down, lightweight version of Windows that can be adapted to run on a wide variety of devices with minimal work compared to the previous way
explained in Activate Windows Event Collector.
The Windows Event Collector (WEC) runs on the Broker VM collecting event logs from Windows Servers, including Domain Controllers (DCs). The Windows
Event Collector can be deployed in multiple setups, and can be connected directly to multiple event generators (DCs or Windows Servers) or routed using one
or more Windows Event Collectors. Behind each Windows event collector there may be multiple generating sources.
To enable the collection of the event logs, you need to configure and establish trust between the Windows Event Forwarding (WEF) collectors and the WEC.
Establishing trust between the WEFs and the WEC is achieved by mutual authentication over TLS using server and client certificates. The WEF, a WinRM
plugin, runs under the Network Service account. Therefore, you need to provide the WEFs with the relevant certificates and grant the account access
permissions to the private key used for client authentication, for example, authenticate with WEC.
Ensure you meet the following prerequisites before activating the Windows Event Collector applet on Windows Core:
You must configure different settings related to the FQDN where the instructions differ depending on whether you are configuring a standalone Broker
VM or High Availability (HA) cluster.
Standalone broker
An FQDN must be configured for the standalone broker as configured in your local DNS server. Therefore, the Broker VM is registered in the DNS, its
FQDN is resolvable from the events forwarder (Windows server), and the Broker VM FQDN is configured. For more information, see Edit Broker VM
Configuration.
HA cluster
An FQDN must be configured in the cluster settings as configured in your local DNS server, which points to a Load Balancer. For more information, see
Configure High Availability Cluster.
After ingestion, Cortex XDR normalizes and saves the Windows event logs in the dataset xdr_data. The normalized logs are also saved in a unified format in
microsoft_windows_raw. This enables you to search the data using XQL queries, build correlation rules, and generate dashboards based on the data.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click Add → Windows Event Collector.
3. In the Activate Windows Event Collector window, define the Collected Events to configure the events collected by the applet. This lists event sources
from which you want to collect events.
Field Description
Source: Select from the pre-populated list with the most common event sources on Windows Servers. The event source is the name of the software that logs the events.
A source provider can only appear once in your list. When selecting event sources, depending on the type of event you want to forward, ensure the event source is enabled, for example auditing security events. If the source is not enabled, the source configuration in the given row will fail.
Minimal TLS Version: Select either 1.0 or 1.2 (default) as the minimum TLS version allowed. Verify that all Windows event forwarders support the minimum TLS version you define.
Example 41.
To forward all the Windows Event Collector events to the Broker VM, define as follows:
Source: ForwardedEvents
By default, Cortex XDR collects Palo Alto Networks predefined Security events that are used by the Cortex XDR detectors. Removing the Security
collector interferes with the Cortex XDR detection functionality. Restore to Default to reinstate the Security event collection.
4. Click Activate. After a successful activation, the APPS field displays WEC with a green dot indicating a successful connection.
1. In the APPS column, left-click the WEC connection to display the Windows Event Collector settings, and select Configure.
2. In the Windows Event Forwarder Configuration window, perform the following tasks:
a. In the Subscription Manager URL field, click (copy). This will be used when you configure the subscription manager in the GPO (Group Policy Object) on your domain controller.
b. Enter a password in the Define Client Certificate Export Password field to be used to secure the downloaded WEF certificate that establishes the
connection between your DC/WEF and the WEC. You will need this password when the certificate is imported to the events forwarder.
To view your Windows Event Forwarding configuration details at any time, select your Broker VM, right-click and navigate to Windows Event
Collector → Configure.
Cortex XDR monitors the certificate and triggers a Certificate Expiration notification 30 days prior to the expiration date. The notification is sent daily
specifying the number of days left on the certificate, or if the certificate has already expired.
PowerShell
2. Copy the PFX file that you downloaded to the local Core machine in one of the following ways:
(Recommended) If you're able to RDP to your server, open Notepad, and select File → Open to copy and paste files from your local machine
directly to the server. If you have any local drives mapped through the RDP options, the local drives are also displayed. We recommend this
method as it's the simplest.
If you have enabled WinRM for remote PowerShell execution, you can copy the file over a PowerShell remoting session using this command:
Copy-Item -Path <path to PFX certificate file> -Destination '<temporary file path>' -ToSession $session
Example 42.
$session = New-PSSession -ComputerName SERVER1
Use SSH on server core. This includes enabling SSH on server core and using winscp to drag and drop the PFX file.
Use SMB to open the c$ file share on the server, for example \\server1\c$. You can only use this option if you are an administrator and the firewall on your network isn't set to block file sharing.
You can also launch PowerShell and run the following command to copy a file from your local computer to the remote server using SMB:
Copy-Item -Path <path to PFX certificate file> -Destination '\\<computer name>\c$\<path to PFX file>'
Example 43.
Copy-Item -Path C:\Downloads\forwarder.wec.paloaltonetworks.com.pfx -Destination '\\windows-core-server\c$\forwarder.wec.paloaltonetworks.com.pfx'
Example 44.
certutil -f -importpfx '.\forwarder.wec.paloaltonetworks.com.pfx'
You will need to enter the Client Certificate Export Password you defined in the Cortex XDR console.
Ensure the client certificate appears in "My" (Personal) store by running the following command:
certutil -store My
Ensure the CA appears in Trusted Root Certification Authorities by running the following command:
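For example, certutil can list the Trusted Root Certification Authorities store directly (standard Windows command, shown here as a suggestion); verify that ca.wec.paloaltonetworks.com appears in the output:
certutil -store Root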
a. Retrieve the Thumbprint of the forwarder.wec.paloaltonetworks.com.pfx certificate by running the following script:
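A minimal PowerShell sketch for locating the thumbprint, assuming the client certificate was imported into the LocalMachine Personal (My) store as in the previous step:
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*forwarder.wec.paloaltonetworks.com*" } | Select-Object Subject, Thumbprint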
b. Grant NT AUTHORITY\NETWORK SERVICE read permissions by running the following script, with $thumbprint set to the value you copied in the previous step (replace <Thumbprint retrieved value>).
# Set $thumbprint to the thumbprint value retrieved in the previous step
$thumbprint = "<Thumbprint retrieved value>"
$account = "NT AUTHORITY\NETWORK SERVICE"

# Load the client certificate from the LocalMachine Personal (My) store (adjust the path if your certificate is stored elsewhere)
$cert = Get-Item "Cert:\LocalMachine\My\$thumbprint"

#Create new CSP object based on existing certificate provider and key name
#Note: Ensure this command is pasted on a single row and doesn't break into multiple rows.
#Otherwise, the command will fail with errors.
$csp = New-Object System.Security.Cryptography.CspParameters($cert.PrivateKey.CspKeyContainerInfo.ProviderType, $cert.PrivateKey.CspKeyContainerInfo.ProviderName, $cert.PrivateKey.CspKeyContainerInfo.KeyContainerName)

# Reuse the existing machine key and its current security descriptor
$csp.Flags = "UseExistingKey","UseMachineKeyStore"
$csp.CryptoKeySecurity = $cert.PrivateKey.CspKeyContainerInfo.CryptoKeySecurity
$csp.KeyNumber = $cert.PrivateKey.CspKeyContainerInfo.KeyNumber

# Create new access rule - could use parameters for permissions, but only GenericRead is needed
$access = New-Object System.Security.AccessControl.CryptoKeyAccessRule($account,"GenericRead","Allow")

# Add access rule to CSP object
$csp.CryptoKeySecurity.AddAccessRule($access)

#Create new CryptoServiceProvider object which updates Key with CSP information created/modified above
$rsa2 = New-Object System.Security.Cryptography.RSACryptoServiceProvider($csp)
c. After the script runs, validate the permissions are now set correctly.
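One possible way to spot-check the result is to inspect the ACL of the key container file. This is only a sketch; it assumes $cert from the previous script and a CSP-based machine key stored under ProgramData:
# Locate the private key container file and list access rules granted to NETWORK SERVICE
$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
$keyPath = Join-Path $env:ProgramData "Microsoft\Crypto\RSA\MachineKeys\$keyName"
(Get-Acl $keyPath).Access | Where-Object { $_.IdentityReference -like "*NETWORK SERVICE*" }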
Task 4. Add the Network Service account to the domain controller Event Log Readers group.
1. To enable events forwarders to forward events, the Network Service account must be a member of the Active Directory Event Log Readers group. In
PowerShell, execute the following command on the domain controller that is acting as the event forwarder:
PS C:\> net localgroup "Event Log Readers" "NT Authority\Network Service" /add
Example 45.
PS C:\Users\Administrator> wevtutil gl security
name: security
enabled: true
type: Admin
owningPublisher:
isolation: Custom
channelAccess: O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x5;;;BA)(A;;0x1;;;S-1-5-32-573)
logging:
logFileName: %SystemRoot%\System32\Winevt\Logs\security.evtx
retention: false
autoBackup: false
maxSize: 134217728
publishing:
fileMax: 1
Example 46.
PS C:\Users\Administrator> wevtutil sl security "/ca:O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x5;;;BA)(A;;0x1;;;S-1-5-32-573)(A;;0x1;;;S-1-5-20)"
Make sure you grant access on each of your domain controller hosts.
As a Group Policy Management Console is not available on Core servers, it's not possible to fully edit a Group Policy Object (GPO) either with PowerShell or using a web solution. As a result, follow this alternative method, which configures the group policy remotely from another Windows DC.
1. Use any DC that has the Group Policy Management Console available in the same domain as the Core server, and verify the connection between the
servers with a simple ping.
Example 47.
gpmc.msc /gpcomputer: WIN-SI2SVDOKIMV.ENV21.LOCAL
4. In the Group Policy Management window, navigate to Domains → your domain name → Group Policy Object, right-click and select New.
5. In the New GPO window, enter your group policy Name: as Windows Event Forwarding, and click OK.
6. Navigate to Domains → your domain name → Group Policy Objects → Windows Event Forwarding, right-click and select Edit.
1. Select Computer Configuration → Policies → Windows Settings → Security Settings → System Services, and in the view panel locate and
double-click Windows Remote Management (WS-Management).
2. Mark the Define this policy setting checkbox, select Automatic, and then click Apply and OK.
At a minimum for your WEC configuration, you must enable logging of the same events that you have configured to be collected in your WEC
configuration on your domain controller. Otherwise, you will not be able to view these events as the WEC only controls querying not logging. For
example, if you have configured authentication events to be collected by your WEC using an authentication protocol, such as Kerberos, you
should ensure all relevant audit events for authentication are configured on your domain controller. In addition, you should ensure that all relevant
audit events that you want collected, such as the success and failure of account logins for Windows Event ID 4625, are properly configured,
particularly those to which you want Cortex XDR to apply grouping and analytics inspection.
Example 48.
Here is an example of how to configure the WEC to collect authentication events using Kerberos as the authentication protocol to enable the
collection of Broker VM supported Kerberos events, Kerberos pre-authentication, authentication, request, and renewal tickets.
1. Select Computer Configuration → Policies → Windows Settings → Security Settings → Advanced Audit Policy Configuration → Audit Policies
→ Account Logon.
2. In the view pane, right-click Audit Kerberos Authentication Service and select Properties. In the Audit Kerberos Authentication Service
window, mark Configure the following audit events:, and click Success and Failure followed by Apply and OK.
Navigate to Computer Configuration → Policies → Administrative Templates: Policy definitions → Windows Components → Event Forwarding, right-click
Configure target Subscription Manager and select Edit.
b. In the Options section, select Show and in the Show Contents window, paste the Subscription Manager URL you copied from the Cortex XDR console, and then click OK.
Select Computer Configuration → Preferences → Control Panel Settings → Local Users and Groups, right-click and select New → Local Group.
b. In the Members section, click Add, enter Network Service in the Name field, and then click OK.
You must type out the name; do not select it using the Browse button.
c. Click Apply and OK to save your changes, and close the Group Policy Management Editor window.
If Windows Firewall is enabled on your event forwarders, you will have to define an outbound rule to enable the WEF to reach port 5986 on the WEC.
In the Group Policy Management window, select Computer Configuration → Policies → Windows Settings → Security Settings → Windows Firewall with
Advanced Security → Outbound Rules, right-click and select New Rule.
b. Protocols and Ports: Select TCP and in the Specific Remote Ports field enter 5986 followed by Next.
d. Profile: Select Domain and disable Private and Public followed by Next.
Link the policy to the OU or the group of Windows servers you would like to configure as event forwarders. In the following flow, the domain controllers are configured as event forwarders.
1. Select Group Policy Management → <your domain name> → Domain Controllers, right-click and select Link an existing GPO....
2. In the Select GPO window, click Windows Event Forwarding followed by OK.
Verify that the "Computer Policy update has completed successfully. User Policy update has completed successfully." confirmation message is displayed.
After the Windows Event Collector has been activated in the Cortex XDR Management Console, left-click the WEC connection in the APPS column to display
the Windows Event Collector settings, and select:
To view metrics about the Windows Event Collector, left-click the WEC connection in the APPS field for your Broker VM, and you'll see the following metrics:
Logs Received and Logs Sent: Number of logs received and sent by the applet per second over the last 24 hours. If the number of incoming logs
received is larger than the number of logs sent, it could indicate a connectivity issue.
Resources: Displays the amount of CPU, Memory, and Disk space the applet is using.
Abstract
Renewing your WEC certificates in Cortex XDR includes renewing your Windows Event Forwarding (WEF) client certificate and your WEC server certificate.
You must install the WEF certificate on every Windows server, whether a Domain Controller (DC) or not, for the WEFs that are supposed to forward logs to the
Windows Event Collector applet on the Broker VM.
After you receive a notification for renewing your WEC CA certificate, we recommend that you do not add any new WEF clients until the WEC certificate renewal process is complete. Events from any WEF clients that are added in the meantime will not be collected by the server until the WEC certificates are renewed.
In addition, Cortex XDR manages the renewal of your WEC certificates by implementing the following time limits:
The WEC CA certificate is issued for an extended period of time, up to a maximum of 20 years.
The Broker VM applet includes an automatic renewal mechanism for the WEC server certificate, which has a lifespan of 12 months.
After renewal, the WEC client certificate is issued with a lifespan of 5 years.
On the Brokers tab, find the Broker VM, and in the APPS column, left-click the WEC connection to display the Windows Event Collector settings,
and select Configure.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click the WEC connection to display the Windows Event Collector settings,
and select Configure.
3. In the Windows Event Forwarder Configuration window, perform the following tasks:
a. In the Subscription Manager URL field, click (copy) . This will be used when you configure the subscription manager in the GPO (Global Policy
Object) on your domain controller.
b. Enter a password in the Define Client Certificate Export Password field to be used to secure the downloaded WEF certificate that establishes the
connection between your DC/WEF and the WEC. You will need this password when the certificate is imported to the events forwarder.
You must install the WEF certificate on every Windows Server, whether DC or not, for the WEFs that are supposed to forward logs to the Windows Event
Collector applet on the Broker VM.
a. Locate the PFX file you downloaded from the Cortex XDR console and double-click to open the Certificate Import Wizard.
2. Verify the File name field displays the PFX certificate file you downloaded and click Next.
3. In the Passwords field, enter the Client Certificate Export Password you defined in the Cortex XDR console followed by Next.
4. Select Automatically select the certificate store based on the type of certificate, and then click Next and Finish.
d. In the file explorer, navigate to Certificates and verify the following for each of the folders:
In the Trusted Root Certification Authorities → Certificates folder, ensure the CA ca.wec.paloaltonetworks.com is displayed.
You might see more than one ca.wec.paloaltonetworks.com and forwarder.wec.paloaltonetworks.com file from a previous installation in the directory, so select the file with the latest Expiration Date. You can verify that you are using the correct certificate:
To verify that the client certificate in the Personal → Certificates folder is related to the CA, select your forwarder.wec.paloaltonetworks.com file and, from the Certification Path tab, double-click ca.wec.paloaltonetworks.com. In the Details tab, set Show to Properties only, and verify that the Thumbprint matches the ca.wec.paloaltonetworks.com file Thumbprint.
For the Trusted Root Certificate (that is, the CA certificate), you can verify that the Thumbprint of your ca.wec.paloaltonetworks.com file matches the Subscription Manager URL by double-clicking the file and verifying the Thumbprint from the Details tab.
f. Right-click the certificate and navigate to All tasks → Manage Private Keys.
g. In the Permissions window, select Add, and in the Enter the object name section, enter NETWORK SERVICE, and then click Check Names to verify the object name. The object name is displayed with an underline when valid. Then click OK.
h. Click OK, verify the Group or user names that are displayed, and then click Apply Permissions for private keys.
a. Navigate to Computer Configuration → Policies → Administrative Templates: Policy definitions → Windows Components → Event Forwarding,
right-click Configure target Subscription Manager and select Edit.
b. In the Options section, select Show and in the Show Contents window, paste the Subscription Manager URL you copied from the Cortex XDR console, and then click OK.
Ensure that you have completed the WEF certificate renewal process for ALL clients in your environment. Otherwise, events from the WEFs on which you did not install the new client certificate will not be collected by the WEC.
You are approaching the WEC server CA certificate expiration date, which is 2 years after the Windows Event Collector applet activation, and receive a
notification in the Cortex XDR console.
On the Clusters tab, find the Broker VM, and in the APPS column, left-click the WEC connection to display the Windows Event Collector settings,
and select Renew WEC Server Certificate.
3. Click Renew.
Once Cortex XDR renews the WEC server certificate, the status of the WEC in the APPS field on the Broker VMs page is Connected, indicating the applet is running. In addition, the health status of the Windows Event Collector applet is now green instead of yellow, and the warning message that appeared when you hovered over the health status no longer appears. Your WEC server certificate is issued with a lifespan of 12 months.
We also suggest that you run the following XQL query to verify that your event logs are being captured:
dataset = xdr_data
| filter _product = "Windows"
| fields _vendor,_product,action_evtlog_level,action_evtlog_event_id
| sort desc _time
| limit 20
If this query does not display results with a timestamp from after the renewal process, it could indicate that the renewal process is not complete, so wait a
few minutes before running another query. If you are still having a problem, contact Technical Support.
Abstract
Learn more about managing your Broker VMs from the management console.
After you configure the Broker VMs, you can manage these brokers from the Cortex XDR management console in the Broker VMs page.
When managing a Broker VM, the options differ for a standalone Broker VM versus a Broker VM node that is added to a high availability (HA) cluster. Certain
configuration options that are only relevant for a Broker VM cluster node, such as Remove from Cluster, are only displayed when the Broker VM is a cluster
peer.
Maintenance Releases
Cortex XDR updates and enhances the Broker VM automatically through maintenance releases. The Broker VM version release process uses several security
measures and tools to ensure that every released version is highly secure. These include the following.
Abstract
Learn more about viewing the details of any particular Broker VM.
In Cortex XDR, select Settings → Configurations → Data Broker → Broker VMs to view detailed information regarding your registered Broker VMs in the Brokers tab.
The Broker VMs table enables you to monitor and manage your Broker VM and applet connectivity status, version management, device details, and usage metrics.
The following table describes both the default fields and additional optional fields that you can add to the Brokers table using the column manager and lists the
fields in alphabetical order.
Read more...
Certain fields are also exposed in the Clusters tab, when a Broker VM node is added to a High Availability (HA) cluster, and each cluster node is expanded to
view the Broker VM nodes table. An asterisk (*) is beside every field that is also included in the Broker VM nodes table for each HA cluster.
Field Description
Device Name: Name of the Broker VM device.
APPS* List of active or inactive applets and the connectivity status for each. The status indicator for each applet can be Black, Red, Orange, Past Version, or Green.
CLUSTER NAME* Indicates the name of the HA cluster that the Broker VM has been added to. For a
standalone Broker VM, which isn't added to any HA cluster, this field is empty.
CPU USAGE* CPU usage of the Broker device in percentage synced every 5 minutes.
CONFIGURATION STATUS* Broker VM configuration status. Status is defined by the following, according to changes made to any of the Broker VM configurations:
up to date: Broker VM configuration changes made through the Cortex XDR console have been applied.
in progress: Broker VM configuration changes made through the Cortex XDR console are being applied.
submitted: Broker VM configuration changes made through the Cortex XDR console have reached the Broker VM and are awaiting implementation.
failed: Broker VM configuration changes made through the Cortex XDR console have failed. You need to open a Palo Alto Networks support ticket.
DISK USAGE* Disk usage out of the total allocated for data caching in the Broker VM, as a percentage. The value in brackets displays the usage in GB out of the total disk size in GB.
A notification is added to the Notification Center whenever the disk space is low and whenever the disk size is increased.
EXTERNAL INTERFACE The IP interface the Broker VM is using to communicate with the server.
For AWS and Azure cloud environments, the field displays the Internal IP value.
LAST SEEN Indicates when the Broker VM was last seen on the network.
MEMORY USAGE* Memory usage of the Broker VM in percentage synced every 5 minutes.
STATUS* Connection status of the Broker VM. Status is defined by either Connected or
Disconnected.
Disconnected Broker VMs do not display CPU Usage, Memory Usage, and Disk
Usage information.
Notifications about the Broker VM losing connectivity to Cortex XDR appear in the
Notification Center.
VERSION* Version number of the Broker VM. If the status indicator is not green, then the
Broker VM is not running the latest version.
Notifications about the available new Broker VM version appear in the Notification
Center.
Abstract
After configuring and registering your Broker VM, you can edit existing configurations and define additional settings in the Broker VMs page in the Brokers tab.
When you have a high availability (HA) cluster configured, you can also edit the configuration of any Broker VM node in the Clusters tab, from the Broker VMs table under the cluster.
2. In the Broker VMs table, locate your Broker VM, right-click, and select Configure.
For all Broker VM nodes added to a HA cluster, you can also Configure the Broker VM nodes from the Clusters tab.
Edit the existing Network Interfaces, Proxy Server, NTP Server, and SSH Access configurations.
Change the name of your Broker VM device name by selecting the pencil icon. The new name will appear in the Brokers table.
FQDN
Set your Broker VM FQDN as it will be defined in your Domain Name System (DNS). This enables connection between the WEF and WEC, acting as the
subscription manager. The Broker VM FQDN settings affect the WEC and Agent Installer and Content Caching.
Specify a network subnet to avoid the Broker VM's internal Docker networks colliding with your internal network. By default, the Network Subnet is set to 172.17.0.1/16.
For Broker VM version 9.0 and lower, Cortex XDR accepts only 172.17.0.0/16.
Auto Upgrade
Enable or Disable automatic upgrade of the Broker VM. By default, auto upgrade is enabled at Any time for all 7 days of the week, but you can also set the
Days in Week and Specific time for the automatic upgrades. If you disable auto-upgrade, new features and improvements will require manual upgrade.
Monitoring
Enable or disable local monitoring of the Broker VM usage statistics in Prometheus metrics format, allowing you to tap in and export data by navigating to http://<broker_vm_address>:9100/metrics/. By default, monitoring of your Broker VM is disabled. For more information, with an example of how to set up Prometheus and Grafana to monitor the Broker VM, see Monitor Broker VM using Prometheus.
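For example, once monitoring is enabled you can spot-check the metrics endpoint from any host that can reach the Broker VM (the address is a placeholder):
curl http://<broker_vm_address>:9100/metrics/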
Enable or disable Palo Alto Networks support team SSH access by using a Cortex XDR token.
Enabling SSH access allows the Palo Alto Networks support team, not the customer, to connect to the Broker VM remotely with the generated password. If you use SSL
decryption in your firewalls, you need to add a trusted self-signed CA certificate on the Broker VM to prevent any difficulties with SSL decryption. For example,
when configuring Palo Alto Networks NGFW to decrypt SSL using a self-signed certificate, you need to ensure the Broker VM can validate a self-signed CA by
uploading the cert_ssl-decrypt.crt file on the Broker VM.
Make sure you save the password before closing the window. The only way to re-generate a password is to disable SSH access and re-enable it.
Broker VM 14.0.42 and later
Customize the login banner displayed when logging into SSH sessions on the Broker VM by overwriting the default welcome message with a new one in the Welcome Message field. When the field is empty, the default message is used.
Broker UI Password
Reset your current Broker VM Web UI password. Define and Confirm your new password. Password must be at least 8 characters.
(Optional) SSL Server Certificate section (Requires Broker VM 10.1.9 and later)
Upload your signed server certificate and key to establish a validated secure SSL connection between your endpoints and the Broker VM. When you configure
the server certificate and the key files in the tenant UI, Cortex XDR automatically updates them in the Broker VM UI, even when the Broker VM UI is disabled.
Cortex XDR validates that the certificate and key match, but does not validate the Certificate Authority (CA).
Abstract
Learn more about increasing the storage allocated for data caching in the Broker VM.
The storage allocated for data caching in the Broker VM is fixed at around 346.4 GB using a Logical Volume Manager (LVM). You can increase the disk space
allocated to attain better resilience during network and connectivity issues by adding a new disk. The disk needs to be added manually to an applicable
hypervisor that your broker supports, so that the Broker VM automatically detects the physical disk and allows you to connect to it. Extending the existing disk
is not supported.
When allocating storage for data caching, ensure you are aware of the following:
You must allocate the entire disk as opposed to portions of the disk.
You can connect multiple disks to increase the data caching space according to your requirements.
Once a disk is connected, it's not possible to dismiss a disk that has already been allocated, or to reduce the disk space of the data caching.
This operation is irreversible and makes the disk an integral part of the broker; disconnecting the disk will result in errors and data loss.
1. Gracefully shut down the applicable Broker VM in the hypervisor to manually add a disk.
2. Add a disk manually through the hypervisor portal. This step involves accessing the portal and attaching a new disk to the VM.
Follow your hypervisor documentation to understand how to add a persistent disk storage to your VM.
4. In the Broker VMs table, locate your Broker VM, right-click, and select Configure.
5. Scroll down to the Storage section, verify that your disk is detected with a new line that reads New disk detected with the correct disk name and disk
size, and click Add to data caching space.
If your disk is not listed and you didn't shut down your Broker VM in your hypervisor before manually adding a disk to the VM, you'll need to reboot the Broker VM before the disk details are detected by the Broker VM. This can be performed either in the hypervisor or directly in the Broker VMs page.
6. In the ARE YOU SURE? dialog box that is displayed, confirm that you want to add the new disk to the broker's data caching space and are aware of all the ramifications by clicking Yes, add.
Once completed, a notification is added to the Notification Center indicating whether the disk size was increased successfully. If not, the notification includes the errors encountered during the process.
Abstract
You can enable local monitoring of the Broker VM to provide usage statistics in a Prometheus metrics format. You can tap in and export data by navigating to
http://<broker_vm_address>:9100/metrics/. By default, monitoring is disabled.
Prerequisite
To monitor the Broker VM using Prometheus, ensure that you enable monitoring on the Broker VM. This is performed after configuring and registering your
Broker VM, when you can edit existing configurations and define additional settings in the Broker VMs page.
2. In the Broker VMs table, locate your Broker VM, right-click, and select Configure.
For all Broker VM nodes added to a HA cluster, you can also Configure the Broker VM nodes from the Clusters tab.
3. In the Broker VM Configurations page, select Monitoring from the left pane.
6. Click Save.
Below is an example of how to set up Prometheus and Grafana to monitor the Broker VM. This example uses Docker Compose on an Ubuntu machine to monitor the CPU usage.
2. Install Docker:
vim docker-compose.yml
version: '3.8'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    # Assumes the prometheus.yml configuration file created in the next step is in the same directory
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
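The compose file above only defines Prometheus. Because the later steps in this example log in to Grafana with the default admin/admin credentials, you would also need a Grafana service; a minimal sketch to append under services: could look like this:
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"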
You need to configure Prometheus to scrape the Broker VM metrics by creating a Prometheus configuration file.
1. Create a Prometheus configuration file named prometheus.yml in the same directory as the docker-compose.yml file that you created above.
vim prometheus.yml
global:
  scrape_interval: 15s
  scrape_timeout: 10s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['<your server IP address>:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['<Broker VM IP address>:9100']
1. In the terminal, run the following command from the project directory:
docker-compose up -d
Username: admin
Password: admin
You can now create dashboards in Grafana to visualize the data from Prometheus.
3. Add a panel to the dashboard and configure the dashboard to display the Prometheus metrics that you want.
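For example, to chart the CPU usage mentioned above, you could use a PromQL expression along these lines; this assumes the Broker VM exposes node_exporter-style metrics such as node_cpu_seconds_total on port 9100:
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)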
Abstract
Learn more about collecting logs from a Broker VM to review them as part of an investigation.
Cortex XDR enables you to collect your Broker VM logs directly from the Cortex XDR management console.
You can collect logs by either regenerating the most up-to-date logs and downloading them once they are ready, or downloading the current logs from the last
creation date reflected in the TIMESTAMP.
1. Select Settings → Configurations → Data Broker → Broker VMs to view the Broker VMs table in the Brokers tab.
2. Locate your Broker VM, right-click and select either Generate New Logs or Download Logs (<TIMESTAMP>).
The Download Logs (<TIMESTAMP>) is only displayed when you’ve downloaded your logs previously using Generate New Logs.
Logs are generated automatically, but can take up to a few minutes depending on the size of the logs.
Abstract
Learn more about upgrading the Broker VM from the Cortex XDR management console.
For all brokers that were deployed with a Broker VM image, downloaded prior to July 9th, 2023 (installed with Ubuntu 18.04 or earlier), the Broker VM must be
reinstalled with a new image (installed with Ubuntu 20.04 or later) before upgrading to the latest version. For more information, see Migrating to a New Broker
VM Image.
2. In either the Brokers or Clusters tab, locate your Broker VM, right-click, and select Upgrade Broker version.
After a Broker VM upgrade, your broker may require a reboot to finish installing important updates. A notification about this will be sent to your Cortex
XDR console Notification Center.
Abstract
This option can only be used on Broker VMs with version 20.0 and later, and is only suitable for importing a configuration of brokers in the same version, or from
a broker in an older version to a broker in a newer version.
Importing Broker VM configurations allows you to copy the configuration of one Broker VM, including applet settings, to another. The import overrides the Broker VM and applet settings in the target Broker VM.
1. To replace the Broker VM configuration, right-click the Broker VM and select Import Configuration.
2. Select the Broker VM that has the configuration that you want to import.
3. (Optional) After the import is complete and the new configurations are applied to the target Broker VM, you can choose to shut down the original Broker VM (default configuration). This step ensures that there are no conflicts in data collection and applet operation.
5. Click Import.
After a successful import, the new configurations are immediately applied to the target Broker VM.
Abstract
Cortex XDR enables you to connect remotely to a Broker VM directly from Cortex XDR.
1. In Cortex XDR, select Settings → Configurations → Data Broker → Broker VMs table.
2. Locate the Broker VM you want to connect to, right-click and select Open Live Terminal.
Cortex XDR opens a CLI window where you can perform the following commands:
Logs
Broker VM logs are located in the /data/logs/ folder and contain the applet name in the file name.
Example 49.
Ubuntu commands
Example 50.
route or ifconfig -a
Sudo commands
Broker VM supports the commands listed in the following table. All the commands are located in the /home/admin/sbin folder.
Cortex XDR requires you to use the following values when running commands:
The only applet that is available with a Cortex XDR Prevent license is the Local Agent Settings. The rest of the applets are only available with a Cortex
XDR Pro license.
Applet Names
Pathfinder: odysseus
Services
Read more...
applets_status: Check the status of one or more applets. Example: sudo ./applets_status wec
hostnamectl: Check and update the machine hostname on a Linux operating system. Example: sudo ./hostnamectl set-hostname <new_host_name>
services_restart: Restarts one or more services. OS services are not supported. Example: sudo ./services_restart cloud_sync
services_status: Check the status of one or more services. Example: sudo ./services_status cloud_sync
squid_tail: Display the Proxy applet Squid log file in real-time. Example: sudo ./squid_tail
Abstract
You can add standalone Broker VMs to a high availability (HA) cluster from either the Brokers tab or Clusters tab.
You can only add a Broker VM to a cluster, when the Broker VM version is 19.0 and later, the STATUS is Connected, and the Broker VM version isn't older than
the cluster version.
Once you add a Broker VM to a cluster, the Broker VM becomes a cluster node and is added to the cluster folder in the Clusters tab. If it is the only peer Broker
VM in the cluster, it is designated as the Primary node; otherwise, it is designated as a standby node.
Brokers tab
2. In the Select Cluster field, choose the cluster that you want this Broker VM to be added to.
Clusters tab
2. In the Select broker field, choose the standalone Broker VM that you want to add to this cluster.
Adding a Broker VM to a cluster overrides all previous Broker VM settings and disables all active applets on this Broker VM. When the Broker VM is
added to a cluster, the cluster configuration and cluster applet settings propagate to the Broker VM. The state of the applets on the Broker VM is
dependent on the applet mode and Broker VM node role in the cluster. When the operation completes, a notification is added to the Notification Center.
Abstract
Learn more about changing the role of the current Primary node in an HA cluster.
You can manually change the role of the current Primary node in a high availability (HA) cluster from both the Brokers tab and Clusters tab of the Broker VMs
page.
There are various reasons for changing the role of the current Primary node to another node in the HA cluster by initiating a manual switchover, for example, to perform maintenance.
The option is only available for a Primary node, and only if there is another available standby node that is connected in the cluster.
2. In either the Brokers tab or Clusters tab, right-click a Primary Broker VM node, and select Switchover.
3. If multiple standby nodes are connected in the cluster, select the node that you want to change to Primary in the Select broker menu. When only one
standby node is configured, skip this step.
4. Click Switchover.
Abstract
Learn more about removing a Broker VM node from a high availability cluster.
You can remove a Broker VM node from a high availability (HA) cluster in either the Brokers tab or Clusters tab of the Broker VMs page. This option is only
available if the Broker VM is currently a member of a cluster.
When a Broker VM node is removed from a HA cluster, it becomes a standalone Broker VM. All its configuration settings, including applet settings, are reset to
default like a newly created Broker VM. If you remove a Primary node, an automatic failover occurs.
You can remove a Broker VM node from a cluster if the current node STATUS is Connected.
2. In either the Brokers tab or Clusters tab, right-click a Broker VM node, and select Remove from Cluster.
When removing the last node in the cluster, all applets in this cluster become Inactive, and the cluster becomes Unavailable.
When the Broker VM receives the new configuration, the Broker VM becomes a standalone Broker VM with settings reset to default.
If you've enabled a Load Balancer Health-Check on the cluster, you need to exclude this Broker VM from your Load Balancer settings.
Abstract
High availability (HA) is a deployment in which at least two Broker VMs are placed in a Broker VM cluster and their configuration is synchronized to prevent a
single point of failure on your network at the hardware and application level. A heartbeat connection between the Broker VM nodes and the Cortex XDR Server
ensures seamless failover if a node fails. Setting up a HA cluster provides redundancy and enables data collection continuity.
Cluster Architecture
The Clusters tab on the Broker VMs page enables you to view your cluster configurations, which displays the associated nodes, node statuses, applets
configured, and applet statuses. You can add as many clusters as you want in a tenant. Each Cortex XDR cluster can include as many nodes as you need.
The cluster operation is fully managed from the tenant, and there is no need to install additional components. There is no need for cluster nodes to
communicate with one another on the network. In each cluster, one Broker VM is designated as the Primary cluster node and the rest of the nodes are
designated as standby nodes. The cluster architecture is dependent on the type of applets configured in the cluster. Applets on cluster nodes run either in the
active/active mode or in the active/passive mode and exhibit different behaviors as detailed in the table below.
With Cortex XDR Prevent, it's only relevant to configure a HA cluster with a Local Agent Settings applet as this is the only applet supported for this product
license. The other applets are collector applets, which are only available in Cortex XDR Pro or Cortex XSIAM.
Read more...
active/active: The applets that operate in the active/active mode run simultaneously on all the nodes in the cluster to achieve High Availability and Load Balancing. Failure of an applet on a particular node causes all traffic to be redistributed to the remaining nodes in the HA cluster. For load balancing, you must install a Load Balancer in your network which will distribute the incoming data between the nodes.
The active/active applets are: Syslog Collector, Netflow Collector, Windows Event Collector, and Local Agent Settings.
active/passive: The applets that operate in the active/passive mode run only on the Primary node designated in the cluster. The other nodes are synchronized and ready to transition from standby to the active Primary node should there be a failover. In this mode, all nodes share the same configuration settings, while only 1 operates at a given time.
The active/passive applets are: Kafka Collector, Network Mapper, CSV Collector, FTP Collector, Files and Folders Collector, and DB Collector.
The Pathfinder applet isn't supported when configuring Broker VMs in HA clusters.
Automatic Failover
In each cluster, whenever there's a failure on the Primary node, Cortex XDR automatically switches to one of the standby nodes, initiates the applets on the
new Primary node, and continues data collection on that node. Any successful or unsuccessful failover attempt displays an alert in the notification area and is
logged in the Management Audit Logs table.
The following conditions can trigger a failover for the Primary node:
Connectivity issues between a Primary node and the Cortex XDR server.
Any failure of one of the internal components, such as MariaDB, Redis, RabbitMQ, or Docker engine.
Manual Switchover
At any time, you can change the role of the current Primary node in the cluster to another node in the HA cluster, for example, to perform maintenance, by
initiating a manual switchover.
Automatic Upgrades
You can configure automatic upgrades within Broker VM HA cluster nodes to update cluster nodes without noticeable down-time or other disruption of the HA
cluster service by implementing the rolling upgrade mechanism. An automatic upgrade is performed in the following order:
Abstract
You can create a High Availability (HA) cluster by either creating a new cluster from scratch and then adding applets and Broker VM nodes to the cluster, or by
creating a new cluster from an existing standalone Broker VM. There is no limit to the number of clusters and nodes that you can add.
There are a number of different ways that you can configure the HA cluster to achieve fault tolerance depending on your system requirements. For example, once a cluster is created from scratch, you can start by configuring the applets that you want the cluster to maintain and then add the Broker VM nodes that will be managed by the cluster to maintain this configuration, or vice versa. When you create a new cluster from an existing Broker VM, the cluster inherits the applets already configured, which can help save time with your cluster configuration.
Guidelines
For the cluster to start working and provide services, you need at least one operational node. Until this node is added, the cluster is unavailable. Once a
node is added, the cluster begins operating, but it's not considered healthy.
For the cluster to be healthy and maintain HA and redundancy, you need at least two working nodes in the cluster.
For active/active applets that require load balancing, you must install a Load Balancer in your network to distribute the incoming data between the
nodes.
Be sure you do the following tasks before creating a cluster from an existing Broker VM:
Since the Pathfinder applet isn't supported when configuring HA clusters, you must ensure Pathfinder is deactivated on the Broker VM.
If the Broker VM is explicitly specified in an Agent Settings profile, which means Cortex XDR agents retrieve release upgrades and content updates
from this Broker VM, you must change the Broker VM's current designated role. To do this, modify the Agent Settings profile by removing the
specific selection of this broker as a Download Source for XDR agents (Endpoints → Policy Management → Prevention → Profiles → Edit Profile →
Download Source → Broker Selection). After you create the cluster for this broker, you can go back to the Agent Settings profile and select the cluster that
you created from this broker to be used as a Download Source for XDR agents.
To create a cluster and then add Broker VMs to the cluster, click Add Cluster.
To create a new cluster from an existing Broker VM in the Brokers tab, right-click a standalone Broker VM, and click Create a Cluster from this Broker.
You can only create a new cluster from an existing Broker VM when the Broker VM version is 19.0 or later and the STATUS is Connected.
The Create a Cluster from this Broker option is only listed if the Broker VM is not already added to a cluster.
Specify the domain name of your Load Balancer FQDN as configured in your local DNS server. The Load Balancer FQDN settings affect the Windows Event
Collector and Local Agent Settings applets.
When creating a cluster from an existing Broker VM and either a WEC or Local Agent Settings applet are enabled in the Broker VM, the Load Balancer FQDN is
mandatory to configure, and is automatically populated based on the Broker VM settings.
Implementing a Load Balancer requires exposing a health check API that is called by the Load Balancer at regular intervals. You can access the health check
page by sending an HTTP request to http[s]://<Broker VM IP>:<port>/health/. A successful HTTP response of 200 OK as the status code
indicates the Broker VM’s readiness to receive logs.
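For example, you can verify readiness from any machine that can reach the Broker VM; the IP address and port below are placeholders for your own values:
curl -k -i https://192.0.2.10:8443/health/
A 200 OK status line in the response indicates that the node is ready to receive logs.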
Disabled/Enabled toggle
When the Protocol is set to HTTPS, you may need to perform a few follow-up steps to establish a validated secure SSL connection with the Broker VM.
If you're using your own Certificate Authority (CA) to sign the certificates, you'll need to place the CA in the client, such as the Load Balancer, and upload
the certificates to the Broker VM.
If you're using a Trusted CA Signed SSL Certificate, you'll only need to upload it to the Broker VM.
If the SSL Server Certificates of the Broker VM are self-signed certificates, no further steps are necessary.
Auto Upgrade options
You can configure automatic upgrades within Broker VM HA cluster nodes to update cluster nodes without noticeable down-time or other disruption of the HA
cluster service by implementing the rolling upgrade mechanism. Setting automatic upgrades includes these parameters:
Auto Upgrade
In an HA cluster configuration, the rolling upgrade process is automatically performed by default whenever a new version of the Broker VM is available.
If you want to upgrade the Broker VM nodes manually, clear the Use Default (Enabled) checkbox, and set Auto Upgrade to Disabled. You can manually
upgrade the Broker VM nodes individually by right-clicking the Broker VM and selecting Upgrade Broker version.
Days In Week
You can configure the days in the week that the rolling upgrades are performed. By default, the upgrades are configured to run every day.
Schedule
You can configure whether the rolling upgrades are performed at any time during the day or at a specific time by setting a time range of at least 4 hours.
Once configured, the rolling upgrades are only performed when the cluster STATUS is Healthy. An automatic upgrade is performed in the following order:
1. The standby nodes are upgraded.
2. The Primary node is switched over to one of the upgraded standby nodes.
3. The previous Primary node is upgraded.
Click Save.
The cluster is now listed in the Clusters tab of the Broker VMs page. What is displayed differs depending on how the cluster was created:
When the cluster is added from scratch, the cluster is listed as an empty folder, and you can start to add Broker VM nodes and applets to this cluster. While the
cluster doesn’t have any peer nodes, the STATUS is Unavailable.
When the cluster is added from an existing Broker VM, the cluster inherits all applet settings from the Broker VM. You can leave the configuration as is or
add/remove additional applets as desired. This node automatically becomes the first node (Primary) in the cluster. You can now add other Broker VM nodes to
this HA cluster. While the cluster contains only one Broker VM node, the STATUS is Warning.
Task 5. Add Broker VMs to your cluster as needed to achieve fault tolerance and high availability
For the cluster to be healthy and maintain HA and redundancy, you need at least two working nodes in the cluster.
Abstract
Learn more about managing your broker VM clusters from the Clusters tab of the Broker VMs page.
After you've configured a cluster, you can manage all your Broker VM clusters from the Clusters tab on the Broker VMs page (Settings → Configurations →
Data Broker → Broker VMs → Clusters).
Abstract
The Clusters tab of the Broker VMs page (Settings → Configurations → Data Broker → Broker VMs) enables you to view detailed information regarding your
High Availability (HA) cluster.
The Clusters table enables you to monitor and manage your cluster nodes and applets, and view statistics.
In addition, when each cluster is expanded, a table is displayed, which enables you to view detailed information regarding the various Broker VM nodes that
are currently added to your cluster. If you haven't added any Broker VM nodes to a particular cluster, the table is empty.
Clusters Table
The following table describes all the fields that are available in the Clusters table. You can hide any field column using the column manager.
Field: Description
CLUSTER NAME: Beside the full name of each cluster, a status indicator is displayed with one of the following colors:
Green: Healthy. The Primary node and all available standby nodes are connected and operating with no warnings, and all activated applets are running
without a problem.
Orange: Warning. The system has detected errors in the cluster, but the applets can still be running. For example, all applets are running normally on the
Primary Broker VM, but no available standby nodes are detected, or the Primary node is operating fine, but an applet failed to start on one of the standby
nodes. The errors must be addressed as soon as possible.
Red: Critical. The system has detected one or more critical errors in the cluster, and nodes are not able to run some applets. For example, an error was
detected in a Primary applet and no standby node is available for failover. All errors must be addressed as soon as possible.
Black: Unavailable. The cluster doesn't have any peer nodes configured.
STATUS: Connection status of the cluster according to the statuses and colors explained in the CLUSTER NAME field above. Unavailable clusters do not
display CPU USAGE, MEMORY USAGE, and DISK USAGE information. Notifications about the cluster losing connection between applets and Broker VM
nodes appear in the Notification Center.
CPU USAGE: Average CPU utilization across all nodes in the cluster, as a percentage.
MEMORY USAGE: Sum of all memory in use out of the sum of the total memory on all nodes in the cluster, as a percentage.
DISK USAGE: Sum of all disk space in use out of the sum of the total disk space on all nodes in the cluster, as a percentage.
APPS: List of active applets and the connectivity status for each. Green: Connected. White: Inactive.
The fields that are available in the Broker VM nodes table for each cluster are similar to many of the fields that are displayed in the table for the Broker VMs in
the Brokers tab. For more information on these fields, see View Broker VM details.
Abstract
After configuring a high availability (HA) cluster, you can always edit the cluster configurations from the Clusters tab of the Broker VMs page.
An HA cluster is always configurable no matter what the status of the cluster or whether it has any Broker VM nodes added.
1. Select Settings → Configurations → Data Broker → Broker VMs, and select the Clusters tab.
2. In the Clusters table, locate the cluster, right-click, and select Configure.
3. In the Cluster Configurations window, you can edit the parameters based on your previous settings. For more information on each of these settings, see
Configure High Availability Cluster.
Abstract
You can add an applet to a high availability (HA) cluster from the Clusters tab of the Brokers VM page.
You can always add an applet to a cluster, even if the cluster status is Unavailable or Error. When an applet is added to a cluster without any Broker VM nodes,
the cluster status is Unavailable and the cluster APPS status displays as Inactive.
1. Select Settings → Configurations → Data Broker → Broker VMs, and select the Clusters tab.
2. In the Clusters table, locate the cluster to which you want to add an applet.
3. You can either right-click the cluster, and select Add App → <name of applet>, or in the APPS column, left-click Add → <name of applet>.
The applet is only available for you to add to the cluster if it hasn't already been added.
With Cortex XDR Prevent, it's only relevant to configure an HA cluster with a Local Agent Settings applet, as this is the only applet supported for this
product license. The other applets are collector applets, which are only available in Cortex XDR Pro or Cortex XSIAM.
The various applets that you can configure are the same as when configuring a standalone Broker VM. For more information on a particular applet
configuration, locate the applet in the Set up Broker VM section in the Cortex XDR Admin Guide.
The applet is listed with a status indicator in the APPS column, where the colors depict the following statuses.
Green: Connected
White: Inactive
Once the applet configuration is changed in a cluster, the changes are automatically applied to the cluster nodes depending on the applet and cluster
node role. For example, if you add the Kafka Collector, which is an "active/passive" applet, the applet is automatically initiated and enters an active state
on the Primary node and is on standby on the standby nodes. If you add the Syslog Collector, an "active/active" applet, the changes automatically
propagate so that the applet is active on all cluster nodes, including the Primary and standby nodes.
Abstract
You can add standalone Broker VMs to a high availability (HA) cluster from either the Brokers tab or Clusters tab.
You can only add a Broker VM to a cluster when the Broker VM version is 19.0 or later, the STATUS is Connected, and the Broker VM version isn't older than
the cluster version.
Brokers tab
2. In the Select Cluster field, choose the cluster that you want this Broker VM to be added to.
Clusters tab
2. In the Select broker field, choose the standalone Broker VM that you want to add to this cluster.
Adding a Broker VM to a cluster overrides all previous Broker VM settings and disables all active applets on this Broker VM. When the Broker VM is
added to a cluster, the cluster configuration and cluster applet settings propagate to the Broker VM. The state of the applets on the Broker VM is
dependent on the applet mode and Broker VM node role in the cluster. When the operation completes, a notification is added to the Notification Center.
Abstract
You can remove a high availability (HA) cluster in the Clusters tab of the Broker VMs page.
When removing a cluster, the cluster is disassembled and the cluster object is deleted. All nodes in the cluster are reverted to standalone Broker VMs
with their settings reset to default, like newly created Broker VMs.
If you've configured load balancing for any "active/active" applets configured, you need to update your Load Balancer configuration settings to stop sending
logs to these Broker VM nodes.
You cannot remove a cluster that is used as a download source from which the Cortex XDR agents retrieve release upgrades and content updates. You'll need
to change the cluster's current designated role before removing a cluster.
3. Follow the instructions in the REMOVE CLUSTER window, which differ depending on the type of cluster you are removing, and Remove the cluster.
Abstract
Learn about the notifications that are relevant to Cortex XDR Broker VMs.
To help you monitor your Broker VM version, connectivity, and high availability clusters, Cortex XDR sends notifications to your Cortex XDR console Notification
Center.
Add Cluster
Applet Activated
Applet configuration
Applet Deactivated
Notifies when the Broker VM is utilizing over 90% of the allocated disk space.
Cluster Configuration
Cluster failover
Notifies when a failover is initiated in the cluster from one Broker VM node to another.
Notifies when a failover completed successfully. The Broker VM is now the Primary node in the cluster.
Notifies when a failover in the cluster completed with errors, and includes the error message.
Notifies when a failover could not be performed in the cluster because there is no available standby node with sufficient redundancy.
Notifies when no available standby Broker VM node is detected in the cluster.
Notifies when critical errors are detected in the cluster and there is no available standby Broker VM node for failover.
Notifies whether the disk space allocated for data caching in the Broker VM has been increased successfully. If not, the notification includes the errors
encountered during the process. For more information on allocating disk space to the Broker VM, see Increase Broker VM storage allocated for data caching.
Notifies after a Broker VM update whether a broker needs a reboot to finish installing important updates.
If the Broker VM Auto Upgrade is disabled, the notification includes a link to the latest release information. It is recommended that you upgrade to the latest
version.
If the Broker VM Auto Upgrade is enabled, 12 hours after the release you are notified of the latest upgrade, or you are notified that the upgrade failed. In
such a case, open a Palo Alto Networks Support Ticket.
For all brokers that were deployed with an old Broker VM image, downloaded prior to July 9th, 2023 (installed with Ubuntu 18.04 or earlier), the Broker VM must
be reinstalled with a new image (installed with Ubuntu 20.04 or later) before upgrading to the latest version. The name of the Broker VM to upgrade is indicated
with a link to the instructions.
For more information on upgrading to a new Broker VM image, see Migrating to a New Broker VM Image.
Remove Cluster
To ensure you stay informed about Broker VM activity, you can also configure notification forwarding to forward your Broker audit logs to an email distribution
list or Syslog server. For more information about the Broker VM audit logs, see Broker VM Activity in the Cortex XDR Administrator Guide.
Abstract
Cortex XDR logs entries for events related to the Broker VM monitored activities. Cortex XDR stores the logs for 365 days. To view the Broker VM audit logs,
select Settings → Management Audit Logs.
You can customize your view of the logs by adding or removing filters to the Management Audit Logs table. You can also filter the page result to narrow down
your search. The following table describes the default and optional fields that you can view in the Cortex XDR Management Audit Logs table:
Certain fields are exposed and hidden by default. An asterisk (*) is beside every field that is exposed by default.
Field Description
Critical
High
Medium
Low
Informational
Field Description
Type* and Sub-Type* Additional classifications of Broker VM logs (Type and Sub-Type):
Broker VMs:
Action on device
Add Cluster
Applet Activated
Applet Configuration
Applet Deactivated
Authentication succeeded
Broker Log
Cluster Configuration
Cluster Failover
Cluster Switchover
Device configuration
Disconnect
Register
Remove Cluster
Remove Device
Rolling Upgrades
Subscription Created
Subscription Deleted
Subscription Edited
Broker API:
Authentication failed
Ingesting logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR provides an XDR Collectors (XDRC) configuration that is dedicated for on-premise data collection on Windows and Linux machines. The XDRC
includes a dedicated installer, a collector upgrade configuration, content updates, and policy management. The XDRC is a data collector that gathers and
processes logs and events from multiple sources. It leverages Elasticsearch Filebeat, a lightweight log shipper, to collect log data from various systems and
applications. Additionally, Winlogbeat gathers Windows event logs, ensuring comprehensive visibility into Windows environments. These components facilitate
centralized analysis, threat detection, and investigation across the Cortex XDR ecosystem.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR logs entries for events related to the XDR Collector monitored activities. Cortex XDR stores the logs for 365 days. To view the XDR Collector audit
logs, select Settings → XDR Collector Audit Logs.
Abstract
Learn about the supported operating systems and requirements for the collector machines used for the Cortex XDR Collectors.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can configure XDR Collectors that are dedicated for on-premise data collection on Windows and Linux machines. The following hardware and software
specifications are required for the collector machines.
Supported operating system versions:
Red Hat Enterprise Linux 6 (6.7 and later)
Ubuntu Server 12, 14, 16, 18, 20, and 22
Oracle Linux 7, 8, and 9
Required Linux package: ca-certificates
Windows 7
Windows 8
Windows 10 (including Education)
Windows 11 (including Enterprise, Education/Home, and IoT Enterprise)
Windows Server 2019 and 2022 (including Datacenter)
Abstract
Depending on your network environment settings, you should enable network access to the Cortex XDR Collectors resources.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To enable access to XDR Collectors components, you must allow access to various Palo Alto Networks resources. If you use the specific Palo Alto Networks
App-IDs indicated in the table, you do not need to explicitly allow access to the resource. A dash (-) indicates there is no App-ID coverage for a resource.
Some of the IP addresses required for access are registered in the United States. As a result, some GeoIP databases do not correctly pinpoint the location in
which IP addresses are used. All customer data is stored in your deployment region regardless of the IP address registration, and data transmission through
any infrastructure is restricted to that region. For considerations, see Plan and prepare.
Throughout this topic, <xdr-tenant> refers to the chosen subdomain of your Cortex XDR tenant and <region> is the region in which your Strata Logging
Service is deployed.
Refer to the following tables for the FQDNs, IP addresses, ports, and App-ID coverage for your deployment.
For IP address ranges in GCP, refer to the following tables for IP address coverage for your deployment.
https://www.gstatic.com/ipranges/goog.json: Refer to this list to look up and allow access to the IP address range subnets.
https://www.gstatic.com/ipranges/cloud.json: Refer to this list to look up and allow access to the IP address ranges associated with your region.
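For example, a minimal lookup sketch, assuming the published JSON exposes prefixes[] entries with scope and ipv4Prefix fields (verify the structure before relying on it) and using a hypothetical region scope:
curl -s https://www.gstatic.com/ipranges/cloud.json | jq -r '.prefixes[] | select(.scope == "europe-west3") | .ipv4Prefix'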
CA (Canada): 34.120.31.199
JP (Japan): 35.241.28.254
SG (Singapore): 34.117.211.129
AU (Australia): 34.120.229.65
DE (Germany): 34.98.68.183
IN (India): 35.186.207.80
CH (Switzerland): 34.111.6.153
PL (Poland): 34.117.240.208
TW (Taiwan): 34.160.28.41
QT (Qatar): 35.190.0.180
FA (France): 34.111.134.57
IL (Israel): 34.111.129.144
ID (Indonesia): 34.111.58.152
ES (Spain): 34.111.188.248
Port: 443
Used for the first request in the registration flow, where the agent passes the distribution ID and obtains the ch-<xdr-tenant>.traps.paloaltonetworks.com of its tenant.
Port: 443
JP (Japan): 34.95.66.187
SG (Singapore): 34.120.142.18
AU (Australia): 34.102.237.151
DE (Germany): 34.107.161.143
IN (India): 34.120.213.188
CH (Switzerland): 34.149.180.250
PL (Poland): 35.190.13.237
TW (Taiwan): 34.149.248.76
QT (Qatar): 34.107.129.254
FA (France): 34.36.155.211
IL (Israel): 34.128.157.130
ID (Indonesia): 34.128.156.84
ES (Spain): 34.120.102.147
Port: 443
CA (Canada): 35.203.82.121
JP (Japan): 34.84.125.129
SG (Singapore): 34.87.83.144
AU (Australia): 35.189.18.208
DE (Germany): 34.107.57.23
IN (India): 35.200.158.164
CH (Switzerland): 34.65.248.119
PL (Poland): 34.116.216.55
TW (Taiwan): 35.234.8.249
QT (Qatar): 34.18.46.240
FA (France): 34.155.222.152
IL (Israel): 34.165.156.139
ID (Indonesia): 34.128.115.238
ES (Spain): 34.175.30.176
Port: 443
The following table lists the required resources for Federal (United States - Government).
FQDN IP Addresses And Port App-ID Coverage Required For XDR Collectors
Used for the first request in the registration flow, where the agent passes the distribution ID and obtains the ch-<xdr-tenant>.traps.paloaltonetworks.com of its tenant.
Port: 443
api-<xdr-tenant>.xdr.federal.paloaltonetworks.com
IP address: 130.211.195.231
Port: 443
App-ID: -
Used for all other requests between the agent and its tenant server, including heartbeat, uploads, action results, and scan reports.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
On the XDR Collectors Administration page, you can view the list of collectors and perform additional tasks such as changing the alias of the collector,
upgrading the collector version, and setting a proxy address and port for the collector.
Abstract
You can configure the Cortex XDR Collector upgrade scheduler and the number of parallel upgrades.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can configure the Cortex XDR Collector upgrade scheduler and the number of parallel upgrades. A maximum of 500 parallel upgrades can be scheduled;
by default, upgrades run on every day of the week at any time of day.
To define the XDR Collector upgrade scheduler and number of parallel upgrades:
Amount of Parallel Upgrades: Specify the number of parallel upgrades, where the maximum number is 500 (default).
Days in Week: Select the specific days in the week that you want the upgrade to occur, where the default is configured as every day in the
week.
Schedule: Select whether you want the upgrade to be at Any time (default) or at a Specific time. When setting a specific time, you can set the
From and To times.
3. Click Save.
Abstract
Learn how to create an XDR Collector installation package for a Windows or Linux collector machine.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To install a Cortex XDR Collector for the first time, you must first create an XDR Collector installation package. After you create and download an installation
package, you can then install it directly on the collector machine or you can use a software deployment tool of your choice to distribute the software to multiple
collector machines.
To install the XDR Collector software, you must use a valid installation package that exists in your XDR Collectors console. If you delete an installation package,
any XDR Collectors installed from this package are not able to register with Cortex XDR.
To move existing XDR Collectors between Cortex XDR managing servers, you need to first uninstall the XDR Collector from the collector machine and then
create a new installation package for the new XDR Collector.
3. Enter a unique Name and an optional Description to identify the installation package.
The package Name must be no more than 100 characters and can contain letters, numbers, hyphens, underscores, commas, and spaces.
4. Select the Platform for which you want to create the installation package as either Windows or Linux.
Cortex XDR prepares your installation package and makes it available in the XDR Collectors Installations page.
When the status of the package displays Completed, right-click the Collector Version row, and click Download.
For a Linux installation, you can Download Linux RPM installer or Download Linux DEB installer (according to your Linux collector machine
distribution), and deploy the installers on the on-premise collector machines using the Linux package manager. Alternatively, you can Download
Linux SH installer and deploy it manually on the Linux collector machine.
Once the applicable installation package is downloaded, you can install the package.
As needed, you can return to the XDR Collectors Installations page to manage your XDR Collectors installation packages. To manage a specific
package, right-click the Collector Version, and select the desired action:
Delete the installation package. Deleting an installation package does not uninstall the XDR Collector software from any on-premise collector
machines.
Since Cortex XDR relies on the installation package ID to approve XDR Collector registration during installation, it is not recommended to delete the
installation package for any active on-premise collector machines. Hiding the installation package removes it from the default list of available
installation packages, which can be useful to eliminate confusion in the XDR Collectors console main view. These hidden installation packages can be viewed
by removing the default filter.
Copy text to clipboard to copy the text from a specific field in the row of an installation package.
Hide installation packages. Using the Hide option provides a quick method to filter out results based on a specific value in the table. You can also
use the filters at the top of the page to build a filter from scratch. To create a persistent filter, save it.
Abstract
Learn about the Cortex XDR Collector installation options on Windows collector machines.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
A standard XDR Collector installation for Windows is intended for standard physical collector machines or persistent virtual collector machines. You can
perform the Windows installation for the XDR Collectors using the MSI or Msiexec.
Abstract
Learn how to install the Cortex XDR Collector on Windows using the MSI.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Use the following workflow to install the XDR Collector using the MSI file.
Before completing this task, ensure that you create and download a Cortex XDR Collector installation package in Cortex XDR.
When the package is executed using the MSI, an installation log is generated in %TEMP%\MSI<Random characters>.log by default.
1. With Administrator level privileges, run the MSI file that you downloaded in Cortex XDR on the collector machine.
2. Click Next.
3. Select I accept the terms in the License Agreement and click Next.
5. Click Yes.
6. After you complete the installation, verify the Cortex XDR Collector can establish a connection.
If the XDR Collector does not connect to Cortex XDR, verify your Internet connection on the collector machine. If the XDR Collector still does not connect,
verify the installation package has not been removed from the Cortex XDR management console.
Abstract
Learn how to install the Cortex XDR Collectors on Windows using the Msiexec.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Msiexec provides full control over the installation process and allows you to install, modify, and perform operations on a Windows Installer from the command
line interface (CLI). You can also use Msiexec to log any issues encountered during installation.
You can also use Msiexec in conjunction with a System Center Configuration Manager (SCCM), Altiris, Group Policy Object (GPO), or other MSI deployment
software to install the XDR Collector on multiple collector machines for the first time.
When you install the XDR Collector with Msiexec, you must install the XDR Collector per-machine and not per-user.
Although Msiexec supports additional options, the XDR Collectors installers support only the options listed here. For example, with Msiexec, the option to install
the software in a non-standard directory is not supported—you must use the default path.
The following parameters apply to the initial installation of the XDR Collector on the collector machine.
Where
LOG_LEVEL: Sets the level of logging for the XDR Collector log (INFO, DEBUG, ERROR, and TRACE).
PROXY_LIST: Proxy address or name, where you can add a comma separated list, such as 2.2.2.2:8888,1.1.1.1:8080.
LOG_PATH: The path to save the XDR Collector, Filebeat, and Winlogbeat logs.
DATA_PATH: The path for persistence, content, Filebeat application data, Winlogbeat application data, and transaction data.
DISTRIBUTION_ID
Before completing this task, ensure that you create and download a Cortex XDR Collector installation package in Cortex XDR .
Select Start → All Programs → Accessories. Right-click Command prompt and select Run as administrator.
Alternatively, select Start. In the Start Search box, type cmd. Then, to open the command prompt as an administrator, press CTRL+SHIFT+ENTER.
2. Run the msiexec command followed by one or more supported options and properties.
For example:
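A minimal sketch, assuming a downloaded package named Cortex_XDR_Collector.msi and illustrative property values; substitute the actual file name and values for your deployment:
msiexec /i "Cortex_XDR_Collector.msi" /qn LOG_LEVEL=INFO PROXY_LIST=2.2.2.2:8888,1.1.1.1:8080 LOG_PATH="C:\xdr\logs" DATA_PATH="C:\xdr\data"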
Abstract
Learn how to install the Cortex XDR Collector on Linux collector machines.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can install the XDR Collector using three available packages for a Linux installation: Linux RPM, Linux DEB, and Linux SH. You can install the XDR
Collector package on any Linux server, including a physical or virtual machine, and as temporary sessions.
Before completing this task, ensure that you create and download a Cortex XDR Collector installation package, and then upload these installation files to your
Linux environment.
For example:
user@local ~
$
ssh root@ubuntu.example.com
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-1041-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
2. Extract the installation files you uploaded using one of the following commands, which is dependent on the Linux package you downloaded:
3. Create a directory and copy the collector.conf installation file to the /etc/panw/ directory.
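For example, assuming collector.conf was extracted to the current working directory:
sudo mkdir -p /etc/panw
sudo cp ./collector.conf /etc/panw/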
You can install the XDR Collectors on the collector machine manually using the shell installer or using the Linux package manager for .rpm and .deb
installers:
When performing a XDR Collector installation or upgrade in Linux using a shell installer, the /tmp folder cannot be marked as noexec. Otherwise, the
installation or upgrade fails. As a workaround, before the installation or upgrade, use the following command:
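A typical way to temporarily clear the noexec flag is to remount /tmp with execution allowed (shown here as an assumption; confirm against your environment and restore your original mount options after the installation or upgrade):
sudo mount -o remount,exec /tmp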
1. Depending on your Linux distribution, install the XDR Collectors using one of the following commands, where the <file name> is taken from the
files provided in the downloaded Linux installation package:
rpm -i ./<file_name>.rpm
dpkg -i ./<file_name>.deb
1. Enable execution of the script using the chmod +x <file_name>.sh command, where the <file name> is taken from the file provided in the
downloaded Linux installation package.
For example:
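A minimal sketch, where the installer file name is a placeholder for the file provided in your downloaded installation package:
chmod +x ./cortex-xdr-collector.sh
sudo ./cortex-xdr-collector.sh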
If the XDR Collector does not connect to Cortex XDR, verify your Internet connection on the collector machine. If the XDR Collector still does not connect,
verify the installation package has not been removed from the Cortex XDR management console.
Additional options are available to help you customize your installation if needed. The following table describes common options and parameters.
If you are using rpm or deb installers, you must also add these parameters to the /etc/panw/collector.conf file prior to installation.
Option: Description
Proxy: After the initial installation, you can change the proxy settings using the configuration XML.
Data path (--data-path): The path for persistence, content, Filebeat application data, and transaction data. For example:
--data-path=/tmp/xdrLog
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
The following table provides important information about the XDR Collectors installation for Windows and Linux.
Installation folder:
Windows: %PROGRAMFILES%\Palo Alto Networks\XDR Collector
Linux: /opt/paloaltonetworks/xdr-collector
The default installation path for the XDR Collector. Contains all program core files and executables. On Windows, the service name is XDR Collector and the
process name is xdrcollectorsvc.exe.
Configuration:
Windows: %PROGRAMFILES%\Palo Alto Networks\XDR Collector\config
Linux: /opt/paloaltonetworks/xdr-collector/config
Contains the XML configuration file of the XDR Collector for both Windows and Linux. For both Windows and Linux, the file name is XDR_Collector.xml.
Any change in this XML configuration file is saved to the XDR Collector database and the settings are taken from this file. In some circumstances, such as
after an XDR Collectors upgrade, the configured settings in the XML configuration file can be erased. This does not affect the saved settings in the XDR
Collectors database.
Persistence:
Windows: %PROGRAMDATA%\XDR Collector\OSPersistence
Linux: /etc/panw/OSPersistence/
Contains the operating system persistence file for the XDR Collector, which is issued as part of the registration process. For both Windows and Linux, the
file name is .scouter.json.
Abstract
You can set an application-specific proxy for a Cortex XDR Collector without affecting the communication of other applications on the collector machine.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
In environments where Cortex XDR Collectors communicate with the Cortex XDR server through a system-wide proxy, you can set an application-specific proxy
for the XDR Collector without affecting the communication of other applications on the collector machine. You can set the proxy after installation from the XDR
Collectors Administration page in Cortex XDR, as described in this topic. You can assign up to ten different proxy servers per XDR Collector. The proxy server
the XDR Collector uses is selected randomly and with equal probability. If the communication between the XDR Collector and the Cortex XDR server through
the app-specific proxies fails, the XDR Collector resumes communication through the system-wide proxy defined on the collector machine. If that fails as well,
the XDR Collector resumes communication with Cortex XDR directly.
a. Select the row of the on-premise collector machine for which you want to set a proxy.
c. You can assign up to ten different proxies per XDR Collector. For each proxy, specify the IP address and port number, then add the values to the list
underneath these fields. Broker VMs in the same tenant can also be used as a proxy by enabling Agent Proxy on the Broker VM.
e. If necessary, you can later Disable Collector Proxy from the right-click menu.
When you disable the proxy configuration, all proxies associated with that XDR Collector are removed. The XDR Collector resumes communication
with the Cortex XDR server through the system-wide proxy, if defined; otherwise, the XDR Collector resumes communicating directly with the Cortex
XDR server. If neither a system-wide proxy nor direct communication is available when you disable the proxy, the XDR Collector disconnects from
Cortex XDR.
Abstract
You can upgrade the Cortex XDR Collector software by using the appropriate method for the collector machine operating system.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
After you install the Cortex XDR Collector and the XDR Collector registers with Cortex XDR, you can upgrade the XDR Collector software for on-premise
Windows or Linux collector machines. You need to create a new installation package and push it to up to 500 collector machines from Cortex XDR.
1. Create an XDR Collector Installation Package for each operating system version where you want to upgrade the XDR Collector.
If needed, filter the list of on-premise collector machines. To reduce the number of results, use the collector machine name search and filters at the top of
the page.
You can also select collector machines running different operating systems to upgrade the XDR Collectors at the same time.
For each platform, select the name of the installation package you want to push to the selected on-premise collector machines.
The XDR Collector keeps the name of the original installation package after every upgrade.
5. Upgrade.
XDR distributes the installation package to the selected collector machine at the next heartbeat communication with the XDR Collector. To monitor the
status of the upgrades, go to Response → Action Center. From the Action Center you can also view additional information about the upgrade (right-click
the action and select Additional data) or cancel the upgrade (right-click the action and select Cancel Collector Upgrade).
Abstract
You can uninstall the Cortex XDR Collector from one or more Windows or Linux collector machines at any time.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you want to uninstall the XDR Collector from the on-premise collector machine, you can do so from the XDR Collectors console at any time. You can uninstall
the XDR Collector from an unlimited number of collector machines in a single bulk action. Uninstalling a collector machine triggers the following lifecycle flow:
Once you uninstall the XDR Collector from the on-premise collector machine, Cortex XDR distributes the uninstall to the selected collector machine at the
next heartbeat communication with the XDR Collector. All XDR Collector files are removed from the collector machine.
The collector machine status changes to Uninstalled. After a retention period of 7 days, the XDR Collector is deleted from the database and is
displayed in XDR as Collector Machine Name - N/A (Uninstalled).
Data associated with the deleted on-premise collector machine is displayed in the Action Center tables for the standard 90 days retention period.
The following workflow describes how to uninstall the XDR Collector from one or more Windows or Linux on-premise collector machines.
You can also select collector machines running different operating systems to uninstall the XDR Collectors at the same time.
4. To proceed, select I agree to confirm that you understand this action uninstalls the XDR Collector on all selected collector machines.
5. Click OK.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To identify one or more collector machines by a name that is different from the collector machine hostname, you can configure an alias. You can set an alias for
a single collector machine or for multiple collector machines in bulk. To quickly search for the collector machines during investigation and when you need to
take action, you can use either the collector machine hostname or the alias.
3. Right-click anywhere in the collector machine rows, and select Change Collector Alias.
5. Use the Quick Launcher to search the collector machines by alias across the XDR Collectors console.
Abstract
To easily apply policy rules and manage specific collector machines, you can define a collector machine group.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To easily apply policy rules and manage specific collector machines, you can define a collector machine group. If you set up Directory Sync, you can also
leverage your Active Directory user, group, and computer information in collector machine groups.
There are two methods you can use to define a collector machine group:
Create a dynamic group by allowing Cortex XDR to populate your collector machine group dynamically using collector machine characteristics, such as
a partial hostname or alias; full or partial domain name; IP address, range or subnet; XDR Collector version; or operating system version.
After you define a collector machine group, you can then use it to target policy and actions to specific recipients. The XDR Collectors Groups page displays all
collector machine groups along with the number of collector machines and policy rules linked to the collector machine group.
3. Specify a Group Name and optional Description to identify the collector machine group. The name you assign to the group will be visible when you
assign policies to collector machines.
4. Determine the collector machine properties for creating a collector machine group:
Dynamic: Use the filters to define the criteria you want to use to dynamically populate a collector machine group. Dynamic groups support multiple
criteria selections and can use AND or OR operators. For collector machine names and aliases, and domains, you can use * to match any string
of characters. As you apply filters, Cortex XDR displays any registered collector machine matches to help you validate your filter criteria.
Static: Select specific registered collector machines that you want to include in the collector machine group. Use the filters, as needed, to reduce
the number of results.
When you create a static collector machine group from a file, the IP address, hostname, or alias of the collector machine must match an existing
XDR Collector that has registered with Cortex XDR.
Disconnecting Directory Sync in your Cortex XDR deployment can affect existing collector machine groups and policy rules based on Active
Directory properties.
After you save your collector machine group, it is ready for use to assign in policies for your collector machines and in other places where you can use
collector machine groups.
At any time, you can return to the XDR Collectors Endpoints page to view and manage your collector machine groups. To manage a group, right-click
the group and select the desired action.
Save as new: Duplicate the collector machine group and save it as a new group.
View collectors: Pivot from a collector machine group to a filtered list of collector machines on the Administration page, where you can quickly
view and initiate actions on the collector machines within the group.
Copy text to clipboard to copy the text from a specific field in the row of a group.
Copy entire row to copy the text from all the fields in a row of a group.
Show rows with ‘<Group name>’ to filter the group list to only display the groups with a specific group name.
Hide rows with ‘<Group name>’ to filter the group list to hide the groups for a specific group name.
Abstract
To quickly resolve any issues in policy, Palo Alto Networks can seamlessly deliver software packages called content updates.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To quickly resolve any issues in policy, Palo Alto Networks can seamlessly deliver software packages for Cortex XDR called content updates. Content updates
for XDR Collectors contain changes or updates to the Elasticsearch Filebeat infrastructure or the Elasticsearch Winlogbeat infrastructure.
When a new update is available, Cortex XDR notifies the XDR Collectors. The XDR Collectors then randomly choose a time within a six-hour window during
which they retrieve the content update from Cortex XDR.
Abstract
Add an XDR collector profile to define the type of data to collect from a Linux or Windows platform.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can add XDR collector profiles that define the type of data that is collected from Linux or Windows platforms.
Abstract
Add a Cortex XDR Collector profile, which defines the data that is collected from a Windows collector machine, and defines automatic XDR Collector upgrade
settings.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
XDR Collector profiles define the data that is collected from a Windows collector machine, and define automatic upgrade settings for the XDR collector. For
Windows, you can configure a Filebeat profile, a Winlogbeat profile, or a Settings profile.
The Filebeat and Winlogbeat profiles use configuration files in YAML format. To facilitate the configuration of the YAML file, you can add out-of-the-box
collection templates. Templates save you time, and don't require previous knowledge of configuration file generation. You can edit and combine the provided
templates, and you can add your own collection settings to the configuration file.
Cortex XDR supports using Filebeat version 8.8.1 with the operating systems listed in the Elasticsearch support matrix that conform with the collector
machine operating systems supported by Cortex XDR. Cortex XDR supports the input types and modules available in Elasticsearch Filebeat.
Fileset validation is enforced. You must enable at least one fileset in the module, because filesets are disabled by default.
Cortex XDR collects all logs in either an uncompressed JSON or text format. Compressed files, such as the gzip format, are not supported.
Cortex XDR supports logs in single line format or multiline format. For more information about handling messages that span multiple lines of text in
Elasticsearch Filebeat, see Manage Multiline Messages.
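For orientation, a minimal filestream input sketch for the Filebeat Configuration File editor; the path and id values are illustrative assumptions, not values required by Cortex XDR:
filebeat.inputs:
  - type: filestream
    id: iis-access-logs
    paths:
      - 'C:\inetpub\logs\LogFiles\W3SVC1\*.log'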
Related Information
Use an XDR Collector Windows Winlogbeat profile to collect event log data, using the Elasticsearch Winlogbeat default configuration file, called
winlogbeat.yml.
Cortex XDR supports using Winlogbeat version 8.8.1 with the Windows versions listed in the Elasticsearch support matrix that conform with the collector
machine operating systems supported by Cortex XDR. Cortex XDR supports the modules available in Elasticsearch Winlogbeat.
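For orientation, a minimal winlogbeat.yml event log sketch; the channel name and retention value are illustrative assumptions, not values required by Cortex XDR:
winlogbeat.event_logs:
  - name: Security
    ignore_older: 72h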
After ingestion, Cortex XDR normalizes and saves the Windows event logs collected by the Winlogbeat profile in the dataset xdr_data. The normalized
logs are also saved in a unified format in <vendor>_<product>_raw if the product and vendor are defined, and otherwise, in
microsoft_windows_raw. You can search the data using Cortex Query Language XQL queries, build correlation rules, and generate dashboards
based on the data.
Related information
For the collector machine operating systems supported by Cortex XDR, see XDR Collector machine requirements and supported operating systems.
Use an XDR Collector Settings profile to configure automatic upgrade settings for XDR Collector releases.
To map your XDR Collector profile to a collector machine, you must use an XDR Collector policy. After you have created your profile, map it to a new or existing
policy.
Filebeat profile
In the Filebeat Configuration File editor, you can define the data collection for your Elasticsearch Filebeat configuration file called filebeat.yml.
Cortex XDR provides YAML templates for DHCP, DNS, IIS, XDR Collector Logs, and NGINX.
1. In Cortex XDR, select Settings → Configurations → XDR Collectors → Profiles → +Add Profile → Windows.
Profile Name: Enter a unique name to identify the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name that you enter here will be displayed in the list of profiles when you configure a policy.
(Optional) Add description here: To provide additional context for the purpose or business reason for your new profile, enter a profile description.
4. In the Filebeat Configuration File editing box, type or paste the contents of your configuration file, or use a template. To add a template, select one from
the list, and click Add.
5. Cortex XDR supports all sections in the filebeat.yml configuration file, such as support for Filebeat fields and tags. You can use the "Add fields"
processor to identify the product/vendor for the data collected by the XDR Collectors, so that the collected events go through the ingestion flow (Parsing
Rules). To configure the product/vendor, ensure that you use the default fields attribute (do not use the target attribute), as shown in the following
example:
processors:
  - add_fields:
      fields:
        vendor: <Vendor>
        product: <Product>
For more information about the "Add fields" processor, see Add_fields.
Your new profile will be listed under the applicable platform on the XDR Collectors Profiles page.
7. Apply profiles to XDR Collector machine policies by performing one of the following:
Right-click a profile, and select Create a new policy rule using this profile.
Launch the new policy wizard from XDR Collectors → Policies → XDR Collectors Policies.
Winlogbeat profile
In the Winlogbeat Configuration File editor, you can define the data collection for your Elasticsearch Winlogbeat configuration file called winlogbeat.yml.
Cortex XDR provides a YAML template for Windows Security. To add a template, select it and click Add.
1. In Cortex XDR, select Settings → Configurations → XDR Collectors → Profiles → +Add Profile → Windows.
Profile Name: Enter a unique name to identify the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name that you enter here will be displayed in the list of profiles when you configure a policy.
(Optional) Add description here: To provide additional context for the purpose or business reason for your new profile, enter a profile description.
4. In the Winlogbeat Configuration File editing box, type or paste the contents of your configuration file, or use the template. To add the template, click
Select template, and then click Windows Security. Click Add.
5. Cortex XDR supports all sections in the winlogbeat.yml configuration file, such as support for Winlogbeat fields and tags. You can use the "Add
fields" processor to identify the product/vendor for the data collected by the XDR Collectors, so that the collected events go through the ingestion flow
(Parsing Rules). To configure the product/vendor, ensure that you use the default fields attribute (do not use the target attribute), as shown in the
following example:
processors:
  - add_fields:
      fields:
        vendor: <Vendor>
        product: <Product>
For more information about the "Add fields" processor, see Add_fields.
Your new profile will be listed under the applicable platform on the XDR Collectors Profiles page.
7. Apply profiles to XDR Collector machine policies by performing one of the following:
Right-click a profile, and select Create a new policy rule using this profile.
Launch the new policy wizard from XDR Collectors → Policies → XDR Collectors Policies.
Settings profile
You can configure automatic upgrades for XDR Collector releases. By default, this is disabled, and the Use Default (Disabled) option is selected. To implement
automatic upgrades, follow these steps:
1. In Cortex XDR, select Settings → Configurations → XDR Collectors → Profiles → +Add Profile → Windows.
(Optional) Add description here: To provide additional context for the purpose or business reason for your new profile, enter a profile description.
Additional fields are displayed for defining the scope of the automatic upgrade.
To ensure the latest XDR Collector release is used, leave the Use Default (Latest collector release) checkbox selected.
Latest collector release: Configures the scope of the automatic upgrade to whenever a new XDR Collector release is available, including maintenance
releases and new features.
Only maintenance releases: Configures the scope of the automatic upgrade to whenever a new XDR Collector maintenance release is available.
Only maintenance releases in a specific version: Configures the scope of the automatic upgrade to whenever a new XDR Collector maintenance release is
available for a specific version. When this option is selected, you can select the specific Release Version.
Your new profile will be listed under the applicable platform on the XDR Collectors Profiles page.
8. Apply profiles to XDR Collector machine policies by performing one of the following:
Right-click a profile, and select Create a new policy rule using this profile.
Launch the new policy wizard from XDR Collectors → Policies → XDR Collectors Policies.
As needed, you can return to the XDR Collectors Profiles page to manage your XDR Collectors profiles. To manage a specific profile, right-click anywhere in an
XDR Collector profile row, and select the desired action:
Save As New: Copies the existing profile with its current settings, so that you can make modifications and save it as a new profile with a unique name.
View Collector Policies: Opens a new tab that displays the XDR Collectors Policies page, showing the policies that are currently associated with your XDR
Collector profiles.
Copy text to clipboard: Copies the text from a specific field in the row of an XDR Collector profile.
Copy entire row: Copies the text from the entire row of an XDR Collector profile.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can extend visibility into logs from Windows DHCP, and enrich network logs with Windows DHCP data, by using one of the following data collectors with
Elasticsearch Filebeat:
When Cortex XDR begins receiving logs, it automatically creates a Windows DHCP dataset (microsoft_dhcp_raw). Cortex XDR uses Windows DHCP logs
to enrich your network logs with hostnames and MAC addresses. Using XQL Search, you will be able to search for these items in the microsoft_dhcp_raw
dataset.
Although this enrichment is available when configuring a Windows DHCP collector for a cloud data collection integration, we recommend configuring Cortex
XDR to receive Windows DHCP logs with an XDR Collector Windows Filebeat profile, because it is simpler to set up.
Related information
For more information about configuring the filebeat.yml file, see Elasticsearch Filebeat documentation.
When you add an XDR Collector Windows Filebeat profile using the Elasticsearch Filebeat default configuration file, called filebeat.yml, you can define
whether the collected data undergoes follow-up processing in the backend for Windows DHCP data. You can further enrich network logs with Windows DHCP
data by setting vendor to “microsoft”, and product to “dhcp” in the filebeat.yml file.
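For example, following the add_fields pattern shown earlier (the rest of the filebeat.yml contents are assumed to come from the DHCP template):
processors:
  - add_fields:
      fields:
        vendor: microsoft
        product: dhcp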
Configuration activities include editing the filebeat.yml file. To avoid formatting issues in this file, use the template provided by Cortex XDR to make your
customizations. We recommend that you edit the file inside the user interface, instead of copying it and editing it elsewhere. Validate the syntax of the YML file
before you finish creating your profile.
Configure Cortex XDR to receive logs from Windows DHCP using an XDR Collector Windows Filebeat profile:
1. In Cortex XDR, select Settings → Configurations → XDR Collectors → Profiles → +Add Profile → Windows.
Profile Name: Enter a unique name to identify the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name that you enter here will be displayed in the list of profiles when you configure a policy.
(Optional) Add description here: To provide additional context for the purpose or business reason for your new profile, enter a profile description.
4. In the Filebeat Configuration File editing box, select the DHCP template, and click Add.
Your new profile will be listed under the applicable platform on the XDR Collectors Profiles page.
7. Apply profiles to XDR Collector machine policies by performing one of the following:
Right-click a profile, and select Create a new policy rule using this profile.
Launch the new policy wizard from XDR Collectors → Policies → XDR Collectors Policies.
To receive Windows DHCP logs with this collector, you must configure data collection from Windows DHCP via Elasticsearch Filebeat. This is configured by
setting up a Windows DHCP Collector in Cortex XDR and installing and configuring an Elasticsearch Filebeat agent on your Windows DHCP Server. Cortex
XDR supports using Filebeat up to version 8.0.1 with the Windows DHCP Collector.
Certain settings in the Elasticsearch Filebeat default configuration file called filebeat.yml must be populated with values provided when you configure the
Collection Integrations settings in Cortex XDR for the Windows DHCP Collector. To help you configure the filebeat.yml file correctly, Cortex XDR provides
an example file that you can download and customize. After you set up collection integration, Cortex XDR begins receiving new logs and data from the source.
Windows DHCP logs are stored as CSV (comma-separated values) log files. The logs rotate by days (DhcpSrvLog-<day>.log), and each file contains two
sections: Event ID Meaning, and the events list.
Configuration activities include editing the filebeat.yml file. To avoid formatting issues in this file, use the example file provided by Cortex XDR to make your
customizations. Do not copy and paste the code syntax examples provided later in this procedure into your filebeat.yml file. Validate the syntax of the YML
file before you finish creating your profile.
Configure Cortex XDR to receive logs from Windows DHCP via Elasticsearch Filebeat with the Windows DHCP collector:
To help you configure your filebeat.yml file correctly, Cortex XDR provides an example filebeat.yml file that you can download and
customize. To download this file, click the filebeat.yml link provided in this dialog box.
f. In the Name field, specify a descriptive name for your log collection configuration.
Click the copy icon next to the key, and save the copy somewhere safe. You will need to provide this key when you set the api_key value in the
Elasticsearch Output section in the filebeat.yml file, as explained in Step #2. If you forget to record the key and close the window, you will
need to generate a new key and repeat this process.
i. Expand the Windows DHCP collector that you just created. Click the Copy api url icon, and save the copy somewhere safe. You will need to
provide this URL when you set the hosts value in the Elasticsearch Output section in the filebeat.yml file, as explained in Step #2.
a. Navigate to the Elasticsearch Filebeat installation directory, and open the filebeat.yml file to configure data collection with Cortex XDR. We
recommend that you use the downloaded example file provided by Cortex XDR.
b. Update the following sections and tags in the filebeat.yml file. The following code examples detail the specific sections to make these
changes in the file.
Elasticsearch Output: Set the hosts and api_key, where both of these values were obtained when you configured the Windows DHCP
Collector in Cortex XDR, as explained in Step #1. The following code example shows how to configure the Elasticsearch Output section in
the filebeat.yml file, and indicates which settings need to be obtained from Cortex XDR.
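A minimal sketch of that section, with placeholders for the two values copied from Cortex XDR (the URL from Copy api url for hosts and the generated key for api_key); the exact layout in the downloaded example file may differ.
output.elasticsearch:
  # Paste the value copied with "Copy api url" in Cortex XDR
  hosts: ["<api url copied from Cortex XDR>"]
  # Paste the key generated for the Windows DHCP Collector in Cortex XDR
  api_key: "<api key copied from Cortex XDR>"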
Processors: Set the tokenizer and add a drop_event processor to drop all events that do not start with an event ID. The code in the
example below shows how to configure the Processors section in the filebeat.yml file and indicates which settings need to be obtained
from Cortex XDR.
The tokenizer definition is dependent on the Windows server version that you are using, because the log format differs.
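A minimal sketch of that section, assuming the classic comma-separated DhcpSrvLog field order (ID, Date, Time, Description, IP Address, Host Name, MAC Address); prefer the tokenizer in the example file provided by Cortex XDR, because the real field layout varies by Windows Server version.
processors:
  - dissect:
      # Illustrative field order only; adjust to your DhcpSrvLog layout
      tokenizer: "%{event_id},%{date},%{time},%{description},%{ip_address},%{host_name},%{mac_address}"
      field: "message"
  - drop_event:
      # Drop the file header and any line that does not start with an event ID
      when:
        not:
          regexp:
            message: "^[0-9]+,"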
Return to the integrations page in Cortex XDR, and view the statistics for the log collection configuration.
4. After Cortex XDR begins receiving logs from Windows DHCP via Elasticsearch Filebeat, you can use XQL Search to search for logs in the new
microsoft_dhcp_raw dataset.
Abstract
Extend Cortex XDR visibility into Windows DNS Debug logs using an XDR Collector Windows Filebeat profile (Elasticsearch Filebeat).
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
During configuration of an XDR Collector Windows Filebeat profile, you can configure the profile to enrich network logs with Windows DNS Debug log data.
You do this by editing the Elasticsearch Filebeat default configuration file called filebeat.yml. In this file, you can define whether the collected data
undergoes follow-up processing in the backend for Windows DNS Debug log data. Cortex XDR uses Windows DNS Debug logs to enrich network logs. These
logs can be searched, using XQL Search. You can search the Windows DNS Debug Cortex Query Language dataset (microsoft_dns_raw) for raw data,
and the normalized stories using the xdr_data dataset with the preset called network_story.
a. In Windows, open DNS Manager, right-click your Windows DNS Server, and select Properties.
b. Select Debug Logging → Log packets for debugging, and keep the settings that are automatically configured for collecting regular Windows DNS
logs in the Packet direction and Packet contents sections.
c. (Optional) To collect detailed Windows DNS logs, under the Other options section, select Details.
Detailed logs are significantly larger, because more information is added to the logs.
d. In the Log file section, for File path and name, enter the file path and log name of your Windows DNS logs, such as
c:\Windows\System32\dns\DNS.log. This path will also be configured in your filebeat.yml file, as explained in a later step.
e. Click OK.
2. In Cortex XDR, go to Settings → Configurations → XDR Collectors → Profiles → +Add Profile → Windows.
Profile Name: Enter a unique name to identify the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name that you enter here will be displayed in the list of profiles when you configure a policy.
(Optional) Add description here: To provide additional context for the purpose or business reason for your new profile, enter a profile description.
5. In the Filebeat Configuration File editing box, select the DNS template of your choice (detailed, or non-detailed). If you configured detailed collection in
the Windows DNS Manager, select the detailed DNS template here. Click Add.
6. Configure the filebeat.yml file to collect Windows DNS Debug log data.
a. In the filebeat.inputs: section of the file, for paths:, configure the file path to your Windows DNS Debug logs. This file path must be the
same as the one configured in your Windows DNS server settings, as explained in an earlier step.
The following examples show how to configure the filebeat.yml file to normalize Windows DNS Debug logs with an XDR Collector.
To avoid formatting issues in your filebeat.yml file, we recommend that you validate the syntax of the file.
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - c:\Windows\System32\dns\DNS.log
processors:
  - add_fields:
      fields:
        vendor: "microsoft"
        product: "dns"
filebeat.inputs:
- type: log
Your new profile will be listed under the applicable platform on the XDR Collectors Profiles page.
8. Apply profiles to XDR Collector machine policies by performing one of the following:
Right-click a profile, and select Create a new policy rule using this profile.
Launch the new policy wizard from XDR Collectors → Policies → XDR Collectors Policies.
Abstract
Add a Cortex XDR Collector profile, which defines the data that is collected from a Linux collector machine, and defines automatic XDR Collector upgrade
settings.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
An XDR Collector Linux profile defines the data that is collected from a Linux collector machine. For Linux, you can configure a Filebeat profile and a Settings
profile.
The Filebeat profile uses a configuration file in YAML format. To facilitate the configuration of the YAML file, you can add out-of-the-box collection templates.
The templates save you time, and don't require previous knowledge of configuration file generation. You can edit and combine the provided templates, and
you can add your own collection settings to the configuration file.
Use an XDR Collector Linux Filebeat profile to collect file and log data using the Elasticsearch Filebeat default configuration file, called
filebeat.yml.
Cortex XDR supports using Filebeat version 8.8.1 with the operating systems listed in the Elasticsearch Support Matrix that conform with the collector
machine operating systems supported by Cortex XDR. Cortex XDR supports the input types and modules available in Elasticsearch Filebeat.
Fileset validation is enforced. You must enable at least one fileset in the module, because filesets are disabled by default (see the module example after these notes).
Cortex XDR collects all logs in either an uncompressed JSON or text format. Compressed files, such as the gzip format, are not supported.
Cortex XDR supports logs in single line format or multiline format. For more information about handling messages that span multiple lines of text in
Elasticsearch Filebeat, see Manage Multiline Messages.
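For example, a module-based collection must enable at least one fileset explicitly. A minimal sketch using the NGINX module; the module, fileset, and path are illustrative and not taken from the Cortex XDR templates.
filebeat.modules:
  - module: nginx
    # At least one fileset must be enabled; filesets are disabled by default
    access:
      enabled: true
      var.paths: ["/var/log/nginx/access.log*"]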
Related Information
Use an XDR Collector Settings profile to configure automatic upgrade settings for XDR Collector releases.
To map your XDR Collector profile to a collector machine, you must use an XDR Collector policy. After you have created your profile, map it to a new or existing
policy.
Filebeat profile
In the Filebeat Configuration File editor, you can define the data collection for your Elasticsearch Filebeat configuration file called filebeat.yml.
Cortex XDR provides YAML templates for XDR Collector Logs, Linux (RHEL/CentOS), NGINX (Linux), and Linux (Debian/Ubuntu).
Profile Name: Enter a unique name to identify the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name that you enter here will be displayed in the list of profiles when you configure a policy.
(Optional) Add description here: To provide additional context for the purpose or business reason for your new profile, enter a profile description.
4. In the Filebeat Configuration File editing box, type or paste the contents of your configuration file, or use a template. To add a template, select one from
the list, and click Add.
5. Cortex XDR supports all sections in the filebeat.yml configuration file, such as support for Filebeat fields and tags. You can use the "Add fields"
processor to identify the product/vendor for the data collected by the XDR Collectors, so that the collected events go through the ingestion flow (Parsing
Rules). To configure the product/vendor, ensure that you use the default fields attribute (do not use the target attribute), as shown in the following
example:
processors:
  - add_fields:
      fields:
        vendor: <Vendor>
        product: <Product>
For more information about the "Add fields" processor, see Add_fields.
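For example, a profile that collects NGINX access logs might identify the data as follows (the values are illustrative); with these values, the resulting dataset name would be nginx_nginx_raw, following the <vendor>_<product>_raw convention described later in this guide.
processors:
  - add_fields:
      fields:
        vendor: "nginx"
        product: "nginx"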
Your new profile will be listed under the applicable platform on the XDR Collectors Profiles page.
7. Apply profiles to XDR Collector machine policies by performing one of the following:
Right-click a profile, and select Create a new policy rule using this profile.
Launch the new policy wizard from XDR Collectors → Policies → XDR Collectors Policies.
Settings profile
You can configure automatic upgrades for XDR Collector releases. By default, this is disabled, and the Use Default (Disabled) option is selected. To implement
automatic upgrades, follow these steps:
1. In Cortex XDR, select Settings → Configurations → XDR Collectors → Profiles → +Add Profile → Linux.
Profile Name: Enter a unique name to identify the profile. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name that you enter here will be displayed in the list of profiles when you configure a policy.
(Optional) Add description here: To provide additional context for the purpose or business reason for your new profile, enter a profile description.
Additional fields are displayed for defining the scope of the automatic upgrade.
Latest collector release: Configures the scope of the automatic upgrade to whenever a new XDR Collector release is available, including maintenance releases and new features.
Only maintenance release: Configures the scope of the automatic upgrade to whenever a new XDR Collector maintenance release is available.
Only maintenance releases in a specific version: Configures the scope of the automatic upgrade to whenever a new XDR Collector maintenance release is available for a specific version. When this option is selected, you can select the specific Release Version.
Your new profile will be listed under the applicable platform on the XDR Collectors Profiles page.
8. Apply profiles to XDR Collector machine policies by performing one of the following:
Right-click a profile, and select Create a new policy rule using this profile.
Launch the new policy wizard from XDR Collectors → Policies → XDR Collectors Policies.
As needed, you can return to the XDR Collectors Profiles page to manage your XDR Collector profiles. To manage a specific profile, right-click anywhere in an XDR Collector profile row, and select the desired action:
Save As New: Copies the existing profile with its current settings, so that you can make modifications and save it as a new profile with a unique name
View Collector Policies: Opens a new tab that displays the XDR Collectors Policies page, showing the policies that are currently associated with your XDR Collector profiles
Copy text to clipboard: Copies the text from a specific field in the row of an XDR Collector profile
Copy entire row: Copies the text from the entire row of an XDR Collector profile
Abstract
Enable a Cortex XDR Collector profile by mapping it to a policy. Each policy that you create must apply to one or more collector machines or collector machine
groups.
To create a policy from scratch on the XDR Collectors Policies page, select Settings → Configurations → XDR Collectors → Policies → +Add
Policy.
To add a profile to an existing policy, select Settings → Configurations → XDR Collectors → Policies, then right-click the policy that you want to
edit, and select Edit.
To create a new policy from a profile on the XDR Collectors Profiles page, select Settings → Configurations → XDR Collectors → Profiles, right-
click the profile, and select Create a new policy rule using this profile.
a. Policy Name: Enter a unique name to identify the policy. The name can contain only letters, numbers, or spaces, and must be no more than 30
characters. The name that you enter here will be displayed when you view and configure policies.
b. (Optional) Description: To provide additional context for the purpose or business reason for your policy, enter a policy description.
c. Platform: Select the operating system of the XDR Collector machines that will use the policy.
d. Select the profiles that you want to map to the policy. If you do not specify a profile, the XDR Collector uses the Default profile.
e. Click Next.
3. On the XDR Collectors Endpoints page, select the XDR Collectors (endpoints) or XDR Collector groups to which you want to map the policy. You can use
the provided filters to find XDR Collectors listed on this page.
Cortex XDR automatically applies a filter for the platform that you selected in the previous step. To change the platform, go Back to the general policy
settings.
4. Click Next.
5. On the Summary page, review the settings that you configured for the new policy.
6. (Optional) If necessary, change a policy's position relative to other policies in the table on the XDR Collectors Policies page.
The XDR Collector evaluates policies from top to bottom. When an XDR Collector finds the first match, it applies that policy as the active policy. To
change the policy order, click and drag the arrows in the Name cell of a policy to the desired location in the policy hierarchy.
As needed, you can return to the XDR Collectors Policies page to manage your XDR Collector policies. To manage a specific policy, right-click anywhere in an XDR Collector policy row, and select the desired action. You cannot delete or disable default policies.
View Policy Details: Opens a new dialog box that displays details about the profiles mapped to the policy
Save As New: Copies the existing policy with its current settings, so that you can make modifications and save it as a new policy with a different name
Copy text to clipboard: Copies the text from a specific field in the row of an XDR Collector policy
Copy entire row: Copies the text from the entire row of an XDR Collector policy
Abstract
After Cortex XDR begins receiving data from your XDR Collectors configuration, the app automatically creates an XQL dataset.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
After Cortex XDR begins receiving data from your XDR Collector configurations, which are dedicated to on-premises data collection on Windows and Linux machines, the app automatically creates XQL datasets as follows.
For Filebeat, the app automatically creates a Cortex Query Language (XQL) dataset of event logs using the vendor name and the product name
specified in the configuration file section of the Filebeat profile. The dataset name follows the format <vendor>_<product>_raw. If not specified,
Cortex XDR automatically creates a new default dataset in the format <module>_<module>_raw or <input>_<input>_raw. For example, if you are
using the NGINX module, the dataset is called nginx_nginx_raw.
For Winlogbeat, the app automatically creates an XQL dataset of event logs using the vendor name and the product name specified in the configuration
file section of the Winlogbeat profile. The dataset name follows the format <vendor>_<product>_raw. If not specified, Cortex XDR automatically
creates a new default dataset, microsoft_windows_raw, for event log collection. Winlogbeat data is also normalized to xdr_data (and thus the
xdr_event_log preset).
After Cortex XDR creates the dataset, you can search for your XDR Collector data using XQL Search.
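For example, a Winlogbeat profile that collects the Security channel and tags it with the default vendor and product values might look roughly like this (a sketch; the event log name is illustrative). With these values, the data lands in the microsoft_windows_raw dataset described above.
winlogbeat.event_logs:
  - name: Security
processors:
  - add_fields:
      fields:
        vendor: "microsoft"
        product: "windows"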
Data can be ingested both from Palo Alto Networks products, and from third-party vendor products.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Abstract
Cortex XDR provides visibility into your external logs. The availability of logs and alerts varies by the data source.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
The following table describes the visibility of each vendor and device type, and where you can view information ingested from external sources, depending on
the data source.
A check mark indicates support and a dash (—) indicates the feature is not supported.
Network connections
Amazon S3 (Route 53 logs)
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: Option to ingest Route 53 DNS logs as Cortex XDR network connection stories that are searchable in the Query Builder and in XQL Search.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Vendor Alert Visibility: —

Corelight Zeek
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: Network stories that include Corelight Zeek network connection logs are searchable in the Query Builder and in XQL Search.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Vendor Alert Visibility: —

Fortinet Fortigate
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: Network stories that include Fortinet network connection logs are searchable in the Query Builder and in XQL Search.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Vendor Alert Visibility: Alerts from Fortinet firewalls are raised throughout Cortex XDR when relevant.

Okta
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs.
Vendor Alert Visibility: —
Google Workspace
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: Relevant Login, Token, Google drive, SAML, Admin Console, Enterprise Groups, and Rules audit logs normalized into authentication stories. All are searchable in the Query Builder.
Cortex XDR Alert Visibility: For all logs, Cortex XDR can raise Cortex XDR alerts (Analytics and Correlation Rules) when relevant from logs.
Vendor Alert Visibility: —

Okta
Raw Data Visibility: Logs and stories are searchable in XQL Search.
Normalized Log Visibility: Logs stitched with authentication stories are searchable in the Query Builder.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules only) when relevant from logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Vendor Alert Visibility: —

OneLogin
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: All log types are normalized into authentication stories, and are searchable in the Query Builder.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Vendor Alert Visibility: —

PingFederate
Raw Data Visibility: Logs and stories are searchable in XQL Search.
Normalized Log Visibility: Logs stitched with authentication stories are searchable in the Query Builder.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Vendor Alert Visibility: —
Amazon CloudWatch (generic logs, EKS logs)
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs.
Vendor Alert Visibility: —

Okta
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs.
Vendor Alert Visibility: —
Endpoint logs
Vendor And Device Type | Raw Data Visibility | Normalized Log Visibility | Cortex XDR Alert Visibility | Vendor Alert Visibility
Cloud assets
Vendor And Device Type | Raw Data Visibility | Normalized Log Visibility | Cortex XDR Alert Visibility | Vendor Alert Visibility
Apache Kafka
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs.
Vendor Alert Visibility: —

BeyondTrust Privilege Management Cloud
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Correlation Rules only) when relevant from logs.
Vendor Alert Visibility: —

Box
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: Selected Box audit event logs are normalized into stories and are searchable in the Query Builder and in XQL.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs.
Vendor Alert Visibility: —

Dropbox
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: Selected Dropbox audit event logs are normalized into stories and are searchable in the Query Builder and in XQL.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs.
Vendor Alert Visibility: —

Elasticsearch Filebeat
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Correlation Rules only) when relevant from logs.
Vendor Alert Visibility: —

Forcepoint DLP
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Correlation Rules only) when relevant from logs.
Vendor Alert Visibility: —

IoT Security
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: Cortex XDR uses IoT Security information to improve analytics detection and assets management information.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs.
Vendor Alert Visibility: —

Salesforce.com
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Correlation Rules only) when relevant from logs.
Vendor Alert Visibility: —

ServiceNow CMDB
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Correlation Rules only) when relevant from logs.
Vendor Alert Visibility: —

Workday
Raw Data Visibility: Raw data is searchable in XQL Search.
Normalized Log Visibility: —
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Correlation Rules only) when relevant from logs.
Vendor Alert Visibility: —
When ingesting data from an external source, Cortex XDR creates a dataset that you can query using Cortex Query Language (XQL). Datasets created in this
way use the following naming convention.
<vendor_name>_<product_name>_raw
The datatypes used for the fields in an imported dataset are automatically assigned based on the input content. Fields can have a datatype
of string, int, float, array, time, or boolean. All other fields are ingested as a JSON object.
For CEF type files, when extension values are quoted, the CEF parser automatically removes the quotes from the values. In addition, files containing invalid
UTF-8 are parsed under XQL mapping field _invalid_utf8.
Abstract
Monitor Cortex XDR authentication and audit logs for detecting attacks on Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can audit and query Cortex XDR authentication logs and activity logs to track and trigger alerts about malicious activity on Cortex XDR.
A check mark indicates support and a dash (—) indicates the feature is not supported.
Cortex XDR authentication logs
Raw Data Visibility: Logs and stories are searchable in XQL Search.
Normalized Log Visibility: Cortex XDR authentication logs normalized into authentication stories, which are searchable in the Query Builder.
Cortex XDR Alert Visibility: Cortex XDR can raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from logs.
Abstract
To augment your Cortex XDR data, you can set up Cortex XDR to ingest data from a variety of external third-party sources.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To provide you with a more complete and detailed picture of the activity involved in an incident, you can ingest data from a variety of external, third-party
sources into Cortex XDR.
Cortex XDR can receive logs, or both logs and alerts, from the source. Depending on the data source, Cortex XDR can provide visibility into your external data
in the form of:
Log stitching with other logs in order to create network or authentication stories.
Alerts reported by the vendor throughout Cortex XDR, such as in the alerts table, incidents, and views.
To ingest data, you must set up the Syslog Collector applet on a Broker VM within your network.
The following table summarizes the vendor data that can be ingested, according to log or data type.
Corelight Zeek
Fortinet Fortigate
Okta
Google Workspace
Okta
OneLogin
PingFederate
Operation and System Logs from Cloud Providers: Amazon S3 (generic logs)
Okta
Microsoft Azure
Custom External Sources: Any vendor sending CEF, LEEF, CISCO, CORELIGHT, or RAW formatted Syslog; any vendor logs from a third-party source over FTP, FTPS, or SFTP
Apache Kafka
Box
Dropbox
Elasticsearch Filebeat
Forcepoint DLP
IoT Security
Salesforce.com
ServiceNow CMDB
Workday
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR supports streaming data directly from Prisma Access accounts, Next-Generation Firewalls (NGFW), and Panorama devices to your Cortex XDR
tenants using the Strata Logging Service.
New tenants (and tenants upgraded from XDR to XSIAM) will work with the new direct integration of Next-Generation Firewall and Panorama into Cortex. For
such tenants, there’s no option to use the Strata Logging Service integration.
For tenants where a Strata Logging Service license exists, the configured integrations, such as Next-Generation Firewall and Prisma Access, can be migrated
to Cortex XDR in either of the following ways before the license expires:
More than two weeks before the license for existing integrations with Strata Logging Service expires, manually migrate the integrations, using the
corresponding Migrate Devices buttons on the Collection Integrations page. Make sure you select all your devices to connect directly to Cortex XDR.
Two weeks prior to the end of your Strata Logging Service license, Cortex XDR automatically migrates your integrations to the direct Cortex XDR integration.
Abstract
Stream data directly from other Palo Alto Networks products to Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR supports streaming data directly from Prisma Access accounts, Next-Generation Firewalls (NGFW), and Panorama devices to your Cortex XDR
tenants using the Strata Logging Service.
Ensure you have deployed Panorama and NGFW, and hold Super User permissions for your Customer Support Portal (CSP) account.
Once your tenant has been activated, navigate to the Collection Integrations page to configure your integrations. All devices and accounts allocated to your
CSP accounts are available to integrate.
For Palo Alto Networks Integrations there is an option to turn on or off the collection of URL and File log types. For more information, see Collecting URL and
File log types.
New tenants (and tenants upgraded from XDR to XSIAM) will work with the new direct integration of Next-Generation Firewall and Panorama into Cortex. For
such tenants, there’s no option to use the Strata Logging Service integration.
For tenants where a Strata Logging Service license exists, the configured integrations, such as Next-Generation Firewall and Prisma Access, can be migrated
to Cortex XDR in either of the following ways before the license expires:
More than two weeks before the license for existing integrations with Strata Logging Service expires, manually migrate the integrations, using the
corresponding Migrate Devices buttons on the Collection Integrations page. Make sure you select all your devices to connect directly to Cortex XDR.
Two weeks prior to the end of your Strata Logging Service license, Cortex XDR automatically migrates your integrations to the direct Cortex XDR integration.
Abstract
Learn how to ingest detection data from Next-Generation Firewall and Panorama.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can forward firewall data from your Next-Generation Firewall (NGFW) and Panorama devices to Cortex XDR.
Collection of firewall data from multiple accounts is supported. Super User permissions on both the Cortex XDR tenant accounts and the NGFW or Panorama
accounts are required for this use case. Additionally, the CSP accounts must be in the same account; linked accounts are not supported.
New tenants (and tenants upgraded from XDR to XSIAM) will work with the new direct integration of Next-Generation Firewall and Panorama into Cortex. For
such tenants, there’s no option to use the Strata Logging Service integration.
For tenants where a Strata Logging Service license exists, the configured integrations, such as Next-Generation Firewall and Prisma Access, can be migrated to Cortex XDR in either of the following ways before the license expires:
More than two weeks before the license for existing integrations with Strata Logging Service expires, manually migrate the integrations, using the corresponding Migrate Devices buttons on the Collection Integrations page. Make sure you select all your devices to connect directly to Cortex XDR.
Two weeks prior to the end of your Strata Logging Service license, Cortex XDR automatically migrates your integrations to the direct Cortex XDR integration.
Ensure that you have completed the following on the NGFW or Panorama side:
For Panorama only, ensure that the Panorama Cloud Services plugin is installed.
On the Cortex XDR side, ensure that you have user role permissions for Data Collection > Log Collections.
Configuration of data ingestion from multiple accounts requires Super User permissions on both the Cortex XDR tenant and on the device accounts.
Additionally, the CSP accounts must be in the same account; linked accounts are not supported.
If your firewalls are located in a different region, or bandwidth issues are encountered due to large log size, you can ingest NGFW logs in CEF format,
using the Syslog collector. This solution provides protection, out-of-the-box data modeling, and analytics similar to those for logs ingested into Strata Logging
Service. For more information, see Ingest Next-Generation Firewall logs using the Syslog collector.
In the following procedure, general information is provided for NGFW and Panorama. For detailed instructions, consult the documentation for your
specific devices and Panorama version.
1. In the user interface for setting up firewalls, for Strata Logging Service/Cloud Logging, enable the following options directly, or using device templates.
c. (Optional, depending on your organization's requirements) Select Enable Duplicate Logging (Cloud and On-Premise).
For PAN-OS and Panorama versions 10.1 and later, each firewall requires a separate certificate. Certificates need to be requested through the Customer Support Portal. For PAN-OS and Panorama versions 10.0 and earlier, you are only required to generate one global PSK for all the firewall devices.
Cortex XDR does not validate your firewall credentials; you must ensure that the certificates or PSK details have been updated in your firewalls in order for data to stream.
6. Verify that the connection between the firewalls and Strata Logging Service is valid.
9. On the Collection Integrations page, locate your NGFW data source and select Add Instance to begin a new connection.
10. Select Add NGFW Device or Add Panorama Device, and then do one of the following:
For devices in your account, select one or more devices from Select FW/Panorama devices.
To include devices from other accounts, select Select devices from other accounts, and then select one or more FW or Panorama devices from
other accounts. For cross-account connections, you must have Super User permissions on the Cortex tenant account and the device account.
Devices already connected are listed at the end. A device may be connected via Strata Logging Service, or via Cortex XDR. Rectify any streaming
issues that may arise by checking configurations for the relevant connection type (Strata Logging Service or Cortex XDR).
11. To complete the onboarding process of your devices, on the Next Steps to Connect Your Devices page, expand the relevant device version, and follow
the corresponding instructions.
The connection is established regardless of the firewall credential status and can take up to several minutes. Select Sync now to refresh your instances.
13. Validate that your data is streaming. It might be necessary to create traffic before you verify data streaming.
In your NGFW Standalone Firewall Devices, track the Last communication timestamp.
Run the XQL query: dataset = panw_ngfw_system_raw | filter log_source_id = "[NGFW device SN]"
After you create the NGFW instance, on the Collection Integrations page, expand the NGFW to track the status of your Standalone Firewall Devices and
Panorama Devices.
It might take an hour or longer after connecting the firewall in Cortex XDR until you start seeing notifications that the certificate has been approved, and that the
logging service license has appeared on the firewall.
When Cortex XDR begins receiving detection data, the console begins stitching logs with other Palo Alto Networks-generated logs to form stories. Use the XQL
Search dataset panw_ngfw_*_raw to query your data, where the following logs are supported:
*These datasets use the query field names as described in the Cortex schema documentation.
For stitched raw data, you can query the xdr_data dataset or use any preset designated for stitched data, such as network_story. For query examples,
refer to the in-app XQL Library. When relevant, Cortex XDR can also raise Cortex XDR alerts (Analytics, Correlation Rules, IOC, and BIOC only) from Strata
Logging Service detection data. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only
raised on normalized logs.
IOC and BIOC alerts are applicable on stitched data only and are not available on raw data.
You can see an overview of ingestion status for all log types, and a breakdown of each log type and its daily consumption quota on the NGFW Ingestion
Dashboard.
Abstract
Use the Syslog collector to ingest NGFW logs in CEF format. This method is useful when your firewalls are located in a different region, or bandwidth issues are
encountered due to large log size.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Use the Syslog collector to ingest Next-Generation Firewall (NGFW) logs in CEF format. This method is useful when your firewalls are located in a different
region, or bandwidth issues are encountered due to large log size. This solution provides protection, out-of-the-box data modeling, and analytics similar to those for logs
ingested into Strata Logging Service.
In the following procedure, general information is provided for NGFW and Panorama. For detailed instructions, consult the documentation for your specific
devices and Panorama version, to ensure that you have configured log forwarding correctly for all the log types that you would like to forward to Cortex XDR.
The following steps only cover configuration of the custom log schema (CEF) for a given syslog server. They do not replace the administrator guide’s
configuration coverage of log forwarding.
1. To configure the device to include its IP address in the header of Syslog messages, select Panorama/Device → Setup → Management, click the Edit
icon in the Logging and Reporting Settings section, and navigate to the Log Export and Reporting tab.
2. From the Syslog HOSTNAME Format menu, select ipv4-address or ipv6-address, and click OK.
4. Enter a server profile Name and Location (Location refers to a virtual system, if the device is enabled for virtual systems).
5. On the Servers tab of the Syslog Server Profiles window, click Add, and enter the following information for the Syslog server:
6. Select the Custom Log Format tab, and configure the log formats as follows:
To avoid the possible effects of line formatting, do not copy/paste the message formats directly into the PAN-OS web interface. Instead, paste into a text
editor, remove any carriage return or line feed characters, and then copy and paste into the web interface.
From version 10.0 and later, the log format documented for log types (Traffic, Threat, and URL) exceeds the maximum supported 2048 characters in the
Custom Log Format tab on the firewall and Panorama. Select the CEF keys and values to limit the number of characters to 2048, as per your
requirements.
Escaped Characters: \=
Escape Character: \
Set up a Syslog collector for the logs, as explained in Activate Syslog Collector. In Task 4, ensure that you set Format to CEF.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can forward data from Prisma Access to Cortex XDR. When your Cortex XDR tenant begins receiving detection data, it begins stitching logs with other
Palo Alto Networks-generated logs to form stories. Use XQL Search to query the data.
Collection of data from multiple accounts is supported. Super User permissions on both the Cortex XDR tenant accounts and the Prisma Access accounts are
required for this use case.
New tenants (and tenants upgraded from XDR to XSIAM) will work with the new direct integration of Next-Generation Firewall and Panorama into Cortex. For
such tenants, there’s no option to use the Strata Logging Service integration.
For tenants where a Strata Logging Service license exists, the configured integrations, such as Next-Generation Firewall and Prisma Access, can be migrated
to Cortex XDR in either of the following ways before the license expires:
More than two weeks before the license for existing integrations with Strata Logging Service expires, manually migrate the integrations, using the
corresponding Migrate Devices buttons on the Collection Integrations page. Make sure you select all your devices to connect directly to Cortex XDR.
Two weeks prior to the end of your Strata Logging Service license, Cortex XDR automatically migrates your integrations to the direct Cortex XDR integration.
Configuration of data ingestion from multiple accounts requires Super User permissions in both Cortex XDR tenant and Prisma Access accounts.
Cortex XDR does not validate your Prisma Access account credentials. You must ensure the account has been deployed in order for data to stream.
3. In the Connect Prisma Access dialog box, you can choose to connect Prisma Access to this account or other accounts:
To connect Prisma Access to other accounts, click Connect Prisma Access from other accounts and select the account from the accounts listed.
Click Connect.
On the Collection Integrations page, expand Prisma Access to track the status of your instance.
To ensure the data is streaming into your tenant, query by is_prisma_mobile using XQL.
After you create the Prisma Access instance, on the Collection Integrations page, expand the Prisma Access integration to track the connection, or, if you
want, to Delete the instance.
Abstract
Configure Data Collection Settings to receive alerts from Prisma Cloud Compute.
Ingestion of alerts from Prisma Cloud Compute requires a Cortex XDR Pro per GB license.
To receive alerts from Prisma Cloud Compute, first configure the Collection Integrations settings in Cortex XDR. In Prisma Cloud, you must then create a
webhook, which provides the mechanism to interface Prisma Cloud’s alert system with Cortex XDR. After you set up your webhook, Cortex XDR begins
receiving alerts from Prisma Cloud Compute.
Cortex XDR then groups these alerts into incidents and adds them to the Alerts table. When Cortex XDR begins receiving the alerts, it creates a new Cortex
Query Language (XQL) dataset (prisma_cloud_compute_raw), which you can use to initiate XQL Search queries and to create Correlation Rules. The in-
app XQL Library contains sample search queries.
3. Specify the Name for the Prisma Cloud Compute Collector displayed in Cortex XDR.
4. Save & Generate Token. The token is displayed in a blue box.
Click the Copy icon next to the Username and Password, and record them in a safe place, as you will need to provide them when you configure the
Prisma Cloud Compute Collector for alerts integration. If you forget to record the key and close the window, you will need to generate a new key and
repeat this process. When you are finished, click Done to close the window.
In the Collection Integrations page for the Prisma Cloud Compute Collector that you created, select Copy api url, and record it somewhere safe. You will
need to provide this API URL when you set the Incoming Webhook URL as part of the configuration in Prisma Cloud Compute.
6. Create a webhook as explained in the Webhook Alerts section of the Prisma Cloud Administrator’s Guide (Compute).
b. In Incoming Webhook URL, paste the API URL that you copied and recorded from Copy api url.
c. In Credential Options, select Basic Authentication, and use the Username and Password that you saved when you generated the token.
e. Click Save.
In Cortex XDR, once alerts start to come in, a green check mark appears underneath the Prisma Cloud Compute Collector configuration with the
amount of data received.
8. After Cortex XDR begins receiving data from Prisma Cloud Compute, you can use XQL Search to search for specific data using the
prisma_cloud_compute_raw dataset and view alerts in the Alerts table. In the Cortex XDR Alerts table, the Prisma Cloud Compute alerts are listed as
Prisma Cloud Compute in the ALERT SOURCE column and are classified as Medium in the SEVERITY column.
Abstract
Configure Data Collection Settings in Cortex XDR to receive alerts from Prisma Cloud.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive alerts from Prisma Cloud, first configure the Collection Integrations settings in Cortex XDR. After you set up collection integration, Cortex XDR begins
to receive alerts from Prisma Cloud every 30 seconds.
Cortex XDR then groups these alerts into incidents and adds them to the Alerts table. When Cortex XDR begins receiving the alerts, it creates a new Cortex
Query Language (XQL) dataset (prisma_cloud_raw), which you can use to initiate XQL Search queries and create Correlation Rules. The in-app XQL
Library contains sample search queries.
You can also configure Cortex XDR to collect data directly from other cloud providers using an applicable collector. For more information on the cloud
collectors, see External Data Ingestion Vendor Support. The Prisma Cloud alerts are stitched to this data.
Complete the following tasks before you begin configuring Cortex XDR to receive alerts from Prisma Cloud.
Create an Access Key and Secret Key as explained in the Create and Manage Access Keys section of the Prisma Cloud Administrator’s Guide. Prisma
Cloud System Admin privileges are required for this task.
Copy or download the Access Key ID and Secret Key as you will need them when configuring the Prisma Cloud Collector in Cortex XDR.
You can find your default Prisma Cloud domain in the Prisma Cloud API URL table.
Specify the Prisma Cloud Access Key Id that you received when you created an Access Key.
Specify the Prisma Cloud Secret Key that you received when you created an Access Key.
4. To create Cortex XDR alerts from the ingested Prisma Cloud alerts, click Advanced Settings, and select the desired options:
Incidents: Create Cortex XDR alerts for runtime alerts detected by Prisma Cloud.
Risks: Create Cortex XDR alerts for Prisma Cloud findings and vulnerabilities that could be exploited by threat actors.
In Cortex XDR, once alerts start to come in, a green check mark appears underneath the Prisma Cloud Collector configuration with the amount of data
received.
After you enable the Prisma Cloud Collector, you can make additional changes, as needed.
7. After Cortex XDR begins receiving data from Prisma Cloud, you can use XQL Search to search for specific data, using the prisma_cloud_raw dataset
and to view alerts in the Alerts table. In the Cortex XDR Alerts table, the Prisma Cloud alerts are listed as Prisma Cloud in the ALERT SOURCE column.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To streamline the connection and management of all Palo Alto Networks-generated logs across products in Cortex XDR with a Strata Logging Service, Cortex
XDR can ingest detection data from Strata Logging Service in a more flexible manner using the Strata Logging Service data collector.
You can configure the Strata Logging Service data collector to take logs from other Palo Alto Networks products already logging to one or more existing Strata
Logging Service instances.
Cortex XDR supports streaming data directly from Prisma Access accounts, Next-Generation Firewalls (NGFW), and Panorama devices to your Cortex XDR
tenants using the Cortex Native Data Lake. Existing integrations should be migrated to the Cortex Native Data Lake. Make sure you select all your devices to
connect directly to Cortex XDR. Integrations not migrated manually will be migrated automatically two weeks before the end of the contract with Strata Logging
Service.
For stitched raw data, query the xdr_data dataset or use any preset designated for stitched data, such as network_story. For query examples, refer
to the in-app XQL Library. Cortex XDR can also raise Cortex XDR alerts (Analytics, Correlation Rules, IOC, and BIOC only) when relevant from Strata Logging
Service detection data. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on
normalized logs.
IOC and BIOC alerts are applicable on stitched data only and are not available on raw data.
You can configure Cortex XDR to take Palo Alto Networks-generated firewall logs from other Palo Alto Networks products already logging to an existing Strata
Logging Service.
Select one or more existing Strata Logging Service instances that you want to connect to this Strata Logging Service instance.
Once events start to come in, a green check mark appears underneath the Strata Logging Service configuration.
After you create the Strata Logging Service Collector, you can make additional changes, as needed.
7. After Cortex XDR begins receiving data from a Strata Logging Service, you can use XQL Search to search for specific data, using the xdr_data
dataset.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
The Palo Alto Networks IoT Security solution discovers unmanaged devices, detects behavioral anomalies, recommends policy based on risk, and automates
enforcement without the need for additional sensors or infrastructure. The Cortex XDR IoT Security integration enables you to ingest alerts and device
information from your IoT Security instance.
Cortex XDR automatically creates a new dataset for device activities (panw_iot_security_devices_raw) and a new dataset for alerts
(panw_iot_security_alerts_raw), which you can use to initiate XQL Search queries and create Correlation Rules.
Before you configure the IoT Security Collector, generate an access key and a key ID for the integration.
1. Log in to the PAN IoT Security portal and click your user name.
2. Select Preferences.
3. In the User Role & Access section, Create an API Access Key.
4. Download and save the access key and key ID in a secure location.
For more information about the PAN IoT Security API, see Get Started with the IoT Security API.
Configure the IoT Security alerts and assets collection in Cortex XDR.
Customer ID: Tenant domain part of the FQDN used for your IoT Security account. For example, in yourcorp.iot.paloaltonetworks.com,
the customer ID is yourcorp. The customer ID is unique and case sensitive. After you save the integration instance, you can't edit the Customer
ID.
Integration Scope: Select at least one of the two values, Alerts and Devices, depending on which information you want to ingest.
When events start to come in, a green check mark appears underneath the IoT Security Collector configuration with the date and time that the data was
last synced.
After you enable the IoT Security Collector, you can make additional changes as needed. To modify a configuration, select any of the following options.
6. After Cortex XDR begins receiving data from IoT Security, you can use the XQL Search to search for logs in the new datasets,
panw_iot_security_devices_raw for device activities, and panw_iot_security_alerts_raw for alerts.
Abstract
Learn about the implications of turning off or on collection of URL and File logs.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
For Palo Alto Networks integrations, you can choose whether to collect URL and File type logs. These logs enhance your cyber analytics, correlation rules and
visibility for investigation. However, if you want to reduce ingestion charges, you can globally turn off collection of URL and File log types for all Palo Alto
Networks Integrations.
When collection is turned off, some detectors won’t detect cyber attacks or provide full context, and correlation rules won’t be able to detect cyber events. For
a full list of affected detectors, see Detectors connected to URL and File log types.
You can also calculate the amount of ingestion that URL and File log types are consuming by looking at the NGFW dashboard. This dashboard provides an
overview of the PAN-NGFW ingestion status of all log types (including URL and File log types) and their daily consumption quota. For more information, see
Predefined dashboards.
You can turn on or off URL and File log types collection on the Collection Integrations page.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you turn off URL and File log types collection, some detectors are unable to detect cyber attacks or provide full context, and correlation rules are unable to
detect cyber events.
Rare connection to external IP address or host by an application using RMI-IIOP or LDAP protocol
DNS Tunneling
A user accessed a resource for the first time via SSO - silent
Abstract
Cortex XDR supports external data ingestion for a variety of service types and vendors.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Abstract
Learn more about integrating Slack and a Syslog Receiver to Cortex XDR.
Slack: To send outbound notifications to Slack. For more information, see Integrate Slack for outbound notifications.
Syslog server: To send Cortex XDR notifications to your Syslog server. For more information, see Integrate a syslog receiver.
Abstract
Cortex XDR can ingest network connection logs from different third-party sources.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can ingest network connection logs from different third-party sources.
Abstract
Take advantage of Cortex XDR investigation capabilities and set up network flow log ingestion for your Amazon S3 logs using an AWS CloudFormation Script.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can forward network flow logs to Cortex XDR from Amazon Simple Storage Service (Amazon S3).
To receive network flow logs from Amazon S3, you must first configure data collection from Amazon S3. You can then configure the Collection Integrations
settings in Cortex XDR for Amazon S3. After you set up collection integration, Cortex XDR begins receiving new logs and data from the source.
You can either configure Amazon S3 with SQS notification manually on your own or use the AWS CloudFormation Script that we have created for you to make
the process easier. The instructions below explain how to configure Cortex XDR to receive network flow logs from Amazon S3 using SQS. To perform these
steps manually, see Configure Data Collection from Amazon S3 Manually.
For more information on configuring data collection from Amazon S3, see the Amazon S3 Documentation.
As soon as Cortex XDR begins receiving logs, the app automatically creates an Amazon S3 Cortex Query Language (XQL) dataset (aws_s3_raw). This
enables you to search the logs with XQL Search using the dataset. For example queries, refer to the in-app XQL Library. For enhanced cloud protection, you
can also configure Cortex XDR to ingest network flow logs as Cortex XDR network connection stories, which you can query with XQL Search using the
xdr_data dataset with the preset called network_story. Cortex XDR can also raise Cortex XDR alerts (Analytics, Correlation Rules, IOC, and BIOC) when
relevant from Amazon S3 logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only
raised on normalized logs.
Cloud-tailored investigations
Be sure you do the following tasks before you begin configuring data collection from Amazon S3 using the AWS CloudFormation Script.
Ensure that you can access your Amazon Virtual Private Cloud (VPC) and have the necessary permissions to create flow logs.
Determine how you want to provide access to Cortex XDR to your logs and perform API operations. You have the following options:
Designate an AWS IAM user, where you will need to know the Account ID for the user and have the relevant permissions to create an access
key/id for the relevant IAM user. This is the default option as explained in Configure the Amazon S3 Collection in Cortex XDR by selecting Access
Key.
Create an assumed role in AWS to delegate permissions to a Cortex XDR AWS service. This role grants Cortex XDR access to your flow logs. For
more information, see Creating a role to delegate permissions to an AWS service. This is the Assumed Role option as described in the Configure
the Amazon S3 collection in Cortex XDR. For more information on creating an assumed role for Cortex XDR, see Create an assumed role.
To collect Amazon S3 logs that use server-side encryption (SSE), the user role must have an IAM policy that states that Cortex XDR has kms:Decrypt
permissions. With this permission, Amazon S3 automatically detects if a bucket is encrypted and decrypts it. If you want to collect encrypted logs from
different accounts, you must have the decrypt permissions for the user role also in the key policy for the master account Key Management Service
(KMS). For more information, see Allowing users in other accounts to use a KMS key.
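If you prefer to script the KMS permission, the following is a minimal boto3 sketch; the user name, key ARN, and policy name are hypothetical placeholders, not values from this guide.

import json
import boto3

# Extra statement granting Cortex XDR decrypt rights on the KMS key that encrypts the bucket.
kms_statement = {
    "Effect": "Allow",
    "Action": "kms:Decrypt",
    "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
}

# Attach the statement as an inline policy on the IAM user (or role) that Cortex XDR uses.
iam = boto3.client("iam")
iam.put_user_policy(
    UserName="cortex-xdr-reader",
    PolicyName="cortex-xdr-kms-decrypt",
    PolicyDocument=json.dumps({"Version": "2012-10-17", "Statement": [kms_statement]}),
)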
Configure Cortex XDR to receive network flow logs from Amazon S3 using the CloudFormation Script.
c. To provide access to Cortex XDR to your logs and to perform API operations using a designated AWS IAM user, leave the Access Key option
selected. Otherwise, select Assumed Role, and ensure that you Create an Assumed Role for Cortex XDR before continuing with these instructions.
d. For the Log Type, select Flow Logs to configure your log collection to receive network flow logs from Amazon S3, and the following text is
displayed under the field Download CloudFormation Script. See instructions here.
e. Click the Download CloudFormation Script link to download the script to your computer.
2. Create a new Stack in the CloudFormation Console with the script you downloaded from Cortex XDR.
For more information on creating a Stack, see Creating a stack on the AWS CloudFormation console.
b. From the CloudFormation → Stacks page, ensure that you have selected the correct region for your configuration.
d. Specify the template that you want AWS CloudFormation to use to create your stack. This template is the script that you downloaded from Cortex
XDR , which will create an Amazon S3 bucket, Amazon Simple Queue Service (SQS) queue, and Queue Policy. Configure the following settings in
the Specify template page.
Specify Template
Upload a template file: Choose file, and select the cortex-xdr-create-s3-with-sqs-flow-logs.json file that you
downloaded from Cortex XDR.
e. Click Next.
f. In the Specify stack details page, configure the following stack details.
Bucket Name: Specify the name of the S3 bucket to create, where you can leave the default populated name as xdr-flow-logs or
create a new one. The name must be unique.
Publisher Account ID: Specify the AWS IAM user account ID with whom you are sharing access.
Queue Name: Specify the name for your Amazon SQS queue to create, where you can leave the default populated name as xdr-flow
or create a new one. The name must be unique.
g. Click Next.
h. In the Configure stack options page, there is nothing to configure, so click Next.
i. In the Review page, look over the stack configurations settings that you have configured and if they are correct, click Create stack. If you need to
make a change, click Edit beside the particular step that you want to update.
The stack is created and is opened with the Events tab displayed. It can take a few minutes for the new Amazon S3 bucket, SQS queue, and
Queue Policy to be created. Click Refresh to get updates. Once everything is created, leave the stack opened in the current browser, because you
will need to access information in the stack for other steps detailed below.
For the Amazon S3 bucket created using CloudFormation, it is the customer’s responsibility to define a retention policy by creating a Lifecycle rule
in the Management tab. We recommend setting the retention policy to at least 7 days to ensure that the data is retrieved under all circumstances.
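If you would rather script the retention policy than use the Management tab, the following is a minimal boto3 sketch; it assumes the default bucket name xdr-flow-logs, and the rule ID is hypothetical.

import boto3

s3 = boto3.client("s3")

# 7-day expiration rule applied to every object in the bucket created by the CloudFormation stack.
s3.put_bucket_lifecycle_configuration(
    Bucket="xdr-flow-logs",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-flow-logs-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},      # apply to all objects
                "Expiration": {"Days": 7},
            }
        ]
    },
)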
3. Configure your Amazon Virtual Private Cloud (VPC) with flow logs:
1. Open the Amazon VPC Console, and in the Resources by Region listed, select VPCs to view the VPCs configured for the current region selected.
To select another VPC from another region, select See all regions, and select one of them.
To create a new VPC, click Launch VPC Wizard. For more information, see AWS VPC Flow Logs.
2. From the list of Your VPCs, select the checkbox beside the VPC that you want to configure to create flow logs, and then select Actions → Create
flow log.
Maximum aggregation interval: If you anticipate a heavy flow of traffic, select 1 minute. Otherwise, leave the default setting as 10 minutes.
Destination: Select Send to an Amazon S3 bucket as the destination to publish the flow log data.
S3 bucket ARN: Specify the Amazon Resource Name (ARN) for your Amazon S3 bucket.
You can retrieve your bucket’s ARN by opening another instance of the AWS Management Console in a browser window and opening the
Amazon S3 console. In the Buckets section, select the bucket that you created for collecting the Amazon S3 flow logs when you created
your stack, click Copy ARN, and paste the ARN in this field.
Log record format: Select Custom Format, and in the Log Format field, specify the following fields to include in the flow log record, which you
can select from the list displayed:
action
az-id
bytes
dstaddr
dstport
end
flow-direction
instance-id
interface-id
packets
log-status
pkt-srcaddr
pkt-dstaddr
protocol
region
srcaddr
srcport
start
sublocation-id
sublocation-type
subnet-id
tcp-flags
type
vpc-id
version
Once the flow log is created, a message indicating that the flow log was successfully created is displayed at the top of the Your VPCs page.
In addition, if you open your Amazon S3 bucket configurations, by selecting the bucket from the Amazon S3 console, the Objects tab contains a
folder called AWSLogs/ to collect the flow logs.
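As an alternative to the console steps above, the flow log can also be created programmatically. The following boto3 sketch approximates the same configuration; the region, VPC ID, and bucket ARN are hypothetical placeholders, and it is an illustration rather than the documented procedure.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Custom log record format built from the fields listed above.
log_format = (
    "${action} ${az-id} ${bytes} ${dstaddr} ${dstport} ${end} ${flow-direction} "
    "${instance-id} ${interface-id} ${packets} ${log-status} ${pkt-srcaddr} "
    "${pkt-dstaddr} ${protocol} ${region} ${srcaddr} ${srcport} ${start} "
    "${sublocation-id} ${sublocation-type} ${subnet-id} ${tcp-flags} ${type} "
    "${vpc-id} ${version}"
)

ec2.create_flow_logs(
    ResourceIds=["vpc-0abc123456789def0"],        # hypothetical VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::xdr-flow-logs",  # the bucket created by the stack
    LogFormat=log_format,
    MaxAggregationInterval=60,                    # 1 minute, for heavy traffic
)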
4. Configure access keys for the AWS IAM user that Cortex XDR uses for API operations.
It is the responsibility of the customer’s organization to ensure that the user who performs this task of creating the access key is designated with
the relevant permissions. Otherwise, this can cause the process to fail with errors.
Skip this step if you are using an Assumed Role for Cortex XDR.
1. Open the AWS IAM Console, and in the navigation pane, select Access management → Users.
3. Select the Security credentials tab, scroll down to the Access keys section, and click Create access key.
4. Click the copy icon next to the Access key ID and Secret access key keys, where you must click Show secret access key to see the secret key
and record them somewhere safe before closing the window. You will need to provide these keys when you edit the Access policy of the SQS
queue and when setting the AWS Client ID and AWS Client Secret in Cortex XDR. If you forget to record the keys and close the window, you will
need to generate new keys and repeat this process.
For more information, see Managing access keys for IAM users.
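For reference, the same access keys can be created with a short boto3 sketch; the user name is a hypothetical placeholder.

import boto3

iam = boto3.client("iam")

# Creates a new key pair for the IAM user that Cortex XDR will use.
resp = iam.create_access_key(UserName="cortex-xdr-reader")
print("AWS Client ID:    ", resp["AccessKey"]["AccessKeyId"])
print("AWS Client Secret:", resp["AccessKey"]["SecretAccessKey"])  # shown only once; store it securely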
5. When you create an Assumed Role in Cortex XDR, ensure that you edit the policy that defines the permissions for the role with the S3 Bucket ARN and
SQS ARN, which is taken from the Stack you created.
c. Set these parameters, where the parameters change depending on whether you configured an Access Key or Assumed Role.
SQS URL: Specify the SQS URL, which is taken from the stack you created. In the browser you left open after creating the stack, open the
Outputs tab, copy the Value of the QueueURL and paste it in this field.
AWS Client ID: Specify the Access key ID, which you received when you created access keys for the AWS IAM user in AWS.
AWS Client Secret: Specify the Secret access key you received when you created access keys for the AWS IAM user in AWS.
Role ARN: Specify the Role ARN for the Assumed Role you created for Cortex XDR in AWS.
External Id: Specify the External Id for the Assumed Role you created for Cortex XDR in AWS.
Log Type: Select Flow Logs to configure your log collection to receive network flow logs from Amazon S3. When configuring network flow log
collection, the following additional field is displayed for Enhanced Cloud Protection.
You can Normalize and enrich flow logs by selecting the checkbox. If selected, Cortex XDR ingests the network flow logs as XDR network
connection stories, which you can query using XQL Search from the xdr_data dataset using the preset called network_story.
Once events start to come in, a green check mark appears underneath the Amazon S3 configuration with the number of logs received.
Abstract
If you do not designate a separate AWS IAM user to provide access to Cortex XDR to your logs and to perform API operations, you can create an assumed
role in AWS to delegate permissions to a Cortex XDR AWS service. This role grants Cortex XDR access to your logs. For more information, see Creating a role
to delegate permissions to an AWS service.
These instructions explain how to set up an Assumed Role when setting up any type of Amazon S3 Collector in Cortex XDR.
1. Log in to the AWS Management Console to create a role for Cortex XDR.
a. Create the role in the same region as your AWS account, and use the following values and options when creating the role.
Type of Trusted Entity → Another AWS Account, and specify the Account ID as 006742885340. When using a Cortex XDR FedRAMP
environment, specify the Account ID as 685269782068.
Under Options, select Require external ID, which is a unique alphanumeric string, and generate a secure UUIDv4 using an online UUID
generator. Copy the External ID, as you will use it when configuring the Amazon S3 Collector in Cortex XDR.
In AWS this field is optional, but it must be configured to set up the Amazon S3 Collector in Cortex XDR.
b. Click Next and add the AWS Managed Policy for Security Audit.
2. Create the policy that defines the permissions for the Cortex XDR role.
b. In the navigation pane on the left, select Access Management → Policies → Create Policy.
Copy the following JSON policy and paste it within the editor window.
The policy contains <s3-arn> and <sqs-arn> placeholders. These will be filled in later depending on which Amazon S3 logs you are configuring:
network flow logs, audit logs, or generic logs.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "<s3-arn>/*"
},
{
"Effect": "Allow",
"Action": [
"sqs:ReceiveMessage",
"sqs:DeleteMessage",
"sqs:ChangeMessageVisibility"
],
"Resource": "<sqs-arn>"
}
]
}
3. Edit the role you created in Step 1 and attach the policy to the role.
5. Continue with the task for the applicable Amazon S3 logs you want to configure.
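For reference, the following boto3 sketch approximates steps 1 through 3 above: it creates the role with an external ID in the trust policy and attaches the permissions policy with the placeholders filled in. The role name and resource ARNs are hypothetical, the account ID is the standard one listed above (use the FedRAMP account ID where applicable), and the SecurityAudit managed policy attachment is omitted for brevity.

import json
import uuid
import boto3

iam = boto3.client("iam")

external_id = str(uuid.uuid4())  # the External ID you will also enter in Cortex XDR
print("External ID:", external_id)

# Trust policy allowing the Cortex XDR AWS account to assume this role with the external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::006742885340:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
        }
    ],
}

role = iam.create_role(
    RoleName="cortex-xdr-s3-collector",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
print("Role ARN:", role["Role"]["Arn"])

# Permissions policy from the JSON above, with the placeholders filled in (hypothetical ARNs).
permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::xdr-flow-logs/*"},
        {
            "Effect": "Allow",
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:ChangeMessageVisibility"],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:xdr-flow",
        },
    ],
}
iam.put_role_policy(
    RoleName="cortex-xdr-s3-collector",
    PolicyName="cortex-xdr-s3-sqs-read",
    PolicyDocument=json.dumps(permissions),
)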
Abstract
Set up network flow log ingestion for your Amazon S3 logs manually (without a script).
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
There are various reasons why you may need to configure data collection from Amazon S3 manually, as opposed to using the CloudFormation Script provided
in Cortex XDR. For example, if your organization does not use CloudFormation scripts, you will need to follow the instructions below, which explain at a high-
level how to perform these steps manually with a link to the relevant topic in the Amazon S3 documentation with the detailed steps to follow.
As soon as Cortex XDR begins receiving logs, the app automatically creates an Amazon S3 Cortex Query Language (XQL) dataset (aws_s3_raw). This
enables you to search the logs with XQL Search using the dataset. For example queries, refer to the in-app XQL Library. For enhanced cloud protection, you
can also configure Cortex XDR to ingest network flow logs as Cortex XDR network connection stories, which you can query with XQL Search using the
xdr_data dataset with the preset called network_story. Cortex XDR can also raise Cortex XDR alerts (Analytics, Correlation Rules, IOC, and BIOC) when
relevant from Amazon S3 logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only
raised on normalized logs.
Cloud-tailored investigations
Be sure you do the following tasks before you begin configuring data collection manually from Amazon CloudWatch to Amazon S3.
If you already have an Amazon S3 bucket configured with VPC flow logs that you want to use for this configuration, you do not need to perform the prerequisite
steps detailed in the first two bullets.
Ensure that you have at a minimum the following permissions in AWS for an Amazon S3 bucket and Amazon Simple Queue Service (SQS).
Create a dedicated Amazon S3 bucket for collecting network flow logs with the default settings. For more information, see Creating a bucket using the
Amazon S3 Console.
It is your responsibility to define a retention policy for your Amazon S3 bucket by creating a Lifecycle rule in the Management tab. We recommend setting
the retention policy to at least 7 days to ensure that the data is retrieved under all circumstances.
Ensure that you can access your Amazon Virtual Private Cloud (VPC) and have the necessary permissions to create flow logs.
Determine how you want to provide access to Cortex XDR to your logs and perform API operations. You have the following options.
Designate an AWS IAM user, where you will need to know the Account ID for the user and have the relevant permissions to create an access
key/id for the relevant IAM user. This is the default option as explained in Configure the Amazon S3 collection by selecting Access Key.
Create an assumed role in AWS to delegate permissions to a Cortex XDR AWS service. This role grants Cortex XDR access to your flow logs. For
more information, see Creating a role to delegate permissions to an AWS service. This is the Assumed Role option as described in the Configure
the Amazon S3 collection. For more information on creating an assumed role for Cortex XDR , see Create an assumed role.
To collect Amazon S3 logs that use server-side encryption (SSE), the user role must have an IAM policy that states that Cortex XDR has kms:Decrypt
permissions. With this permission, Amazon S3 automatically detects if a bucket is encrypted and decrypts it. If you want to collect encrypted logs from
different accounts, you must have the decrypt permissions for the user role also in the key policy for the master account Key Management Service
(KMS). For more information, see Allowing users in other accounts to use a KMS key.
Configure Cortex XDR to receive network flow logs from Amazon S3 manually.
2. From the menu bar, ensure that you have selected the correct region for your configuration.
3. Configure your Amazon Virtual Private Cloud (VPC) with flow logs. For more information, see AWS VPC Flow Logs.
If you already have an Amazon S3 bucket configured with VPC flow logs, skip this step and go to Configure an Amazon Simple Queue Service (SQS).
4. Configure an Amazon Simple Queue Service (SQS). For more information, see Configuring Amazon SQS queues (console).
Ensure that you create your Amazon S3 bucket and Amazon SQS queue in the same region.
6. Configure access keys for the AWS IAM user that Cortex XDR uses for API operations. For more information, see Managing access keys for IAM users.
It is the responsibility of the customer’s organization to ensure that the user who performs this task of creating the access key is designated with
the relevant permissions. Otherwise, this can cause the process to fail with errors.
Skip this step if you are using an Assumed Role for Cortex XDR.
7. Update the Access Policy of your SQS queue and grant the required permissions mentioned above to the relevant IAM user. For more information, see
Granting permissions to publish event notification messages to a destination.
Skip this step if you are using an Assumed Role for Cortex XDR.
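A minimal boto3 sketch of such an access policy follows; the account ID, queue, bucket, and user names are hypothetical placeholders, and your exact statements may differ.

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # hypothetical region

queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/xdr-flow"  # hypothetical queue URL
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # let the S3 bucket publish object-created notifications to the queue
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:111122223333:xdr-flow",
            "Condition": {"ArnLike": {"aws:SourceArn": "arn:aws:s3:::xdr-flow-logs"}},
        },
        {   # let the IAM user used by Cortex XDR consume messages
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/cortex-xdr-reader"},
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:ChangeMessageVisibility"],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:xdr-flow",
        },
    ],
}
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})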
c. Set these parameters, where the parameters change depending on whether you configured an Access Key or Assumed Role.
To provide access to Cortex XDR to your logs and perform API operations using a designated AWS IAM user, leave the Access Key option
selected. Otherwise, select Assumed Role, and ensure that you Create an Assumed Role for Cortex XDR before continuing with these
instructions. In addition, when you create an Assumed Role for Cortex XDR, ensure that you edit the policy that defines the permissions for
the role with the Amazon S3 Bucket ARN and SQS ARN.
SQS URL: Specify the SQS URL, which is the ARN of the Amazon SQS that you configured in the AWS Management Console. For more
information on how to retrieve your Amazon SQS ARN, see the Specify SQS queue field when you configure an event notification to your
Amazon SQS whenever a file is written to your Amazon S3 bucket.
AWS Client ID: Specify the Access key ID, which you received when you created access keys for the AWS IAM user in AWS.
AWS Client Secret: Specify the Secret access key you received when you created access keys for the AWS IAM user in AWS.
Role ARN: Specify the Role ARN for the Assumed Role for Cortex XDR in AWS.
External Id: Specify the External Id for the Assumed Role for Cortex XDR in AWS.
Log Type: Select Flow Logs to configure your log collection to receive network flow logs from Amazon S3. When configuring network flow log
collection, the following additional field is displayed for Enhanced Cloud Protection.
You can Normalize and enrich flow logs by selecting the checkbox. When selected, Cortex XDR ingests the network flow logs as Cortex
XDR network connection stories, which you can query using XQL Search from the xdr_data dataset using the preset called
network_story.
Once events start to come in, a green check mark appears underneath the Amazon S3 configuration with the number of logs received.
Abstract
Take advantage of Cortex XDR investigation capabilities and set up network Route 53 ingestion for your Amazon S3 logs using an AWS CloudFormation Script.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can forward network AWS Route 53 DNS logs to Cortex XDR from Amazon Simple Storage Service (Amazon S3).
To receive network Route 53 DNS logs from Amazon S3, you must first configure data collection from Amazon S3. You can then configure the Collection
Integrations settings in Cortex XDR for Amazon S3. After you set up collection integration, Cortex XDR begins receiving new logs and data from the source.
You can configure Amazon S3 with SQS notification using the AWS CloudFormation Script that we have created for you to make the process easier. The
instructions below explain how to configure Cortex XDR to receive network Route 53 DNS logs from Amazon S3 using SQS.
For more information on configuring data collection from Amazon S3 for Route 53 DNS logs, see the AWS Documentation.
As soon as Cortex XDR begins receiving logs, the app automatically creates an Amazon Route 53 Cortex Query Language (XQL) dataset
(amazon_route53_raw). This enables you to search the logs with XQL Search using the dataset. For example queries, refer to the in-app XQL Library. For enhanced cloud protection, you can also configure Cortex XDR to ingest the Route 53 DNS logs as Cortex XDR network connection stories, which you can query with XQL Search using the xdr_data dataset with the preset called network_story.
Cloud-tailored investigations
Be sure you do the following tasks before you begin configuring data collection from Amazon S3 using the AWS CloudFormation Script.
Ensure that you have the proper permissions to run AWS CloudFormation with the script provided in Cortex XDR. You need at a minimum the following
permissions in AWS for an Amazon S3 bucket and Amazon Simple Queue Service (SQS):
Ensure that you can access your Amazon Virtual Private Cloud (VPC) and have the necessary permissions to create Route 53 Resolver Query logs.
Determine how you want to provide access to Cortex XDR to your logs and perform API operations. You have the following options.
Designate an AWS IAM user, where you will need to know the Account ID for the user and have the relevant permissions to create an access
key/id for the relevant IAM user. This is the default option when you configure the Amazon S3 collection by selecting Access Key.
Create an assumed role in AWS to delegate permissions to a Cortex XDR AWS service. This role grants Cortex XDR access to your flow logs. For
more information, see Creating a role to delegate permissions to an AWS service. This is the Assumed Role option when you configure the Amazon
S3 collection in Cortex XDR. For more information on creating an assumed role for Cortex XDR, see Create an assumed role.
To collect Amazon S3 logs that use server-side encryption (SSE), the user role must have an IAM policy that states that Cortex XDR has kms:Decrypt
permissions. With this permission, Amazon S3 automatically detects if a bucket is encrypted and decrypts it. If you want to collect encrypted logs from
different accounts, you must have the decrypt permissions for the user role also in the key policy for the master account Key Management Service
(KMS). For more information, see Allowing users in other accounts to use a KMS key.
Configure Cortex XDR to receive network Route 53 DNS logs from Amazon S3 using the CloudFormation Script.
c. To provide access to Cortex XDR to your logs and to perform API operations using a designated AWS IAM user, leave the Access Key option
selected. Otherwise, select Assumed Role, and ensure that you Create an Assumed Role for Cortex XDR before continuing with these instructions.
d. For the Log Type, select Route 53 to configure your log collection to receive network Route 53 DNS logs from Amazon S3, and the following text is
displayed under the field Download CloudFormation Script. See instructions here.
e. Click the Download CloudFormation Script link to download the script to your computer.
2. Create a new Stack in the CloudFormation Console with the script you downloaded from Cortex XDR.
For more information on creating a Stack, see Creating a stack on the AWS CloudFormation console.
b. From the CloudFormation → Stacks page, ensure that you have selected the correct region for your configuration.
d. Specify the template that you want AWS CloudFormation to use to create your stack. This template is the script that you downloaded from Cortex
XDR , which will create an Amazon S3 bucket, Amazon Simple Queue Service (SQS) queue, and Queue Policy. Configure the following settings in
the Specify template page.
Specify Template
Upload a template file: Choose file, and select the CloudFormation-Script.json file that you downloaded.
f. In the Specify stack details page, configure the following stack details.
Bucket Name: Specify the name of the S3 bucket to create, where you can leave the default populated name as xdr-route53-logs or
create a new one. The name must be unique.
Publisher Account ID: Specify the AWS IAM user account ID with whom you are sharing access.
Queue Name: Specify the name for your Amazon SQS queue to create, where you can leave the default populated name as xdr-
route53 or create a new one. The name must be unique.
g. Click Next.
h. In the Configure stack options page, there is nothing to configure, so click Next.
i. In the Review page, look over the stack configurations settings that you have configured and if they are correct, click Create stack. If you need to
make a change, click Edit beside the particular step that you want to update.
The stack is created and is opened with the Events tab displayed. It can take a few minutes for the new Amazon S3 bucket, SQS queue, and
Queue Policy to be created. Click Refresh to get updates. Once everything is created, leave the stack opened in the current browser as you will
need to access information in the stack for other steps detailed below.
For the Amazon S3 bucket created using CloudFormation, it is the customer’s responsibility to define a retention policy by creating a Lifecycle rule
in the Management tab. We recommend setting the retention policy to at least 7 days to ensure that the data is retrieved under all circumstances.
b. From the menu bar, ensure that you have selected the correct region for your configuration.
e. Set the following parameters in the different sections on the Configure query logging page.
Destination for query logs: Select S3 bucket as the place where you want Resolver to publish query logs.
Amazon S3 bucket: Browse S3 to select the Amazon S3 bucket created after running the CloudFormation script, which is by default
called xdr-route53-logs or select the one that you created.
Add VPC: Clicking the Add VPC button opens the Add VPC page, where you can choose the VPCs that you want to log queries for.
When you are done, click Add.
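If you prefer to script the Resolver query-logging configuration instead of using the console, a hedged boto3 sketch is shown below; the region, names, and VPC ID are hypothetical placeholders.

import uuid
import boto3

r53r = boto3.client("route53resolver", region_name="us-east-1")  # hypothetical region

# Create a query-logging configuration that writes to the S3 bucket created by the stack.
config = r53r.create_resolver_query_log_config(
    Name="cortex-xdr-route53-logs",                  # hypothetical configuration name
    DestinationArn="arn:aws:s3:::xdr-route53-logs",  # default bucket name from the stack
    CreatorRequestId=str(uuid.uuid4()),              # idempotency token
)

# Associate the configuration with each VPC whose DNS queries you want to log.
r53r.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=config["ResolverQueryLogConfig"]["Id"],
    ResourceId="vpc-0abc123456789def0",              # hypothetical VPC ID
)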
4. Configure access keys for the AWS IAM user that Cortex XDR uses for API operations.
It is the responsibility of the customer’s organization to ensure that the user who performs this task of creating the access key is designated with
the relevant permissions. Otherwise, this can cause the process to fail with errors.
Skip this step if you are using an Assumed Role for Cortex XDR.
a. Open the AWS IAM Console, and in the navigation pane, select Access management → Users.
c. Select the Security credentials tab, scroll down to the Access keys section, and click Create access key.
d. Click the copy icon next to the Access key ID and Secret access key keys, where you must click Show secret access key to see the secret key
and record them somewhere safe before closing the window. You will need to provide these keys when you edit the Access policy of the SQS
queue and when setting the AWS Client ID and AWS Client Secret in Cortex XDR. If you forget to record the keys and close the window, you will
need to generate new keys and repeat this process.
For more information, see Managing access keys for IAM users.
Skip this step if you are using an Access Key to provide access to Cortex XDR.
c. Set these parameters, where the parameters change depending on whether you configured an Access Key or Assumed Role.
SQS URL: Specify the SQS URL, which is taken from the stack you created. In the browser you left open after creating the stack, open the
Outputs tab, copy the Value of the QueueURL and paste it in this field.
AWS Client ID: Specify the Access key ID, which you received when you created access keys for the AWS IAM user in AWS.
AWS Client Secret: Specify the Secret access key you received when you created access keys for the AWS IAM user in AWS.
Role ARN: Specify the Role ARN for the Assumed Role you created for Cortex XDR in AWS.
External Id: Specify the External Id for the Assumed Role you created for Cortex XDR in AWS.
Log Type: Select Route 53 to configure your log collection to receive network Route 53 DNS logs from Amazon S3. When configuring
network Route 53 log collection, the following additional field is displayed for Enhanced Cloud Protection.
You can Normalize DNS logs by selecting the checkbox (default configuration). When selected, Cortex XDR ingests the network Route 53
DNS logs as XDR network connection stories, which you can query using XQL Search from the xdr_data dataset using the preset called
network_story.
Once events start to come in, a green check mark appears underneath the Amazon S3 configuration with the number of logs received.
Abstract
To take advantage of Cortex XDR investigation and detection capabilities while using Check Point firewalls, forward your firewall logs to Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use Check Point FW1/VPN1 firewalls, you can still take advantage of Cortex XDR investigation and detection capabilities by forwarding your Check Point
firewall logs to Cortex XDR. Check Point firewall logs can be used as the sole data source, however, you can also use Check Point firewall logs in conjunction
with Palo Alto Networks firewall logs and additional data sources.
Cortex XDR can stitch data from Check Point firewalls with other logs to make up network stories searchable in the Query Builder and in Cortex Query
Language (XQL) queries. Cortex XDR can also return raw data from Check Point firewalls in XQL queries.
In terms of alerts, Cortex XDR can both surface native Check Point firewall alerts and raise its own alerts on network activity. Alerts are displayed throughout
Cortex XDR alert, incident, and investigation views.
To integrate your logs, you first need to set up an applet in a Broker VM within your network to act as a Syslog Collector. You then configure your Check Point
firewall policy to log all traffic and set up the Log Exporter on your Check Point Log Server to forward logs to the Syslog Collector in a CEF format.
As soon as Cortex XDR starts to receive logs, the app can begin stitching network connection logs with other logs to form network stories. Cortex XDR can
also analyze your logs to raise Analytics alerts and can apply IOC, BIOC, and Correlation Rule matching. You can also use queries to search your network
connection logs.
1. Ensure that your Check Point firewalls meet the following requirements.
4. Configure the Check Point firewall to forward Syslog events in CEF format to the Syslog Collector.
Configure your firewall policy to log all traffic and set up the Log Exporter to forward logs to the Syslog Collector. For more information on setting up Log
Exporter, see the Check Point documentation.
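To confirm that the Broker VM applet is reachable before pointing Log Exporter at it, you can send a single test event over TCP. The sketch below uses a made-up CEF record and a hypothetical Broker VM address and port; it only verifies connectivity, not the exact Check Point field layout.

import socket
from datetime import datetime, timezone

BROKER_VM = ("192.0.2.10", 514)  # hypothetical Syslog Collector address and listening port

# A single sample event in CEF format, wrapped in a syslog header (newline-terminated for TCP framing).
timestamp = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
cef = (
    "CEF:0|Check Point|VPN-1 & FireWall-1|R81|accept|accept|5|"
    "src=10.1.1.5 dst=203.0.113.7 spt=51544 dpt=443 proto=TCP"
)
message = f"<134>{timestamp} cp-log-server {cef}\n"

with socket.create_connection(BROKER_VM, timeout=5) as sock:
    sock.sendall(message.encode("utf-8"))
print("Test event sent")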
Abstract
Extend Cortex XDR visibility into logs from Cisco ASA firewalls and Cisco AnyConnect VPN.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use Cisco ASA firewalls or Cisco AnyConnect VPN, you can take advantage of Cortex XDR investigation and detection capabilities by forwarding your
firewall and AnyConnect VPN logs to Cortex XDR. This enables Cortex XDR to examine your network traffic to detect anomalous behavior. Cortex XDR can use
Cisco ASA firewall logs and AnyConnect VPN logs as the sole data source, but can also use Cisco ASA firewall logs in conjunction with Palo Alto Networks
firewall logs. For additional endpoint context, you can also use Cortex XDR to collect and alert on endpoint data.
As soon as Cortex XDR starts to receive logs, the app can begin stitching network connection logs with other logs to form network stories. Cortex XDR can
also analyze your logs to raise Analytics alerts and can apply IOC, BIOC, and Correlation Rules matching. You can also use queries to search your network
connection logs using the Cisco Cortex Query Language (XQL) dataset (cisco_asa_raw).
To integrate your logs, you first need to set up an applet in a Broker VM within your network to act as a Syslog Collector. You then configure forwarding on your
log devices to send logs to the Syslog Collector in a CISCO format.
1. Verify that your Cisco ASA firewall and Cisco AnyConnect VPN logs meet the following requirements.
For Cisco AnyConnect VPN: 113039, 716001, 722022, 722033, 722034, 722051, 722055, 722053, 113019, 716002, 722023, 722037
3. Increase log storage for Cisco ASA firewall and Cisco AnyConnect VPN logs.
As an estimate for initial sizing, note that the average Cisco ASA log size is roughly 180 bytes. For proper sizing calculations, test the log sizes and log
rates produced by your Cisco ASA firewalls and Cisco AnyConnect VPN logs. For more information, see Manage Your Log Storage within Cortex XDR.
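As a quick illustration of the sizing arithmetic (the event rate below is a hypothetical example; measure your own):

# Rough daily-volume estimate for Cisco ASA syslog ingestion.
avg_log_size_bytes = 180        # average Cisco ASA log size noted above
events_per_second = 2_000       # hypothetical sustained rate; measure your own

bytes_per_day = avg_log_size_bytes * events_per_second * 86_400
print(f"~{bytes_per_day / 1e9:.1f} GB/day")   # about 31.1 GB/day for this example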
4. Configure the Cisco ASA firewall and Cisco AnyConnect VPN, or the log devices forwarding logs from Cisco, to log to the Syslog Collector in a CISCO
format.
Configure your firewall and AnyConnect VPN policies to log all traffic and forward the traffic logs to the Syslog Collector in a CISCO format. By logging all
traffic, you enable Cortex XDR to detect anomalous behavior from Cisco ASA firewall logs and Cisco AnyConnect VPN logs. For more information on
setting up Log Forwarding on Cisco ASA firewalls or Cisco AnyConnect VPN, see the Cisco ASA Series documentation.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use Corelight Zeek sensors for network monitoring, you can still take advantage of Cortex XDR investigation and detection capabilities by forwarding
your network connection logs to Cortex XDR. This enables Cortex XDR to examine your network traffic to detect anomalous behavior. Cortex XDR can use
Corelight Zeek logs as the sole data source, but can also use logs in conjunction with Palo Alto Networks or third-party firewall logs. For additional endpoint
context, you can also use Cortex XDR to collect and alert on endpoint data.
As soon as Cortex XDR starts to receive logs, the app can begin stitching network connection logs with other logs to form network stories. Cortex XDR can
also analyze your logs to raise Analytics alerts and can apply IOC, BIOC, and Correlation Rule matching. You can also use queries to search your network
connection logs.
During activation, you define the Listening Port over which you want the Syslog Collector to receive logs. You must also set TCP as the transport Protocol
and Corelight as the Syslog Format.
For proper sizing calculations, test the log sizes and log rates produced by your Corelight Zeek Sensors. Then adjust your Cortex XDR log storage. For
more information, see Manage Your Log Storage within Cortex XDR.
Cortex XDR can receive logs from Corelight Zeek sensors that use the Syslog export option of RFC5424 over TCP.
a. In the Syslog configuration of Corelight Zeek (Sensor → Export), specify the details for your Syslog Collector including the hostname or IP address
of the Broker VM and corresponding listening port that you defined during activation of the Syslog Collector, default Syslog format (RFC5424), and
any log exclusions or filters.
b. Save your Syslog configuration to apply the configuration to your Corelight Zeek Sensors.
Abstract
Extend Cortex XDR visibility into logs from Fortinet Fortigate firewalls.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use Fortinet Fortigate firewalls, you can still take advantage of Cortex XDR investigation and detection capabilities by forwarding your firewall logs to
Cortex XDR. This enables Cortex XDR to examine your network traffic to detect anomalous behavior. Cortex XDR can use Fortinet Fortigate firewall logs as the
sole data source, but can also use Fortinet Fortigate firewall logs in conjunction with Palo Alto Networks firewall logs. For additional endpoint context, you can
also use Cortex XDR to collect and alert on endpoint data.
As soon as Cortex XDR starts to receive logs, the app can begin stitching network connection logs with other logs to form network stories. Cortex XDR can
also analyze your logs to raise Analytics alerts and can apply IOC, BIOC, and Correlation Rule matching. You can also use queries to search your network
connection logs.
To integrate your logs, you first need to set up an applet in a Broker VM within your network to act as a Syslog collector. You then configure forwarding on your
log devices to send logs to the Syslog collector in a CEF format.
1. Verify that your Fortinet Fortigate firewalls meet the following requirements.
As an estimate for initial sizing, note that the average Fortinet Fortigate log size is roughly 1,070 bytes. For proper sizing calculations, test the log sizes
and log rates produced by your Fortinet Fortigate firewalls. For more information, see Manage Your Log Storage within Cortex XDR.
4. Configure the log device that receives Fortinet Fortigate firewall logs to forward Syslog events to the Syslog collector in a CEF format.
Configure your firewall policy to log all traffic and forward the traffic logs to the Syslog collector in a CEF format. By logging all traffic, you enable Cortex
XDR to detect anomalous behavior from Fortinet Fortigate firewall logs. For more information on setting up Log Forwarding on Fortinet Fortigate firewalls,
see the Fortinet FortiOS documentation.
Abstract
If you use the Pub/Sub messaging service from Google Cloud Platform (GCP), you can send logs and data from GCP to Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use the Pub/Sub messaging service from Google Cloud Platform (GCP), you can send logs and data from your GCP instance to Cortex XDR. Data from
GCP is then searchable in Cortex XDR to provide additional information and context to your investigations using the applicable GCP Cortex Query Language (XQL) dataset.
You can also configure Cortex XDR to normalize different GCP logs as part of the enhanced cloud protection, which you can query with XQL Search using the
applicable dataset. Cortex XDR can also raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from GCP logs. While Correlation
Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Cloud-tailored investigations
The following table lists the various GCP log types and the XQL datasets you can use to query them in XQL Search:
Generic logs, by Log Format:
Cisco: cisco_asa_raw
Corelight: corelight_zeek_raw
JSON or Raw: google_cloud_logging_raw
Google Cloud DNS logs: google_dns_raw. Once configured, Cortex XDR also ingests Google Cloud DNS logs as XDR network connection stories, which you can query with XQL Search using the xdr_data dataset with the preset called network_story.
Network flow logs: google_cloud_logging_raw. Once configured, Cortex XDR also ingests network flow logs as XDR network connection stories, which you can query with XQL Search using the xdr_data dataset with the preset called network_story.
When collecting flow logs, we recommend that you include GKE annotations in your logs, which enable you to view the names of the containers that
communicated with each other. GKE annotations are only included in logs if appended manually using the custom metadata configuration in GCP. For more
information, see VPC Flow Logs Overview. In addition, to customize metadata fields, you must use the gcloud command-line interface or the API. For more
information, see Using VPC Flow Logs.
To receive logs and data from GCP, you must first set up log forwarding using a Pub/Sub topic in GCP. You can configure GCP settings using either the GCP
web interface or a GCP cloud shell terminal. After you set up your service account in GCP, you configure the Data Collection settings in Cortex XDR. The setup
process requires the subscription name and authentication key from your GCP instance.
After you set up log collection, Cortex XDR immediately begins receiving new logs and data from GCP.
d. Browse to the JSON file containing your authentication key for the service account.
e. Select the Log Type as one of the following, where your selection changes the options displayed.
(Optional) You can Normalize and enrich flow and audit logs by selecting the checkbox (default). If selected, Cortex XDR ingests the
network flow logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset
with the preset called network_story. In addition, you can configure Cortex XDR to normalize GCP audit logs, which you can query
with XQL Search using the cloud_audit_logs dataset.
The Vendor is automatically set to Google and Product to Cloud Logging, which is not configurable. This means that all GCP data for
the flow and audit logs, whether it's normalized or not, can be queried in XQL Search using the google_cloud_logging_raw
dataset.
Generic: When selecting this log type, you can configure the following settings.
Log Format: Select the log format type as Raw, JSON, CEF, LEEF, Cisco, or Corelight.
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the
logs. When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the
Vendor and Product fields in the GCP data collector settings. Yet, when the values are blank in the event log row, Cortex XDR
uses the Vendor and Product that you specified in the GCP data collector settings. If you did not specify a Vendor or Product in
the GCP data collector settings, and the values are blank in the event log row, the values for both fields are set to unknown.
Cisco: The following fields are automatically set and not configurable.
Vendor: Cisco
Product: ASA
Cisco data can be queried in XQL Search using the cisco_asa_raw dataset.
Corelight: The following fields are automatically set and not configurable.
Vendor: Corelight
Product: Zeek
Corelight data can be queried in XQL Search using the corelight_zeek_raw dataset.
Raw or JSON: The following fields are automatically set and are configurable.
Vendor: Google
Raw or JSON data can be queried in XQL Search using the google_cloud_logging_raw dataset.
Cortex XDR supports logs in single line format or multiline format. For a JSON format, multiline logs are collected automatically
when the Log Format is configured as JSON. When configuring a Raw format, you must also define the Multiline Parsing Regex
as explained below.
Vendor: (Optional) Specify a particular vendor name for the GCP generic data collection, which is used in the GCP XQL dataset
<Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Product: (Optional) Specify a particular product name for the GCP generic data collection, which is used in the GCP XQL dataset
name <Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Multiline Parsing Regex: (Optional) This option is only displayed when the Log Format is set to Raw, where you can set the regular
expression that identifies when the multiline event starts in logs with multilines. It is assumed that when a new event begins, the
previous one has ended.
Google Cloud DNS: When selecting this log type, you can configure whether to normalize the logs as part of the enhanced cloud protection.
(Optional) You can Normalize DNS logs by selecting the checkbox (default). If selected, Cortex XDR ingests the Google Cloud DNS
logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset with the
preset called network_story.
The Vendor is automatically set to Google and Product to DNS, which is not configurable. This means that all Google Cloud DNS logs,
whether it's normalized or not, can be queried in XQL Search using the google_dns_raw dataset.
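Regarding the Multiline Parsing Regex option above, the pattern simply marks where each new event begins. The following standalone Python sketch illustrates the idea with a hypothetical timestamp prefix; your logs may start with a different pattern.

import re

# Hypothetical: each raw event starts with an ISO-8601 timestamp at the beginning of a line.
multiline_start = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}", re.MULTILINE)

raw = (
    "2024-05-01T08:00:01Z first event line\n"
    "  continuation of first event\n"
    "2024-05-01T08:00:02Z second event line\n"
)

# Split the raw text into events at each position where the start pattern matches.
starts = [m.start() for m in multiline_start.finditer(raw)]
events = [raw[s:e].rstrip("\n") for s, e in zip(starts, starts[1:] + [len(raw)])]
for event in events:
    print(repr(event))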
f. Test the provided settings and, if successful, proceed to Enable log collection.
c. To filter only specific types of data, select the filter or desired resource.
f. Enter a descriptive Name that identifies the sink purpose for Cortex XDR, and then Create.
a. Select the hamburger menu in G Cloud and then select Pub/Sub → Topics.
b. Select the name of the topic you created in the previous steps. Use the filters if necessary.
After the subscription is set up, G Cloud displays statistics and settings for the service.
Optionally, use the copy button to copy the name to the clipboard. You will need the name when you configure Collection in Cortex XDR.
You will use the key to enable Cortex XDR to authenticate with the subscription service.
a. Select the menu icon, and then select IAM & Admin → Service Accounts.
f. Locate the service account by name, using the filters to refine the results, if needed.
g. Click the Actions menu identified by the three dots in the row for the service account and then Create Key.
After you create the service account key, G Cloud automatically downloads it.
5. After Cortex XDR begins receiving information from the GCP Pub/Sub service, you can use the XQL Query language to search for specific data.
d. Browse to the JSON file containing your authentication key for the service account.
e. Select the Log Type as one of the following, where your selection changes the options displayed.
(Optional) You can Normalize and enrich flow and audit logs by selecting the checkbox (default). If selected, Cortex XDR ingests the
network flow logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset
with the preset called network_story. In addition, you can configure Cortex XDR to normalize GCP audit logs, which you can query
with XQL Search using the cloud_audit_logs dataset.
The Vendor is automatically set to Google and Product to Cloud Logging, which is not configurable. This means that all GCP data for
the flow and audit logs, whether it's normalized or not, can be queried in XQL Search using the google_cloud_logging_raw
dataset.
Generic: When selecting this log type, you can configure the following settings.
Log Format: Select the log format type as Raw, JSON, CEF, LEEF, Cisco, or Corelight.
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the
logs. When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the
Vendor and Product fields in the GCP data collector settings. Yet, when the values are blank in the event log row, Cortex XDR
uses the Vendor and Product that you specified in the GCP data collector settings. If you did not specify a Vendor or Product in
the GCP data collector settings, and the values are blank in the event log row, the values for both fields are set to unknown.
Cisco: The following fields are automatically set and not configurable.
Vendor: Cisco
Product: ASA
Cisco data can be queried in XQL Search using the cisco_asa_raw dataset.
Corelight: The following fields are automatically set and not configurable.
Vendor: Corelight
Product: Zeek
Corelight data can be queried in XQL Search using the corelight_zeek_raw dataset.
Raw or JSON: The following fields are automatically set and are configurable.
Vendor: Google
Raw or JSON data can be queried in XQL Search using the google_cloud_logging_raw dataset.
Cortex XDR supports logs in single line format or multiline format. For a JSON format, multiline logs are collected automatically
when the Log Format is configured as JSON. When configuring a Raw format, you must also define the Multiline Parsing Regex
as explained below.
Vendor: (Optional) Specify a particular vendor name for the GCP generic data collection, which is used in the GCP XQL dataset
<Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Product: (Optional) Specify a particular product name for the GCP generic data collection, which is used in the GCP XQL dataset
name <Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Multiline Parsing Regex: (Optional) This option is only displayed when the Log Format is set to Raw, where you can set the regular
expression that identifies when the multiline event starts in logs with multilines. It is assumed that when a new event begins, the
previous one has ended.
Google Cloud DNS: When selecting this log type, you can configure whether to normalize the logs as part of the enhanced cloud protection.
(Optional) You can Normalize DNS logs by selecting the checkbox (default). If selected, Cortex XDR ingests the Google Cloud DNS
logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset with the
preset called network_story.
The Vendor is automatically set to Google and Product to DNS, which is not configurable. This means that all Google Cloud DNS logs,
whether it's normalized or not, can be queried in XQL Search using the google_dns_raw dataset.
f. Test the provided settings and, if successful, proceed to Enable log collection.
1. Launch the GCP cloud shell terminal or use your preferred shell with gcloud installed.
Note the subscription name you define in this step as you will need it to set up log ingestion from Cortex XDR.
During the logging sink creation, you can also define additional log filters to exclude specific logs. To filter logs, supply the optional parameter --log-
filter=<LOG_FILTER>
If setup is successful, the console displays a summary of your log sink settings:
Note the serviceAccount name from the previous step and use it to define the service for which you want to grant publish access.
For example, use cortex-xdr-sa as the service account name and Cortex XDR Service Account as the display name.
You will need the JSON file to enable Cortex XDR to authenticate with the GCP service. Specify the file destination and filename using a .json extension.
10. After Cortex XDR begins receiving information from the GCP Pub/Sub service, you can use the XQL Query language to search for specific data.
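To spot-check that logs are reaching the Pub/Sub subscription before Cortex XDR is configured, you can pull a few messages with the google-cloud-pubsub client. This is a hedged sketch: the project, subscription, and key file names are hypothetical, and the messages are not acknowledged so they remain available to Cortex XDR.

from google.cloud import pubsub_v1  # assumes the google-cloud-pubsub package

# Hypothetical project/subscription names and the key file downloaded for the service account.
subscriber = pubsub_v1.SubscriberClient.from_service_account_file("cortex-xdr-sa.json")
sub_path = subscriber.subscription_path("my-gcp-project", "cortex-xdr-subscription")

# Pull a few messages without acknowledging them, so Cortex XDR still receives them later.
response = subscriber.pull(request={"subscription": sub_path, "max_messages": 5}, timeout=10)
for received in response.received_messages:
    print(received.message.data[:200])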
Abstract
Ingest logs from Microsoft Azure Event Hub with an option to ingest audit logs to use in Cortex XDR authentication stories.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can ingest different types of data from Microsoft Azure Event Hub using the Microsoft Azure Event Hub data collector. To receive logs from Azure
Event Hub, you must configure the settings in Cortex XDR based on your Microsoft Azure Event Hub configuration. After you set up data collection, Cortex XDR begins receiving new logs and data from the source.
When Cortex XDR begins receiving logs, the app creates a new dataset (MSFT_Azure_raw) that you can use to initiate XQL Search queries. For example queries, refer to the in-app XQL Library. For enhanced cloud protection, you can also configure Cortex XDR to normalize Azure Event Hub audit logs, including
Azure Kubernetes Service (AKS) audit logs, with other Cortex XDR authentication stories across all cloud providers using the same format, which you can
query with XQL Search using the cloud_audit_logs dataset. For logs that you do not configure Cortex XDR to normalize, you can change the default
dataset. Cortex XDR can also raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from Azure Event Hub logs. While
Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Cloud-tailored investigations
In an existing Event Hub integration, do not change the mapping to a different Event Hub.
Do not use the same Event Hub for more than two purposes.
The following table provides a brief description of the different types of Azure audit logs you can collect.
For more information on Azure Event Hub audit logs, see Overview of Azure platform logs.
Activity logs: Retrieves events related to the operations on each Azure resource in the subscription from the outside, in addition to updates on Service Health events.
Azure Active Directory (AD) Activity logs and Azure Sign-in logs: Contain the history of sign-in activity and the audit trail of changes made in Azure AD for a particular tenant.
Even though you can collect Azure AD Activity logs and Azure Sign-in logs using the Azure Event Hub data collector, we recommend using the Microsoft 365 data collector, because it is easier to configure. In addition, ensure that you don't configure both collectors to collect the same types of logs, because if you do so, you will be creating duplicate data in Cortex XDR.
Resource logs, including AKS audit logs: Retrieves events related to operations that were performed within an Azure resource. These logs are from the data plane.
Ensure that you do the following tasks before you begin configuring data collection from Azure Event Hub.
Before you set up an Azure Event Hub, calculate the quantity of data that you expect to send to Cortex XDR, taking into account potential data spikes
and potential increases in data ingestion, because partitions cannot be modified after creation. Use this information to ascertain the optimal number of
partitions and Throughput Units (for Azure Basic or Standard) or Processing Units (for Azure Premium). Configure your Event Hub accordingly.
Create an Azure Event Hub. We recommend using a dedicated Azure Event Hub for this Cortex XDR integration. For more information, see Quickstart:
Create an event hub using Azure portal.
Ensure the format for the logs you want collected from the Azure Event Hub is either JSON or raw.
1. In the Microsoft Azure console, open the Event Hubs page, and select the Azure Event Hub that you created for collection in Cortex XDR.
2. Record the following parameters from your configured event hub, which you will need when configuring data collection in Cortex XDR.
3. In the Consumer group table, copy the applicable value listed in the Name column for your Cortex XDR data collection configuration.
Your storage account connection string is required for partition lease management and checkpointing in Cortex XDR.
1. Open the Storage accounts page, and either create a new storage account or select an existing one, which will contain the storage account
connection string.
3. Configure diagnostic settings for the relevant log types you want to collect and then direct these diagnostic settings to the designated Azure Event Hub.
Activity logs: Select Azure services → Activity log → Export Activity Logs, and +Add diagnostic setting.
Azure AD Activity logs and Azure Sign-in logs:
1. Select Azure services → Azure Active Directory.
2. Select Monitoring → Diagnostic settings, and +Add diagnostic setting.
Resource logs, including AKS audit logs:
1. Search for Monitor, and select Settings → Diagnostic settings.
2. From your list of available resources, select the resource that you want to configure for log collection, and then select +Add diagnostic setting.
For every resource that you want to configure, you'll have to repeat this step, or use Azure policy for a general configuration.
Logs Categories/Metrics: The options listed are dependent on the type of logs you want to configure. For Activity logs and Azure AD logs
and Azure Sign-in logs, the option is called Logs Categories, and for Resource logs it's called Metrics.
Activity logs: Select from the list of applicable Activity log categories the ones that you want to configure your designated resource to collect. We recommend selecting all of the options.
Administrative
Security
ServiceHealth
Alert
Recommendation
Policy
Autoscale
ResourceHealth
Azure AD Activity logs and Azure Sign-in logs: Select from the list of applicable Azure AD Activity and Azure Sign-in Logs Categories the ones that you want to configure your designated resource to collect. You can select any of the following categories to collect these types of Azure logs.
AuditLogs
SignInLogs
NonInteractiveUserSignInLogs
ServicePrincipalSignInLogs
ManagedIdentitySignInLogs
ADFSSignInLogs
There are additional log categories displayed. We recommend selecting all the available options.
Resource logs, including AKS audit logs: The list displayed is dependent on the resource that you selected. We recommend selecting all the options available for the resource.
Destination details: Select Stream to event hub, where additional parameters are displayed that you need to configure. Ensure that you set
the following parameters using the same settings for the Azure Event Hub that you created for the collection.
Subscription: Select the applicable Subscription for the Azure Event Hub.
Event hub namespace: Select the applicable Event hub namespace for the Azure Event Hub.
(Optional) Event hub name: Specify the name of your Azure Event Hub.
Event hub policy: Select the applicable Event hub policy for your Azure Event Hub.
b. In the Azure Event Hub configuration, click Add Instance to begin a new configuration.
Event Hub Connection String: Specify your event hub’s connection string for the designated policy.
Storage Account Connection String: Specify your storage account’s connection string for the designated policy.
Log Format: Select the log format for the logs collected from the Azure Event Hub as Raw, JSON, CEF, LEEF, Cisco-asa, or Corelight.
When you Normalize and enrich audit logs, the log format is automatically configured. As a result, the Log Format option is removed and is
no longer available to configure (default).
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the logs.
When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the Vendor and
Product fields in the Azure Event Hub data collector settings. Yet, when the values are blank in the event log row, Cortex XDR uses the
Vendor and Product that you specified in the Azure Event Hub data collector settings. If you did not specify a Vendor or Product in the
Azure Event Hub data collector settings, and the values are blank in the event log row, the values for both fields are set to unknown.
Cisco-asa: The following fields are automatically set and not configurable.
Vendor: Cisco
Product: ASA
Cisco data can be queried in XQL Search using the cisco_asa_raw dataset.
Corelight: The following fields are automatically set and not configurable.
Vendor: Corelight
Product: Zeek
Corelight data can be queried in XQL Search using the corelight_zeek_raw dataset.
Raw or JSON: The following fields are automatically set and are configurable.
Vendor: Msft
Product: Azure
Raw or JSON data can be queried in XQL Search using the msft_azure_raw dataset.
Vendor and Product: Specify the Vendor and Product for the type of logs you are ingesting.
The Vendor and Product are used to define the name of your Cortex Query Language (XQL) dataset (<vendor>_<product>_raw). The
Vendor and Product values vary depending on the Log Format selected. To uniquely identify the log source, consider changing the values if
the values are configurable.
When you Normalize and enrich audit logs, the Vendor and Product fields are automatically configured, so these fields are removed as
available options (default).
Normalize and enrich audit logs: (Optional) For enhanced cloud protection, you can Normalize and enrich audit logs by selecting the
checkbox (default). If selected, Cortex XDR normalizes and enriches Azure Event Hub audit logs with other Cortex XDR authentication
stories across all cloud providers using the same format. You can query this normalized data with XQL Search using the
cloud_audit_logs dataset.
When events start to come in, a green check mark appears underneath the Azure Event Hub configuration with the amount of data received.
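For reference, the following is a minimal XQL sketch for reviewing the ingested Event Hub records. The category field and the "SignInLogs" value used in the filter are assumptions for illustration; substitute a field and value that actually appear in the logs your resources stream.
dataset = msft_azure_raw
| filter category = "SignInLogs"
| fields _time, category
| limit 100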
16.3.5.2.9 | Ingest network flow logs from Microsoft Azure Network Watcher
Abstract
Ingest network security group (NSG) flow logs from Microsoft Azure Network Watcher for use in Cortex XDR network stories.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive network security group (NSG) flow logs from Azure Network Watcher, you must configure data collection from Microsoft Azure Network Watcher
using an Azure Function provided by Cortex XDR. This Azure Function requires a token that is generated when you configure your Azure Network Watcher
Collector in Cortex XDR. After you have configured the Cortex XDR collector and successfully deployed the Azure Function to your Azure account, Cortex XDR
will start receiving and ingesting network flow logs from Azure Network Watcher.
In addition to the user-specified storage account that captures the log blobs, the template also creates a secondary, internal storage account for internal
operations related to the function app. This internal storage account is used by the function app for operations such as storing function state, and intermediate
processing. To enhance security, public network access is disabled, and the account is restricted to private endpoints only. This additional internal storage
account allows the function app to securely store data without relying on the user-specified storage account for internal processes. This separation enhances
data security and isolation between user-facing storage and internal application operations. VNet integration is required only for the internal storage account's
internal operations. The user-specified storage account used for NSG flow logs does not require VNet integration.
When Cortex XDR begins receiving logs, the app creates a new dataset (msft_azure_raw) that you can use to initiate XQL Search queries. For example
queries, refer to the in-app XQL Library. For enhanced cloud protection, you can also configure Cortex XDR to ingest network flow logs as Cortex XDR network
connection stories, which you can query with XQL Search using the xdr_data dataset with the preset called network_story. Cortex XDR can also raise
Cortex XDR alerts (Analytics, Correlation Rules, IOC, and BIOC) when relevant from Azure Network Watcher flow logs. While Correlation Rules alerts are raised
on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
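For reference, a minimal XQL sketch of querying the resulting network connection stories is shown below. The action_remote_port field and the port value are assumptions used for illustration; adjust them to fields and values present in your stories.
preset = network_story
| filter action_remote_port = 3389
| fields _time, action_local_ip, action_remote_ip, action_remote_port
| limit 100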
Cloud-tailored investigations
Ensure that your NSG flow logs in Azure Network Watcher conform to the requirements as outlined in the Microsoft documentation. For more information,
see Introduction to flow logging for network security groups.
Ensure that you have an Azure subscription with user role permissions to deploy ARM templates and create the required resources.
The listKeys function in an Azure Resource Manager (ARM) template retrieves the storage account keys, and it requires special permissions to
execute. Specifically, the user or identity running the ARM template needs the following permission:
Microsoft.Storage/storageAccounts/listKeys/action. If the user or service principal running the ARM template has the necessary user role
(such as Owner or Storage Account Contributor), permission is implicitly granted for the template to retrieve the storage account keys.
Perform this procedure in the order shown below, because you need to save a token and a URL from Cortex XDR in earlier steps, and use them in Azure
in later steps.
b. In the Azure Network Watcher configuration, click Add Instance to begin a new configuration.
Enhanced Cloud Protection: (Optional) For enhanced cloud protection, you can normalize and enrich flow logs by selecting the Use flow
logs in analytics checkbox. If selected, Cortex XDR ingests network flow logs as Cortex XDR network connection stories, which you can
query with XQL Search using the xdr_data dataset with the preset called network_story.
Click the copy icon next to the key and save the copy of this token somewhere safe. You will need to provide this token when you configure the
Azure Function and set the Cortex Access Token value. If you forget to record the token and close the window, you will need to generate a new
one and repeat this process. When you are finished, click Done to close the window.
e. On the Integrations page for the Azure Network Watcher Collector that you created, click the Copy API URL icon and save a copy of the URL
somewhere safe. You will need to provide this URL when you configure the Azure Function and set the Cortex Http Endpoint value.
d. Set these parameters. Some fields are mandatory, and others may already be populated for you.
Resource group: Specify or create a resource group for your App Configuration store resource.
Unique Name: Enter a unique name for the function app. The name that you provide will be concatenated to some of the resource names, to
make it easier to locate the related resources later on. The name must only contain alphanumeric characters (letters and numbers, no
special symbols) and must contain no more than 10 characters.
Cortex Access Token: Cortex HTTP authorization key that you recorded when you configured the Azure Network Watcher collection in Cortex
XDR in an earlier step.
Target Storage Account Name: Enter the name of the Azure Storage Account that was created during the NSG flow logs setup in Azure
Network Watcher, where the log blobs are being stored.
Target Container Name: This field should be left empty for most use cases. The default value insights-logs-networksecuritygroupflowevent is the name that is automatically created for the container during configuration of the network watcher.
Location: The region where all the resources will be deployed (leave blank to use the same region as the resource group).
Cortex Http Endpoint: Specify the API URL that you recorded when you configured the Azure Network Watcher collection in Cortex XDR.
Remote Package: The URL of the remote package ZIP file containing the Azure Function code. Leave this field empty unless instructed
otherwise.
e. Click Review + Create to confirm your settings for the Azure Function.
f. Click Create. It can take a few minutes until the deployment is complete.
In addition to your storage account, the template automatically creates another storage account that is required by the function app for internal use only.
The internal storage account name is prefixed with cortex and is followed by a unique suffix based on the resource group, storage account, and
container names.
After events start to come in, a green check mark appears underneath the Azure Network Watcher configuration that you created in Cortex XDR, and the
amount of data received is displayed.
Abstract
Ingest authentication logs and data from Okta for use in Cortex XDR authentication stories.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive logs and data from Okta, you must configure the Collection Integrations settings in Cortex XDR. After you set up data collection, Cortex XDR
immediately begins receiving new logs and data from the source. The information from Okta is then searchable in XQL Search using the okta_sso_raw
dataset. In addition, depending on the event type, data is normalized to either xdr_data or saas_audit_logs datasets.
You can collect all types of events from Okta. When setting up the Okta data collector in Cortex XDR, a field called Okta Filter is available to configure
collection for events of your choosing. All events are collected by default unless you define an Okta API Filter expression for collecting the data, such as
filter=eventType eq "user.session.start". For Okta information to be weaved into authentication stories, "user.authentication.sso"
events must be collected.
Since the Okta API enforces concurrent rate limits, the Okta data collector is built with a mechanism to reduce the number of requests whenever an error is
received from the Okta API indicating that too many requests have already been sent. In addition, to ensure you are properly notified about this, an alert is
displayed in the Notification Area and a record is added to the Management Audit Logs.
Before you begin configuring data collection from Okta, ensure that your Okta user has administrator privileges with a role that can create API tokens, such as
Read-Only Administrator, Super Administrator, or Organization Administrator. For more information, see the Okta Administrators Documentation.
From the Dashboard of your Okta console, note your Org URL.
a. Specify the OKTA DOMAIN (Org URL) that you identified on your Okta console.
c. Specify the Okta Filter to configure collection for events of your choosing. All events are collected by default unless you define an Okta API Filter
expression for collecting the data, such as filter=eventType eq "user.session.start". For Okta information to be weaved into
authentication stories, "user.authentication.sso" events must be collected.
Once events start to come in, a green check mark appears underneath the Okta configuration with the amount of data received.
6. After Cortex XDR begins receiving information from the service, you can Create an XQL Query to search for specific data. When including authentication
events, you can also Create an Authentication Query to search for specific authentication data.
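For reference, a minimal XQL sketch for reviewing the collected Okta events is shown below. The eventType field mirrors the Okta System Log schema and is an assumption here; adjust the field and value to match your data.
dataset = okta_sso_raw
| filter eventType = "user.authentication.sso"
| fields _time, eventType
| limit 100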
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can configure Cortex XDR to receive Windows DHCP logs using Elasticsearch Filebeat with the following data collectors.
Abstract
Extend Cortex XDR visibility into logs from Windows DHCP using an XDR Collector Windows Filebeat profile.
Extend Cortex XDR visibility into logs from Windows DHCP using an XDR Collector Windows Filebeat profile.
You can enrich network logs with Windows DHCP data when defining data collection in an XDR Collector Windows Filebeat profile. When you add an XDR
Collector Windows Filebeat profile using the Elasticsearch Filebeat default configuration file called filebeat.yml, you can define whether the collected data
undergoes follow-up processing in the backend for Windows DHCP data. Cortex XDR uses Windows DHCP logs to enrich your network logs with hostnames
and MAC addresses that are searchable in XQL Search using the Windows DHCP Cortex Query Language (XQL) dataset (microsoft_dhcp_raw).
Configure Cortex XDR to receive logs from Windows DHCP using an XDR Collector Windows Filebeat profile.
Follow the steps for creating a Windows Filebeat profile as described in Add an XDR Collector profile for Windows, and in the Filebeat Configuration File
area, ensure that you select and Add the DHCP template. The template's content will be displayed here, and is editable.
2. To configure collection of Windows DHCP data, edit the template text as necessary for your system.
You can enrich network logs with Windows DHCP data when defining data collection by setting the vendor to "microsoft" and the product to "dhcp"
in the filebeat.yml file, which you can then query in the microsoft_dhcp_raw dataset.
To avoid formatting issues in filebeat.yml, we recommend that you edit the text file inside the user interface, instead of copying it and editing it
elsewhere. Validate the syntax of the YML file before you finish creating the profile.
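For reference, a minimal XQL sketch for confirming that the profile is delivering data is shown below; it simply returns the most recent rows from the dataset.
dataset = microsoft_dhcp_raw
| sort desc _time
| limit 50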
Abstract
Extend Cortex XDR visibility into logs from Windows DHCP using Elasticsearch Filebeat with the Windows DHCP data collector.
Extend Cortex XDR visibility into logs from Windows DHCP using Elasticsearch Filebeat with the Windows DHCP data collector.
To receive Windows DHCP logs, you must configure data collection from Windows DHCP via Elasticsearch Filebeat. This is configured by setting up a Windows
DHCP Collector in Cortex XDR and installing and configuring an Elasticsearch Filebeat agent on your Windows DHCP Server. Cortex XDR supports using
Filebeat up to version 8.0.1 with the Windows DHCP Collector.
Certain settings in the Elasticsearch Filebeat default configuration file called filebeat.yml must be populated with values provided when you configure the
Collection Integrations settings in Cortex XDR for the Windows DHCP Collector. To help you configure the filebeat.yml correctly, Cortex XDR provides an
example file that you can download and customize. After you set up collection integration, Cortex XDR begins receiving new logs and data from the source.
For more information on configuring the filebeat.yml file, see the Elastic Filebeat Documentation.
Windows DHCP logs are stored as CSV (comma-separated values) log files. The logs rotate by days (DhcpSrvLog-<day>.log), and each file contains two
sections: Event ID Meaning and the events list.
As soon as Cortex XDR begins receiving logs, the app automatically creates a Windows DHCP XQL dataset (microsoft_dhcp_raw). Cortex XDR uses
Windows DHCP logs to enrich your network logs with hostnames and MAC addresses that are searchable in XQL Search using the Windows DHCP Cortex
Query Language (XQL) dataset.
Configure Cortex XDR to receive logs from Windows DHCP via Elasticsearch Filebeat with the Windows DHCP collector.
To help you configure your filebeat.yml file correctly, Cortex XDR provides an example filebeat.yml file that you can download and
customize. To download this file, use the link provided in this dialog box.
To avoid formatting issues in your filebeat.yml, we recommend that you use the downloaded example file to make your customizations. Do not
copy and paste the code syntax examples provided later in this procedure into your file.
e. Save & Generate Token. The token is displayed in a blue box.
Click the copy icon next to the key and record it somewhere safe. You will need to provide this key when you set the api_key value in the
Elasticsearch Output section in the filebeat.yml file as explained in Step #2. If you forget to record the key and close the window, you will
need to generate a new key and repeat this process.
g. In the Integrations page for the Windows DHCP Collector that you created, select Copy API URL and record it somewhere safe. You will need to
provide this URL when you set the hosts value in the Elasticsearch Output section in the filebeat.yml file as explained in Step #2.
a. Navigate to the Elasticsearch Filebeat installation directory, and open the filebeat.yml file to configure data collection with Cortex XDR. We
recommend that you use the downloaded example file provided by Cortex XDR.
Filebeat inputs: Define the paths to crawl and fetch. The code below provides an example of how to configure the Filebeat inputs section
in the filebeat.yml file with these paths configured.
Elasticsearch Output: Set the hosts and api_key, where both of these values are obtained when you configured the Windows DHCP
Collector in Cortex XDR as explained in Step #1. The code below provides an example of how to configure the Elasticsearch Output
section in the filebeat.yml file and indicates which settings need to be obtained from Cortex XDR.
Processors: Set the tokenizer and add a drop_event processor to drop all events that do not start with an event ID. The code below
provides an example of how to configure the Processors section in the filebeat.yml file and indicates which settings need to be
obtained from Cortex XDR.
The tokenizer definition depends on the Windows Server version that you are using, because the log format differs between versions.
Return to the Integrations page and view the statistics for the log collection configuration.
4. After Cortex XDR begins receiving logs from Windows DHCP via Elasticsearch Filebeat, you can use the XQL Search to search for logs in the new
dataset (microsoft_dhcp_raw).
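For a quick sanity check that events are arriving, you can run a minimal XQL sketch such as the following, which counts the rows currently in the dataset.
dataset = microsoft_dhcp_raw
| comp count(_time) as dhcp_events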
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use Zscaler Internet Access (ZIA) in your network, you can forward your firewall and network logs to Cortex XDR for analysis. This enables you to take
advantage of Cortex XDR anomalous behavior detection and investigation capabilities. Cortex XDR can use the firewall and network logs from ZIA as the sole
data source, and can also use these firewall and network logs from ZIA in conjunction with Palo Alto Networks firewall and network logs. For additional
endpoint context, you can also use Cortex XDR to collect and alert on endpoint data.
To integrate your logs, you first need to set up an applet in a broker VM within your network to act as a Syslog Collector. You then configure forwarding on your
log devices to send logs to the Syslog collector in a CEF format. To provide seamless log ingestion, Cortex XDR automatically maps the fields in your traffic
logs to the Cortex XDR log format.
As soon as Cortex XDR starts to receive logs, the app performs these actions.
Begins stitching network connection and firewall logs with other logs to form network stories. Cortex XDR can also analyze your logs to raise Analytics
alerts and can apply IOC, BIOC, and Correlation Rule matching. You can also use queries to search your network connection logs.
Creates a Zscaler Cortex Query Language (XQL) dataset, which enables you to search the logs using XQL Search. The Zscaler XQL datasets are
dependent on the ZIA NSS Feed that you've configured for the types of logs you want to collect.
2. Increase log storage for ZIA logs. For more information, see Manage Your Log Storage.
3. Configure NSS log forwarding in Zscaler Internet Access to the Syslog Collector in a CEF format.
a. In the Zscaler Internet Access application, select Administration → Nanolog Streaming Service.
c. In the Add NSS Feed screen, configure the fields for the Cortex XDR Syslog Collector.
The steps below differ depending on the type of NSS Feed you are configuring to collect either firewall logs or web logs. For more information on
all the configurations available on the screen, see the ZIA documentation:
SIEM TCP Port: Specify the port that you set when activating the Syslog Collector in Cortex XDR. See Activate the Syslog Collector.
SIEM IP Address: Specify the IP that you set when activating the Syslog Collector in Cortex XDR. See Activate the Syslog Collector.
Feed Output Format: Specify the output format, which is dependent on the type of logs you are collecting as defined in the NSS Type field:
d. Click Save.
e. Click Save and activate the change according to the Zscaler Internet Access (ZIA) documentation.
Abstract
Extend Cortex XDR visibility into logs from Zscaler Private Access (ZPA).
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use Zscaler Private Access (ZPA) in your network as an alternative to VPNs, you can forward your network logs to Cortex XDR for analysis. This enables
you to take advantage of Cortex XDR anomalous behavior detection and investigation capabilities. Cortex XDR can use the network logs from ZPA as the sole
data source, and can also use these network logs from ZPA in conjunction with Palo Alto Networks network logs.
As soon as Cortex XDR starts to receive logs, the following actions are performed:
Stitching network connection logs with other logs to form network stories. Cortex XDR can also analyze your logs to apply IOC, BIOC, and Correlation
Rules matching. You can also use queries to search your network connection logs.
Creates a Zscaler Cortex Query Language (XQL) dataset (zscaler_zpa_raw), which enables you to search the logs using XQL Search.
To integrate your logs, you first need to set up an applet in a Broker VM within your network to act as a Syslog Collector. You then configure forwarding on your
log devices to send logs to the Syslog collector in a LEEF format. To provide seamless log ingestion, Cortex XDR automatically maps the fields in your traffic
logs to the Cortex XDR log format.
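For reference, a minimal XQL sketch for reviewing the ingested ZPA records is shown below; it returns the most recent rows so you can inspect the parsed LEEF fields before building more specific queries.
dataset = zscaler_zpa_raw
| sort desc _time
| limit 100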
Before you can add a log receiver in Zscaler Private Access, as explained in the task below, you must first deploy your App Connectors. For more information,
see App Connector Deployment Guides for Supported Platforms.
2. Increase log storage for ZPA logs. For more information, see Manage Your Log Storage.
3. Configure ZPA log forwarding in Zscaler Private Access to the Syslog Collector in a LEEF format.
For more information on configuring the parameters on the screen, see the Zscaler Private Access (ZPA) documentation for Configuring a Log
Receiver.
c. In the Add Log Receiver window, configure the following fields on the Log Receiver tab:
Name: Specify a name for the log receiver. The name cannot contain special characters, with the exception of periods (.), hyphens (-), and
underscores ( _ ).
Domain or IP Address: Specify the fully qualified domain name (FQDN) or IP address for the log receiver that you set when activating the
Syslog Collector in Cortex XDR. See Activate Syslog Collector.
TCP Port: Specify the TCP port number used by the log receiver that you set when activating the Syslog Collector in Cortex XDR. See
Activate Syslog Collector.
TLS Encryption: Toggle to Enabled to encrypt traffic between the log receiver and your Syslog Collector in Cortex XDR using mutually
authenticated TLS communication. To use this setting, the log receiver must support TLS communication. For more information, see About
the Log Streaming Service.
App Connector Groups: (Optional) Select the App Connector groups that can forward logs to the receiver, and click Done. You can search
for a specific group, click Select All to apply all groups, or click Clear Selection to remove all selections.
d. Click Next.
You can only configure a ZPA log receiver to collect one type of log with your Syslog Collector in Cortex XDR. To configure more than one log
type, you'll need to add another log receiver.
User Activity: Information on end user requests to applications. For more information, see User Activity Log Fields.
User Status: Information related to an end user's availability and connection to ZPA. For more information, see User Status Log Fields.
App Connector Status: Information related to an App Connector's availability and connection to ZPA. For more information, see About
App Connector Status Log Fields.
Audit Logs: Session information for all admins accessing the ZPA Admin Portal. For more information, see About Audit Log
Fields and About Audit Logs.
Log Stream Content: From the table below, copy the applicable log template according to the Log Type you've selected and paste it into the
Log Stream Content field.
User activity:
LEEF:1.0|Zscaler|ZPA|4.1|%s{ConnectionStatus}%s{InternalReason}|cat=ZPA User Activity
\tdevTime=%s{LogTimestamp:epoch}\tCustomer=%s{Customer}\tSessionID=%s{SessionID}
\tConnectionID=%s{ConnectionID}\tInternalReason=%s{InternalReason}
\tConnectionStatus=%s{ConnectionStatus}\tproto=%d{IPProtocol}
\tDoubleEncryption=%d{DoubleEncryption}\tusrName=%s{Username}
\tdstPort=%d{ServicePort}\tsrc=%s{ClientPublicIP}\tsrcPreNAT=%s{ClientPrivateIP}
\tClientLatitude=%f{ClientLatitude}\tClientLongitude=%f{ClientLongitude}
\tClientCountryCode=%s{ClientCountryCode}\tClientZEN=%s{ClientZEN}
\tpolicy=%s{Policy}\tConnector=%s{Connector}\tConnectorZEN=%s{ConnectorZEN}
\tConnectorIP=%s{ConnectorIP}\tConnectorPort=%d{ConnectorPort}
\tApplicationName=%s{Host}\tApplicationSegment=%s{Application}\tAppGroup=%s{AppGroup}
\tServer=%s{Server}\tdst=%s{ServerIP}\tServerPort=%d{ServerPort}
\tPolicyProcessingTime=%d{PolicyProcessingTime}\tServerSetupTime=%d{ServerSetupTime}
\tTimestampConnectionStart:iso8601=%s{TimestampConnectionStart:iso8601}
\tTimestampConnectionEnd:iso8601=%s{TimestampConnectionEnd:iso8601}
\tTimestampCATx:iso8601=%s{TimestampCATx:iso8601}
\tTimestampCARx:iso8601=%s{TimestampCARx:iso8601}
\tTimestampAppLearnStart:iso8601=%s{TimestampAppLearnStart:iso8601}
\tTimestampZENFirstRxClient:iso8601=%s{TimestampZENFirstRxClient:iso8601}
\tTimestampZENFirstTxClient:iso8601=%s{TimestampZENFirstTxClient:iso8601}
\tTimestampZENLastRxClient:iso8601=%s{TimestampZENLastRxClient:iso8601}
\tTimestampZENLastTxClient:iso8601=%s{TimestampZENLastTxClient:iso8601}
\tTimestampConnectorZENSetupComplete:iso8601=%s{TimestampConnectorZENSetupComplete:iso8601}
\tTimestampZENFirstRxConnector:iso8601=%s{TimestampZENFirstRxConnector:iso8601}
\tTimestampZENFirstTxConnector:iso8601=%s{TimestampZENFirstTxConnector:iso8601}
\tTimestampZENLastRxConnector:iso8601=%s{TimestampZENLastRxConnector:iso8601}
\tTimestampZENLastTxConnector:iso8601=%s{TimestampZENLastTxConnector:iso8601}
\tZENTotalBytesRxClient=%d{ZENTotalBytesRxClient}\tZENBytesRxClient=%d{ZENBytesRxClient}
\tZENTotalBytesTxClient=%d{ZENTotalBytesTxClient}\tZENBytesTxClient=%d{ZENBytesTxClient}
\tZENTotalBytesRxConnector=%d{ZENTotalBytesRxConnector}
\tZENBytesRxConnector=%d{ZENBytesRxConnector}
\tZENTotalBytesTxConnector=%d{ZENTotalBytesTxConnector}
\tZENBytesTxConnector=%d{ZENBytesTxConnector}\tIdp=%s{Idp}\n
App connector status:
LEEF:1.0|Zscaler|ZPA|4.1|%s{SessionStatus}|cat=Connector Status
\tdevTime=%s{LogTimestamp:epoch}\tCustomer=%s{Customer}\tSessionID=%s{SessionID}
\tSessionType=%s{SessionType}\tVersion=%s{Version}\tPlatform=%s{Platform}
\tZEN=%s{ZEN}\tConnector=%s{Connector}\tConnectorGroup=%s{ConnectorGroup}
\tsrcPreNAT=%s{PrivateIP}\tsrc=%s{PublicIP}\tLatitude=%f{Latitude}
\tLongitude=%f{Longitude}\tCountryCode=%s{CountryCode}
\tTimestampAuthentication:iso8601=%s{TimestampAuthentication:iso8601}
\tTimestampUnAuthentication:iso8601=%s{TimestampUnAuthentication:iso8601}
\tCPUUtilization=%d{CPUUtilization}\tMemUtilization=%d{MemUtilization}
\tServiceCount=%d{ServiceCount}\tInterfaceDefRoute=%s{InterfaceDefRoute}
\tDefRouteGW=%s{DefRouteGW}\tPrimaryDNSResolver=%s{PrimaryDNSResolver}
\tHostStartTime=%s{HostStartTime}\tConnectorStartTime=%s{ConnectorStartTime}
\tNumOfInterfaces=%d{NumOfInterfaces}\tBytesRxInterface=%d{BytesRxInterface}
\tPacketsRxInterface=%d{PacketsRxInterface}\tErrorsRxInterface=%d{ErrorsRxInterface}
\tDiscardsRxInterface=%d{DiscardsRxInterface}\tBytesTxInterface=%d{BytesTxInterface}
\tPacketsTxInterface=%d{PacketsTxInterface}\tErrorsTxInterface=%d{ErrorsTxInterface}
\tDiscardsTxInterface=%d{DiscardsTxInterface}\tTotalBytesRx=%d{TotalBytesRx}
\tTotalBytesTx=%d{TotalBytesTx}
Audit logs:
LEEF:1.0|Zscaler|ZPA|4.1|%s{auditOperationType}|cat=ZPA_Audit_Log
\tdevTime=%s{LogTimestamp:epoch}\tcreationTime=%s{creationTime:iso8601}
\trequestId=%s{requestId}\tsessionId=%s{sessionId}\tauditOldValue=%s{auditOldValue}
\tauditNewValue=%s{auditNewValue}\tauditOperationType=%s{auditOperationType}
\tobjectType=%s{objectType}\tobjectName=%s{objectName}\tobjectId=%d{objectId}
\taccountName=%d{customerId}\tusrName=%s{modifiedByUser}\n
(Optional) You can define a streaming Policy for the log receiver. This entails configuring the SAML Attributes, Application Segments,
Segment Groups, Client Types, and Session Statuses. For more information on configuring these settings, see the Log Stream instructions.
f. Click Next.
h. Click Save.
Abstract
Ingest authentication logs from external authentication services—such as Okta and Azure AD—into authentication stories with Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
When you ingest authentication logs and data from an external source, Cortex XDR can weave that information into authentication stories. An authentication
story unites logs and data regardless of the information source (for example, from an on-premise KDC or from a cloud-based authentication service) into a
uniform schema. To search authentication stories, you can use the Query Builder or XQL Search.
Cortex XDR can ingest authentication logs and data from various authentication services.
Abstract
Take advantage of Cortex XDR investigation capabilities and set up audit log ingestion for your AWS CloudTrail logs.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can forward audit logs for the relevant service to Cortex XDR from AWS CloudTrail.
To receive audit logs from Amazon Simple Storage Service (Amazon S3) via AWS CloudTrail, you must first configure data collection from Amazon S3. You can
then configure the Collection Integrations settings in Cortex XDR for Amazon S3. After you set up collection integration, Cortex XDR begins receiving new logs
and data from the source.
For more information on configuring data collection from Amazon S3 using AWS CloudTrail, see the AWS CloudTrail Documentation.
As soon as Cortex XDR begins receiving logs, the app automatically creates an Amazon S3 Cortex Query Language (XQL) dataset (aws_s3_raw). This
enables you to search the logs with XQL Search using the dataset. For example queries, refer to the in-app XQL Library.
For enhanced cloud protection, you can also configure Cortex XDR to stitch Amazon S3 audit logs with other Cortex XDR authentication stories across all
cloud providers using the same format, which you can query with XQL Search using the cloud_audit_logs dataset. Cortex XDR can also raise Cortex XDR
alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from Amazon S3 logs. While Correlation Rules alerts are raised on non-normalized and
normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
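For reference, a minimal XQL sketch for reviewing the normalized audit records is shown below; it returns the most recent rows, after which you can add filters on whichever provider or identity fields you see in the results.
dataset = cloud_audit_logs
| sort desc _time
| limit 100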
Cloud-tailored investigations
Prerequisite Steps
Be sure you do the following tasks before you begin configuring data collection from Amazon S3 via AWS CloudTrail.
Ensure that you have the proper permissions to access AWS CloudTrail and have the necessary permissions to create audit logs. You need at a
minimum the following permissions in AWS for an Amazon S3 bucket and Amazon Simple Queue Service (SQS).
Determine how you want to provide access to Cortex XDR to your logs and to perform API operations. You have the following options:
Designate an AWS IAM user, where you will need to know the Account ID for the user and have the relevant permissions to create an access
key and ID for the relevant IAM user. This is the default option as explained in Configure the Amazon S3 collection by selecting Access Key.
Create an assumed role in AWS to delegate permissions to a Cortex XDR AWS service. This role grants Cortex XDR access to your audit logs. For
more information, see Creating a role to delegate permissions to an AWS service. This is the Assumed Role option described in the Amazon S3
collection configuration.
To collect Amazon S3 logs that use server-side encryption (SSE), the user role must have an IAM policy that states that Cortex XDR has kms:Decrypt
permissions. With this permission, Amazon S3 automatically detects if a bucket is encrypted and decrypts it. If you want to collect encrypted logs from
different accounts, you must have the decrypt permissions for the user role also in the key policy for the master account Key Management Service
(KMS). For more information, see Allowing users in other accounts to use a KMS key.
To configure Cortex XDR to receive audit logs from Amazon S3 via AWS CloudTrail:
2. From the menu bar, ensure that you have selected the correct region for your configuration.
For more information on creating an AWS CloudTrail trail, see Create a trail.
If you already have an Amazon S3 bucket configured with AWS CloudTrail audit logs, skip this step and go to Configure an Amazon Simple Queue
Service (SQS).
b. Configure the following settings for your CloudTrail trail, where the default settings should be configured unless otherwise indicated.
Storage location: Select Create new S3 bucket to configure a new Amazon S3 bucket, and specify a unique name in the Trail log bucket and
folder field, or select Use existing S3 bucket and Browse to the S3 bucket you already created. If you select an existing Amazon S3 bucket,
the bucket policy must grant CloudTrail permission to write to it. For information about manually editing the bucket policy, see Amazon S3
Bucket Policy for CloudTrail.
It is your responsibility to define a retention policy for your Amazon S3 bucket by creating a Lifecycle rule in the Management tab.
We recommend setting the retention policy to at least 7 days to ensure that the data is retrieved under all circumstances.
Customer managed AWS KMS key: You can either select a New key and specify the AWS KMS alias, or select an Existing key, and select
the AWS KMS alias. The KMS key and S3 bucket must be in the same region.
SNS notification delivery: (Optional) If you want to be notified whenever CloudTrail publishes a new log to your Amazon S3 bucket, click
Enabled. Amazon Simple Notification Service (Amazon SNS) manages these notifications, which are sent for every log file delivery to your S3
bucket, as opposed to every event. When you enable this option, you can either Create a new SNS topic by selecting New and the SNS
topic is displayed in the field, or use an Existing topic and select the SNS topic. For more information, see Configure SNS Notifications for
CloudTrail.
The CloudWatch Logs - optional settings are not supported and should be left disabled.
c. Click Next, and configure the following Choose log events settings.
API activity: For Management events, select the API activities you want to log. By default, the Read and Write activities are logged.
Exclude AWS KMS events: (Optional) If you want to filter AWS Key Management Service (AWS KMS) events out of your trail, select the
checkbox. By default, all AWS KMS events are included.
Data events section: (Optional) This section is displayed when you configure the Event type to include Data events, which relate to resource
operations performed on or within a resource, such as reading and writing to a S3 bucket. For more information on configuring these optional
settings in AWS CloudTrail, see Creating a trail.
Insights events section: (Optional) This section is displayed when you configure the Event type to include Insight events, which relate to
unusual activities, errors, or user behavior on your account. For more information on configuring these optional settings in AWS CloudTrail,
see Creating a trail.
d. Click Next.
e. In the Review and create page, look over the trail configurations settings that you have configured and if they are correct, click Create trail. If you
need to make a change, click Edit beside the particular step that you want to update.
The new trail is listed in the Trails page, which lists the trails in your account from all Regions. It can take up to 15 minutes for CloudTrail to begin
publishing log files. You can see the log files in the S3 bucket that you specified. For more information, see Creating a trail.
Ensure that you create your Amazon S3 bucket and Amazon SQS queue in the same region.
b. Configure the following settings, where the default settings should be configured unless otherwise indicated.
Configuration section: Leave the default settings for the various fields.
Access policy → Choose method: Select Advanced and update the Access policy code in the editor window to enable your Amazon S3
bucket to publish event notification messages to your SQS queue. Use this sample code as a guide for defining the “Statement” with the
following definitions:
“Resource”: Leave the automatically generated ARN for the SQS queue that is set in the code, which uses the format
“arn:aws:sqs:Region:account-id:queue-name”.
You can retrieve your bucket’s ARN by opening the Amazon S3 Console in a browser window. In the Buckets section, select the bucket that
you created for collecting the Amazon S3 audit logs, click Copy ARN, and paste the ARN in the field.
For more information on granting permissions to publish messages to an SQS queue, see Granting permissions to publish event notification
messages to a destination.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "SQS:SendMessage",
"Resource": "[Leave automatically generated ARN for the SQS queue defined by AWS]",
"Condition": {
"ArnLike": {
"aws:SourceArn": "[ARN of your Amazon S3 bucket]"
}
}
}
]
}
Dead-letter queue section: We recommend that you configure a queue for sending undeliverable messages by selecting Enabled, and then
in the Choose queue field selecting the queue to send the messages. You may need to create a new queue for this, if you do not already
have one set up. For more information, see Amazon SQS dead-letter queues.
Once the SQS is created, a message indicating that the queue was successfully configured is displayed at the top of the page.
5. Configure an event notification to your Amazon SQS whenever a file is written to your Amazon S3 bucket.
a. Open the Amazon S3 Console and in the Properties tab of your Amazon S3 bucket, scroll down to the Event notifications section, and click Create
event notification.
Prefix: Do not set a prefix as the Amazon S3 bucket is meant to be a dedicated bucket for collecting audit logs.
Event types: Select All object create events for the type of event notifications that you want to receive.
Destination: Select SQS queue to send notifications to an SQS queue to be read by a server.
Specify SQS queue: You can either select Choose from your SQS queues and then select the SQS queue, or select Enter SQS queue ARN
and specify the ARN in the SQS queue field.
You can retrieve your SQS queue ARN by opening another instance of the AWS Management Console in a browser window, opening the
Amazon SQS Console, and selecting the Amazon SQS queue that you created. In the Details section, under ARN, click the copy icon, and
paste the ARN in the field.
Once the event notification is created, a message indicating that the event notification was successfully created is displayed at the top of the
page.
If you receive an error when trying to save your changes, ensure that the permissions are set up correctly.
6. Configure access keys for the AWS IAM user that Cortex XDR uses for API operations.
It is the responsibility of your organization to ensure that the user who creates the access key has the relevant permissions. Otherwise, the
process can fail with errors.
Skip this step if you are using an Assumed Role for Cortex XDR.
a. Open the AWS IAM Console, and in the navigation pane, select Access management → Users.
c. Select the Security credentials tab, scroll down to the Access keys section, and click Create access key.
d. Click the copy icon next to the Access key ID and Secret access key keys, where you must click Show secret access key to see the secret key
and record them somewhere safe before closing the window. You will need to provide these keys when you edit the Access policy of the SQS
queue and when setting the AWS Client ID and AWS Client Secret in Cortex XDR. If you forget to record the keys and close the window, you will
need to generate new keys and repeat this process.
For more information, see Managing access keys for IAM users.
Skip this step if you are using an Assumed Role for Cortex XDR.
a. In the Amazon SQS Console, select the SQS queue that you created in Configure an Amazon Simple Queue Service (SQS).
b. Select the Access policy tab, and Edit the Access policy code in the editor window to enable the IAM user to perform operations on the Amazon
SQS with permissions to SQS:ChangeMessageVisibility, SQS:DeleteMessage, and SQS:ReceiveMessage. Use this sample code as a
guide for defining the “Sid”: “__receiver_statement” with the following definitions:
“Resource”: Leave the automatically generated ARN for the SQS queue that is set in the code, which uses the format
“arn:aws:sqs:Region:account-id:queue-name”.
For more information on granting permissions to publish messages to an SQS queue, see Granting permissions to publish event notification
messages to a destination.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "SQS:SendMessage",
"Resource": "[Leave automatically generated ARN for the SQS queue defined by AWS]",
"Condition": {
"ArnLike": {
"aws:SourceArn": "[ARN of your Amazon S3 bucket]"
}
}
},
{
"Sid": "__receiver_statement",
"Effect": "Allow",
"Principal": {
"AWS": "[Add the ARN for the AWS IAM user]"
},
"Action": [
"SQS:ChangeMessageVisibility",
"SQS:DeleteMessage",
"SQS:ReceiveMessage"
],
"Resource": "[Leave automatically generated ARN for the SQS queue defined by AWS]"
}
]
}
c. Set these parameters, where the parameters change depending on whether you configured an Access Key or Assumed Role.
To provide access to Cortex XDR to your logs and perform API operations using a designated AWS IAM user, leave the Access Key option
selected. Otherwise, select Assumed Role, and ensure that you Create an Assumed Role for Cortex XDR before continuing with these
instructions. In addition, when you create an Assumed Role for Cortex XDR, ensure that you edit the policy that defines the permissions for
the role with the Amazon S3 Bucket ARN and SQS ARN.
SQS URL: Specify the SQS URL, which is the ARN of the Amazon SQS that you configured in the AWS Management Console.
AWS Client ID: Specify the Access key ID, which you received when you configured access keys for the AWS IAM user in AWS.
AWS Client Secret: Specify the Secret access key you received when you configured access keys for the AWS IAM user in AWS.
Role ARN: Specify the Role ARN for the Assumed Role you created in AWS.
External Id: Specify the External Id for the Assumed Role you created in AWS.
Log Type: Select Audit Logs to configure your log collection to receive audit logs from Amazon S3 via AWS CloudTrail. When configuring
audit log collection, the following additional field is displayed for Enhanced Cloud Protection.
You can Normalize and enrich audit logs by selecting the checkbox. If selected, Cortex XDR stitches Amazon S3 audit logs with other
Cortex XDR authentication stories across all cloud providers using the same format, which you can query with XQL Search using the
cloud_audit_logs dataset.
Once events start to come in, a green check mark appears underneath the Amazon S3 configuration with the number of logs received.
Abstract
Ingest logs from Microsoft Azure Event Hub with an option to ingest audit logs to use in Cortex XDR authentication stories.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can ingest different types of data from Microsoft Azure Event Hub using the Microsoft Azure Event Hub data collector. To receive logs from Azure
Event Hub, you must configure the settings in Cortex XDR based on your Microsoft Azure Event Hub configuration. After you set up data collection, Cortex
XDR begins receiving new logs and data from the source.
When Cortex XDR begins receiving logs, the app creates a new dataset (msft_azure_raw) that you can use to initiate XQL Search queries. For example
queries, refer to the in-app XQL Library. For enhanced cloud protection, you can also configure Cortex XDR to normalize Azure Event Hub audit logs, including
Azure Kubernetes Service (AKS) audit logs, with other Cortex XDR authentication stories across all cloud providers using the same format, which you can
query with XQL Search using the cloud_audit_logs dataset. For logs that you do not configure Cortex XDR to normalize, you can change the default
dataset. Cortex XDR can also raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from Azure Event Hub logs. While
Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
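As a reference point, the following XQL sketch aggregates the normalized audit records by identity; the identity_name field is an assumption used for illustration, so replace it with a grouping field that exists in your cloud_audit_logs data.
dataset = cloud_audit_logs
| comp count(_time) as events by identity_name
| sort desc events
| limit 10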
Cloud-tailored investigations
In an existing Event Hub integration, do not change the mapping to a different Event Hub.
Do not use the same Event Hub for more than two purposes.
The following table provides a brief description of the different types of Azure audit logs you can collect.
For more information on Azure Event Hub audit logs, see Overview of Azure platform logs.
Activity logs: Retrieves events related to the operations on each Azure resource in the subscription from the outside, in addition to updates on Service Health events.
Azure Active Directory (AD) Activity logs and Azure Sign-in logs: Contain the history of sign-in activity and an audit trail of changes made in Azure AD for a particular tenant.
Even though you can collect Azure AD Activity logs and Azure Sign-in logs using the Azure Event Hub data collector, we recommend using the Microsoft 365 data collector, because it is easier to configure. In addition, ensure that you don't configure both collectors to collect the same types of logs, because if you do so, you will be creating duplicate data in Cortex XDR.
Resource logs, including AKS audit logs: Retrieves events related to operations that were performed within an Azure resource. These logs are from the data plane.
Ensure that you do the following tasks before you begin configuring data collection from Azure Event Hub.
Before you set up an Azure Event Hub, calculate the quantity of data that you expect to send to Cortex XDR, taking into account potential data spikes
and potential increases in data ingestion, because partitions cannot be modified after creation. Use this information to ascertain the optimal number of
partitions and Throughput Units (for Azure Basic or Standard) or Processing Units (for Azure Premium). Configure your Event Hub accordingly.
Create an Azure Event Hub. We recommend using a dedicated Azure Event Hub for this Cortex XDR integration. For more information, see Quickstart:
Create an event hub using Azure portal.
Ensure the format for the logs you want collected from the Azure Event Hub is either JSON or raw.
1. In the Microsoft Azure console, open the Event Hubs page, and select the Azure Event Hub that you created for collection in Cortex XDR.
2. Record the following parameters from your configured event hub, which you will need when configuring data collection in Cortex XDR.
3. In the Consumer group table, copy the applicable value listed in the Name column for your Cortex XDR data collection configuration.
Your storage account connection string is required for partition lease management and checkpointing in Cortex XDR.
1. Open the Storage accounts page, and either create a new storage account or select an existing one, which will contain the storage account
connection string.
3. Configure diagnostic settings for the relevant log types you want to collect and then direct these diagnostic settings to the designated Azure Event Hub.
Activity logs: Select Azure services → Activity log → Export Activity Logs, and +Add diagnostic setting.
Azure AD Activity logs and Azure Sign-in logs:
1. Select Azure services → Azure Active Directory.
2. Select Monitoring → Diagnostic settings, and +Add diagnostic setting.
Resource logs, including AKS audit logs:
1. Search for Monitor, and select Settings → Diagnostic settings.
2. From your list of available resources, select the resource that you want to configure for log collection, and then select +Add diagnostic setting.
For every resource that you want to configure, you'll have to repeat this step, or use an Azure policy for a general configuration.
Logs Categories/Metrics: The options listed are dependent on the type of logs you want to configure. For Activity logs and Azure AD logs
and Azure Sign-in logs, the option is called Logs Categories, and for Resource logs it's called Metrics.
Activity logs: Select from the list of applicable Activity log categories, the ones that you want to configure your designated resource to collect. We recommend selecting all of the options.
Administrative
Security
ServiceHealth
Alert
Recommendation
Policy
Autoscale
ResourceHealth
Azure AD Activity logs and Azure Sign-in logs: Select from the list of applicable Azure AD Activity and Azure Sign-in Logs Categories, the ones that you want to configure your designated resource to collect. You can select any of the following categories to collect these types of Azure logs.
AuditLogs
SignInLogs
NonInteractiveUserSignInLogs
ServicePrincipalSignInLogs
ManagedIdentitySignInLogs
ADFSSignInLogs
There are additional log categories displayed. We recommend selecting all the available options.
Resource logs, including AKS audit logs: The list displayed is dependent on the resource that you selected. We recommend selecting all the options available for the resource.
Destination details: Select Stream to event hub, where additional parameters are displayed that you need to configure. Ensure that you set
the following parameters using the same settings for the Azure Event Hub that you created for the collection.
Subscription: Select the applicable Subscription for the Azure Event Hub.
Event hub namespace: Select the applicable event hub namespace for the Azure Event Hub.
(Optional) Event hub name: Specify the name of your Azure Event Hub.
Event hub policy: Select the applicable Event hub policy for your Azure Event Hub.
b. In the Azure Event Hub configuration, click Add Instance to begin a new configuration.
Event Hub Connection String: Specify your event hub’s connection string for the designated policy.
Storage Account Connection String: Specify your storage account’s connection string for the designated policy.
Log Format: Select the log format for the logs collected from the Azure Event Hub as Raw, JSON, CEF, LEEF, Cisco-asa, or Corelight.
When you Normalize and enrich audit logs, the log format is automatically configured. As a result, the Log Format option is removed and is
no longer available to configure (default).
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the logs.
When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the Vendor and
Product fields in the Azure Event Hub data collector settings. However, when the values are blank in the event log row, Cortex XDR uses the
Vendor and Product that you specified in the Azure Event Hub data collector settings. If you did not specify a Vendor or Product in the
Azure Event Hub data collector settings, and the values are blank in the event log row, the values for both fields are set to unknown.
Cisco-asa: The following fields are automatically set and not configurable.
Vendor: Cisco
Product: ASA
Cisco data can be queried in XQL Search using the cisco_asa_raw dataset.
Corelight: The following fields are automatically set and not configurable.
Vendor: Corelight
Product: Zeek
Corelight data can be queried in XQL Search using the corelight_zeek_raw dataset.
Raw or JSON: The following fields are automatically set and are configurable.
Vendor: Msft
Product: Azure
Raw or JSON data can be queried in XQL Search using the msft_azure_raw dataset.
Vendor and Product: Specify the Vendor and Product for the type of logs you are ingesting.
The Vendor and Product are used to define the name of your Cortex Query Language (XQL) dataset (<vendor>_<product>_raw). The
Vendor and Product values vary depending on the Log Format selected. To uniquely identify the log source, consider changing the values if
the values are configurable.
When you Normalize and enrich audit logs, the Vendor and Product fields are automatically configured, so these fields are removed as
available options (default).
Normalize and enrich audit logs: (Optional) For enhanced cloud protection, you can Normalize and enrich audit logs by selecting the
checkbox (default). If selected, Cortex XDR normalizes and enriches Azure Event Hub audit logs with other Cortex XDR authentication
stories across all cloud providers using the same format. You can query this normalized data with XQL Search using the
cloud_audit_logs dataset.
When events start to come in, a green check mark appears underneath the Azure Event Hub configuration with the amount of data received.
Abstract
If you use the Pub/Sub messaging service from Google Cloud Platform (GCP), you can send logs and data from GCP to Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use the Pub/Sub messaging service from Google Cloud Platform (GCP), you can send logs and data from your GCP instance to Cortex XDR. Data from
GCP is then searchable in Cortex XDR to provide additional information and context to your investigations using the GCP Cortex Query Language (XQL)
dataset, which is dependent on the type of GCP logs collected. For example queries, refer to the in-app XQL Library. You can configure a Google Cloud
Platform collector to receive generic, flow, audit, or Google Cloud DNS logs. When configuring generic logs, you can receive logs in a Raw, JSON, CEF, LEEF,
Cisco, or Corelight format.
Cloud-tailored investigations
The following table lists the various GCP log types and the XQL datasets you can use to query them in XQL Search:
Generic logs:
Cisco: cisco_asa_raw
Corelight: corelight_zeek_raw
JSON or Raw: google_cloud_logging_raw
Google Cloud DNS logs: google_dns_raw. Once configured, Cortex XDR ingests Google Cloud DNS logs as XDR network connection stories, which you can query with XQL Search using the xdr_data dataset with the preset called network_story.
Network flow logs: google_cloud_logging_raw. Once configured, Cortex XDR ingests network flow logs as XDR network connection stories, which you can query with XQL Search using the xdr_data dataset with the preset called network_story.
When collecting flow logs, we recommend that you include GKE annotations in your logs, which enable you to view the names of the containers that
communicated with each other. GKE annotations are only included in logs if appended manually using the custom metadata configuration in GCP. For more
information, see VPC Flow Logs Overview. In addition, to customize metadata fields, you must use the gcloud command-line interface or the API. For more
information, see Using VPC Flow Logs.
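For reference, a minimal XQL sketch for reviewing the ingested GCP flow log records is shown below; it returns the most recent rows so you can verify which fields, including any GKE annotations you appended, are present.
dataset = google_cloud_logging_raw
| sort desc _time
| limit 100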
To receive logs and data from GCP, you must first set up log forwarding using a Pub/Sub topic in GCP. You can configure GCP settings using either the GCP
web interface or a GCP cloud shell terminal. After you set up your service account in GCP, you configure the Data Collection settings in Cortex XDR. The setup
process requires the subscription name and authentication key from your GCP instance.
After you set up log collection, Cortex XDR immediately begins receiving new logs and data from GCP.
d. Browse to the JSON file containing your authentication key for the service account.
e. Select the Log Type as one of the following, where your selection changes the options displayed.
(Optional) You can Normalize and enrich flow and audit logs by selecting the checkbox (default). If selected, Cortex XDR ingests the
network flow logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset
with the preset called network_story. In addition, you can configure Cortex XDR to normalize GCP audit logs, which you can query
with XQL Search using the cloud_audit_logs dataset.
The Vendor is automatically set to Google and Product to Cloud Logging, which is not configurable. This means that all GCP data for
the flow and audit logs, whether it's normalized or not, can be queried in XQL Search using the google_cloud_logging_raw
dataset.
Generic: When selecting this log type, you can configure the following settings.
Log Format: Select the log format type as Raw, JSON, CEF, LEEF, Cisco, or Corelight.
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the
logs. When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the
Vendor and Product fields in the GCP data collector settings. However, when the values are blank in the event log row, Cortex XDR
uses the Vendor and Product that you specified in the GCP data collector settings. If you did not specify a Vendor or Product in
the GCP data collector settings, and the values are blank in the event log row, the values for both fields are set to unknown.
Cisco: The following fields are automatically set and not configurable.
Vendor: Cisco
Product: ASA
Cisco data can be queried in XQL Search using the cisco_asa_raw dataset.
Corelight: The following fields are automatically set and not configurable.
Vendor: Corelight
Product: Zeek
Corelight data can be queried in XQL Search using the corelight_zeek_raw dataset.
Raw or JSON: The following fields are automatically set and are configurable.
Vendor: Google
Raw or JSON data can be queried in XQL Search using the google_cloud_logging_raw dataset.
Cortex XDR supports logs in single line format or multiline format. For a JSON format, multiline logs are collected automatically
when the Log Format is configured as JSON. When configuring a Raw format, you must also define the Multiline Parsing Regex
as explained below.
Vendor: (Optional) Specify a particular vendor name for the GCP generic data collection, which is used in the GCP XQL dataset
<Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Product: (Optional) Specify a particular product name for the GCP generic data collection, which is used in the GCP XQL dataset
name <Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Multiline Parsing Regex: (Optional) This option is only displayed when the Log Format is set to Raw, where you can set the regular
expression that identifies when the multiline event starts in logs with multilines. It is assumed that when a new event begins, the
previous one has ended.
Google Cloud DNS: When selecting this log type, you can configure whether to normalize the logs as part of the enhanced cloud protection.
(Optional) You can Normalize DNS logs by selecting the checkbox (default). If selected, Cortex XDR ingests the Google Cloud DNS logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset with the preset called network_story.
The Vendor is automatically set to Google and Product to DNS, which is not configurable. This means that all Google Cloud DNS logs, whether normalized or not, can be queried in XQL Search using the google_dns_raw dataset.
f. Test the provided settings and, if successful, proceed to Enable log collection.
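As an illustration of the dataset naming described above, if you set the Vendor to acme and the Product to firewall (hypothetical values used only for this example), the resulting generic logs could be queried with a sketch such as the following. With the default Raw or JSON settings, the same query would instead target the google_cloud_logging_raw dataset.
dataset = acme_firewall_raw
| sort desc _time
| limit 100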
c. To filter only specific types of data, select the filter or desired resource.
f. Enter a descriptive Name that identifies the sink purpose for Cortex XDR, and then Create.
a. Select the hamburger menu in G Cloud and then select Pub/Sub → Topics.
b. Select the name of the topic you created in the previous steps. Use the filters if necessary.
After the subscription is set up, G Cloud displays statistics and settings for the service.
Optionally, use the copy button to copy the name to the clipboard. You will need the name when you configure Collection in Cortex XDR.
You will use the key to enable Cortex XDR to authenticate with the subscription service.
a. Select the menu icon, and then select IAM & Admin → Service Accounts.
f. Locate the service account by name, using the filters to refine the results, if needed.
g. Click the Actions menu identified by the three dots in the row for the service account and then Create Key.
After you create the service account key, G Cloud automatically downloads it.
5. After Cortex XDR begins receiving information from the GCP Pub/Sub service, you can use the XQL Query language to search for specific data.
d. Browse to the JSON file containing your authentication key for the service account.
e. Select the Log Type as one of the following, where your selection changes the options displayed.
(Optional) You can Normalize and enrich flow and audit logs by selecting the checkbox (default). If selected, Cortex XDR ingests the network flow logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset with the preset called network_story. In addition, you can configure Cortex XDR to normalize GCP audit logs, which you can query with XQL Search using the cloud_audit_logs dataset.
The Vendor is automatically set to Google and Product to Cloud Logging, which is not configurable. This means that all GCP data for
the flow and audit logs, whether it's normalized or not, can be queried in XQL Search using the google_cloud_logging_raw
dataset.
Generic: When selecting this log type, you can configure the following settings.
Log Format: Select the log format type as Raw, JSON, CEF, LEEF, Cisco, or Corelight.
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the
logs. When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the
Vendor and Product fields in the GCP data collector settings. Yet, when the values are blank in the event log row, Cortex XDR
uses the Vendor and Product that you specified in the GCP data collector settings. If you did not specify a Vendor or Product in
the GCP data collector settings, and the values are blank in the event log row, the values for both fields are set to unknown.
Cisco: The following fields are automatically set and not configurable.
Vendor: Cisco
Product: ASA
Cisco data can be queried in XQL Search using the cisco_asa_raw dataset.
Corelight: The following fields are automatically set and not configurable.
Vendor: Corelight
Product: Zeek
Corelight data can be queried in XQL Search using the corelight_zeek_raw dataset.
Raw or JSON: The following fields are automatically set and are configurable.
Vendor: Google
Raw or JSON data can be queried in XQL Search using the google_cloud_logging_raw dataset.
Cortex XDR supports logs in single line format or multiline format. For a JSON format, multiline logs are collected automatically
when the Log Format is configured as JSON. When configuring a Raw format, you must also define the Multiline Parsing Regex
as explained below.
Vendor: (Optional) Specify a particular vendor name for the GCP generic data collection, which is used in the GCP XQL dataset
<Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Product: (Optional) Specify a particular product name for the GCP generic data collection, which is used in the GCP XQL dataset
name <Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Multiline Parsing Regex: (Optional) This option is only displayed when the Log Format is set to Raw, where you can set the regular
expression that identifies when the multiline event starts in logs with multilines. It is assumed that when a new event begins, the
previous one has ended.
Google Cloud DNS: When selecting this log type, you can configure whether to normalize the logs as part of the enhanced cloud protection.
(Optional) You can Normalize DNS logs by selecting the checkbox (default). If selected, Cortex XDR ingests the Google Cloud DNS logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset with the preset called network_story.
The Vendor is automatically set to Google and Product to DNS, which is not configurable. This means that all Google Cloud DNS logs, whether normalized or not, can be queried in XQL Search using the google_dns_raw dataset.
f. Test the provided settings and, if successful, proceed to Enable log collection.
1. Launch the GCP cloud shell terminal or use your preferred shell with gcloud installed.
Note the subscription name you define in this step as you will need it to set up log ingestion from Cortex XDR.
During the logging sink creation, you can also define additional log filters to exclude specific logs. To filter logs, supply the optional parameter --log-filter=<LOG_FILTER>.
If setup is successful, the console displays a summary of your log sink settings:
Note the serviceAccount name from the previous step and use it to define the service for which you want to grant publish access.
For example, use cortex-xdr-sa as the service account name and Cortex XDR Service Account as the display name.
You will need the JSON file to enable Cortex XDR to authenticate with the GCP service. Specify the file destination and filename using a .json extension.
10. After Cortex XDR begins receiving information from the GCP Pub/Sub service, you can use the XQL Query language to search for specific data.
Abstract
Ingest logs and data from Google Workspace for use in Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can ingest the following types of data from Google Workspace, where most of the data is collected as audit events from various Google reports,
using the Google Workspace data collector.
Admin Console
Google Chat
Enterprise Groups
Login
Rules
Google drive
Token
User Accounts
SAML
Alerts
All message details except email headers and email content (payload.body, payload.parts, and snippet).
Attachment details, when Get Attachment Info is selected, includes file name, size, and hash calculation.
The following Google APIs are required to collect the different types of data from Google Workspace.
For all data types, except alerts and emails: Admin Reports API (part of Admin SDK API).
For all types of data collected via the Admin Reports API, except alerts and emails, the log events are collected with a preset lag time as reported by
Google Workspace. For more information on these lag times for the different types of data, see Google Workspace Data retention and lag times.
Alerts require implementing an additional API: Alert Center API (part of Admin SDK API).
To receive logs from Google Workspace for any of the data types except emails, you must first enable the Google Workspace Admin SDK API with a user with
access to the Admin SDK Reports API. For emails, you must set up a compliance email account as explained in the prerequisite steps below and then enable
the Google Workspace Gmail API. Once implemented, you can then configure the Collection Integrations settings in Cortex XDR. After you set up data
collection, Cortex XDR begins receiving new logs and data from the source.
When Cortex XDR begins receiving logs, the app creates a new dataset for the different types of data that you are collecting, which you can use to initiate XQL Search queries. For example queries, refer to the in-app XQL Library. For all logs, Cortex XDR can raise Cortex XDR alerts (Correlation Rules only) when relevant from Google Workspace logs.
For the different types of data you can collect using the Google Workspace data collector, the following table lists the different datasets, vendors, and products
automatically configured, and whether the data is normalized.
Admin console: google_workspace_admin_console_raw (Vendor: Google, Product: Workspace Admin Console). When relevant, Cortex XDR normalizes Admin Console audit logs into authentication stories. All SaaS audit logs are collected in a dataset called saas_audit_logs and specific relevant events are collected in the authentication_story preset for the xdr_data dataset.
Enterprise groups: google_workspace_enterprise_groups_raw (Vendor: Google, Product: Workspace Enterprise Groups). When relevant, Cortex XDR normalizes Enterprise Group audit logs into authentication stories. All SaaS audit logs are collected in a dataset called saas_audit_logs and specific relevant events are collected in the authentication_story preset for the xdr_data dataset.
Login: google_workspace_login_raw (Vendor: Google, Product: Workspace Login). When relevant, Cortex XDR normalizes Login audit logs into authentication stories. All SaaS audit logs are collected in a dataset called saas_audit_logs and specific relevant events are collected in the authentication_story preset for the xdr_data dataset.
Rules: google_workspace_rules_raw (Vendor: Google, Product: Workspace Rules). When relevant, Cortex XDR normalizes Rules audit logs into authentication stories. All SaaS audit logs are collected in a dataset called saas_audit_logs and specific relevant events are collected in the authentication_story preset for the xdr_data dataset.
Google Drive: google_workspace_drive_raw (Vendor: Google, Product: Workspace Drive). When relevant, Cortex XDR normalizes Google Drive audit logs into authentication stories. All SaaS audit logs are collected in a dataset called saas_audit_logs and specific relevant events are collected in the authentication_story preset for the xdr_data dataset.
Token: google_workspace_token_raw (Vendor: Google, Product: Workspace Token). When relevant, Cortex XDR normalizes Token audit logs into authentication stories. All SaaS audit logs are collected in a dataset called saas_audit_logs and specific relevant events are collected in the authentication_story preset for the xdr_data dataset.
SAML: google_workspace_saml_raw (Vendor: Google, Product: Workspace SAML). When relevant, Cortex XDR normalizes SAML audit logs into authentication stories. All SaaS audit logs are collected in a dataset called saas_audit_logs and specific relevant events are collected in the authentication_story preset for the xdr_data dataset.
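For example, based on the datasets listed above, a minimal sketch of a query over the SaaS audit logs collected from Google Workspace could look like the following; narrow it with filter stages once you know which fields you need.
dataset = saas_audit_logs
| sort desc _time
| limit 50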
Prerequisite Steps
Be sure you do the following tasks before you begin configuring data collection from Google Workspace using the instructions detailed below.
When you only want to collect Google Workspace alerts without configuring any other data types, you need to set up a Cloud Platform project.
Before you can collect Google emails, you need to set up the following:
2. The organization’s Google Workspace account administrator can now set up a BCC to this compliance email account for all incoming and
outgoing emails of any user in the organization.
a. Log in to the Admin direct routing URL in Google Workspace for the user account that you want to configure.
b. Double-click Routing, and set the following parameters in the Add setting dialog.
Routing: Configure the compliance email account that you want to receive a BCC for emails from this user account using the format
BCC TO <compliance email>. For example, BCC TO admin@organization.com.
Select Inbound and Outbound to ensure all incoming and outgoing emails are sent.
(Optional) To configure another email address to receive a BCC for emails from this account, select Add more recipients in the Also
deliver to section, and then click Add.
Click Show options, and from the list displayed select Account types to affect → Users.
This configuration ensures that every message sent to or from the user account is forwarded to the defined compliance mailbox. After the Google Workspace data collector ingests the emails, they are deleted from the compliance mailbox to prevent email from building up over time (nothing touches the actual users' mailboxes).
Spam emails from the compliance email account, and from all other monitored email accounts, are not collected.
Any draft emails written in the compliance email account are collected by the Google Workspace data collector, and are then deleted even if the
email was never sent.
1. Complete the applicable prerequisite steps for the types of data you want to collect from Google Workspace.
3. Perform Google Workspace Domain-Wide Delegation of Authority when collecting any type of data from Google Workspace except Google Emails.
When collecting any type of data from Google Workspace except emails, you need to set up Google Workspace enterprise applications to access users’
data without any manual authorization. This is performed by following these steps.
For more information on the entire process, see Perform Google Workspace Domain-Wide Delegation of Authority.
a. Enable the Admin SDK API to create a service account and set credentials for this service account.
As you complete this step, you need to gather information related to your service account, including the Client ID, Private key file, and Email
address, which you will need to use later on in this task.
2. Search for the Admin SDK API, and select the API from the results list.
Specify a service account name. This name is automatically used to populate the following field as the service account ID, where the
name is changed to lowercase letters and all spaces are changed to hyphens.
Specify the service account ID, where you can either leave the default service account ID or add a new one. This service account ID is
used to set the service account email using the following format: <id>@<project name>.iam.gserviceaccount.com.
9. Click Done.
10. Select your newly created Service Account from the list.
11. Create a service account private key and download the private key file as a JSON file.
In the Keys tab, select ADD KEY → Create new key, leave the default Key type set to JSON, and CREATE the private key. Once you’ve
downloaded the new private key pair to your machine, ensure that you store it in a secure location, because it’s the only copy of this key. You
will need to browse to this JSON file when configuring the Google Workspace data collector in Cortex XDR.
b. When collecting alerts, enable the Alert Center API to create a service account and set credentials for this service account.
When collecting Google Workspace alerts with other types of data, except emails, you need to configure a service account in Google with the
applicable permissions to collect events from the Google Reports API and alerts from the Alert Center API. If you prefer to use different service
accounts to collect events and alerts separately, you'll need to create two service accounts with different instances of the Google Workspace data
collector. One instance to collect events with a certain service account, and another instance to collect alerts using another service account. The
instructions below explain how to set up one Google Workspace instance to collect both event and alerts.
2. Search for the Alert Center API, and select the API from the results list.
5. Select the same service account in the Service Accounts section that you created for the Admin SDK API above.
c. Delegate domain-wide authority to your service account with the Admin Reports API and Alert Center API scopes.
3. Scroll down to the Domain wide delegation section, and select MANAGE DOMAIN WIDE DELEGATION.
5. Set the following settings to define permissions for the Admin SDK API.
Client ID: Specify the service account’s Unique ID, which you can obtain from the Service accounts page by clicking the email of the
service account to view further details. When creating a single Google Workspace data collector instance to collect both events and
alert data, provide the same service account ID as the Admin SDK API.
In the OAuth scopes (comma-delimited) field, paste in the first of the two Admin Reports API scopes:
https://www.googleapis.com/auth/admin.reports.audit.readonly
In the following OAuth scopes (comma-delimited) field, paste in the second Admin Reports API scope:
https://www.googleapis.com/auth/admin.reports.usage.readonly
For more information on the Admin Reports API scopes, see OAuth 2.0 Scopes for Google APIs.
When collecting alerts, add the following Alert Center API scope: https://www.googleapis.com/auth/apps.alerts
This ensures that your service account now has domain-wide access to the Google Admin SDK Reports API and Google Workspace Alert
Center API, if configured, for all of the users of your domain.
When you are configuring the Google Workspace data collector to collect Google emails, the instructions differ depending on whether you are configuring
the collection along with other types of data with the Admin SDK API already set up or you are configuring the collection to only include emails using only
the Gmail API. The steps below explain both scenarios.
b. Search for the Gmail API, and select the API from the results list.
The instructions for setting up credentials differ depending on whether you are setting up the Gmail API together with the Admin SDK API as you
are collecting other data types, or you are configuring collection for emails only with the Gmail API.
When you’re only collecting Google emails without the Admin SDK API, complete these steps.
-Specify a service account name. This name is automatically used to populate the following field as the service account ID, where the
name is changed to lowercase letters and all spaces are changed to hyphens.
-Specify the service account ID, where you can either leave the default service account ID or add a new one. This service account ID
is used to set the service account email using the following format: <id>@<project name>.iam.gserviceaccount.com.
4. (Optional) Decide whether you want to Grant this service account access to project or Grant users access to this service account.
5. Click Done.
7. Create a service account private key and download the private key file as a JSON file.
In the Keys tab, select ADD KEY → Create new key, leave the default Key type set to JSON, and CREATE the private key. Once you’ve
downloaded the new private key pair to your machine, ensure that you store it in a secure location as it’s the only copy of this key. You
will need to browse to this JSON file when configuring the Google Workspace data collector in Cortex XDR.
e. Delegate domain-wide authority to your service account with the Gmail API scopes.
3. Scroll down to the Domain wide delegation section, and select MANAGE DOMAIN WIDE DELEGATION.
This step explains how the following Gmail API scopes are added.
https://mail.google.com/
https://www.googleapis.com/auth/gmail.addons.current.action.compose
https://www.googleapis.com/auth/gmail.addons.current.message.action
https://www.googleapis.com/auth/gmail.addons.current.message.metadata
https://www.googleapis.com/auth/gmail.addons.current.message.readonly
https://www.googleapis.com/auth/gmail.compose
https://www.googleapis.com/auth/gmail.insert
https://www.googleapis.com/auth/gmail.labels
https://www.googleapis.com/auth/gmail.metadata
https://www.googleapis.com/auth/gmail.modify
https://www.googleapis.com/auth/gmail.readonly
https://www.googleapis.com/auth/gmail.send
https://www.googleapis.com/auth/gmail.settings.basic
https://www.googleapis.com/auth/gmail.settings.sharing
For more information on the Gmail API scopes, see OAuth 2.0 Scopes for Google APIs.
The instructions differ depending on whether you are setting up the Gmail API together with the Admin SDK API as you are collecting other
data types, or you are configuring collection for emails only with the Gmail API.
When you’re only collecting Google emails without the Admin SDK API, click Add New, and set the following settings to define permissions for the Gmail API.
-Client ID—Specify the service account’s Unique ID, which you can obtain from the Service accounts page by clicking the email of the
service account to view further details.
In the OAuth scopes (comma-delimited) field, paste in the first of the Gmail API scopes listed above, and continue adding in the rest of
the scopes.
This ensures that your service account now has domain-wide access to the Google Gmail API for all of the users of your domain.
5. Prepare your service account to impersonate a user with access to the Admin SDK Reports API when collecting any type of data from Google
Workspace except Google emails.
Only users with access to the Admin APIs can access the Admin SDK Reports API. Therefore, your service account needs to be set up to impersonate
one of these users to access the Admin SDK Reports API. This means that when collecting any type of data from Google Workspace except Google
emails, you need to designate a user whose Roles permissions are set to access reports, where Security → Reports is selected. This user’s email will be
required when configuring the Google Workspace data collector in Cortex XDR.
b. From the list of users, select the user configured with the necessary permissions in Admin roles and privileges to view reports, such as a Super Admin, that you want to set up your service account to impersonate.
c. Record the email of this user as you will need it in Cortex XDR.
7. In the Google Workspace configuration, click Add Instance to begin a new configuration.
b. Browse to the JSON file containing your service account key Credentials for the Google Workspace Admin SDK API that you enabled. If you’re
only collecting Google emails, ensure that you Browse to the JSON file containing your service account private key Credentials for the Gmail API
that you enabled.
c. Select the types of data that you want to Collect from Google Workspace.
Google Chrome: Chrome browser and Chrome OS events included in the Chrome activity reports.
Admin Console: Account information about different types of administrator activity events included in the Admin console application's activity
reports.
Google Chat: Chat activity events included in the Chat activity reports.
Enterprise Groups: Enterprise group activity events included in the Enterprise Groups activity reports.
Login: Account information about different types of login activity events included in the Login application's activity reports.
Google drive: Google Drive activity events included in the Google Drive application's activity reports.
Token: Token activity events included in the Token application's activity reports.
User Accounts: Account information about different types of User Accounts activity events included in the User Accounts application's
activity reports.
Alerts: Alerts from the Alert Center API beta version, which is still subject to change.
Emails: Collects email data (not emails reports). All message details except email headers and email content (payload.body,
payload.parts, and snippet).
For more information about the events collected from the various Google Reports, see Google Workspace Reports API Documentation.
For all options selected, except Emails, you must specify the Service Account Email. This is the email account of the user with access to the Admin
SDK Reports API that you prepared your service account to impersonate.
Get Attachment Info from the ingested email, which includes file name, size, and hash calculation.
To test the connection, you must select one or more log types. Cortex XDR then tests the connection settings for the selected log types.
Abstract
The Microsoft 365 email collector fetches emails through Microsoft Graph API, using an authorized app. A compliance mailbox is not required.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
A user account with the Microsoft Azure Account Administrator role is required to set up a new Microsoft 365 email collector.
The Microsoft 365 collector ingests emails and attachment metadata, including email subject and body. Attachment metadata includes data such as name,
type, size, and hash. The actual attached files are not saved.
Distribution List
Mail-enabled Users
Datasets
The Microsoft 365 collector ingests data into the following datasets:
msft_o365_emails_raw
msft_o365_users_raw
msft_o365_groups_raw
msft_o365_devices_raw
msft_o365_mailboxes_raw
msft_o365_rules_raw
Encryption
Cortex XDR stores email metadata as plain text, and encrypts the emails' subject and body. The email body is saved for 48 hours, and then deleted. Analytical detectors analyze raw and encrypted email data, and when necessary, create alerts. When an alert is created for a malicious email, the raw email, including its subject and body (decrypted), is attached to the alert as an artifact. Therefore, you cannot perform threat hunting based on email subject and body. Only email metadata, such as date, From, or To, is available for threat hunting purposes.
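Because hunting is limited to metadata, a starting point is a sketch like the following, which simply returns the most recent ingested email records; you can then project or filter on the metadata fields (such as date, From, or To) once you confirm their exact names in your tenant.
dataset = msft_o365_emails_raw
| sort desc _time
| limit 100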
1. On the Collection Integrations page, locate Microsoft 365, and select Add Instance to begin a new connection.
2. In the wizard that opens, ensure that you have configured the items listed on the Permissions page, and then click Next.
3. To confirm that you know that API authorization consent is required, click OK.
4. Select the Microsoft account from which you want to collect email data.
5. Click Next.
6. Enter your password for the Microsoft account, and click Sign in.
7. If you are asked to perform authentication using your organization's authentication tools, do so.
8. For the list of permissions that Cortex Email Security requires, click Accept.
Entire organization: Emails will be collected from all mailboxes in your organization.
Specific groups: Enter the email addresses of the groups, such as Microsoft 365 Groups, Mail-enabled Security Groups, Distribution Lists, or Mail-enabled Users.
11. On the Details page, enter a meaningful instance name, and click Next.
12. On the Summary page, check your configurations, and then click Create.
After data starts to come in, a green check mark appears below the Microsoft 365 configuration, along with the amount of data received.
Abstract
Ingest logs and data from Microsoft Office 365 Management Activity API and Microsoft Graph API for use in Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Ingesting Microsoft Entra ID (formerly known as Azure AD) authentication and audit events from Microsoft Graph API requires a Microsoft Azure Premium 1 or
Premium 2 license. Alternatively, if the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license
requirement.
Cortex XDR can ingest the following logs and data from Microsoft Office 365 Management Activity API and Microsoft Graph API using the Office 365 data
collector. Alerts are collected with a delay of 5 minutes. If your organization requires collection that is closer to real time, we recommend using the Microsoft Azure Event Hub integration instead.
To ingest email logs and data from Microsoft Office 365, use the dedicated data collector. For more information, see Ingest logs and data from Microsoft 365.
If auditing is turned off (the default setting), you must first turn on auditing for your organization to collect Microsoft Office 365 audit events from the Management Activity API. Log duplication of up to 5% in Microsoft products is considered normal. In some cases, such as logging in to a portal using MFA, two log entries are recorded by design.
Microsoft Entra ID (Azure AD) authentication and audit events from Microsoft Graph API.
When collecting Azure AD Authentication Logs, Cortex XDR also collects by default all sign-in event types from a beta version of Microsoft Graph API, which is still subject to change. Selecting this option allows you to collect additional sign-in event types beyond classic interactive user sign-ins.
To address Azure reporting latency, there is a 10-minute latency period for Cortex XDR to receive Azure AD logs.
Microsoft 365 alerts from Microsoft Graph Security API are available for different products.
Microsoft Graph Security API v1: Alerts from the following products are available via the Microsoft Graph Security API v1:
Microsoft Defender for Cloud, Azure Active Directory Identity Protection, Microsoft Defender for Cloud Apps, Microsoft Defender for
Endpoint, Microsoft Defender for Identity, Microsoft 365, Azure Information Protection, and Azure Sentinel.
Microsoft Graph Security API v2: Alerts (alerts_v2) from the following products are available via the Microsoft Graph Security API v2 beta version,
which is still subject to change:
Microsoft 365 Defender unified alerts API, which serves alerts from Microsoft 365 Defender, Microsoft Defender for Endpoint, Microsoft
Defender for Office 365, Microsoft Defender for Identity, Microsoft Defender for Cloud Apps, and Microsoft Purview Data Loss Prevention
(including any future new signals integrated into M365D).
To view alerts from the various products via the Microsoft Graph Security API versions, you need to ensure that you've set up the applicable licenses in Office 365. The following list shows the licenses that include each Microsoft Defender product. For more information on other Microsoft product licenses, see the Microsoft documentation.
Microsoft Defender for Endpoint Plan 1: Standalone License, E3 License, E3 + Security Add-On License.
Microsoft Defender for Endpoint Plan 2: E3 + Security Add-On License, E5 License, E5 Security License.
Microsoft Defender for Identity: E3 + Security Add-On License, E5 License, E5 Security License.
Microsoft Defender for Office 365 Plan 1: Standalone License.
Microsoft Defender for Office 365 Plan 2: Standalone License, E3 + Security Add-On License, E5 License, E5 Security License.
Microsoft Defender for Cloud Apps: E3 + Security Add-On License, E5 License, E5 Security License, E5 Compliance License.
For more information, see the Office 365 Management Activity API schema.
To receive logs from Microsoft Office 365, you must first configure the Collection Integrations settings in Cortex XDR. After you set up data collection, Cortex
XDR begins receiving new logs and data from the source.
When Cortex XDR begins receiving logs, the app creates a new dataset for the different types of logs and data that you are collecting, which you can use to
initiate XQL Search queries. For example queries, refer to the in-app XQL Library. For all Microsoft Office 365 logs, Cortex XDR can also raise Cortex XDR
alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from Office 365 logs. While Correlation Rules alerts are raised on non-normalized and
normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
For the different types of data you can collect using the Office 365 data collector, the following table lists the different datasets, vendors, and products
automatically configured, and whether the data is normalized.
Exchange Online: msft_o365_exchange_online_raw (Vendor: msft, Product: O365 Exchange Online). Cortex XDR supports normalizing Exchange Online audit logs into stories, which are collected in a dataset called saas_audit_logs*.
SharePoint Online: msft_o365_sharepoint_online_raw (Vendor: msft, Product: O365 Sharepoint Online). Cortex XDR supports normalizing SharePoint Online audit logs into stories, which are collected in a dataset called saas_audit_logs*.
General: msft_o365_general_raw (Vendor: msft, Product: O365 General). Cortex XDR supports normalizing General audit logs into stories, which are collected in a dataset called saas_audit_logs*.
Microsoft Entra ID (Azure AD) authentication events from Microsoft Graph API: msft_azure_ad_raw (Vendor: msft, Product: Azure AD). When relevant, Cortex XDR normalizes Azure AD authentication logs and Azure AD Sign-in logs to authentication stories.
Microsoft Entra ID (Azure AD) audit events from Microsoft Graph API: msft_azure_ad_audit_raw (Vendor: msft, Product: Azure AD Audit). When relevant, Cortex XDR normalizes Azure AD audit logs to cloud audit logs stories.
*Note: For the saas_audit_logs dataset, the Vendor is saas and Product is Audit Logs.
In FedRAMP environments, Azure sign-in logs are not supported, due to vendor technical constraints.
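For example, a minimal sketch of a query over the Microsoft Entra ID (Azure AD) audit events listed above could be the following; it uses only the dataset name from the table and the standard _time field.
dataset = msft_azure_ad_audit_raw
| sort desc _time
| limit 50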
1. From the Microsoft Entra ID console (formerly Azure AD console), create an app for Cortex XDR with the applicable API permissions for the logs and
data you want to collect as detailed in the following table.
Microsoft Office 365 emails via Microsoft’s Graph API: Microsoft Graph → Mail.ReadWrite
Azure AD authentication and audit events from Microsoft Graph API: Microsoft Graph → AuditLog.Read.All
Alerts from Microsoft Graph Security API v1 and v2: Microsoft Graph → SecurityAlert.Read.All
For more information on Microsoft Azure, see the following instructions in the Microsoft documentation portal.
Register an app.
4. Integrate the applicable Microsoft Entra ID (Azure AD) service with Cortex XDR.
These values enable Cortex XDR to authenticate with your Microsoft Entra ID (Azure AD) service.
c. Select the types of logs that you want to receive from Office 365.
Azure AD: Includes a subset of Azure AD audit events and Azure AD authentication events. There can be significant overlap between
these and the Azure AD Authentication Logs originating from Microsoft Graph API.
Use this option when you don’t want to grant permissions for Azure AD Authentication and Azure AD Audit.
Exchange Online: Includes audit logs on Azure Exchange mailboxes and Exchange admin activities on the Office 365 Exchange.
DLP: Includes Microsoft 365 DLP events for Exchange, Sharepoint, and OneDrive.
General: Includes audit logs for various Microsoft 365 applications, such as Power BI and Microsoft Forms.
Azure AD Authentication Logs and Collect all sign-in event types: Azure AD Sign-in logs include by default all sign-in event types from a beta version of Microsoft Graph API, which is still subject to change. Selecting Collect all sign-in event types allows you to collect additional sign-in event types beyond classic interactive user sign-ins.
Azure AD Audit Logs: Azure AD Audit logs includes different categories, such as User Management, Group Management and
Application Management.
Alerts: When this checkbox is selected, alerts from the following products are collected via the Microsoft Graph Security API v1:
Microsoft Defender for Cloud, Azure Active Directory Identity Protection, Microsoft Defender for Cloud Apps, Microsoft Defender
for Endpoint, Microsoft Defender for Identity, Microsoft 365, Azure Information Protection, and Azure Sentinel.
Use Microsoft Graph API v2: When this checkbox is also selected, alerts (alerts_v2) from the following products are only
collected via the Microsoft Graph Security API v2 beta version, which is still subject to change:
Microsoft 365 Defender unified alerts API, which serves alerts from Microsoft 365 Defender, Microsoft Defender for
Endpoint, Microsoft Defender for Office 365, Microsoft Defender for Identity, Microsoft Defender for Cloud Apps, and
Microsoft Purview Data Loss Prevention (including any future new signals integrated into M365D).
Emails: Deprecated. Use the dedicated email collector instead. For more information, see Ingest logs and data from Microsoft 365.
To test the connection, you must select one or more log types. Cortex XDR then tests the connection settings for the selected log types.
Abstract
Ingest authentication logs and data from Okta for use in Cortex XDR authentication stories.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive logs and data from Okta, you must configure the Collection Integrations settings in Cortex XDR. After you set up data collection, Cortex XDR
immediately begins receiving new logs and data from the source. The information from Okta is then searchable in XQL Search using the okta_sso_raw
dataset. In addition, depending on the event type, data is normalized to either xdr_data or saas_audit_logs datasets.
You can collect all types of events from Okta. When setting up the Okta data collector in Cortex XDR, a field called Okta Filter is available to configure collection for events of your choosing. All events are collected by default unless you define an Okta API filter expression for collecting the data, such as filter=eventType eq "user.session.start". For Okta information to be weaved into authentication stories, "user.authentication.sso" events must be collected.
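For example, after the collector is enabled, a minimal sketch of a query over the raw Okta events could look like the following; any additional filtering depends on how your Okta events are structured.
dataset = okta_sso_raw
| sort desc _time
| limit 100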
Before you begin configuring data collection from Okta, ensure your Okta user has administrator privileges with a role that can create API tokens, such as the
read-only administrator, Super administrator, and Organization administrator. For more information, see the Okta Administrators Documentation.
From the Dashboard of your Okta console, note your Org URL.
a. Specify the OKTA DOMAIN (Org URL) that you identified on your Okta console.
c. Specify the Okta Filter to configure collection for events of your choosing. All events are collected by default unless you define an Okta API filter expression for collecting the data, such as filter=eventType eq "user.session.start". For Okta information to be weaved into authentication stories, "user.authentication.sso" events must be collected.
Once events start to come in, a green check mark appears underneath the Okta configuration with the amount of data received.
6. After Cortex XDR begins receiving information from the service, you can Create an XQL Query to search for specific data. When including authentication
events, you can also Create an Authentication Query to search for specific authentication data.
Abstract
Learn how to ingest different types of logs and data from OneLogin.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can ingest different types of data from OneLogin accounts using the OneLogin data collector.
When Cortex XDR begins receiving logs, the app creates a new dataset for the different types of data collected and normalizes the ingested data into
authentication stories, where specific relevant events are collected in the authentication_story preset for the xdr_data dataset. You can search these
datasets using XQL Search queries. For all logs, Cortex XDR can raise Cortex XDR alerts (Analytics, Correlation Rules, IOC, and BIOC), when relevant from
OneLogin logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on
normalized logs.
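For example, to review the normalized OneLogin events that were added to authentication stories, you could start from a sketch such as the following, which relies only on the authentication_story preset named above.
preset = authentication_story
| sort desc _time
| limit 50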
The following table provides a description of the different types of data you can collect, the collection method and fetch interval for the data collected, and the
name of the dataset to use in Cortex Query Language (XQL) queries.
Log collection
Directory
Before you configure Cortex XDR data collection from OneLogin, make sure you have the following.
Owner or administrator permissions in your OneLogin account which enable Cortex XDR to access the OneLogin account and generate the OAuth 2.0
access token.
A Cortex XDR user account with permissions to Read Log Collections, for example an Instance Administrator.
2. Under Administration → Developers → API Credentials, Create a New Credential with scope Read All.
3. In the credential details page, copy the Client ID and the Client Secret, and save them somewhere safe. You will need to provide these keys when you
configure the OneLogin data collector in Cortex XDR.
Client ID: Specify the Client ID for the OneLogin API credential pair.
Secret: Specify the Client Secret for the OneLogin API credential pair.
Collect: Select the types of data to collect. By default, all the options are selected.
Log Collection
Events: Retrieves user logins, administrative operations, provisioning, and OneLogin event types. After normalization, the event types
are enriched with the event name and description.
Directory
7. Test the connection settings. If successful, Enable the OneLogin log collection.
When events start to come in, a green check mark appears underneath the OneLogin configuration.
Abstract
Ingest authentication logs and data from PingFederate for use in Cortex XDR authentication stories.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive authentication logs from PingFederate, you must first write Audit and Provisioner Audit Logs to CEF in PingFederate and then set up a Syslog
Collector in Cortex XDR to receive the logs. After you set up log collection, Cortex XDR immediately begins receiving new authentication logs from the source.
Cortex XDR creates a dataset named ping_identity_pingfederate_raw. Logs from PingFederate are searchable in Cortex Query Language (XQL)
queries using the dataset and surfaced, when relevant, in authentication stories.
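For example, a minimal sketch of a query over the PingFederate dataset created above could be:
dataset = ping_identity_pingfederate_raw
| sort desc _time
| limit 100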
To set up the integration, you must have an account for the PingFederate management dashboard and access to create a subscription for SSO logs.
In your PingFederate deployment, write audit logs in CEF. During this setup, you will need the IP address and port you configured in the Syslog Collector.
3. To search for specific authentication logs or data, you can Create an Authentication Query or use the XQL Search.
Abstract
Ingest authentication logs and data from PingOne for Enterprise for use in Cortex XDR authentication stories.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive authentication logs and data from PingOne for Enterprise, you must first set up a Poll subscription in PingOne and then configure the Collection
Integrations settings in Cortex XDR. After you set up collection integration, Cortex XDR immediately begins receiving new authentication logs and data from the
source. These logs and data are then searchable in Cortex XDR.
To set up the integration, you must have an account for the PingOne management dashboard and access to create a subscription for SSO logs.
1. Select the subscription you just set up and note the part of the poll URL between /reports/ and /poll-subscriptions. This is your PingOne
account ID.
For example:
https://admin-api.pingone.com/v3/reports/1234567890asdfghjk-123456-zxcvbn/poll-subscriptions/***-0912348765-4567-98012***/events
2. Next, note the part of the poll URL between /poll-subscriptions/ and /events. This is your subscription ID.
After configuration is complete, Cortex XDR begins receiving information from the authentication service. From the Integrations page, you can view the
log collection summary.
5. To search for specific authentication logs or data, you can Create an Authentication Query or Create an XQL Query.
Abstract
Learn how to ingest operation and system logs from supported cloud providers into Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can ingest operation and system logs from supported cloud providers into Cortex XDR.
Abstract
Take advantage of Cortex XDR investigation capabilities and set up generic log ingestion for your Amazon S3 logs.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can forward generic logs for the relevant service to Cortex XDR from Amazon S3.
To receive generic data from Amazon Simple Storage Service (Amazon S3), you must first configure data collection from Amazon S3. You can then configure
the Collection Integrations settings in Cortex XDR for Amazon S3. After you set up collection integration, Cortex XDR begins receiving new logs and data from
the source.
For more information on configuring data collection from Amazon S3, see the Amazon S3 Documentation.
As soon as Cortex XDR begins receiving logs, the app automatically creates an Amazon S3 Cortex Query Language (XQL) dataset
(<Vendor>_<Product>_raw). This enables you to search the logs using XQL Search with the dataset. For example queries, refer to the in-app XQL Library.
Cortex XDR can also raise Cortex XDR alerts (Correlation Rules only) when relevant from Amazon S3 logs.
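For example, assuming you keep the default Vendor (AMAZON) and Product (AWS) described later in this section, which would typically yield a dataset named amazon_aws_raw, the logs could be queried with a sketch such as the following.
dataset = amazon_aws_raw
| sort desc _time
| limit 100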
You need to set up an Amazon S3 data collector to receive generic logs when collecting logs from BeyondTrust Privilege Management Cloud. For more
information, see Ingest logs from BeyondTrust Privilege Management Cloud.
Perform the following tasks before you begin configuring data collection from Amazon S3:
Create a dedicated Amazon S3 bucket, which collects the generic logs that you want to capture. For more information, see Creating a bucket using the Amazon S3 Console.
It is your responsibility to define a retention policy for your Amazon S3 bucket by creating a Lifecycle rule in the Management tab. We recommend setting the retention policy to at least 7 days to ensure that the data is retrieved under all circumstances.
The logs collected by your dedicated Amazon S3 bucket must adhere to the following guidelines.
Each log file must use a one-log-per-line format; a multiline format is not supported.
Ensure that you have at a minimum the following permissions in AWS for an Amazon S3 bucket and Amazon Simple Queue Service (SQS).
Determine how you want to provide access to Cortex XDR to your logs and perform API operations. You have the following options:
Designate an AWS IAM user, where you will need to know the Account ID for the user and have the relevant permissions to create an access
key/id for the relevant IAM user.
Create an assumed role in AWS to delegate permissions to a Cortex XDR AWS service. This role grants Cortex XDR access to your logs. For more information, see Creating a role to delegate permissions to an AWS service. This is the Assumed Role option described in the steps for configuring the Amazon S3 collection in Cortex XDR. For more information on creating an assumed role for Cortex XDR, see Create an assumed role.
To collect Amazon S3 logs that use server-side encryption (SSE), the user role must have an IAM policy that states that Cortex XDR has kms:Decrypt
permissions. With this permission, Amazon S3 automatically detects if a bucket is encrypted and decrypts it. If you want to collect encrypted logs from
different accounts, you must have the decrypt permissions for the user role also in the key policy for the master account Key Management Service
(KMS). For more information, see Allowing users in other accounts to use a KMS key.
2. From the menu bar, ensure that you have selected the correct region for your configuration.
Ensure that you create your Amazon S3 bucket and Amazon SQS queue in the same region.
b. Configure the following settings, where the default settings should be configured unless otherwise indicated.
Configuration section: Leave the default settings for the various fields.
Access policy → Choose method: Select Advanced and update the Access policy code in the editor window to enable your Amazon S3
bucket to publish event notification messages to your SQS queue. Use this sample code as a guide for defining the “Statement” with the
following definitions.
-“Resource”: Leave the automatically generated ARN for the SQS queue that is set in the code, which uses the format “arn:aws:sqs:Region:account-id:queue-name”.
You can retrieve your bucket’s ARN by opening the Amazon S3 Console in a browser window. In the Buckets section, select the bucket that
you created for collecting the Amazon S3 flow logs, click Copy ARN, and paste the ARN in the field.
For more information on granting permissions to publish messages to an SQS queue, see Granting permissions to publish event notification
messages to a destination.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "SQS:SendMessage",
"Resource": "[Leave automatically generated ARN for the SQS queue defined by AWS]",
"Condition": {
"ArnLike": {
"aws:SourceArn": "[ARN of your Amazon S3 bucket]"
}
}
}
]
}
Dead-letter queue section: We recommend that you configure a queue for sending undeliverable messages by selecting Enabled, and then
in the Choose queue field selecting the queue to send the messages. You may need to create a new queue for this, if you do not already
have one set up. For more information, see Amazon SQS dead-letter queues.
Once the SQS is created, a message indicating that the queue was successfully configured is displayed at the top of the page.
4. Configure an event notification to your Amazon SQS whenever a file is written to your Amazon S3 bucket.
a. Open the Amazon S3 Console and in the Properties tab of your Amazon S3 bucket, scroll down to the Event notifications section, and click Create
event notification.
Prefix: Do not set a prefix as the Amazon S3 bucket is meant to be a dedicated bucket for collecting only the logs that you want to forward to Cortex XDR.
Event types: Select All object create events for the type of event notifications that you want to receive.
Destination: Select SQS queue to send notifications to an SQS queue to be read by a server.
Specify SQS queue: You can either select Choose from your SQS queues and then select the SQS queue, or select Enter SQS queue ARN
and specify the ARN in the SQS queue field.
You can retrieve your SQS queue ARN by opening another instance of the AWS Management Console in a browser window, and opening the
Amazon SQS Console, and selecting the Amazon SQS that you created. In the Details section, under ARN, click the copy icon, and
paste the ARN in the field.
Once the event notification is created, a message indicating that the event notification was successfully created is displayed at the top of the
page.
If you receive an error when trying to save your changes, ensure that the permissions are set up correctly.
It is the responsibility of your organization to ensure that the user who performs this task of creating the access key is assigned the relevant
permissions. Otherwise, this can cause the process to fail with errors.
Skip this step if you are using an Assumed Role for Cortex XDR.
a. Open the AWS IAM Console, and in the navigation pane, select Access management → Users.
c. Select the Security credentials tab, and scroll down to the Access keys section, and click Create access key.
d. Click the copy icon next to the Access key ID and the Secret access key, where you must click Show secret access key to see the secret key,
and record them somewhere safe before closing the window. You will need to provide these keys when you edit the Access policy of the SQS
queue and when setting the AWS Client ID and AWS Client Secret in Cortex XDR. If you forget to record the keys and close the window, you will
need to generate new keys and repeat this process.
For more information, see Managing access keys for IAM users.
Skip this step if you are using an Assumed Role for Cortex XDR.
a. In the Amazon SQS Console, select the SQS queue that you created when you configured an Amazon Simple Queue Service (SQS).
b. Select the Access policy tab, and Edit the Access policy code in the editor window to enable the IAM user to perform operations on the Amazon
SQS with permissions to SQS:ChangeMessageVisibility, SQS:DeleteMessage, and SQS:ReceiveMessage. Use this sample code as a
guide for defining the “Sid”: “__receiver_statement” with the following definitions.
“Resource”: Leave the automatically generated ARN for the SQS queue that is set in the code, which uses the format “arn:aws:sqs:Region:account-id:queue-name”.
For more information on granting permissions to publish messages to an SQS queue, see Granting permissions to publish event notification
messages to a destination.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "SQS:SendMessage",
"Resource": "[Leave automatically generated ARN for the SQS queue defined by AWS]",
"Condition": {
"ArnLike": {
"aws:SourceArn": "[ARN of your Amazon S3 bucket]"
}
}
},
{
"Sid": "__receiver_statement",
"Effect": "Allow",
"Principal": {
"AWS": "[Add the ARN for the AWS IAM user]"
},
"Action": [
"SQS:ChangeMessageVisibility",
"SQS:DeleteMessage",
"SQS:ReceiveMessage"
],
"Resource": "[Leave automatically generated ARN for the SQS queue defined by AWS]"
}
]
}
c. Set these parameters, where the parameters change depending on whether you configured an Access Key or Assumed Role.
SQS URL: Specify the SQS URL, which is the ARN of the Amazon SQS that you configured in the AWS Management Console.
AWS Client ID: Specify the Access key ID, which you received when you configured access keys for the AWS IAM user in AWS.
AWS Client Secret: Specify the Secret access key you received when you configured access keys for the AWS IAM user in AWS.
Role ARN: Specify the Role ARN for the Assumed Role you created for Cortex XDR in AWS.
External Id: Specify the External Id for the Assumed Role you created for Cortex XDR in AWS.
Log Type: Select Generic to configure your log collection to receive generic logs from Amazon S3, which can include different types of data,
such as file and metadata. When selecting this option, the following additional fields are displayed.
Log Format: Select the log format type as Raw, JSON, CEF, LEEF, Cisco, Corelight, or Beyondtrust Cloud ECS.
-The Vendor and Product default to Auto-Detect when the Log Format is set to CEF or LEEF.
-For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the logs.
When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the Vendor and
Product fields in the Amazon S3 data collector settings. Yet, when the values are blank in the event log row, Cortex XDR uses the
Vendor and Product that you specified in these fields in the Amazon S3 data collector settings. If you did not specify a Vendor or
Product in the Amazon S3 data collector settings, and the values are blank in the event log row, the values for both fields are set to
unknown.
For a Log Format set to Beyondtrust Cloud ECS, the following fields are automatically set and are not configurable:
-Vendor: Beyondtrust
-Compression: Uncompressed
For more information, see Ingest logs from BeyondTrust Privilege Management Cloud.
For a Log Format set to Cisco, the following fields are automatically set and not configurable.
-Vendor: Cisco
-Product: ASA
For a Log Format set to Corelight, the following fields are automatically set and not configurable:
-Vendor: Corelight
-Product: Zeek
For a Log Format set to Raw or JSON, the following fields are automatically set and are configurable.
-Vendor: AMAZON
-Product: AWS
Cortex XDR supports logs in single line format or multiline format. For a JSON format, multiline logs are collected automatically when
the Log Format is configured as JSON. When configuring a Raw format, you must also define the Multiline Parsing Regex as explained
below.
Vendor: (Optional) Specify a particular vendor name for the Amazon S3 generic data collection, which is used in the Amazon S3 XQL
dataset <Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Product: (Optional) Specify a particular product name for the Amazon S3 generic data collection, which is used in the Amazon S3 XQL
dataset name <Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Compression: Select whether the logs are compressed into a gzip file or are uncompressed.
Multiline Parsing Regex: (Optional) This option is only displayed when the Log Format is set to Raw. Set the regular expression that identifies where a new event starts in multiline logs. It is assumed that when a new event begins, the previous one has ended.
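For example, if every event in a Raw log starts with a timestamp such as 2023-01-31 13:45:10, a Multiline Parsing Regex along the lines of ^\d{4}-\d{2}-\d{2} (illustrative only) marks the start of each new event.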
Once events start to come in, a green check mark appears underneath the Amazon S3 configuration with the number of logs received.
Abstract
Take advantage of Cortex XDR investigation capabilities and set up generic or EKS log ingestion for your Amazon CloudWatch logs.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can forward generic and Elastic Kubernetes Service (EKS) logs to Cortex XDR from Amazon CloudWatch. When forwarding EKS logs, the following log
types are included:
You can ingest generic logs in raw or JSON format from Amazon Kinesis Firehose. EKS logs are automatically ingested in a JSON format from
Amazon Kinesis Firehose. To enable log forwarding, you set up Amazon Kinesis Firehose and then add it to your Amazon CloudWatch configuration. After
you complete the setup process, logs from the respective service are searchable in Cortex XDR to provide additional information and context to your
investigations.
As soon as Cortex XDR begins receiving logs, the application automatically creates one of the following Cortex Query Language (XQL) datasets depending on
the type of logs you've configured:
Generic: <Vendor>_<Product>_raw
EKS: amazon_eks_raw
These datasets enable you to search the logs in XQL Search. For example queries, refer to the in-app XQL Library. For enhanced cloud protection, you can
also configure Cortex XDR to normalize EKS audit logs, which you can query with XQL Search using the cloud_audit_logs dataset. Cortex XDR can also
raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from AWS logs. While Correlation Rules alerts are raised on non-
normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Cloud-tailored investigations
To set up the Amazon CloudWatch integration, you require certain permissions in AWS. You need a role that grants access to configure Amazon Kinesis
Firehose.
d. Select the Log Type as one of the following, where your selection changes the options displayed:
Log Format: Choose the format of the data input source (CloudWatch) that you'll export to Cortex XDR, either JSON or Raw.
Specify the Vendor and Product for the type of generic logs you are ingesting.
The vendor and product are used to define the name of your XQL dataset (<Vendor>_<Product>_raw). If you do not define a
vendor or product, Cortex XDR uses the default values of Amazon and AWS with the resulting dataset name as amazon_aws_raw. To
uniquely identify the log source, consider changing the values.
EKS: When selecting this log type, the following options are displayed:
The Vendor is automatically set to Amazon and Product to EKS, and these values are not configurable. This means that all data for the EKS logs,
whether it's normalized or not, can be queried in XQL Search using the amazon_eks_raw dataset.
(Optional) You can decide whether to Normalize and enrich audit logs as part of the enhanced cloud protection by selecting the
checkbox (default). If selected, Cortex XDR is configured to normalize EKS audit logs, which you can query with XQL Search using the
cloud_audit_logs dataset.
Click the copy icon next to the key and record it somewhere safe. You will need to provide this key when you set up output settings in AWS Kinesis
Firehose. If you forget to record the key and close the window, you will need to generate a new key and repeat this process.
a. Log in to the AWS Management Console, and open the Kinesis console.
Delivery stream name: Enter a descriptive name for your stream configuration.
Server-side encryption for source records in the delivery stream: Ensure this option is disabled.
Transform source records with AWS Lambda: Set the Data Transformation as Disabled.
Choose HTTP Endpoint as the destination and configure the HTTP endpoint configuration settings:
HTTP endpoint name: Specify the name you used to identify your AWS log collection configuration in Cortex XDR.
HTTP endpoint URL: Copy the API URL associated with your log collection from the Cortex XDR management console. The URL will include
your tenant name (https://api-<tenant external URL>/logs/v1/aws).
Access key: Paste in the token key you recorded earlier during the configuration of your Cortex XDR log collection settings.
Content encoding: Select GZIP. Disabling content encoding may result in high egress costs.
S3 bucket: Set the S3 backup mode as Failed data only. For the S3 bucket, we recommend that you create a dedicated bucket for Cortex
XDR integration.
HTTP endpoint buffer conditions: Set the Buffer size as 1 MiB and the Buffer interval as 60 seconds.
S3 buffer conditions: Use the default settings for Buffer size as 5 MiB and Buffer interval as 300 seconds unless you have alternative sizing
preferences.
S3 compression and encryption: Choose your desired compression and encryption settings.
Select Next.
When your delivery stream is ready, the status changes from Creating to Active.
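For reference only, a delivery stream with the settings described above could also be created with the AWS CLI along the following lines. This is a sketch rather than the exact configuration the guide assumes; the stream name, IAM role ARN, backup bucket, endpoint name, and token key are placeholders.
# Illustrative only: write the HTTP endpoint destination definition used below.
cat > firehose-http-destination.json <<'EOF'
{
  "EndpointConfiguration": {
    "Url": "https://api-<tenant external URL>/logs/v1/aws",
    "Name": "<HTTP endpoint name used in Cortex XDR>",
    "AccessKey": "<token key recorded from Cortex XDR>"
  },
  "RequestConfiguration": { "ContentEncoding": "GZIP" },
  "BufferingHints": { "SizeInMBs": 1, "IntervalInSeconds": 60 },
  "S3BackupMode": "FailedDataOnly",
  "RoleARN": "<IAM role ARN that Firehose assumes>",
  "S3Configuration": {
    "RoleARN": "<IAM role ARN that Firehose assumes>",
    "BucketARN": "arn:aws:s3:::<dedicated backup bucket>"
  }
}
EOF
# Create the delivery stream with a direct PUT source and the HTTP endpoint destination.
aws firehose create-delivery-stream \
    --delivery-stream-name <delivery stream name> \
    --delivery-stream-type DirectPut \
    --http-endpoint-destination-configuration file://firehose-http-destination.json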
3. To begin forwarding logs, add the Kinesis Firehose instance to your Amazon CloudWatch configuration.
Return to the Integrations page and view the statistics for the log collection configuration.
5. After Cortex XDR begins receiving logs from your Amazon services, you can use the XQL Search to search for logs in the new dataset.
Abstract
If you use the Pub/Sub messaging service from Google Cloud Platform (GCP), you can send logs and data from GCP to Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use the Pub/Sub messaging service from Google Cloud Platform (GCP), you can send logs and data from your GCP instance to Cortex XDR. Data from
GCP is then searchable in Cortex XDR to provide additional information and context to your investigations using the GCP Cortex Query Language (XQL)
dataset, which is dependent on the type of GCP logs collected. For example queries, refer to the in-app XQL Library. You can configure a Google Cloud
Platform collector to receive generic, flow, audit, or Google Cloud DNS logs. When configuring generic logs, you can receive logs in a Raw, JSON, CEF, LEEF,
Cisco, or Corelight format.
You can also configure Cortex XDR to normalize different GCP logs as part of the enhanced cloud protection, which you can query with XQL Search using the
applicable dataset. Cortex XDR can also raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from GCP logs. While Correlation
Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Cloud-tailored investigations
The following table lists the various GCP log types and the XQL datasets you can use to query them in XQL Search:
Generic logs (by Log Format):
Cisco: cisco_asa_raw
Corelight: corelight_zeek_raw
JSON or Raw: google_cloud_logging_raw
Google Cloud DNS logs: google_dns_raw and xdr_data. Once configured, Cortex XDR ingests Google Cloud DNS logs as XDR network connection stories, which you can query with XQL Search using the xdr_data dataset with the preset called network_story.
Network flow logs: google_cloud_logging_raw and xdr_data. Once configured, Cortex XDR ingests network flow logs as XDR network connection stories, which you can query with XQL Search using the xdr_data dataset with the preset called network_story.
When collecting flow logs, we recommend that you include GKE annotations in your logs, which enable you to view the names of the containers that
communicated with each other. GKE annotations are only included in logs if appended manually using the custom metadata configuration in GCP. For more
information, see VPC Flow Logs Overview. In addition, to customize metadata fields, you must use the gcloud command-line interface or the API. For more
information, see Using VPC Flow Logs.
To receive logs and data from GCP, you must first set up log forwarding using a Pub/Sub topic in GCP. You can configure GCP settings using either the GCP
web interface or a GCP cloud shell terminal. After you set up your service account in GCP, you configure the Data Collection settings in Cortex XDR. The setup
process requires the subscription name and authentication key from your GCP instance.
After you set up log collection, Cortex XDR immediately begins receiving new logs and data from GCP.
d. Browse to the JSON file containing your authentication key for the service account.
e. Select the Log Type as one of the following, where your selection changes the options displayed.
(Optional) You can Normalize and enrich flow and audit logs by selecting the checkbox (default). If selected, Cortex XDR ingests the
network flow logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset
with the preset called network_story. In addition, you can configure Cortex XDR to normalize GCP audit logs, which you can query
with XQL Search using the cloud_audit_logs dataset.
The Vendor is automatically set to Google and Product to Cloud Logging, which is not configurable. This means that all GCP data for
the flow and audit logs, whether it's normalized or not, can be queried in XQL Search using the google_cloud_logging_raw
dataset.
Generic: When selecting this log type, you can configure the following settings.
Log Format: Select the log format type as Raw, JSON, CEF, LEEF, Cisco, or Corelight.
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the
logs. When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the
Vendor and Product fields in the GCP data collector settings. Yet, when the values are blank in the event log row, Cortex XDR
uses the Vendor and Product that you specified in the GCP data collector settings. If you did not specify a Vendor or Product in
the GCP data collector settings, and the values are blank in the event log row, the values for both fields are set to unknown.
Cisco: The following fields are automatically set and not configurable.
Vendor: Cisco
Product: ASA
Cisco data can be queried in XQL Search using the cisco_asa_raw dataset.
Corelight: The following fields are automatically set and not configurable.
Vendor: Corelight
Product: Zeek
Corelight data can be queried in XQL Search using the corelight_zeek_raw dataset.
Raw or JSON: The following fields are automatically set and are configurable.
Vendor: Google
Raw or JSON data can be queried in XQL Search using the google_cloud_logging_raw dataset.
Cortex XDR supports logs in single line format or multiline format. For a JSON format, multiline logs are collected automatically
when the Log Format is configured as JSON. When configuring a Raw format, you must also define the Multiline Parsing Regex
as explained below.
Vendor: (Optional) Specify a particular vendor name for the GCP generic data collection, which is used in the GCP XQL dataset
<Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Product: (Optional) Specify a particular product name for the GCP generic data collection, which is used in the GCP XQL dataset
name <Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Multiline Parsing Regex: (Optional) This option is only displayed when the Log Format is set to Raw. Set the regular expression that identifies where a new event starts in multiline logs. It is assumed that when a new event begins, the previous one has ended.
Google Cloud DNS: When selecting this log type, you can configure whether to normalize the logs as part of the enhanced cloud protection.
(Optional) You can Normalize DNS logs by selecting the checkbox (default). If selected, Cortex XDR ingests the Google Cloud DNS
logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset with the
preset called network_story.
The Vendor is automatically set to Google and Product to DNS, which is not configurable. This means that all Google Cloud DNS logs,
whether they're normalized or not, can be queried in XQL Search using the google_dns_raw dataset.
f. Test the provided settings and, if successful, proceed to Enable log collection.
c. To filter only specific types of data, select the filter or desired resource.
f. Enter a descriptive Name that identifies the sink purpose for Cortex XDR, and then Create.
a. Select the hamburger menu in G Cloud and then select Pub/Sub → Topics.
b. Select the name of the topic you created in the previous steps. Use the filters if necessary.
After the subscription is set up, G Cloud displays statistics and settings for the service.
Optionally, use the copy button to copy the name to the clipboard. You will need the name when you configure Collection in Cortex XDR.
You will use the key to enable Cortex XDR to authenticate with the subscription service.
a. Select the menu icon, and then select IAM & Admin → Service Accounts.
f. Locate the service account by name, using the filters to refine the results, if needed.
g. Click the Actions menu identified by the three dots in the row for the service account and then Create Key.
After you create the service account key, G Cloud automatically downloads it.
5. After Cortex XDR begins receiving information from the GCP Pub/Sub service, you can use the XQL Query language to search for specific data.
d. Browse to the JSON file containing your authentication key for the service account.
e. Select the Log Type as one of the following, where your selection changes the options displayed.
(Optional) You can Normalize and enrich flow and audit logs by selecting the checkbox (default). If selected, Cortex XDR ingests the
network flow logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset
with the preset called network_story. In addition, you can configure Cortex XDR to normalize GCP audit logs, which you can query
with XQL Search using the cloud_audit_logs dataset.
The Vendor is automatically set to Google and Product to Cloud Logging, which is not configurable. This means that all GCP data for
the flow and audit logs, whether it's normalized or not, can be queried in XQL Search using the google_cloud_logging_raw
dataset.
Generic: When selecting this log type, you can configure the following settings.
Log Format: Select the log format type as Raw, JSON, CEF, LEEF, Cisco, or Corelight.
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the
logs. When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the
Vendor and Product fields in the GCP data collector settings. Yet, when the values are blank in the event log row, Cortex XDR
uses the Vendor and Product that you specified in the GCP data collector settings. If you did not specify a Vendor or Product in
the GCP data collector settings, and the values are blank in the event log row, the values for both fields are set to unknown.
Cisco: The following fields are automatically set and not configurable.
Vendor: Cisco
Product: ASA
Cisco data can be queried in XQL Search using the cisco_asa_raw dataset.
Corelight: The following fields are automatically set and not configurable.
Vendor: Corelight
Product: Zeek
Corelight data can be queried in XQL Search using the corelight_zeek_raw dataset.
Raw or JSON: The following fields are automatically set and are configurable.
Vendor: Google
Raw or JSON data can be queried in XQL Search using the google_cloud_logging_raw dataset.
Cortex XDR supports logs in single line format or multiline format. For a JSON format, multiline logs are collected automatically
when the Log Format is configured as JSON. When configuring a Raw format, you must also define the Multiline Parsing Regex
as explained below.
Vendor: (Optional) Specify a particular vendor name for the GCP generic data collection, which is used in the GCP XQL dataset
<Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Product: (Optional) Specify a particular product name for the GCP generic data collection, which is used in the GCP XQL dataset
name <Vendor>_<Product>_raw that Cortex XDR creates as soon as it begins receiving logs.
Multiline Parsing Regex: (Optional) This option is only displayed when the Log Format is set to Raw. Set the regular expression that identifies where a new event starts in multiline logs. It is assumed that when a new event begins, the previous one has ended.
Google Cloud DNS: When selecting this log type, you can configure whether to normalize the logs as part of the enhanced cloud protection.
(Optional) You can Normalize DNS logs by selecting the checkbox (default). If selected, Cortex XDR ingests the Google Cloud DNS
logs as Cortex XDR network connection stories, which you can query using XQL Search from the xdr_data dataset with the
preset called network_story.
The Vendor is automatically set to Google and Product to DNS, which is not configurable. This means that all Google Cloud DNS logs,
whether they're normalized or not, can be queried in XQL Search using the google_dns_raw dataset.
f. Test the provided settings and, if successful, proceed to Enable log collection.
1. Launch the GCP cloud shell terminal or use your preferred shell with gcloud installed.
Note the subscription name you define in this step as you will need it to set up log ingestion from Cortex XDR.
During the logging sink creation, you can also define additional log filters to exclude specific logs. To filter logs, supply the optional parameter --log-
filter=<LOG_FILTER>
If setup is successful, the console displays a summary of your log sink settings:
Note the serviceAccount name from the previous step and use it to define the service for which you want to grant publish access.
For example, use cortex-xdr-sa as the service account name and Cortex XDR Service Account as the display name.
You will need the JSON file to enable Cortex XDR to authenticate with the GCP service. Specify the file destination and filename using a .json extension.
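As an illustrative sketch of the cloud shell flow described above, the commands typically look like the following. The topic, sink, subscription, and file names are placeholders, and the pubsub.subscriber role binding is an assumption about the access the service account needs rather than a value taken from this guide.
# Create the Pub/Sub topic and a log sink that publishes to it.
gcloud pubsub topics create <topic-name>
gcloud logging sinks create <sink-name> \
    pubsub.googleapis.com/projects/<project-id>/topics/<topic-name> \
    --log-filter=<LOG_FILTER>
# Allow the sink's writer identity (the serviceAccount shown in the sink summary) to publish.
gcloud pubsub topics add-iam-policy-binding <topic-name> \
    --member="serviceAccount:<writer identity from the sink summary>" \
    --role=roles/pubsub.publisher
# Create the subscription that Cortex XDR will read from.
gcloud pubsub subscriptions create <subscription-name> --topic=<topic-name>
# Create the service account and the JSON key used to authenticate Cortex XDR.
gcloud iam service-accounts create cortex-xdr-sa --display-name="Cortex XDR Service Account"
# (Assumption) grant the service account permission to pull messages from the subscription.
gcloud pubsub subscriptions add-iam-policy-binding <subscription-name> \
    --member="serviceAccount:cortex-xdr-sa@<project-id>.iam.gserviceaccount.com" \
    --role=roles/pubsub.subscriber
gcloud iam service-accounts keys create <path>/cortex-xdr-key.json \
    --iam-account=cortex-xdr-sa@<project-id>.iam.gserviceaccount.com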
10. After Cortex XDR begins receiving information from the GCP Pub/Sub service, you can use the XQL Query language to search for specific data.
Abstract
Forward your Google Kubernetes Engine (GKE) logs directly to Cortex XDR using Elasticsearch Filebeat.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Instead of forwarding Google Kubernetes Engine (GKE) logs directly to Google Stackdriver, Cortex XDR can ingest container logs from GKE using
Elasticsearch Filebeat. To receive logs, you must install Filebeat on your containers and enable Data Collection settings for Filebeat.
After Cortex XDR creates the dataset, you can search your GKE logs using XQL Search.
Record your token key and API URL for the Filebeat Collector instance as you will need these later in this workflow.
This ensures there is a running instance of Filebeat on each node of the cluster.
a. Download the manifest file to a location where you can edit it.
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/kubernetes/filebeat-kubernetes.yaml
d. For the output.elasticsearch configuration, replace the hosts, username, and password with environment variable references for hosts
and api_key, and add a field and value for compression_level and bulk_max_size.
e. In the DaemonSet configuration, locate the env configuration and replace ELASTIC_CLOUD_AUTH, ELASTIC_CLOUD_ID,
ELASTICSEARCH_USERNAME, ELASTICSEARCH_PASSWORD, ELASTICSEARCH_HOST, ELASTICSEARCH_PORT and their relative values with the
following.
ELASTICSEARCH_ENDPOINT: Specify the API URL for your Cortex XDR tenant. You can copy the URL from the Filebeat Collector instance
you set up for GKE in the Cortex XDR management console (Settings → ( ) → Configurations → Data Collection → Custom Collectors →
Copy API URL). The URL will include your tenant name (https://api-<tenant external URL>:443/logs/v1/filebeat).
ELASTICSEARCH_API_KEY: Specify the token key you recorded earlier during the configuration of your Filebeat Collector instance.
After you configure these settings, your configuration should look like the following image.
4. If you use RedHat OpenShift, you must also specify additional settings.
See https://www.elastic.co/guide/en/beats/filebeat/7.10/running-on-kubernetes.html.
This deploys Filebeat in the kube-system namespace. If you want to deploy the Filebeat configuration in other namespaces, change the namespace
values in the YAML file (in any YAML inside this file) and add -n <your_namespace>.
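A minimal sketch of deploying and verifying the DaemonSet, assuming the default resource names and labels in the Elastic manifest (filebeat and k8s-app=filebeat):
# Deploy Filebeat into the kube-system namespace (add -n <your_namespace> if you changed it).
kubectl apply -f filebeat-kubernetes.yaml
# Confirm one Filebeat pod is running per node.
kubectl get daemonset filebeat -n kube-system
kubectl get pods -n kube-system -l k8s-app=filebeat
# Check the Filebeat logs for connection errors to the Cortex XDR endpoint.
kubectl logs -n kube-system -l k8s-app=filebeat --tail=20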
Cortex XDR supports logs in single line format or multiline format. For more information on handling messages that span multiple lines of text in
Elasticsearch Filebeat, see Manage Multiline Messages.
6. After Cortex XDR begins receiving logs from GKE, you can use the XQL Search to search for logs in the new dataset.
Abstract
Ingest logs from Microsoft Azure Event Hub with an option to ingest audit logs to use in Cortex XDR authentication stories.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can ingest different types of data from Microsoft Azure Event Hub using the Microsoft Azure Event Hub data collector. To receive logs from Azure
Event Hub, you must configure the settings in Cortex XDR based on your Microsoft Azure Event Hub configuration. After you set up data collection, Cortex
XDR begins receiving new logs and data from the source.
When Cortex XDR begins receiving logs, the app creates a new dataset (MSFT_Azure_raw) that you can use to initiate XQL Search queries. For example
queries, refer to the in-app XQL Library. For enhanced cloud protection, you can also configure Cortex XDR to normalize Azure Event Hub audit logs, including
Azure Kubernetes Service (AKS) audit logs, with other Cortex XDR authentication stories across all cloud providers using the same format, which you can
query with XQL Search using the cloud_audit_logs dataset. For logs that you do not configure Cortex XDR to normalize, you can change the default
dataset. Cortex XDR can also raise Cortex XDR alerts (Analytics, IOC, BIOC, and Correlation Rules) when relevant from Azure Event Hub logs. While
Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are only raised on normalized logs.
Cloud-tailored investigations
In an existing Event Hub integration, do not change the mapping to a different Event Hub.
Do not use the same Event Hub for more than two purposes.
The following table provides a brief description of the different types of Azure audit logs you can collect.
For more information on Azure Event Hub audit logs, see Overview of Azure platform logs.
Activity logs: Retrieves events related to the operations on each Azure resource in the subscription from the outside, in addition to updates on Service Health events.
Azure Active Directory (AD) Activity logs and Azure Sign-in logs: Contain the history of sign-in activity and the audit trail of changes made in Azure AD for a particular tenant. Even though you can collect Azure AD Activity logs and Azure Sign-in logs using the Azure Event Hub data collector, we recommend using the Microsoft 365 data collector, because it is easier to configure. In addition, ensure that you don't configure both collectors to collect the same types of logs, because if you do so, you will be creating duplicate data in Cortex XDR.
Resource logs, including AKS audit logs: Retrieves events related to operations that were performed within an Azure resource. These logs are from the data plane.
Ensure that you do the following tasks before you begin configuring data collection from Azure Event Hub.
Create an Azure Event Hub. We recommend using a dedicated Azure Event Hub for this Cortex XDR integration. For more information, see Quickstart:
Create an event hub using Azure portal.
Ensure the format for the logs you want collected from the Azure Event Hub is either JSON or raw.
1. In the Microsoft Azure console, open the Event Hubs page, and select the Azure Event Hub that you created for collection in Cortex XDR.
2. Record the following parameters from your configured event hub, which you will need when configuring data collection in Cortex XDR.
3. In the Consumer group table, copy the applicable value listed in the Name column for your Cortex XDR data collection configuration.
Your storage account connection string is required for partition lease management and checkpointing in Cortex XDR.
1. Open the Storage accounts page, and either create a new storage account or select an existing one, which will contain the storage account
connection string.
3. Configure diagnostic settings for the relevant log types you want to collect and then direct these diagnostic settings to the designated Azure Event Hub.
Activity logs: Select Azure services → Activity log → Export Activity Logs, and +Add diagnostic setting.
Azure AD Activity logs and Azure Sign-in logs:
1. Select Azure services → Azure Active Directory.
2. Select Monitoring → Diagnostic settings, and +Add diagnostic setting.
Resource logs, including AKS audit logs:
1. Search for Monitor, and select Settings → Diagnostic settings.
2. From your list of available resources, select the resource that you want to configure for log collection, and then select +Add diagnostic setting.
For every resource that you want to configure, you'll have to repeat this step, or use Azure Policy for a general configuration.
Logs Categories/Metrics: The options listed are dependent on the type of logs you want to configure. For Activity logs and Azure AD logs
and Azure Sign-in logs, the option is called Logs Categories, and for Resource logs it's called Metrics.
Activity logs: Select from the list of applicable Activity log categories the ones that you want to configure your designated resource to collect. We recommend selecting all of the options:
Administrative
Security
ServiceHealth
Alert
Recommendation
Policy
Autoscale
ResourceHealth
Azure AD Activity logs and Azure Sign-in logs: Select from the list of applicable Azure AD Activity and Azure Sign-in Logs Categories the ones that you want to configure your designated resource to collect. You can select any of the following categories to collect these types of Azure logs:
AuditLogs
SignInLogs
NonInteractiveUserSignInLogs
ServicePrincipalSignInLogs
ManagedIdentitySignInLogs
ADFSSignInLogs
There are additional log categories displayed. We recommend selecting all the available options.
Resource logs, including AKS audit logs: The list displayed is dependent on the resource that you selected. We recommend selecting all the options available for the resource.
Destination details: Select Stream to event hub, where additional parameters are displayed that you need to configure. Ensure that you set
the following parameters using the same settings for the Azure Event Hub that you created for the collection.
Subscription: Select the applicable Subscription for the Azure Event Hub.
Event hub namespace: Select the applicable Event Hubs namespace for the Azure Event Hub.
(Optional) Event hub name: Specify the name of your Azure Event Hub.
Event hub policy: Select the applicable Event hub policy for your Azure Event Hub.
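For resource logs, an equivalent diagnostic setting can also be created from the command line. This is a sketch only; the setting name, resource ID, Event Hub details, and the kube-audit category are placeholders or example values, not settings mandated by this guide.
# Illustrative only: stream one resource's logs to the Event Hub created for Cortex XDR.
az monitor diagnostic-settings create \
    --name cortex-xdr-export \
    --resource <resource ID of the resource to collect logs from> \
    --event-hub <event hub name> \
    --event-hub-rule <resource ID of the Event Hub namespace authorization rule> \
    --logs '[{"category": "kube-audit", "enabled": true}]'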
b. In the Azure Event Hub configuration, click Add Instance to begin a new configuration.
Event Hub Connection String: Specify your event hub’s connection string for the designated policy.
Storage Account Connection String: Specify your storage account’s connection string for the designated policy.
Log Format: Select the log format for the logs collected from the Azure Event Hub as Raw, JSON, CEF, LEEF, Cisco-asa, or Corelight.
When you Normalize and enrich audit logs, the log format is automatically configured. As a result, the Log Format option is removed and is
no longer available to configure (default).
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the logs.
When the values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the Vendor and
Product fields in the Azure Event Hub data collector settings. Yet, when the values are blank in the event log row, Cortex XDR uses the
Vendor and Product that you specified in the Azure Event Hub data collector settings. If you did not specify a Vendor or Product in the
Azure Event Hub data collector settings, and the values are blank in the event log row, the values for both fields are set to unknown.
Cisco-asa: The following fields are automatically set and not configurable.
Vendor: Cisco
Product: ASA
Cisco data can be queried in XQL Search using the cisco_asa_raw dataset.
Corelight: The following fields are automatically set and not configurable.
Vendor: Corelight
Product: Zeek
Corelight data can be queried in XQL Search using the corelight_zeek_raw dataset.
Raw or JSON: The following fields are automatically set and are configurable.
Vendor: Msft
Product: Azure
Raw or JSON data can be queried in XQL Search using the msft_azure_raw dataset.
Vendor and Product: Specify the Vendor and Product for the type of logs you are ingesting.
The Vendor and Product are used to define the name of your Cortex Query Language (XQL) dataset (<vendor>_<product>_raw). The
Vendor and Product values vary depending on the Log Format selected. To uniquely identify the log source, consider changing the values if
the values are configurable.
When you Normalize and enrich audit logs, the Vendor and Product fields are automatically configured, so these fields are removed as
available options (default).
Normalize and enrich audit logs: (Optional) For enhanced cloud protection, you can Normalize and enrich audit logs by selecting the
checkbox (default). If selected, Cortex XDR normalizes and enriches Azure Event Hub audit logs with other Cortex XDR authentication
stories across all cloud providers using the same format. You can query this normalized data with XQL Search using the
cloud_audit_logs dataset.
When events start to come in, a green check mark appears underneath the Azure Event Hub configuration with the amount of data received.
Abstract
Ingest authentication logs and data from Okta for use in Cortex XDR authentication stories.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive logs and data from Okta, you must configure the Collection Integrations settings in Cortex XDR. After you set up data collection, Cortex XDR
immediately begins receiving new logs and data from the source. The information from Okta is then searchable in XQL Search using the okta_sso_raw
dataset. In addition, depending on the event type, data is normalized to either xdr_data or saas_audit_logs datasets.
You can collect all types of events from Okta. When setting up the Okta data collector in Cortex XDR, a field called Okta Filter is available to configure
collection for events of your choosing. All events are collected by default unless you define an Okta API Filter expression for collecting the data, such as
filter=eventType eq "user.session.start".
Since the Okta API enforces concurrent rate limits, the Okta data collector is built with a mechanism to reduce the number of requests whenever an error is
received from the Okta API indicating that too many requests have already been sent. In addition, to ensure you are properly notified about this, an alert is
displayed in the Notification Area and a record is added to the Management Audit Logs.
Before you begin configuring data collection from Okta, ensure your Okta user has administrator privileges with a role that can create API tokens, such as the
read-only administrator, Super administrator, and Organization administrator. For more information, see the Okta Administrators Documentation.
From the Dashboard of your Okta console, note your Org URL.
a. Specify the OKTA DOMAIN (Org URL) that you identified on your Okta console.
c. Specify the Okta Filter to configure collection for events of your choosing. All events are collected by default unless you define an Okta API Filter
expression for collecting the data, such as filter=eventType eq "user.session.start". For Okta information to be woven into
authentication stories, "user.authentication.sso" events must be collected.
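As an illustrative way to verify the API token and a filter expression against the Okta System Log API before saving the collector (the org URL and token are placeholders, and limit=5 is only a small sample size):
# Illustrative only: query the Okta System Log API with the same filter syntax.
curl -s --get "https://<your-okta-org>.okta.com/api/v1/logs" \
    --header "Authorization: SSWS <API token>" \
    --data-urlencode 'filter=eventType eq "user.authentication.sso"' \
    --data-urlencode 'limit=5'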
Once events start to come in, a green check mark appears underneath the Okta configuration with the amount of data received.
6. After Cortex XDR begins receiving information from the service, you can Create an XQL Query to search for specific data. When including authentication
events, you can also Create an Authentication Query to search for specific authentication data.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR does not support ingestion of EDR data from third-party vendors. Consider upgrading to Cortex XSIAM in order to ingest EDR data from third-party
vendors.
Windows Events and other data using other Broker VM data collector applets
Abstract
You can ingest cloud assets from different third-party sources using Cortex XDR.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Abstract
Ingestion of cloud assets from AWS requires a Cortex XDR Pro per GB license.
Cortex XDR provides a unified, normalized asset inventory for cloud assets in AWS. This capability provides deeper visibility into all the assets and superior
context for incident investigation.
To receive cloud assets from AWS, you must configure the Collection Integrations settings in Cortex XDR using the Cloud Inventory data collector to configure
the AWS wizard. The AWS wizard includes instructions to be completed both in AWS and the AWS wizard screens. After you set up data collection, Cortex XDR
begins receiving new data from the source.
As soon as Cortex XDR begins receiving cloud assets, you can view the data in Assets → Cloud Inventory, where All Assets and Specific Cloud Assets pages
display the data in a table format.
c. Click AWS.
Setting the connection parameters on the right-side of the screen is dependent on certain configurations in AWS as explained below.
a. Select the Organization Level as either Account (default), Organization, or Organization Unit. The Organization Level that you select changes the
instructions and fields displayed on the screen.
c. Create a stack called XDRCloudApp using the preset Cortex XDR template in AWS.
The following details are automatically filled in for you in the AWS CloudFormation stack template:
CortexXDRRoleName: The name of the role that will be used by Cortex XDR to authenticate and access the resources in your AWS account.
External ID: The Cortex XDR Cloud ID, a randomly generated UUID that is used to enable the trust relationship in the role's trust policy.
To create the stack, accept the IAM acknowledgment for resource creation by selecting the I acknowledge that AWS CloudFormation might create
IAM resources with custom names checkbox, and click Create Stack.
d. Wait for the Status to update to CREATE_COMPLETE in the Stacks page that is displayed, and select the XDRCloudApp stack under the Stack
name column in the table.
e. Select the Outputs tab and copy the Value of the Role ARN.
f. Paste the Role ARN value in one of the following fields in the Account Details screen in Cortex XDR. The field name is dependent on the
Organization Level that you selected.
Organization Unit: Paste the value in the Master Role ARN field.
This step is only relevant if you’ve configured the Organization Level as Organization in the Account Details screen in Cortex XDR. Otherwise, you
can skip this step if the Organization Level is set to Account or Organization Unit.
1. From the main menu of the AWS Console, select <your username> → My Organization.
2. Copy the Root ID displayed under the Root directory and paste it in the Root ID field in the Account Details screen in Cortex XDR.
This step is only relevant if you’ve configured the Organization Level as Organization Unit in the Account Details screen in Cortex XDR. Otherwise,
you can skip this step if the Organization Level is set to Account or Organization.
1. On the main menu of the AWS Console, select your username, and then My Organization.
2. Select the Organization Unit with an icon-ou ( ) beside it in the organizational structure that you want to configure.
3. Copy the ID and paste it in the Organization Unit ID field in the Account Details screen in Cortex XDR.
i. Define the following remaining connection parameters in the Account Details screen in Cortex XDR:
Cortex XDR Collection Name: Specify a name for your Cortex XDR collection that is displayed underneath the Cloud Inventory configuration
for this AWS collection.
j. Click Next.
This wizard screen is only displayed if you’ve configured the Organization Level as Organization or Organization Unit in the Account Details screen in
Cortex XDR. Otherwise, you can skip this step when the Organization Level is set to Account.
Configuring member accounts is dependent on creating a stack set and configuring stack instances in AWS, which can be performed using either the
AWS Command Line Interface (CLI) or a CloudFormation template via the AWS Console. Use one of the following methods:
1. On the Configure Member Accounts page, select the Amazon CLI tab, which is displayed by default.
For more information on how to set up the AWS CLI tool, see the AWS Command Line Interface Documentation.
3. Run the following command to create a stack set, which you can copy from the Configure Member Accounts screen by selecting the copy icon ( ), and
paste in the Amazon CLI. This command includes the Role Name and External ID field values configured from the wizard screen.
4. Run the following command to add stack instances to your stack set, which you can copy from the Configure Member Accounts screen by selecting the
copy icon ( ), and paste in the Amazon CLI. For the --deployment-targets parameter, specify the organization root ID to deploy to all accounts in
your organization, or specify Organization Unit IDs to deploy to all accounts in these Organization Units. In this parameter, you will need to replace
<Org_OU_ID1>, <Org_OU_ID2>, and <Region> according to your AWS settings.
In this example, the Organization Units are populated with ou-rcuk-1x5j1lwo and ou-rcuk-slr5lh0a IDs.
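The exact commands to run are provided in the Configure Member Accounts screen. Purely as an illustrative sketch, a stack set and its stack instances are typically created along the following lines; the stack set name, parameter keys, template file, OU IDs, and Region are placeholders.
# Illustrative only: create the stack set from the downloaded Cortex XDR template.
aws cloudformation create-stack-set \
    --stack-set-name CortexXDRCloudInventory \
    --template-body file://cortex-xdr-aws-master-ro-1.0.0.template \
    --parameters ParameterKey=<role name parameter>,ParameterValue=<Role Name> \
                 ParameterKey=<external ID parameter>,ParameterValue=<External ID> \
    --permission-model SERVICE_MANAGED \
    --auto-deployment Enabled=true,RetainStacksOnAccountRemoval=false \
    --capabilities CAPABILITY_NAMED_IAM
# Illustrative only: deploy stack instances to the target Organization Units.
aws cloudformation create-stack-instances \
    --stack-set-name CortexXDRCloudInventory \
    --deployment-targets '{"OrganizationalUnitIds": ["<Org_OU_ID1>", "<Org_OU_ID2>"]}' \
    --regions <Region>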
Once completed, in the AWS Console, select Services → CloudFormation → StackSets, and you can see the StackSet is now listed in the table.
6. Click Create.
Once cloud assets from AWS start to come in, a green check mark appears underneath the Cloud Inventory configuration with the Last collection time
displayed. It can take a few minutes for the Last Collection time to display as the processing completes.
Whenever the Cloud Inventory data collector integrations are modified by using the Edit, Disable, or Delete options, it can take up to 10 minutes for these
changes to be reflected in Cortex XDR.
1. On the Configure Member Accounts page, select the Cloud Formation tab.
2. In the on-screen step Download the CloudFormation template, click template to download the template file. The name of the downloaded file is cortex-
xdr-aws-master-ro-1.0.0.template.
3. Sign in to your AWS Master Account using the AWS console, select Services → CloudFormation → StackSets, and click Create StackSet.
-Select Upload a template file, Choose file, and select the CloudFormation template that you downloaded.
5. Click Next.
7. Click Next.
Deployment targets
Specify regions
Deployment options
-For the Maximum concurrent accounts, select Percentage, and in the field specify 100.
-For the Failure tolerance, select Percentage, and in the field specify 100.
11. To create the StackSet, accept the IAM acknowledgment for resource creation by selecting the I acknowledge that AWS CloudFormation might create
IAM resources with custom names checkbox, and click Submit.
When the process completes, the Status of the StackSet is SUCCEEDED in the StackSet details page.
Once cloud assets from AWS start to come in, a green check mark appears underneath the Cloud Inventory configuration with the Last collection time
displayed. It can take a few minutes for the Last Collection time to display as the processing completes.
Whenever the Cloud Inventory data collector integrations are modified by using the Edit, Disable, or Delete options, it can take up to 10 minutes for these
changes to be reflected in Cortex XDR.
After Cortex XDR begins receiving AWS cloud assets, you can view the data in Assets → Cloud Inventory, where All Assets and Specific Cloud Assets pages
display the data in a table format. For more information, see Cloud Inventory Assets.
Abstract
Extend Cortex XDR visibility into cloud assets from Google Cloud Platform.
Ingestion of cloud assets from Google Cloud Platform requires a Cortex XDR Pro per GB license.
Cortex XDR provides a unified, normalized asset inventory for cloud assets in Google Cloud Platform (GCP). This capability provides deeper visibility into all the
assets and superior context for incident investigation.
To receive cloud assets from GCP, you must configure the Collection Integrations settings in Cortex XDR using the Cloud Inventory data collector to configure
the GCP wizard. The GCP wizard includes instructions to be completed both in GCP and the GCP wizard screens. After you set up data collection, Cortex XDR
begins receiving new data from the source.
As soon as Cortex XDR begins receiving cloud assets, you can view the data in Assets → Cloud Inventory, where All Assets and Specific Cloud Assets pages
display the data in a table format.
Setting the connection parameters on the right-side of the screen is dependent on certain configurations in GCP as explained below.
a. Select the Organization Level as either Project (default), Folder, or Organization. The Organization Level that you select changes the instructions.
b. Register your application for the Cloud Asset API in Google Cloud Platform, select a project where your application will be registered, and click
Continue.
1. From the Select from menu, select the organization that you want.
2. The next steps to perform in Google Cloud Platform are dependent on the Organizational Level you selected in Cortex XDR - Project, Folder,
or Organization.
Project or Folder Organization Level: In the table, copy one of the following IDs that you want to configure and paste it in the
designated field in the Configure Account screen in Cortex XDR. The field in Cortex XDR is dependent on the Organizational Level
you selected.
-Project: Contains a project icon ( ) beside it, and the ID should be pasted in the Project ID field in Cortex XDR.
-Folder: Contains a folder icon ( ) beside it, and the ID should be pasted in the Folder ID field in Cortex XDR.
Organization is the Organization Level: Select the ellipsis icon ( ) → Settings. In the Settings page, copy the Organization ID for the
applicable organization that you want to configure and paste it in the Organization Id field in the Configure Account screen in Cortex
XDR.
g. You can either use an existing bucket from the list or create a new bucket. Copy the Name of the bucket and paste it in the Bucket Name field in
the Configure Account screen in Cortex XDR.
h. Define the following remaining connection parameters in the Configure Account screen in Cortex XDR.
Bucket Directory Name: You can either leave the default directory as Exported-Assets or define a new directory name that will be created for
the exported assets collected for the bucket configured in GCP.
Cortex XDR Collection Name: Specify a name for your Cortex XDR collection that is displayed underneath the Cloud Inventory configuration
for this GCP collection.
i. Click Next.
a. Download the Terraform script. The name of the file downloaded is dependent on the Organizational Level that you configured in the Configure
Account screen of the wizard.
Folder: cortex-xdr-gcp-folder-ro.tf
Project: cortex-xdr-gcp-project-ro.tf
Organization: cortex-xdr-gcp-organization-ro.tf
d. Select File → Open, and Open the Terraform script that you downloaded from Cortex XDR.
e. Use the following commands to upload the Terraform script, which you can copy from the Account Details screen in Cortex XDR using the copy
icon ( ).
1. terraform init: Initializes the Terraform script. You need to wait until the initialization is complete before running the next command as
indicated in the image below.
2. terraform apply: When running this command, you are asked to enter the following values.
var.host_project_id: Specify the GCP Project ID that hosts the XDR service account and bucket, which is the project where you registered your
application. Ensure that you use a permanent project.
var.project_id: Specify the Project ID, Folder ID, or Organization ID that you configured in the Configure Account screen of the
wizard from GCP.
After specifying all the values, you need to Authorize gcloud to use your credentials to make this GCP API call in the Authorize Cloud
Shell dialog box that is displayed.
Before the action completes, you need to confirm whether you want to perform these actions, and after the process finishes running
an Apply complete indication is displayed.
You can view the output JSON file called cortex-service-account-<GCP host project ID>.json by running the ls
command.
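A minimal non-interactive sketch of the same run, assuming the variable names shown above; the values are placeholders.
# Initialize and apply the downloaded Terraform script without interactive prompts for the variables.
terraform init
terraform apply \
    -var="host_project_id=<GCP host project ID>" \
    -var="project_id=<Project ID, Folder ID, or Organization ID from the wizard>"
# List the generated service account key file.
ls cortex-service-account-*.json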
2. Select the JSON file produced after running the Terraform script, and click Download.
g. Upload the downloaded Service Account Key JSON file in the Configure Account screen in Cortex XDR. You can drag and drop the file, or Browse
to the file.
h. Click Next.
You can skip this step if you’ve already configured a Google Cloud Platform data collector with a Pub/Sub asset feed collection.
a. In the GCP Console, search for Topics, and select the Topics link.
b. CREATE TOPIC.
d. Run the following command to create a feed on an asset using the gcloud CLI tool, which you can copy from the Change Asset Logs screen in
Cortex XDR by selecting the copy icon ( ), and paste in the gcloud CLI tool.
For more information on the gcloud CLI tool, see gcloud tool overview.
gcloud asset feeds create <FEED_ID> --project=xdr-cloud-projectid \
    --pubsub-topic="<Topic name>" --content-type=resource \
    --asset-types="compute.googleapis.com/Instance,compute.googleapis.com/Image,compute.googleapis.com/Disk,compute.googleapis.com/Network,compute.googleapis.com/Subnetwork,compute.googleapis.com/Firewall,storage.googleapis.com/Bucket,cloudfunctions.googleapis.com/CloudFunction"
The command contains a parameter already populated and parameters that you need to replace before running the command.
--project: This parameter is automatically populated from the Project ID field in the Configure Account screen wizard in Cortex XDR.
<Topic name>—Replace this placeholder text with the topic name you created in the Topic details page in the GCP console.
e. In the GCP Console, search for Subscription, and select the Subscriptions link.
h. Click CREATE.
i. Select the subscription that you created for your topic and add PERMISSIONS for the subscriber in the Subscription details page.
j. ADD PRINCIPAL to add permissions for the Service Account that you created the key for in the JSON file and uploaded to the Configure Account
wizard screen in Cortex XDR. Set the following permissions for the Service Account.
New principals: Select the designated Service Account Key you created in the JSON file.
k. Copy the Subscription name and paste it in the Subscription Name field on the right-side of the Change Asset Logs screen in Cortex XDR, and
click Next.
The Subscription Name is the name of the new Google Cloud Platform data collector that is configured with a Pub/Sub asset feed collection.
6. Click Create.
Once cloud assets from GCP start to come in, a green check mark appears underneath the Cloud Inventory configuration with the Last collection time
displayed. It can take a few minutes for the Last Collection time to display as the processing completes.
Whenever the Cloud Inventory data collector integrations are modified by using the Edit, Disable, or Delete options, it can take up to 10 minutes for these
changes to be reflected in Cortex XDR.
In addition, if you created a Pub/Sub asset feed collection, a green check mark appears underneath the Google Cloud Platform configuration with the
amount of data received.
7. After Cortex XDR begins receiving GCP cloud assets, you can view the data in Assets → Cloud Inventory, where All Assets and Specific Cloud Assets
pages display the data in a table format. For more information, see Cloud Inventory Assets.
Abstract
Extend Cortex XDR visibility into cloud assets from Microsoft Azure.
Ingestion of Cloud Assets from Microsoft Azure requires a Cortex XDR Pro per GB license.
Cortex XDR provides a unified, normalized asset inventory for cloud assets in Microsoft Azure. This capability provides deeper visibility into all the assets and
superior context for incident investigation.
To receive cloud assets from Microsoft Azure, you must configure the Collection Integrations settings in Cortex XDR using the Cloud Inventory data collector to
configure the Microsoft Azure wizard. The Microsoft Azure wizard includes instructions to be completed both in Microsoft Azure and the Microsoft Azure wizard
screens. After you set up data collection, Cortex XDR begins receiving new data from the source.
As soon as Cortex XDR begins receiving cloud assets, you can view the data in Assets → Cloud Inventory, where All Assets and Specific Cloud Assets pages
display the data in a table format.
c. Click Azure.
Setting the connection parameters on the right-side of the screen is dependent on certain configurations in Microsoft Azure as explained below.
a. Select the Organization Level as either Subscription (default), Tenant, or Management Group. The Organization Level that you select changes the
instructions and fields displayed on the screen.
c. Search for Subscriptions, select Subscriptions, copy the applicable Subscription ID in Azure, and paste it in the Subscription ID field in the
Configure Account screen wizard in Cortex XDR.
This step is only relevant if you’ve configured the Organization Level as Subscription in the Configure Account screen in Cortex XDR. Otherwise,
you can skip this step if the Organization Level is set to Tenant or Management Group.
d. Search for Management groups, select Management groups, copy the applicable ID in Azure, and paste it in the Management Group ID field in
the Configure Account screen wizard in Cortex XDR.
This step is only relevant if you’ve configured the Organization Level as Management Group in the Configure Account screen in Cortex XDR.
Otherwise, you can skip this step if the Organization Level is set to Subscription or Tenant.
e. Search for Tenant properties, select Tenant properties, copy the Tenant ID in Azure, and paste it in the Tenant ID field in the Configure Account
screen wizard in Cortex XDR.
f. Specify a Cortex XDR Collection Name to be displayed underneath the Cloud Inventory configuration for this Azure collection.
g. Click Next.
a. Download the Terraform script. The name of the file downloaded is dependent on the Organization Level that you configured in the Configure
Account screen of the wizard.
Subscription: cortex-xdr-azure-subscription-ro.tf
Tenant: cortex-xdr-azure-org-ro.tf
To run the Terraform script when configuring the Organization Level at the Tenant level, you must first ensure that you elevate user access to
manage all Azure subscriptions and management groups for the User Access Administrator role. For more information, see the Microsoft
Azure documentation.
c. Click the upload/download icon ( ) to Upload the Terraform script to Cloud Shell, browse to the file, and click Open.
A notification with the Upload destination is displayed on the bottom-right corner of the screen.
d. Use the following commands to upload the Terraform script, which you can copy from the Account Details screen in Cortex XDR using the copy
icon ( ).
1. terraform init: Initializes the Terraform script. You need to wait until the initialization is complete before running the next command as
indicated in the image below.
2. terraform apply: When running this command you will be asked to enter the following values, which are dependent on the Organization
Level that you configured.
Before running this command, ensure that your Azure CLI client is logged in by running az login. From the returned message from the
login command, copy the code provided, go to the website mentioned in the message, and use the code to authenticate.
var.subscription_id: Specify the Subscription ID that you configured in the Configure Account screen of the wizard from
Microsoft Azure. This value only needs to be specified if the Organization Level is set to Subscription.
var.management_group_id: Specify the Management Group ID that you configured in the Configure Account screen of the wizard
from Microsoft Azure. This value only needs to be specified if the Organization Level is set to Management Group.
var.tenant_id: Specify the Tenant ID that you configured in the Configure Account screen of the wizard from Microsoft Azure.
Before the action completes, you need to confirm whether you want to perform these actions, and after the process finishes running an Apply
complete indication is displayed.
e. Copy the client_id value displayed in the Cloud Shell window and paste it in the Application Client ID field in the Account Details screen in Cortex
XDR.
f. Copy the secret value displayed in the Cloud Shell window and paste it in the Secret field in the Account Details screen in Cortex XDR.
h. Click Next.
5. Click Create.
Once cloud assets from Azure start to come in, a green check mark appears underneath the Cloud Inventory configuration with the Last collection time
displayed. It can take a few minutes for the Last Collection time to be displayed.
Whenever the Cloud Inventory data collector integrations are modified by using the Edit, Disable, or Delete options, it can take up to 10 minutes for these
changes to be reflected in Cortex XDR.
6. After Cortex XDR begins receiving Azure cloud assets, you can view the data in Assets → Cloud Inventory, where All Assets and Specific Cloud Assets
pages display the data in a table format. For more information, see Cloud Inventory Assets.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
In addition to native log ingestion support, Cortex XDR also supports a number of custom log ingestion methods.
Abstract
To extend visibility, Cortex XDR can receive Syslog from additional vendors that use CEF or LEEF formatted over Syslog (TLS not supported).
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can receive Syslog from a variety of supported vendors (see External data ingestion vendor support). In addition, Cortex XDR can receive Syslog
from additional vendors that use CEF, LEEF, CISCO, CORELIGHT, or RAW formatted over Syslog.
After Cortex XDR begins receiving logs from the third-party source, Cortex XDR automatically parses the logs in CEF, LEEF, CISCO, CORELIGHT, or RAW
format and creates a dataset with the name <vendor>_<product>_raw. You can then use XQL Search queries to view logs and create new IOC, BIOC, and
Correlation Rules.
Abstract
Cortex XDR can receive logs and data from Apache Kafka directly to your log repository for query and visualization purposes.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can receive events from Apache Kafka clusters directly to your log repository for query and visualization purposes. After you activate the Kafka
Collector applet on a Broker VM in your network, which includes defining the connection details and settings related to the list of subscribed topics to monitor
and upload to Cortex XDR, you can collect events as datasets.
After Cortex XDR begins receiving topic events from the Kafka clusters, Cortex XDR automatically parses the events and creates a dataset with the specific
name you set as the target dataset when you configured the Kafka Collector, and adds the data in these files to the dataset. You can then use XQL Search
queries to view events and create new Correlation Rules.
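For example, assuming the target dataset was named my_kafka_events when the Kafka Collector was configured (the name is illustrative), the following query returns a count of the events ingested so far:
dataset = my_kafka_events
|comp count(_time) as total_events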
Configure Cortex XDR to receive events as datasets from topics in Kafka clusters.
Abstract
Cortex XDR can receive CSV log files from a shared Windows directory, where the CSV log files must conform to specific guidelines.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can receive CSV log files from a shared Windows directory directly to your log repository for query and visualization purposes. After you activate
the CSV Collector applet on a Broker VM in your network, which includes defining the list of folders mounted to the Broker VM and setting the list of CSV files to
monitor and upload to Cortex XDR (using a username and password), you can ingest CSV files as datasets.
The ingested CSV log files must conform to the following guidelines:
Header field names must contain only letters (a-z, A-Z) or numbers (0-9) and must start with a letter. Spaces are converted to underscores (_).
After Cortex XDR begins receiving logs from the shared Windows directory, Cortex XDR automatically parses the logs and creates a dataset with the specific
name you set as the target dataset when you configured the CSV Collector. The CSV Collector checks for any changes in the configured CSV files, as well as
any new CSV files added to the configuration folders, in the Windows directory every 10 minutes and replaces the data in the dataset with the data from those
files. You can then use XQL Search queries to view logs and create new Correlation Rules.
Configure Cortex XDR to receive CSV files as datasets from a shared Windows directory.
1. Ensure that you share the applicable CSV files in your Windows directory.
Abstract
Cortex XDR can receive data from a client relational database directly to your log repository.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can receive data from a client relational database directly to your log repository for query and visualization purposes. After you activate the
Database Collector applet on a Broker VM in your network, which includes defining the database connection details and settings related to the query details
for collecting the data from the database to monitor and upload to Cortex XDR, you can collect data as datasets. For more information about activating this
collector applet, see Activate the Database Collector.
After Cortex XDR begins receiving data from a client relational database, Cortex XDR automatically parses the logs and creates a dataset with the specific
name you set as the target dataset when you configured the Database Collector using the format <Vendor>_<Product>_raw. The Database Collector
checks for any changes in the configured database based on the SQL Query defined in the database connection according to the execution frequency of
collection that you configured and appends the data to the dataset. You can then use XQL Search queries to view data and create new Correlation Rules.
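For example, assuming the vendor and product were set to acme and hr_db (illustrative values only), the most recently appended rows can be reviewed with:
dataset = acme_hr_db_raw
|sort desc _time
|limit 100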
Configure Cortex XDR to receive data as datasets from a client relational database.
Abstract
Cortex XDR can receive logs from files and folders in a network share directly to your log repository for query and visualization purposes.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can receive logs from files and folders in a network share directly to your log repository for query and visualization purposes. After you activate the
Files and Folders Collector applet on a Broker VM in your network, which includes defining the connection details and settings related to the list of files to
monitor and upload to Cortex XDR, you can collect files as datasets.
After Cortex XDR begins receiving logs from files and folders in a network share, Cortex XDR automatically parses the logs and creates a dataset with the
specific name you set as the target dataset when you configured the Files and Folders Collector using the format <Vendor>_<Product>_raw. The Files and
Folders Collector reads and processes the configured files one by one, as well as any new files added to the configured files and folders in the network share,
and adds the data in these files to the dataset. You can then use XQL Search queries to view logs and create new Correlation Rules.
The Files and Folders Collector applet only collects files that are larger than 256 bytes.
Configure Cortex XDR to receive logs as datasets from files and folders in a network share.
1. Activate the Files and Folders Collector applet on a Broker VM within your network.
Abstract
Cortex XDR can receive logs from files and folders via FTP, FTPS, and SFTP directly to your log repository for query and visualization purposes.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can receive logs from files and folders via FTP, FTPS, or SFTP directly to your log repository for query and visualization purposes. After you activate
the FTP Collector applet on a Broker VM in your network, which includes defining the connection details and settings related to the list of files to monitor and
upload to Cortex XDR, you can collect files as datasets.
After Cortex XDR begins receiving logs from files and folders via FTP, FTPS, or SFTP, Cortex XDR automatically parses the logs and creates a dataset with the
specific name you set as the target dataset when you configured the FTP Collector using the format <Vendor>_<Product>_raw. The FTP Collector reads
and processes the configured FTP files one by one, as well as any new FTP files added to the configured files and folders, in the FTP directory according to
the execution frequency of collection that you configured, and adds the data in these files to the dataset. You can then use XQL Search queries to view logs
and create new Correlation Rules.
Configure Cortex XDR to receive logs as datasets from files and folders via FTP, FTPS, or SFTP.
Abstract
Cortex XDR can receive NetFlow flow records and IPFIX from a UDP port directly to your log repository for query and visualization purposes.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can receive NetFlow flow records and IPFIX from a UDP port directly to your log repository for query and visualization purposes. After you activate
the NetFlow Collector applet on a Broker VM in your network, which includes configuring your NetFlow Collector settings, you can ingest NetFlow flow records
and IPFIX as datasets.
The ingested NetFlow flow record format must include, at the very least:
After Cortex XDR begins receiving flow records from the UDP port, Cortex XDR automatically parses the flow records and creates a dataset with the specific
name you set as the target dataset when you configured the NetFlow Collector. The NetFlow Collector adds the flow records to the dataset. You can then use
XQL Search queries to view those flow records and create new IOC, BIOC, and Correlation Rules. Cortex XDR can also analyze your logs to raise Analytics
alerts.
Configure Cortex XDR to receive NetFlow flow records as datasets from the routers and switches that support NetFlow.
1. Set up your NetFlow exporter to forward flow records to the IP address of the Broker VM that runs the NetFlow collector applet.
3. Use the XQL Search to query your flow records, using your designated dataset.
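For example, assuming you named the target dataset netflow_records when you configured the NetFlow Collector (the name is illustrative), the latest flow records can be reviewed with:
dataset = netflow_records
|sort desc _time
|limit 50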
Abstract
You can set up Cortex XDR to receive logs from third-party sources, and automatically parse and process these logs.
In addition to logs from supported vendors, you can set up a custom HTTP log collector to receive logs in Raw, JSON, CEF, or LEEF format. The HTTP Log
Collector can ingest up to 80,000 events per second.
After Cortex XDR begins receiving logs from the third-party source, Cortex XDR automatically parses the logs and creates a dataset with the name
<Vendor>_<Product>_raw. You can then use XQL Search queries to view logs and create new Correlation Rules.
Cortex XDR supports logs in single line format or multiline format. For a JSON format, multiline logs are collected automatically when the Log
Format is configured as JSON. When configuring a Raw format, you must also define the Multiline Parsing Regex as explained below.
The Vendor and Product default to Auto-Detect when the Log Format is set to CEF or LEEF.
For a Log Format set to CEF or LEEF, Cortex XDR reads events row by row to look for the Vendor and Product configured in the logs. When the
values are populated in the event log row, Cortex XDR uses these values even if you specified a value in the Vendor and Product fields in the HTTP
collector settings. However, when the values are blank in the event log row, Cortex XDR uses the Vendor and Product that you specified in the
HTTP collector settings. If you did not specify a Vendor or Product in the HTTP collector settings, and the values are blank in the event log row, the
values for both fields are set to unknown.
f. Specify the Vendor and Product for the type of logs you are ingesting.
g. (Optional) Specify the Multiline Parsing Regex for logs with multilines.
This option is only displayed when the Log Format is set to Raw, so you can set the regular expression that identifies when the multiline event starts
in logs with multilines. It is assumed that when a new event begins, the previous one has ended.
Click the copy icon next to the key and record it somewhere safe. You will need to provide this key when you configure your HTTP POST request
and define the api_key. If you forget to record the key and close the window you will need to generate a new key and repeat this process.
a. Send an HTTP POST request to the URL for your HTTP Log Collector.
You can view a sample curl or python request on an HTTP collector instance by selecting View Example.
curl -X POST https://api-{tenant external URL}/logs/v1/event -H 'Authorization: {api_key}' -H 'Content-Type: text/plain' -d '{"example1": "test",
"timestamp": 1609100113039}
{"example2": [12321,546456,45687,1]}'
Python 3 example:
import requests

def test_http_collector(api_key):
    headers = {
        "Authorization": api_key,
        "Content-Type": "text/plain"
    }
    # Note: the logs must be separated by a new line
    body = '{"example1": "test", "timestamp": 1609100113039}' \
           '\n{"example2": [12321,546456,45687,1]}'
    res = requests.post(url="https://api-{tenant external URL}/logs/v1/event",
                        headers=headers,
                        data=body)
    return res
Authorization: Paste the api_key you previously recorded for your HTTP log collector, which is defined in the header.
Content-Type: Depending on the data object format you selected during setup, this will be application/json for JSON format or
text/plain for Text format. This is defined as part of the header.
Body: The body contains the records you want to send to Cortex XDR. Separate records with a \n (new line) delimiter. The request body can
contain up to 10 MiB of records, but 1 MiB is recommended. In the case of a curl command, the records are contained in the -d
'<records>' parameter.
c. Review the possible success and failure code responses to your HTTP Post requests.
The following table provides the various success and failure code responses to your HTTP Post requests, which can help you troubleshoot any
problems with your HTTP Collector configuration.
{ "error": "false"}
200 Success code that indicates there are no errors
and the request was successful.
You can return to the Settings → Configurations → Data Collection → Collection Integrations page to monitor the status of your HTTP Log Collection
configuration. For each instance, Cortex XDR displays the number of logs received in the last hour, day, and week. You can also use the Data Ingestion
Dashboard to view general statistics about your data ingestion configurations.
4. After Cortex XDR begins receiving logs, use the XQL Search to search your logs.
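For example, a quick check that logs are arriving, assuming a hypothetical dataset named my_vendor_my_product_raw created from the Vendor and Product you specified:
dataset = my_vendor_my_product_raw
|sort desc _time
|limit 10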
Abstract
Extend Cortex XDR visibility into logs from BeyondTrust Privilege Management Cloud.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use BeyondTrust Privilege Management Cloud, you can take advantage of Cortex XDR investigation and detection capabilities by forwarding your logs to
Cortex XDR. This enables Cortex XDR to help you expand visibility into computer, activity, and authorization requests in the organization, correlate and detect
access violations, and query BeyondTrust Endpoint Privilege Management logs using XQL Search.
As soon as Cortex XDR starts to receive logs, Cortex XDR can analyze your logs in XQL Search and you can create new Correlation Rules.
Before you begin configuring data collection verify that you are using BeyondTrust Privilege Management Cloud version 21.6.339 or later.
1. Configure SIEM settings and an AWS S3 Bucket according to the requirements provided in the BeyondTrust documentation.
Ensure that when you add the AWS S3 bucket in the PMC and set the SIEM settings, you select ECS - Elastic Common Schema as the SIEM Format.
2. Configure BeyondTrust logs collection with Cortex XDR using an Amazon S3 data collector for generic data.
Ensure your Amazon S3 data collector is configured with the following settings.
Log Type: Select Generic to configure your log collection to receive generic logs from Amazon S3.
Log Format: Select the log format type as Beyondtrust Cloud ECS.
For a Log Format set to Beyondtrust Cloud ECS, the following fields are automatically set and not configurable.
Vendor: Beyondtrust
Compression: Uncompressed
3. After Cortex XDR begins receiving data from BeyondTrust Privilege Management Cloud, you can use XQL Search to search your logs using the
beyondtrust_privilege_management_raw dataset that you configured when setting up your Amazon S3 data collector.
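For example, a simple query to review the most recent BeyondTrust events:
dataset = beyondtrust_privilege_management_raw
|sort desc _time
|limit 20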
Abstract
Ingest logs and data from Box enterprise accounts via the Box REST APIs.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can ingest different types of data from Box enterprise accounts using the Box data collector. To receive logs and data from Box enterprise
accounts via the Box REST APIs, you must configure the Collection Integrations settings in Cortex XDR based on your Box enterprise account credentials. After
you set up data collection, Cortex XDR begins receiving new logs and data from the source.
When Cortex XDR begins receiving logs, the app creates a new dataset for the different types of data that you are collecting, which you can use to initiate XQL
Search queries. For example queries, refer to the in-app XQL Library. For all logs, Cortex XDR can raise Cortex XDR alerts (Analytics, Correlation Rules, IOC,
and BIOC), when relevant from Box logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and BIOC alerts are
only raised on normalized logs.
The following table provides a brief description of the different types of data you can collect, the collection method and fetch interval for new data collected,
the name of the dataset to use in Cortex XDR to query the data using XQL Search, and whether the data is normalized.
Type Of Data: Events (admin_logs)
Description: Retrieves events related to file/folder management, permission changes, access and login activities, user/groups management, folder collaboration, file/folder sharing, security settings changes, tasks, permission changes on folders, storage expiration and data retention, and workflows.
Collection Method: Appends data
Fetch Interval: 60 seconds
Dataset Name: box_admin_logs_raw
Normalized Data: When relevant, Cortex XDR normalizes SaaS audit event logs into stories, which are collected in a dataset called saas_audit_logs.
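For example, after events are ingested, a quick query over the Box admin events dataset could look like this:
dataset = box_admin_logs_raw
|sort desc _time
|limit 50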
To collect Box Shield Alerts, you must purchase Box Shield and it must be enabled on Box enterprise.
2. Create a valid Box account that is assigned to a role with sufficient permissions for the data you want to collect. For example, create an account
assigned to an Admin role to enable Cortex XDR to collect all metadata for all files, folders, and enterprise events for the entire organization.
3. Enable two-factor authentication for the Box account. For more information, see the Box documentation.
1. Complete the prerequisites mentioned above for your Box enterprise account.
a. Log in to your Box account, and in the Dev Console, click Create New App.
The new app is created and opened in the Configuration tab.
d. In the Configuration tab of the new app, scroll down to the following sections and configure the app.
In the Application Scopes section, set the following Administrative Action permissions depending on the type of data you want to collect.
There is a current bug with the Groups API from Box. If you don't configure the Box app with the proper
permissions for managing groups data, the Groups API from Box won't return an error message to Cortex
XDR indicating that the API failed to receive the data, and the Groups data will not be collected.
e. In the Authorization tab, click Review and Submit to send your changes to the administrator for approval.
In the Review App Authorization Submission dialog that is displayed, you can add a Description of the app changes, and then click Submit.
3. Ensure the new app changes are approved by an administrator in the Admin Console of the Box account.
b. In the table, look for the Name of the Box app with the changes, where the Authorization Status is set to Pending Authorization, and select the
options menu → Authorize App.
c. Click Authorize.
For any future change that you make to your Box app, ensure that you send the changes for approval to the administrator, who will need to approve them
as explained above.
6. Set the following parameters, where some values require you to log in to your Box account to copy and paste the values to the applicable fields:
Enterprise ID: Specify the unique identifier for your organization's Box instance, which is used to access the token request. This field can't be
edited once the Box data collector instance is created.
You can retrieve this value from your Box account by going to the General Settings tab and scrolling to the App Info section. Copy the Enterprise ID and
paste it into this field in Cortex XDR.
Client ID: Specify the client ID or API key for the Box app you created.
You can retrieve this value from your Box account by going to the Configuration tab and scrolling down to the OAuth 2.0 Credentials section. Copy the
Client ID and paste it into this field in Cortex XDR.
Client Secret: Specify the client secret or API secret for the Box app you created.
You can retrieve this value from your Box account by going to the Configuration tab and scrolling down to the OAuth 2.0 Credentials section. Click Fetch
Client Secret, where you will need to authenticate yourself according to the two-factor authentication method defined in your Box app before the
Client Secret is displayed. Copy this value and paste it into this field in Cortex XDR.
Collect: Select the types of data you want to collect from Box. All the options are selected by default.
Events (admin_logs): Collects events related to file/folder management, permission changes, access and login activities, user/groups
management, folder collaboration, file/folder sharing, security settings changes, tasks, permission changes on folders, storage
expiration and data retention, and workflows.
Box Shield Alerts: Collects security alerts related to suspicious locations, suspicious sessions, anomalous download, and malicious
content.
Once events start to come in, a green check mark appears underneath the Box configuration.
Abstract
Ingest logs and data from Dropbox Business accounts via the Dropbox Business API.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
Cortex XDR can ingest different types of data from Dropbox Business accounts using the Dropbox data collector. To receive logs and data from Dropbox
Business accounts via the Dropbox Business API, you must configure the Collection Integrations settings in Cortex XDR based on your Dropbox Business
Account credentials. After you set up data collection, Cortex XDR begins receiving new logs and data from the source.
When Cortex XDR begins receiving logs, the app creates a new dataset for the different types of data that you are collecting, which you can use to initiate XQL
Search queries. For example queries, refer to the in-app XQL Library. For all logs, Cortex XDR can raise Cortex XDR alerts (Analytics, Correlation Rules, IOC,
and BIOC), when relevant from Dropbox Business logs. While Correlation Rules alerts are raised on non-normalized and normalized logs, Analytics, IOC, and
BIOC alerts are only raised on normalized logs.
The following table provides a brief description of the different types of data you can collect, the collection method and fetch interval for new data collected,
the name of the dataset to use in Cortex XDR to query the data using XQL Search, and whether the data is normalized.
Log collection
Type Of Data: Events (team_log/get_events)
Description: Retrieves team events, including access events, administrative events, file/folders events, security settings events, and more.
Collection Method: Appends data
Fetch Interval: 60 seconds
Dataset Name: dropbox_events_raw
Normalized Data: When relevant, Cortex XDR normalizes SaaS audit event logs into stories, which are collected in a dataset called saas_audit_logs.
Type Of Data: Member Devices (team/devices/list_members_devices)
Description: Lists all device sessions of a team.
Collection Method: Overwrites data
Fetch Interval: 10 minutes
Dataset Name: dropbox_members_devices_raw
Normalized Data: —
Other data types are collected through the team/members/list_v2 and team/groups/list endpoints.
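For example, a simple count of the ingested Dropbox team events:
dataset = dropbox_events_raw
|comp count(_time) as event_count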
2. Create a Dropbox Business admin account with Security admin permissions, which is required to authorize Cortex XDR to access the Dropbox Business
account and generate the OAuth 2.0 access token.
1. Complete the prerequisite steps mentioned above for your Dropbox Business account.
2. Log in to Dropbox using an admin account designated with Security admin level permissions.
3. In the Dropbox App console, ensure that you either create a new app, or your existing app is created, with the following settings:
Choose the type of access you need: Select Full dropbox for access to all files and folders in a user's Dropbox.
4. In the Permissions tab of your app, ensure that the applicable permissions are selected under the relevant section heading for the type of data you want
to collect:
groups.read (under Groups)
events.read (under Events)
App Key: Specify the App key, which is taken from the Settings tab of your Dropbox app.
App Secret: Specify the App secret, which is taken from the Settings tab of your Dropbox app.
Access Code: After specifying an App Key, you can obtain the access code by hovering over the Access Code tooltip, clicking the here link, and
signing in with your Dropbox Business account credentials. The URL link is https://www.dropbox.com/oauth2/authorize?
client_id=%APP_KEY%&token_access_type=offline&response_type=code, where the %APP_KEY% is replaced with the App
Key value specified.
When the App Key field is empty, the here link in the tooltip is disabled. When an incorrect App Key is entered, clicking the link results in a 404
error.
To obtain the Access Code complete the following steps in the page that opens in your browser:
2. Review the permissions listed, which should match the permissions you configured in your Dropbox app in the Permissions tab according to
the type of data you want to collect, and click Allow.
3. Copy the Access Code Generated and paste it in the Access Code field in Cortex XDR. The access code is valid for around four minutes
from when it is generated.
Whenever you change the permissions of the Dropbox app, we recommend that you generate a new Access Code for the Dropbox data collector
instance so that the permissions match the updates.
Collect: Select the types of data you want to collect from Dropbox. All the options are selected by default.
Log collection
Events (get_events): Retrieves team events, including access events, administrative events, file/folders events, security settings events,
and more.
Once events start to come in, a green check mark appears underneath the Dropbox configuration.
Abstract
Cortex XDR can ingest logs from Elasticsearch Filebeat, a file system logger that logs file activity on your endpoints and servers.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you want to ingest logs about file activity on your endpoints and servers and do not use the Cortex XDR agent, you can install Elasticsearch Filebeat as a
system logger and then forward those logs to Cortex XDR. To facilitate log ingestion, Cortex XDR supports the same protocols that Filebeat and Elasticsearch
use to communicate. Cortex XDR supports using Filebeat up to version 8.2 with the Filebeat data collector. Cortex XDR also supports logs in single line format
or multiline format. For more information on handling messages that span multiple lines of text in Elasticsearch Filebeat, see Manage Multiline Messages.
Cortex XDR supports all sections in the filebeat.yml configuration file, including Filebeat fields and tags. This enables you to use the
add_fields processor to identify the product and vendor for the data collected by Filebeat so that the collected events go through the ingestion flow (Parsing Rules). To
configure the product and vendor, ensure that you use the default fields attribute, as opposed to the target attribute, as shown in the following example.
To provide additional context during investigations, Cortex XDR automatically creates a new Cortex Query Language (XQL) dataset from your Filebeat logs.
You can then use the XQL dataset to search across the logs Cortex XDR received from Filebeat.
To receive logs, you configure collection settings for Filebeat in Cortex XDR and output settings in your Filebeat installations. As soon as Cortex XDR begins
receiving logs, the data is visible in XQL Search queries.
d. Specify the Vendor and Product for the type of logs you are ingesting.
The vendor and product are used to define the name of your XQL dataset (<vendor>_<product>_raw). If you do not define a vendor or
product, Cortex XDR examines the log header to identify the type and uses that to define the vendor and product in the dataset. For example, if
the type is Acme and you opt to let Cortex XDR determine the values, the dataset name would be acme_acme_raw.
Click the copy icon next to the key and record it somewhere safe. You will need to provide this key when you set up output settings on your
Filebeat instance. If you forget to record the key and close the window you will need to generate a new key and repeat this process.
hosts: Copy the API URL from your Filebeat configuration and paste it in this field.
compression_level: 5 (recommended)
api_key: Paste the key you created when you configured Filebeat Log Collection in Cortex XDR.
proxy_url: (Optional) <server_ip>:<port_number>. You can specify your own <server_ip> or use the Broker VM to proxy Filebeat
communication using the format <Broker_VM_ip>:<port_number>. When using the Broker VM, ensure that you activate the Local Agent
Settings applet with the Agent Proxy enabled.
After Cortex XDR begins receiving logs from Filebeat, they will be available in XQL Search queries.
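For example, using the illustrative acme_acme_raw dataset name from earlier in this section, the most recent Filebeat logs can be reviewed with:
dataset = acme_acme_raw
|sort desc _time
|limit 100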
You can return to the Settings → Configurations → Data Collection → Custom Collectors page to monitor the status of your Filebeat configuration. For
each instance, Cortex XDR displays the number of logs received in the last hour, day, and week. You can also use the Data Ingestion Dashboard to view
general statistics about your data ingestion configurations.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
If you use Forcepoint DLP to prevent data loss over endpoint channels, you can take advantage of Cortex XDR investigation and detection capabilities by
forwarding your logs to Cortex XDR. This enables Cortex XDR to help you expand visibility into data violation by users and hosts in the organization, correlate
and detect DLP incidents, and query Forcepoint DLP logs using XQL Search.
As soon as Cortex XDR starts to receive logs, Cortex XDR can analyze your logs in XQL Search and you can create new Correlation Rules.
To integrate your logs, you first need to set up an applet in a Broker VM within your network to act as a Syslog Collector. You then configure forwarding on your
log devices to send logs to the Syslog Collector in a CEF or LEEF format.
As an estimate for initial sizing, note the average Forcepoint DLP log size. For proper sizing calculations, test the log sizes and log rates produced by
your Forcepoint DLP. For more information, see Manage Your Log Storage.
4. Configure the log device that receives Forcepoint DLP logs to forward syslog events to the Syslog Collector in a CEF or LEEF format.
5. After Cortex XDR begins receiving data from Forcepoint DLP, you can use XQL Search to search your logs using the forcepoint_dlp_endpoint
dataset.
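For example, a quick query to confirm Forcepoint DLP logs are arriving:
dataset = forcepoint_dlp_endpoint
|sort desc _time
|limit 50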
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive logs from Proofpoint Targeted Attack Protection (TAP), you must first configure TAP service credentials in the TAP dashboard, and then the
Collection Integrations settings in Cortex XDR based on your Proofpoint TAP configuration. After you set up data collection, Cortex XDR begins receiving new
logs and data from the source.
When Cortex XDR begins receiving logs, the app creates a new dataset (proofpoint_tap_raw) that you can use to initiate XQL Search queries. For
example queries, refer to the in-app XQL Library.
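For example, a simple query over the Proofpoint TAP dataset:
dataset = proofpoint_tap_raw
|sort desc _time
|limit 50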
TAP service credentials can be generated in the TAP Dashboard, where you will receive a Proofpoint Service Principal for authentication and Proofpoint
API Secret for authentication. Record these credentials as you will need to provide them when configuring the Proofpoint Targeted Attack Protection data
collector in Cortex XDR. For more information on generating TAP service credentials, see Generate TAP Service Credentials.
Proofpoint Endpoint: All Proofpoint endpoints are available on the tap-api-v2.proofpoint.com host. You can leave the default
configuration or specify another host.
Service Principal: Specify the Proofpoint Service Principal for authentication. TAP service credentials can be generated in the TAP
Dashboard.
API Secret: Specify the Proofpoint API Secret for authentication. TAP service credentials can be generated in the TAP Dashboard.
Once events start to come in, a green check mark appears underneath the Proofpoint Targeted Attack Protection configuration with the amount of
data received.
After you enable the Proofpoint Targeted Attack Protection data collector, you can make additional changes as needed.
Abstract
Use the Cortex XDR data collector to collect Audit Trail and Security Monitoring event logs from Salesforce.com.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
The Cortex XDR data collector can collect Audit Trail and Security Monitoring event logs from Salesforce.com. During setup of this data collector, you can
choose to accept the default collection settings, or exclude the collection of content metadata and accounts.
The Salesforce.com data collector fetches events, and objects and metadata, including:
Login history
You can create multiple Salesforce.com data collector instances in Cortex XDR, for different parts of your organization.
Data are intentionally collected with a delay, to ensure that all the logs have been collected (to mitigate the effects of lags on the Salesforce.com side).
When Cortex XDR begins receiving logs, it creates new datasets for them, called salesforce_<object>_raw. Examples of <object> include:
permissionset
profile
groupmember
group
user
userrole
document
contentfolder
attachment
contentdistribution
tenantsecuritylogin
useraccountteammember
tenantsecurityuserperm
account
audit
login
eventlogfile
You can use these datasets to perform XQL search queries. For example queries, refer to the in-app XQL Library.
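For example, assuming the login object dataset follows the documented naming format and is called salesforce_login_raw, recent login history can be reviewed with:
dataset = salesforce_login_raw
|sort desc _time
|limit 100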
To manage collection integration in Cortex XDR, ensure that you have the privilege to View/Edit Log Collections (for example, Instance Administrator).
To avoid errors, the minimum required Salesforce.com editions are Professional Edition with API access enabled, or Enterprise Edition, or higher.
To use the client credentials flow required for Salesforce.com–Cortex XDR integration, you must create a connected app for Cortex XDR in Salesforce.com,
and configure its OAuth settings and access policies. Following these activities, configure Cortex XDR.
For more detailed reference information, see Configure a Connected App for the OAuth 2.0 Client Credentials Flow.
Unlike other data collector setups, in this case, the setup includes obtaining an OAuth 2.0 code from Salesforce.com, and this code is only valid for 15 minutes.
Therefore, make sure that you enable the data collector within 15 minutes of obtaining the authorization code.
Perform the following procedures in the order that they appear, below.
3. Enter a meaningful name for the connected application and for the API. For example, you could name it panw_cortex_integration.
4. Enter your email address. This address will be used to retrieve the Consumer Key and Consumer Secret.
https://login.salesforce.com/services/oauth2/callback
and
on separate lines, where {tenant external URL} is the name of your tenant as it appears in the URL of your Cortex XDR tenant.
7. For OAuth Scopes, select Full access (full) and Perform requests at any time (refresh_token, offline_access).
8. In the next options after OAuth Scopes, ensure that only the following checkboxes are selected:
Consumer Key will be used for client_id, and Consumer Secret will be used for client_secret in OAuth 2.0.
2. Find your connected application (the one that you defined for Cortex XDR). In the last column, click the arrow button and then click View.
3. In the API (Enable OAuth Settings) area, click Manage Consumer Details.
4. When you are asked to verify your identity, open the email that Salesforce sent to you, and copy the verification code. Go back to the Salesforce Verify
Your Identity page, paste the code in the Verification Code box, and click Verify. One of the following will happen:
The Consumer Key and Consumer Secret will be sent to the email address that you configured earlier for the Cortex XDR connected app.
On the Salesforce Connected App Name page, the Consumer Details area will display the Consumer Key and Consumer Secret, and you will be
able to copy them from here when required in the following procedures.
2. Find your connected application (the one that you defined for Cortex XDR). In the last column, click the arrow button and then click Manage.
Choose your refresh token policy. We recommend: Expire refresh token if not used for _ Day(s). For example, select this option and set it for 7
days.
Configure the OAuth 2.0 application to call the Salesforce.com API using client_id and client_secret.
References: https://help.salesforce.com/s/articleView?id=sf.remoteaccess_oauth_client_credentials_flow
2. Enter a unique Name for the instance, enter the Salesforce Domain Name, and the Consumer Key and the Consumer Secret credentials obtained earlier
in this workflow. For example, the domain could be the API URL from which logs are received, such as
https://MyDomainName.my.salesforce.com/services/data/vXX.X/resource/
When these options are cleared, only these data types will be omitted from collection. All other data will be collected as usual.
4. Click Enable. A popup which redirects you to your Salesforce instance appears, to get OAuth 2.0 authorization credentials and access.
5. Click OK.
A Salesforce data collection instance is created, and an authorization token is created and returned to Cortex XDR. Data collection begins.
You can edit and test an existing collector instance after a successful initial connection between Salesforce.com and Cortex XDR. Do this by clicking Edit
(pencil icon) for the collector instance. The log collection window will be displayed, where you can make changes or test, by clicking Test.
Troubleshooting
If for any reason, the token is not created and sent to Cortex XDR, after a timeout period, an authorization failure error will be returned for the collector instance.
In this case, try again by clicking Edit (pencil icon) for the collector instance. The log collection window will be displayed again, where you can edit settings
and retry getting the authorization code.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive data from the ServiceNow CMDB database, you must first configure data collection from ServiceNow CMDB. ServiceNow CMDB is a logical
representation of assets, services, and the relationships between them that comprise the infrastructure of an organization. It is built as a series of connected
tables that contain all the assets and business services controlled by a company and its configurations. You can configure the Collection Integration settings in
Cortex XDR for the ServiceNow CMDB database, which includes selecting the specific tables containing the data that you want to collect, in the ServiceNow
CMDB Collector. You can select from the list of default tables and also specify custom tables. By default, the ServiceNow CMDB Collector is configured to
collect data from the following tables, which you can always change depending on your system requirements.
cmdb_ci
cmdb_ci_computer
cmdb_rel_ci
cmdb_ci_application_software
As soon as Cortex XDR begins receiving data, the app automatically creates a ServiceNow CMDB dataset for each table using the format
servicenow_cmdb_<table name>_raw. You can then use XQL Search queries to view the data and create new Correlation Rules.
You can only configure a single ServiceNow CMDB Collector, which is automatically configured every 6 hours to reload the data from the configured tables
and replace the existing data. You can always use the Sync Now option to reload the data and replace the existing data whenever you want.
Complete the following task before you begin configuring Cortex XDR to receive data from ServiceNow CMDB.
Create a ServiceNow CMDB user with SNOW credentials, who is designated to access the tables from ServiceNow CMDB for data collection in Cortex
XDR. Record the credentials for this user as you will need them when configuring the ServiceNow CMDB Collector in Cortex XDR.
User Name: Specify the username for your ServiceNow CMDB user designated in Cortex XDR.
Password: Specify the password for your ServiceNow CMDB user designated in Cortex XDR.
Tables: You can do any of the following actions to configure the tables whose data is collected from ServiceNow CMDB.
Select the tables from the list of default ServiceNow CMDB tables that you want to collect from. After each table selection, add
the table to the tables already listed below for data collection.
Specify any custom tables that you want to configure for data collection.
From the default list of tables already configured, you can delete any of them by hovering over the table and selecting the X icon.
Once events start to come in, a green check mark appears underneath the ServiceNow CMDB Collector configuration with the date and time that the
data was last synced.
Sync Now to get the latest data from the tables configured. The data is replaced automatically every 6 hours, but you can always get the latest
data as needed.
6. After Cortex XDR begins receiving data from ServiceNow CMDB, you can use the XQL Search to search for logs in the new datasets, where each
dataset name is based on the table name using the format servicenow_cmdb_<table name>_raw.
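For example, assuming the default cmdb_ci_computer table is being collected, its snapshot data can be queried with:
dataset = servicenow_cmdb_cmdb_ci_computer_raw
|limit 100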
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
To receive Workday report data, you must first configure data collection from Workday using a Workday custom report to ingest the appropriate data. This is
configured by setting up a Workday Collector in Cortex XDR and configuring report data collection via this Workday custom report that you set up.
As soon as Cortex XDR begins receiving data, the app automatically creates a Workday Cortex Query Language (XQL) dataset (workday_workday_raw).
You can then use XQL Search queries to view the data and create new Correlation Rules. In addition, Cortex XDR adds the workday fields next to each user in
the Key Assets list in the Incident View, and in the User node in the Causality View of Identity Analytics alerts.
Any user with permissions to view alerts and incidents can view the Workday data.
You can only configure a single Workday Collector, which is automatically configured to run the report every 6 hours. You can always use the Sync Now option
to run the report whenever you want.
1. Create an Integration System User that is designated to access the custom report from Workday for data collection in Cortex XDR.
2. Create an Integration System Security Group for the Integration System User created in Step 1 for accessing the report. When setting this group ensure
to define the following:
Type of Tenanted Security Group: Select either Integration System Security Group (Constrained) or Integration System Security Group
(Unconstrained) depending on how your data is configured. For more information, see the Workday documentation.
Integration System User: Select the user that you defined in step 1 for accessing the custom report.
3. Create the Workday credentials for the Integration System User created in Step 1 so that the username and password can be used to access the report
in Cortex XDR. Record these credentials as you will need them when configuring the Workday Collector in Cortex XDR.
For more information on completing any of the prerequisite steps, see the Workday documentation.
b. In the search field, specify Create Custom Report to open the wizard.
Report Type: Select Advanced. When you select this option, the Enable As Web Service checkbox is displayed.
Enable As Web Service: Select this checkbox, so that you will be able to generate a URL of the report to configure in Cortex XDR.
Optimized for Performance: Select whether the data should be optimized for performance. The way this checkbox is configured
determines the Data Source options available to choose from.
Data Source: Select the applicable data source containing the data that is used to configure data collection from Workday to Cortex
XDR.
The Additional Info table in the Columns tab is where you can perform the following.
For the incident and card views in Cortex XDR, map the required fields from the Data Source configured by selecting the applicable Field
that you want to map to the Cortex XDR field name required for data collection in the Column Heading Override XML Alias column.
(Optional) You can map any additional fields from the Data Source configured that you want to be able to query in XQL Search using the
workday_workday_raw dataset. This is configured by selecting the applicable Field and leaving the default field name that is displayed in
the Column Heading Override XML Alias column. This default field name is what is used in XQL Search and the dataset to view and query
the data.
For the incident and card views in Cortex XDR, map the following fields in the table by selecting the applicable Field that contains the data
representing the Cortex XDR field name as provided below that should be added to the Column Heading Override XML Alias. For example, for
full_name, select the applicable Field from the Business Object defined that contains the full name of the user and in the Column Heading
Override XML Alias specify full_name to map the set Field to the Cortex XDR field name.
workday_user_id*
full_name*
workday_manager_user_id*
manager*
worker_type*
position_title*
department*
private_email_address*
business_email_address*
employment_start_date*
employment_end_date
phone_number
mailing_address
e. (Optional) Filter out any employees that you do not want included in the Filter tab.
f. Share access to the report with the designated Integration System User that you created by setting the following settings in the Share tab:
Report Definition Sharing Options: Select Share with specific authorized groups and users.
Authorized Users: Select the designated Integration System User that you created for accessing the custom report.
g. Ensure that the following Web Services Options settings in the Advanced tab are configured.
Here is an example of the configured settings, where the Web Service API Version and Namespace are automatically populated and dependent on
your report.
h. (Optional) Test the report to ensure all the fields are populated.
1. In the related actions menu, select Actions → Web Service → View URLs.
2. Click OK.
4. Hover over the JSON link and click the icon, which opens a new tab in your browser with the URL for the report. You need to use the
designated user credentials to open the report.
5. Copy the URL for the report and record it somewhere, as this URL needs to be provided when setting up the Workday Collector in Cortex
XDR.
URL: Specify the URL of the custom report you configured in Workday.
User Name: Specify the username for the designated Integration System User that you created for accessing the custom report in Workday.
Password: Specify the password for the designated Integration System User that you created for accessing the custom report in Workday.
A notification appears confirming that the Workday Collector was saved successfully, and closes on its own after a few seconds.
Once report data starts to come in, a green check mark appears underneath the Workday Collector configuration with the date and time that the
data was last synced.
After you enable the Workday Collector, you can make additional changes as needed. To modify a configuration, select any of the following options.
Sync Now to run the report to get the latest report data. The report is run automatically every 6 hours, but you can always get the latest data as
needed.
4. After Cortex XDR begins receiving report data from Workday, you can use the XQL Search to search for logs in the new dataset
(workday_workday_raw).
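For example, using the field names mapped earlier in this section, a query over the Workday report data could look like this:
dataset = workday_workday_raw
|fields full_name, department, position_title
|limit 50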
Abstract
For a more complete and detailed picture of the activity involved in an incident, Cortex XDR can ingest alerts from any external source.
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
For a more complete and detailed picture of the activity involved in an incident, Cortex XDR can ingest alerts from any external source. Cortex XDR stitches the
external alerts together with relevant endpoint data and displays alerts from external sources in relevant incidents and alerts tables. You can also see external
alerts and related artifacts and assets in Causality views.
To ingest alerts from an external source, you configure your alert source to forward alerts (in Auto-Detect (default), CEF, LEEF, CISCO, or CORELIGHT format)
to the Syslog collector. You can also ingest alerts from external sources using the Cortex XDR APIs.
After Cortex XDR begins receiving external alerts, you must map the following required fields to the Cortex XDR format.
TIMESTAMP
SEVERITY
ALERT NAME
In addition, these optional fields are available, if you want to map them to the Cortex XDR format.
SOURCE PORT
DESTINATION IP
DESTINATION PORT
DESCRIPTION
DIRECTION
EXTERNAL ID
CATEGORY
ACTION
PROCESS SHA256
DOMAIN
HOSTNAME
USERNAME
If you send pre-parsed alerts using the Cortex XDR API, additional mapping is not required.
Storage of external alerts is determined by your Cortex XDR tenant retention policy. For more information, see Dataset Management.
API: Use the Insert CEF Alerts API to send the raw Syslog alerts or use the Insert Parsed Alerts API to convert the Syslog alerts to the Cortex
XDR format before sending them to Cortex XDR. If you use the API to send logs, you do not need to perform the additional mapping step in Cortex
XDR.
Activate the Syslog collector and then configure the alert source to forward alerts to the Syslog collector. Then configure an alert mapping rule as
follows.
3. Right-click the Vendor Product for your alerts and select Filter and Map.
4. Use the filters at the top of the table to narrow the results to only the alerts you want to map.
Cortex XDR displays a limited sample of results during the mapping rule creation. As you define your filters, Cortex XDR applies the filter to the limited
sample but does not apply the filters across all alerts. As a result, you might not see any results from the alert sample during the rule creation.
a. Rule Information: Define the NAME and optional DESCRIPTION to identify your mapping rule.
b. Alerts Field: Map each required and any optional Cortex XDR field to a field in your alert source.
If needed, use the field converter to translate the source field to the Cortex XDR syntax.
For example, if you use a different severity system, you need to use the converter to map your severity values to the Cortex XDR risks of Critical,
High, Medium, and Low.
You can also use regex to convert the fields to extract the data to facilitate matching with the Cortex XDR format. For example, if you need to map
the port, but your source field contains both the IP address and port (192.168.1.200:8080), to extract everything after the :, use the following
regex:
^[^:]*_
For additional context when you are investigating an incident, you can also map additional optional fields to fields in your alert source.
Abstract
Ingestion of logs and data requires a Cortex XDR Pro per GB license.
You can verify the connectivity status of a collector instance on the Collection Integrations page. Instances are grouped by integration, and a status icon shows
a summary of instance statuses for each integration. Expand the integration section to see the status of each individual instance, and hover over the status
icons to see details about warning or error statuses.
On the Collection Integrations page, instances in error status display an error icon. Hover over the error icon next to the instance name to see the error
message as received from the API.
Each status change of an instance is logged in the collection_auditing dataset. Querying this dataset can help you see all the connectivity changes of
an instance over time, the escalation or recovery of the connectivity status, and the error, warning, and informational messages related to status changes.
Example 56.
dataset = collection_auditing
|filter collector_type = "STRATA_IOT"
You can create correlation rules that are based on the fields in the collection_auditing dataset.
Example 57. Trigger collection alerts for error statuses on the STRATA_IOT collector
In this example, a correlation rule triggers an alert if an integration of the Strata IOT collector changes to error status.
Example XQL:
dataset = collection_auditing
|filter classification = "Error" and collector_type = "STRATA_IOT"
Severity: Medium
Category: Collection
The Dataset Management page enables you to manage your datasets and understand your overall data storage duration for different retention periods and
datasets based on your hot and cold storage licenses, and retention add-ons that extend your storage. You can view details about your Cortex XDR licenses
and retention add-ons by selecting Settings → Cortex XDR License. For more information on license retention and the defaults provided per license, see
License retention in Cortex XDR.
Cortex XDR enforces retention on all log-type datasets excluding Host Inventory, Vulnerability Assessment, Metrics, and Users.
Your current hot and cold storage licenses, including the default license retention and any additional retention add-ons to extend storage, are listed within the
Hot Storage License and Cold Storage License sections of the Dataset Management page. Whenever you extend your license retention, depending on your
requirements and license add-ons for both hot storage and cold storage, the add-ons are listed.
Cold storage, in addition to a cold storage license, requires compute units (CU) to run cold storage queries. For more information on CU, see Manage compute
units. For information on the CU add-on license, see Understand Cortex XDR license plans.
You can expand your license retention to include flexible Hot Storage based retention to help accommodate varying storage requirements for different
retention periods and datasets. This add-on license is available to purchase based on your storage requirements for a minimum of 1,000 GB. If this license is
purchased, an Additional Storage subheading in the Hot Storage License section is displayed on the Dataset Management page with a bar indicating how
much of the storage is used.
Only datasets that are already handled as part of the GB license are supported for this license. In addition, the retention configuration is only available
in Cortex XDR, as opposed to the public APIs or configuration from the parent MSSP tenant.
On any dataset configured to use Additional Hot Storage, you can edit the retention period. This enables you to view the current retention details for hot and
cold storage and configure the retention. This includes setting the amount of flexible hot storage-based retention designated for a dataset and the priority for
the dataset's hot storage. The priority is used when the storage limit is exceeded to determine which data is most critical to preserve.
2. In the Datasets table, right-click any dataset designated with flexible hot storage, and select Edit Retention Plan.
Additional hot storage: Set the amount of flexible hot storage-based retention designated for this dataset in months, where a month is calculated as
31 days.
Hot Storage Priority: Select the priority designated for this dataset's hot storage as either Low, Medium, or High. This is used when the storage limit
is exceeded. Data is first deleted from lowest to highest, and then from the oldest to latest timestamp.
4. Click Save.
Datasets table
For each dataset listed in the table, the following information is available:
Certain fields are exposed by default while others are hidden. An asterisk (*) appears beside every field that is exposed by default.
Datasets include dataset permission enforcements in the Cortex Query Language (XQL), Query Center, and XQL Widgets. For example, to view or
access any of the endpoints and host_inventory datasets, you need role-based access control (RBAC) permissions to the Endpoint Administration
and Host Inventory views. Managed Security Services Providers (MSSP) administration permissions are not enforced on child tenants, but only on the
MSSP tenant.
*TYPE: Displays the type of dataset based on the method used to upload the data. The possible values include: Correlation, Lookup, Raw, Snapshot, System, and User. For more information on each dataset type, see What are datasets?.
*LOG UPDATE TYPE: Event logs are updated either continuously (Logs) or the current state is updated periodically (State), as detailed in the Last Updated column.
*LAST UPDATED: Last time the data in the dataset logs were updated. This column is updated once a day. Therefore, if the dataset was created or updated by the target or lookup flows, it's possible that the Last Updated value is a day behind when the queries or reports were run, as it was before this column was updated.
*ADDITIONAL STORAGE: Amount of flexible hot storage-based retention designated for this dataset in months, where a month is calculated as 31 days.
*TOTAL DAYS STORED: Actual number of days that the data is stored in the Cortex XDR tenant, which is comprised of the HOT RANGE + the COLD RANGE.
*HOT RANGE: Details the exact period of the Hot Storage from the start date to the end date.
*COLD RANGE: Details the exact period of the Cold Storage from the start date to the end date.
*TOTAL SIZE STORED: Actual size of the data that is stored in the Cortex XDR tenant. This number is dependent on the events stored in the hot storage. For the xdr_data dataset, where the first 31 days of storage are included with your license, the first 31 days are not included in the TOTAL SIZE STORED number.
*ADDITIONAL SIZE STORED: Actual size of the additional flexible hot storage data that is stored in the Cortex XDR tenant in GB. This number is dependent on the events stored in the hot storage.
*AVERAGE DAILY SIZE: Average daily amount stored in the Cortex XDR tenant. This number is dependent on the events stored in the hot storage.
*HOT STORAGE PRIORITY: Indicates the priority set for the dataset's hot storage as either Low, Medium, or High. This is used when the storage limit is exceeded. Data is first deleted from lowest to highest, and then from the oldest to latest timestamp.
*TOTAL EVENTS: Number of total events/logs that are stored in the Cortex XDR tenant. This number is dependent on the events stored in the hot storage.
*AVERAGE EVENT SIZE: Average size of a single event in the dataset (TOTAL SIZE STORED divided by the TOTAL EVENTS). This number is dependent on the events stored in the hot storage.
*TTL: For lookup datasets, displays the value of the time to live (TTL) configured for when lookup entries expire and are removed automatically from the dataset. The possible values are:
Custom: Lookup entries expire according to a set number of days, hours, and minutes. The maximum number of days is 99999. For more information, see Set time to live for lookup datasets.
DEFAULT QUERY TARGET: Details whether the dataset is configured as your default query target in XQL Search, so when you write your queries you do not need to define a dataset. By default, only the xdr_data dataset is configured as the DEFAULT QUERY TARGET and this field is set to Yes. All other datasets have this field set to No. When setting multiple default datasets, your query does not need to mention any of the dataset names, and Cortex XDR queries the default datasets using a join.
TOTAL HOT RETENTION: Total hot storage retention configured for the dataset in months, where a month is calculated as 31 days.
TOTAL COLD RETENTION: Total cold storage retention configured for the dataset in months, where a month is calculated as 31 days.
Abstract
Learn how to import, delete, and interact with custom or third-party datasets in Cortex XDR.
Cortex XDR runs every Cortex Query Language (XQL) query against a dataset. A dataset is a collection of column:value sets. If you do not specify a dataset in your query, Cortex XDR runs the query against the configured default datasets, which by default is xdr_data. The xdr_data dataset contains all of the endpoint and network data that Cortex XDR collects. You can always change the default datasets using the Set as default option. You can also upload datasets as a CSV, TSV, or JSON file that contains the data you are interested in querying. These uploaded datasets are called lookup datasets.
To query a specific dataset, you can either:
Set a dataset as default, which enables you to query the dataset without specifying it in the query.
Name a specific dataset at the beginning of your query with the dataset stage command, as shown in the sketch below.
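For example, a minimal sketch that names a hypothetical lookup dataset directly in the query (the dataset name is illustrative):
dataset = my_lookup_dataset
| limit 10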
Dataset types
The type of dataset is based on the method used to upload the data. The possible types include:
Lookup: A dataset containing key-value pairs that can be used as a reference to correlate to events. For example, a user list with corresponding access
privileges. You can import or create a lookup dataset, and then reference the values for a certain key, run queries and take action. For more information,
see Lookup datasets.
Raw: Any dataset where PANW data is ingested out-of-the-box or third-party data is ingested using a configured dedicated collector.
Snapshot: A dataset that contains only the last successful snapshot of the data, such as Workday or ServiceNow CMDB tables.
User: A dataset saved from query results using the target command. The Type of such a dataset can be either User or Lookup.
Datasets in XQL
By default, forensic datasets are not included in XQL query results, unless the dataset query is explicitly defined to use a forensic dataset.
Cortex Query Language (XQL) supports using different languages for dataset and field names. In addition, when setting up your XQL query, it is important to
keep in mind the following:
The dataset formats supported are dependent on the data retention offerings available in Cortex XDR according to whether you want to query hot
storage or cold storage.
Hot Storage queries are performed on a dataset using the format dataset = <dataset name>. This is the default option.
dataset = xdr_data
Cold Storage queries are performed using the format cold_dataset = <dataset name>.
cold_dataset = xdr_data
All Cortex XDR system datasets, which are created out-of-the-box, are refreshed continuously and ingested in near real-time as the data comes in, except for the following:
Forensics datasets: The Forensics data is not configured to be updated by default. When you enable a collection in the Agent Settings profile, the data is collected only once unless you specify an interval. If you specify an interval, the data is collected every <interval> number of hours, with a minimum of 12 hours.
Query against a dataset by selecting it with the dataset command when you create an XQL query. For more information, see Create XQL query.
After your query runs, you can save the query results as a dataset by using the target stage command.
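For example, a hedged sketch that saves filtered results to a new dataset with the target stage, assuming the stage accepts a dataset name in this form (the field and dataset names are illustrative):
dataset = xdr_data
| fields agent_hostname, action_remote_ip
| filter action_remote_ip != null
| target type = dataset suspicious_connections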
Managing datasets
You can manage your datasets in Cortex XDR from the Settings → Configurations → Data Management → Dataset Management page.
Below are some of the main tasks available for all dataset types by right-clicking a particular dataset listed in the Datasets table. Only tasks that require further explanation are described below. Datasets can only be deleted if there are no other dependencies. For example, if a Correlation Rule is based on a dataset, you can't delete the dataset until you remove the dataset from the XQL query of the Correlation Rule.
For more information on tasks specific to lookup datasets, see Lookup datasets.
View Schema
Select View Schema to view the schema information for every field found in the dataset result set in the Schema tab after running the query in XQL. Each
system field in the schema is written with an underscore (_) before the name of the field in the FIELD NAME column in the table.
Set as default
Select Set as default to query the dataset without having to specify it in your queries in XQL by typing dataset = <name of dataset>. Once configured,
the DEFAULT QUERY TARGET column entry for this dataset is set to Yes. By default, this option is not available when right-clicking the xdr_data dataset as
this dataset is the only dataset configured as the DEFAULT QUERY TARGET as it contains all of the endpoint and network data that Cortex XDR collects. Once
you Set as default another dataset, you can always remove it by right-clicking the dataset and selecting Remove from defaults. When setting multiple default
datasets, your query does not need to mention any of the dataset names, and Cortex XDR queries the default datasets using a join.
Copy text to clipboard
Select Copy text to clipboard to copy the name of the dataset to your clipboard.
Abstract
Learn more about lookup datasets to correlate data from a data source with events in your environment.
Lookup datasets enable you to correlate data from a data source you provide with the events in your environment. For example, you can create a lookup with a
list of high-value assets, terminated employees, or service accounts in your environment. Use lookups in your search, detection rules, and threat hunting.
Lookups are stored as name-value pairs and are cached for optimal query performance and low latency.
Investigate threats and respond to incidents quickly with the rapid import of IP addresses, file hashes, and other data from CSV files. After you import the data, use lookup name-value pairs for joins and filters in threat hunting and general queries (see the sketch after this list).
Import business data as a lookup. For example, import user lists with privileged system access, or terminated employees. Then, use the lookup to create
allowlists and blocklists to detect or prevent those users from logging in to the network.
Create allowlists to suppress alerts from a group of users, such as users from authorized IP addresses that perform tasks that would normally trigger the
alert. Prevent benign events from becoming alerts.
Enrich event data. Use lookups to enrich your event data with name-value combinations derived from external data sources.
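For example, a hedged sketch that joins endpoint events against a hypothetical high_value_assets lookup; the lookup name and its hostname field are assumptions, and the join form mirrors the join examples used elsewhere in this guide:
dataset = xdr_data
| fields agent_hostname, event_type
| join type = inner (dataset = high_value_assets) as hva agent_hostname = hva.hostname
| limit 100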
You can import or create a lookup dataset, and then reference the values for a certain key, run queries, and take action. Lookup datasets are created by any of the following methods:
Manual upload of a CSV, TSV, or JSON file to Cortex XDR from the Dataset Management page. For more information, see Import a lookup dataset.
Saving query results to a lookup dataset. If the results are saved using the target stage, the Type can be either User or Lookup. For more information, see the target stage in the XQL Language Reference Guide.
After a lookup dataset is imported, you can always edit the dataset to update the data manually by right-clicking the dataset and selecting Edit.
A lookup dataset can only be deleted if there are no other dependencies. For example, if a Correlation Rule is based on a lookup dataset, you wouldn't be able
to delete the lookup dataset until you removed the dataset from the XQL query of the Correlation Rule.
Abstract
Learn more about importing data from an external file to create or update a lookup dataset in Cortex XDR.
You can import data from CSV, TSV, or JSON files into Cortex XDR to create or update lookup datasets.
The maximum size for the total data to be imported into a lookup dataset is 30 MB.
Field names can contain characters from different languages, special characters, numbers (0-9), and underscores (_).
Field names can't be duplicated or contain white spaces or carriage returns.
The file must not contain a byte array (binary data), as binary data can't be uploaded.
2. Browse to your CSV, TSV, or JSON file. You can only upload a TSV file if it contains a .tsv file extension.
3. (Optional) Under Name, type a new name for the target dataset.
By default, Cortex XDR uses the name of the original file as the dataset name. You can change this name to something that will be more meaningful for
your users when they query the dataset. For example, if the original file name is mrkdptusrsnov23.json, you can save the dataset as
marketing_dept_users_Nov_2023.
Dataset names can contain special characters from different languages, numbers (0-9) and underscores (_). You can create dataset names using
uppercase characters, but in queries, dataset names are always treated as if they are lowercase.
The name of a dataset created from a TSV file must always include the extension. For example, if the original file name is mrkdptusrsnov23.tsv, you
can save the dataset with the name marketing_dept_users_Nov_2023.tsv.
4. Select Replace the existing data in the dataset to overwrite the data in an existing lookup dataset with the contents of the new file.
6. After receiving a notification reporting that the upload succeeded, click Refresh to view the dataset in your list of datasets.
Abstract
You can only download a JSON file for a lookup dataset, where the Type is set to Lookup on the Dataset Management page. This option is not available for any other dataset type.
When you download a lookup dataset with field names in a foreign language, the downloaded JSON file displays the fields as COL_<randomstring> as
opposed to returning the fields in the foreign language as expected.
2. In the Datasets table, right-click the lookup dataset that you want to download as a JSON file, and select Download.
Abstract
Learn more about setting the time to live (TTL) for lookup datasets in Cortex XDR.
You can specify when lookup entries expire and are removed automatically from the lookup dataset by configuring the time to live (TTL). The time period of the
TTL interval is based on when the data was last updated. The default is forever and the entries never expire. You can also configure a specific time according
to the days, hours, and minutes. Expired elements are removed from the lookup dataset by a scheduled job that runs every five minutes.
2. In the Datasets table, right-click the lookup dataset, and select Set TTL.
3. Select one of the following to configure when lookup dataset entries expire and are removed:
Custom: Lookup entries expire according to a set number of days, hours, and minutes. The maximum number of days is 99999.
The TTL column in the Datasets table is updated with the changes and these changes are applied immediately on all existing lookup entries.
Abstract
Learn more about the monitored Cortex XDR datasets and dataset views activities.
Cortex XDR logs entries for events related to monitored dataset and dataset view activities. Cortex XDR stores the logs for 365 days. To view the dataset audit logs, select Settings → Management Audit Logs.
You can customize your view of the logs by adding or removing filters to the Management Audit Logs table. You can also filter the page result to narrow down
your search. The following table describes the default and optional fields that you can view in the Cortex XDR Management Audit Logs table:
Certain fields are exposed by default, while others are hidden. An asterisk (*) appears beside every field that is exposed by default.
Field Description
Severity*: Indicates the severity level of the log entry: Critical, High, Medium, Low, or Informational.
Type* and Sub-Type* Additional classifications of dataset logs (Type and Sub-Type):
Datasets:
Create Dataset
Delete Dataset
Update Dataset
Abstract
Learn more about what are Parsing Rules and what they are used for.
Parsing Rules requires a Cortex XDR Pro per GB license and a user with Cortex Account Administrator or Instance Administrator permissions.
Cortex XDR includes an editor for creating 3rd party Parsing Rules, which enables you to:
Remove unused data that is not required for analytics, hunting, or regulation.
Easily identify and resolve Parsing Rules errors so you can troubleshoot them quickly.
Test your Parsing Rules on actual logs and validate their outputs before implementation.
Parsing Rules take raw log input, perform an arbitrary number of transformations and modifications to the data using Cortex Query Language (XQL), and return zero, one, or more rows that are eventually inserted into the Cortex XDR tenant (see the sketch after this list).
Parsing Rules can be grouped together by a no-match policy. If all the rules of a group did not produce an output for a specific log record, the no-match policy defines what to do, such as drop the log or keep the log in some default format.
Upon ingestion, all fields are retained, even fields with a null value. You can also use XQL to query the parsed data for null values.
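As a hedged illustration of that flow, a minimal rule might extract a field from a JSON raw log, drop noisy records, and keep only the columns of interest. The vendor, product, dataset, and field names below are hypothetical:
[INGEST:vendor=my_vendor, product=my_product, dataset=my_vendor_my_product_parsed]
alter log_severity = json_extract_scalar(_raw_log, "$.severity")  // extract a value from the raw JSON log
| filter log_severity != "debug"  // drop debug records before ingestion
| fields log_severity, _raw_log;  // keep only the columns you need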
Abstract
Learn about the Parsing Rules editor User Defined Rules, Default Rules, Both, and Simulate views.
Parsing Rules requires a Cortex XDR Pro per GB license and a user with Cortex Account Administrator or Instance Administrator permissions.
Default Rules: Displays the parsing rules that are provided by default with Cortex XDR in read-only mode and a List of Errors section to view any errors in
your Parsing Rules.
Both: Side-by-side view of both the Default Rules and User Defined rules, so you can easily view the different rules on one screen. In addition, the List of Errors section helps you troubleshoot any errors in your Parsing Rules.
Simulate: Enables you to test your Parsing Rules on actual logs and validate their outputs, which helps minimize your errors when creating Parsing Rules.
The editor includes the following sections.
User defined: A list of the current User defined rules on the left side of the window.
XQL Samples: A table of the existing Cortex Query Language (XQL) raw data samples on the right side of the window, which contain sample logs
listing the Vendor, Product, Raw Log, and Sample Time. For each Vendor and Product, up to 5 different samples are available to choose from.
From this list, you can select the logs used to simulate the rule.
Logs Output: Displays in a table format the following columns per dataset at the bottom of the window.
Dataset: Displays the applicable dataset name and a line number associated to this dataset in the User defined section.
Logs Output: Displays the output logs that are available based on your User defined rules and the XQL Samples selected after simulating the results. When there is no output log to display, the text "Output logs is not available" is displayed with the corresponding error message. When there is no output due to a missing rule in the User defined section for the selected logs, the text "No output logs. You can change your parsing rules and try again" is displayed.
Input Logs: Displays the relevant input log with a right-click pivot to Show diff between the Output Logs and Input Logs.
Abstract
The Parsing Rules file consists of multiple sections of several types, which also represent the custom syntax specific to Parsing Rules.
Parsing Rules requires a Cortex XDR Pro per GB license and a user with Cortex Account Administrator or Instance Administrator permissions.
File structure
The Parsing Rules file consists of multiple sections of the following types, which also represent the custom syntax specific to Parsing Rules.
COLLECT (Optional): This section defines a rule that enables data reduction and data manipulation at the Broker VM to help avoid sending unnecessary data to the Cortex XDR server and reduce traffic, storage, and computing costs. In addition, the COLLECT section is used to manipulate, alter, and enrich the data before it’s passed to the Cortex XDR server. While this rule is optional to configure, once added, this rule runs before the INGEST section.
CONST (Optional): This section is used to define strings and numbers that can be reused multiple times within Cortex Query Language (XQL) statements in other INGEST sections by using $constName.
RULE (Optional): Rules are part of the XQL syntax, which are tagged with a name, and can be reused in the code in the INGEST sections by using [rule:ruleName].
INGEST (Mandatory): This section defines the resulting dataset into which rows are inserted after the Parsing Rules are applied.
The order of the sections is unimportant. The data of each section type gets grouped together during the parsing stage. Before any action takes place all
COLLECT, CONST, RULE, and INGEST objects are grouped together and collected to the same list.
Syntax
The syntax used in the Parsing Rules file is derived from XQL, but with a few modifications. This subset of XQL is called XQL for Parsing (XQLp).
For more information on the XQL syntax, see Cortex XQL Language Reference.
The COLLECT, CONST, INGEST, and RULE syntax is derived from XQL, but with the following modifications for XQLp:
Only the following XQL stages are permitted: alter, fields, filter, and join. In addition, a new call stage is supported, which is used to invoke another rule.
An inner type of join stage is only supported in CONST, INGEST, and RULE sections and is not supported in a COLLECT section.
Only the following XQL functions are permitted in all sections: parse_timestamp, parse_epoch, and regexcapture.
The regexcapture function is only supported in Parsing Rules and cannot be used in any other XQL query.
A join inner query is restricted to using a lookup as a data source and is only supported in XQLp stages.
There is no default lookup, so all join inner queries must start with dataset=<lookup> | ....
An IN condition can only take a sequence list, such as device_name in ("device1", "device2", "device3"), and not another XQL or XQLp inner query (see the sketch after this list).
C-style comments can be used anywhere throughout the Parsing Rules file:
// line comment
/* inner comment */
Every statement in the Parsing Rules file must end with a semicolon (;).
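For example, a hedged sketch of a RULE that combines these elements: an IN condition with a literal sequence list, and an inner join whose data source is a lookup. The allowed_devices lookup and its device_name field are assumptions:
// keep only events from known devices (lookup and field names are hypothetical)
[RULE:keep_known_devices]
filter device_name in ("device1", "device2", "device3")  /* IN takes a sequence list, not an inner query */
| join type = inner (dataset = allowed_devices) as lk device_name = lk.device_name;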
16.5.3.1 | INGEST
Abstract
Understanding how to write a [INGEST] section in a Parsing Rules file and the syntax to use.
An INGEST section is used to define the resulting dataset. The COLLECT, CONST, and RULE sections are only add-ons, used to help organize the INGEST
sections, and are optional to configure. Yet, a Parsing Rules file that contains no INGEST sections, generates no Parsing Rules. Therefore, the INGEST section
is mandatory to configure.
INGEST syntax is derived from Cortex Query Language (XQL) with a few modifications as explained in the Parsing Rules syntax. In addition, INGEST sections
contain the following syntax add-ons:
INGEST sections can have more than one XQLp statement, separated by a semicolon (;). Each statement creates a different Parsing Rule.
The following XQL functions and stages are also supported in the INGEST section:
drop takes a condition similar to the XQL filter stage (same syntax), but drops every log entry that matches that condition. One can think of it as a negative filter, although drop <condition> is not strictly equivalent to filter not <condition>.
drop can only appear last in a statement. No other XQLp rules can follow it.
INGEST sections take parameters rather than names (as RULE sections use); some parameters are mandatory and others are optional.
Parameter Description
vendor The vendor that the specified Parsing Rules apply to (mandatory).
product The product that the specified Parsing Rules apply to (mandatory).
dataset The name of the dataset to insert every row with the results after applying any of the specified Parsing Rules (mandatory).
Parameter Description
no_hit No-match strategy to use for the entire specified group of rules (optional). The default is keep.
If no_hit = drop, then in a scenario where none of the rules in the group generates output for a given log record, that record is
discarded.
If no_hit = keep, then in a scenario where none of the rules in the group generates output for a given log record, that record is
kept in the _raw_log field. This record is inserted into the group's dataset once, but every column holds NULL except for
_raw_log, which holds the original JSON log record.
ingestnull Defines whether null value fields are ingested (optional). By default this is set to true, so you only need to set this parameter when you
want to overwrite the default definition.
Each statement represents a different Parsing Rule in the same group as depicted in the following example:
Example 58.
[CONST]
DEVICE_NAME = "ngfw";
[rule:use_two_rules]
filter severity = "medium" | call basic_rule | call use_xql_and_another_rule;
[rule:basic_rule]
fields log_type, severity | filter log_type="eal" and severity="HIGH" and type="something";
[rule:use_xql_and_another_rule]call multiline_statement | filter severity = "medium";
[rule:multiline_statement]
alter url = json_extract(_raw_log, "$.url")
| join type = inner conflict_strategy = both (dataset=my_lookup) as inn url=inn.url
|filter severity = "medium";
[ingest:vendor=panw, product=ngfw, dataset=panw_ngfw_ds, no_hit=drop]
filter log_type="traffic" | alter url = json_extract(_raw_log, "$.url");
call use_two_rules
| join type = inner conflict_strategy = both (dataset=my_lookup) as inn severity=inn.severity
| fields severity, log_type
| drop device_name = $DEVICE_NAME;
This generates one group of two Parsing Rules for panw/ngfw, where all the data is ingested into the panw_ngfw_ds dataset.
Rule #1:
filter log_type="traffic" | alter url = json_extract(_raw_log, "$.url");
Rule #2:
filter severity = "medium"
| fields log_type, severity
| filter log_type="eal" and severity="HIGH" and type="something"
| alter url = json_extract(_raw_log, "$.url")
| join type = inner conflict_strategy = both (dataset=my_lookup) as inn url=inn.url
| filter severity = "medium"
| filter severity = "medium"
| join type = inner conflict_strategy = both (dataset=my_lookup) as inn severity=inn.severity
| fields severity, log_type
| drop device_name = $DEVICE_NAME
Since section order is unimportant, you do not have to declare a RULE or a CONST before using it in an INGEST section.
You can have multiple INGEST sections with the same vendor, product, dataset, and no_hit values. Yet, this can lead to unexpected results.
Consider the following example:
Example 59.
[ingest:vendor=panw, product=ngfw, dataset=panw_ngfw_ds, no_hit=keep]
filter raw_log not contains "alert";
[ingest:vendor=panw, product=ngfw, dataset=panw_ngfw_ds, no_hit=keep]
filter device_type not contains "agent";
Let lw be a log row. If lw.raw_log contains an alert and lw.device_type contains an agent, then lw is inserted twice into the panw_ngfw_ds dataset, as every section is standalone.
To eliminate these kinds of errors and misunderstandings, it is highly advised to group all rules having the same vendor, product, dataset, and no_hit values in a single INGEST section.
Logs that were discarded by a drop stage are considered ingested with a no-match policy. This means they are not kept even if no_hit = keep.
Keep in mind that all rules inside a group get evaluated independently. This is in contrast to firewall-like rules, which stop evaluating the first rule
that is able to make a decision. Therefore, without proper filtering, it is possible to ingest the same log more than once.
You can override the default raw dataset in INGEST sections. For more information, see Parsing Rules Raw Dataset.
Cortex XDR supports configuring case sensitivity in Parsing Rules only within the INGEST section, using the config stage's case_sensitive parameter.
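For example, a hedged sketch of an INGEST statement that disables case sensitivity with the config stage; the vendor, product, and dataset names are hypothetical, and the exact placement of the stage may vary:
[INGEST:vendor=my_vendor, product=my_product, dataset=my_vendor_my_product_parsed]
config case_sensitive = false  // comparisons in this statement are treated as case-insensitive
| filter log_type = "traffic";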
You can add a single tag to the ingested data as part of the ingestion flow that you can easily query. You can add tags as part of the INGEST section or use both the INGEST and RULE sections. The following are examples of RULE sections that add tags:
Example 60.
[RULE:new_tag_rule]
tag add "test";
Example 61.
[RULE:new_tag_rule]
tag add "test1", "test2", "test3";
16.5.3.2 | COLLECT
Abstract
Understand how to write a [COLLECT] section in a Parsing Rules file, and the syntax to use.
A COLLECT section defines a rule that enables data reduction and data manipulation at the Broker VM to help avoid sending unnecessary data to the Cortex
XDR server and reduces traffic, storage, and computing costs. In addition, the COLLECT section is used to manipulate, alter, and enrich the data before it’s
passed to the Cortex XDR server. While this rule is optional to configure, once added, this rule runs before the INGEST section.
To avoid performance issues on the Broker VM, Cortex XDR does not permit all Parsing Rules to run on the Broker VM by default, but only the Parsing Rules
that you designate.
The Broker VM is directly affected by the [COLLECT] rules you create, so depending on the complexity of the rules more hardware resources on the Broker
VM may be required. As a result, ensure that your Broker VM meets the following minimum hardware requirements to run [COLLECT] rules:
8-core processor
8GB RAM
512GB disk
Plan for a max of 10K eps (events per second) per core.
COLLECT syntax is derived from Cortex Query Language (XQL) with a few modifications as explained in the Parsing Rules syntax. In addition, COLLECT rules
contain the following syntax add-ons:
COLLECT rules can have more than one XQLp statement, separated by a semicolon (;). Each statement creates a different data reduction and
manipulation at the Broker VM for a different vendor and product.
While the XQL stages alter and fields are permitted in COLLECT rules for various vendors and products, you should avoid using them for supported vendors that can be used for Analytics, as these stages can disrupt the operation of the Analytics Engine. For a list of these vendors, see the Visibility of logs and alerts from external sources table, specifically those vendors with Normalized Log Visibility.
drop takes a condition similar to the XQL filter stage (same syntax), but drops every log entry that matches that condition. One can think of it as a negative filter, although drop <condition> is not strictly equivalent to filter not <condition>.
drop can only appear last in a statement. No other XQLp syntax can follow it.
COLLECT sections take parameters, where some are mandatory and others optional.
Parameter Description
vendor The vendor that the specified COLLECT rule for data reduction and data manipulation at the Broker VM applies to (mandatory).
product The product that the specified COLLECT rule for data reduction and data manipulation at the Broker VM applies to (mandatory).
target_brokers Specifies the list of Brokers to run the COLLECT rule for data reduction and data manipulation based on the vendor and product
configured (mandatory). When target_brokers=*, the COLLECT rule applies to all the data collected by the Broker VM applets.
The CSV Collector applet is not affected by the COLLECT rules applied to a Broker VM.
no_hit No-match strategy to use for the entire specified group of COLLECT rules (optional). The default is keep.
If no_hit = drop, then in a scenario where none of the COLLECT rules in the group generates output for a given event, that
event is discarded.
If no_hit = keep, then in a scenario where none of the COLLECT rules in the group generates output for a given event, that
event is passed to the Cortex XDR server.
The following is an example of using a COLLECT rule to filter data for a specific vendor and product that will run before the INGEST section.
Example 62.
[COLLECT:vendor="Apache", product="ApacheServer", target_brokers = (bvm1, bvm2, bvm3), no_hit = drop]
alter source_log = json_extract_scalar(_raw_log, "$.source")
| filter source_log = "WebApp-Logs"
| fields source_log, _raw_log;
[INGEST:vendor="Apache", product="ApacheServer", target_dataset = "dvwa_application_log"]
alter log_timestamp = json_extract_scalar(_raw_log, "$.timestamp")
| alter log_msg = json_extract_scalar(_raw_log, "$.msg")
| alter log_remote_ip = json_extract_scalar(_raw_log, "$.Remote_IP")
| alter scanned_ip = json_extract_scalar(_raw_log, "$.Scanned_IP")
| fields log_msg ,log_remote_ip ,log_timestamp ,source_log ,scanned_ip , _raw_log;
There are no COLLECT rules by default, so all collected events are forwarded by the Broker VM to the Cortex XDR server.
To reduce the amount of data transmitted to Cortex XDR from the broker, use filters to drop logs. Yet, be aware that once the logs are modified using
alter or fields stages, the Broker VM will convert the original log into a JSON format, which could increase the data size being sent from the broker
to Cortex XDR.
When COLLECT rules are defined, the designated Broker VMs check every collected event against each rule. When there is a match for a given product or vendor, the Broker VM checks whether the event meets the filter criteria.
If it meets the criteria, the event is passed to the Cortex XDR server.
If it doesn't meet the criteria:
- If no_hit=drop, this COLLECT rule does not pass the event. Yet, the event still goes through other rules on this Broker VM.
- If no_hit=keep, the event is passed to the Cortex XDR server, and goes through other rules on this Broker VM.
When the evaluated event doesn't match any product or vendor for a defined COLLECT rule, the event is passed to the Cortex XDR server.
16.5.3.3 | CONST
Abstract
Learn how to write a [CONST] section in a Parsing Rules file and the syntax to use.
A CONST section is used to define strings and numbers that can be reused multiple times within Cortex Query Language (XQL) statements in other INGEST
sections by using $constName. This can be helpful to avoid writing the same value in multiple sections, similar to constants in modern programming
languages.
Example 63.
[CONST]
DEFAULT_DEVICE_NAME = "firewall3060"; // string
FILE_REGEX = "c:\\users\\[a-zA-Z0-9.]*"; // complex string
my_num = 3; /* int */
An example of using a CONST inside XQL statements in other INGEST sections using $constName:
The dollar sign ($) must be adjacent to the [CONST] name, without any whitespace in between.
...
| filter device_name = $DEFAULT_DEVICE_NAME
| alter new_field = JSON_EXTRACT(field, $FILE_REGEX)
| filter age < $MAX_TIMEOUT
| join type=$DEFAULT_JOIN_TYPE conflict_strategy=$DEFAULT_JOIN_CONFLICT_STRATEGY (dataset=my_lookup) as inn url=inn.url
...
Only quoted or integer terminal values are considered valid for CONST sections.
Example 64.
[CONST]
WORD_CONST = abcde; //invalid
func_val = regex_extract(_raw_log, "regex"); // not possible
RECURSIVE_CONST = $WORD_CONST; // not terminal - not possible
CONST sections are meant to replace values. Other types, such as column names, are not supported:
...
| filter $DEVICE_NAME = "my_device" // illegal
...
CONST names must be unique inside a section, and across all sections of the file. You cannot have the same CONST name defined again in the same
section, or in any other CONST sections in the file.
Since section order is unimportant, you do not have to declare a CONST before using it. You can have the CONST section written below other sections that
use those CONST sections.
CONST syntax is derived from XQL, but with a few modifications as explained in the Parsing Rules syntax.
16.5.3.4 | RULE
Abstract
Understanding how to write a [RULE] section in a Parsing Rules file and the syntax to use.
Rules are very similar to functions in modern programming languages. They are essentially pieces of Cortex Query Language (XQL) syntax, tagged with a name (an alias), for easier code reuse and to avoid code duplication. A RULE is an add-on to the Parsing Rules syntax and is optional to configure.
RULE syntax is derived from XQL with a few modifications as explained in the Parsing Rules syntax.
For more information on the XQL syntax, see Cortex XQL Language Reference guide.
Example 65.
[rule:filter_alerts]
filter raw_log not contains "alert";
Rules are invoked by using a call keyword as depicted in the following example:
Example 66.
[rule:filter_alerts]
filter raw_log not contains "alert";
[rule:use_another_rule]
filter severity="LOW" | call filter_alerts | fields - raw_log;
This is equivalent to:
[rule:use_another_rule]
filter severity="LOW" | filter raw_log not contains "alert" | fields - raw_log;
Rule names are not case-sensitive. They can be written in any user-desired casing, such as UPPER_SNAKE, lower_snake, camelCase, and CamelCase. For example, MY_RULE=My_Rule=my_rule.
Rule names must be unique across the entire file. This means you cannot have the same rule name defined more than once in the same file.
Since section order is unimportant, you do not have to declare a rule before using it. You can have the rule definition section written below other
sections that use this rule.
You can add a single tag to the ingested data as part of the ingestion flow that you can easily query. You can add tags using both the INGEST and RULE
sections.
Example 67.
[RULE:new_tag_rule]
tag add "test";
Example 68.
[RULE:new_tag_rule]
tag add "test1", "test2", "test3";
You can also add tags using only the INGEST section. For more information, see INGEST.
Abstract
Cortex XDR includes an editor for creating 3rd party Parsing Rules.
Parsing Rules requires a Cortex XDR Pro per GB license and a user with Cortex Account Administrator or Instance Administrator permissions.
Cortex XDR provides a number of default Parsing Rules that you can easily override as required using XQL and additional custom syntax that is specific to
creating Parsing Rules. Before creating your own Parsing Rules, we recommend you review the following:
2. Select the Parsing Rules editor view for writing your Parsing Rules.
Default Rules: Select this view to understand which parsing rules are provided by default with Cortex XDR in read-only mode.
Both: Select this view to see the Parsing Rules editor as well as the default rules as you write your Parsing Rules.
Simulate: Select this view to test your Parsing Rules on actual logs and validate their outputs as you write your Parsing Rules.
3. Write your Parsing Rules using XQL syntax and the syntax specific for Parsing Rules.
4. (Optional) Test your Parsing Rules on actual logs and validate their outputs using the Simulate view.
You need Cortex XDR administrator or Instance Administrator permissions to access the Simulate view and perform these tests.
b. For the User defined rules that you want to test, select the logs from the XQL Samples listed that you want to use to simulate the rule. For each
Vendor and Product, up to 5 different samples are available to choose from.
You can also pivot (right-click) any of the logs that you’ve selected to Simulate the rules.
d. Review the results in the Logs output table to determine if your User defined rules are fine or need further changes.
The Logs output table displays the following columns per dataset at the bottom of the window.
Dataset: Displays the applicable dataset name and a line number associated with this dataset in the User defined rules section.
Output Logs: Displays the available output log. When there is no output log to display, the text "Output logs is not available" is displayed with the corresponding error message. When there is no output due to a missing rule in the User defined rules section for the selected logs, the text "No output logs. You can change your parsing rules and try again" is displayed.
Input Logs: Displays the relevant input log with a right-click pivot to Show diff between the Output Logs and Input Logs.
e. (Optional) Modify your User defined rules and repeat steps #2-4 until you are satisfied with the results.
Abstract
Parsing Rules requires a Cortex XDR Pro per GB license and a user with Cortex Account Administrator or Instance Administrator permissions.
To help you easily identify and resolve parsing errors in Cortex XDR, all parsing errors are saved to a separate dataset called parsing_rules_errors. This
dataset displays important information about each error, including the RAW_LOG, log metadata, Parsing Rule metadata, and error description, which you need
to effectively troubleshoot the problem. In addition, a Parsing Rules Error notification is sent to the Notification Center whenever a new parsing error is added to
the dataset.
The following categories of parsing errors can occur:
Compilation Errors: Unable to compile a rule for different reasons, including invalid function parameters, such as an invalid regex.
Data Format Errors: A mismatch between the expected data type, such as CEF, LEEF, or JSON, and the actual data, such as TEXT or CSV.
Runtime Errors: Unable to apply a rule to the data, such as an attempt to add a String to a Number.
All parsing errors and Cortex Data Model (XDM) errors are saved to a dataset called parsing_rules_errors. The following table describes the fields that
are available when running a query in XQL Search for the parsing_rules_errors dataset in alphabetical order.
Some errors can only be found after the applicable logs are collected in Cortex XDR.
_BROKER_DEVICE_ID: Displays the ID of the Broker VM associated to the log that triggered this error. Type: Log Metadata
_BROKER_DEVICE_IP: Displays the IP address of the Broker VM associated to the log that triggered this error. Type: Log Metadata
_BROKER_DEVICE_NAME: Displays the device name of the Broker VM associated to the log that triggered this error. Type: Log Metadata
_COLLECTOR_HOSTNAME: Displays the host name of the data collector associated to the log that triggered this error. Type: Log Metadata
_COLLECTOR_ID: Displays the ID of the data collector associated to the log that triggered this error. Type: Log Metadata
_COLLECTOR_IP_ADDRESS: Displays the IP address of the data collector associated to the log that triggered this error. Type: Log Metadata
_COLLECTOR_NAME: Displays the name of the data collector associated to the log that triggered this error. Type: Log Metadata
_COLLECTOR_TYPE: Displays the type of data collector associated to the log that triggered this error. Type: Log Metadata
CONTENT_ID: Displays the package_id of a content pack containing the default Parsing Rule for which this error was generated. Type: Parsing Rule
CREATED_AT: Displays a timestamp for when the rule, which generated the error, was created. Type: Parsing Rule
END_LINE: Displays the last line of the particular rule associated to this error. Type: Parsing Rule
ERROR_CATEGORY: Displays the category of the error, which can be one of the following: Compile (compilation error, such as a syntax error, missing argument, or invalid regex), Data format (errors relating to the data format, such as received LEEF when expected CEF), or Runtime (unable to apply a rule to the data). Type: N/A
_FINAL_REPORTING_DEVICE_IP: Displays the IP address of the device that the log that triggered this error was collected from. Type: Log Metadata
_FINAL_REPORTING_DEVICE_NAME: Displays the name of the device that the log that triggered this error was collected from. Type: Log Metadata
_ID: Displays the Rule ID that triggered this error. Type: Parsing Rule
INGEST_NULL: Displays a boolean value of either TRUE or FALSE to indicate whether null value fields are configured to be ingested or not. By default, null fields are ingested. Type: Parsing Rule
NO_HIT: Displays the no-match strategy configured for the rule group that generated the parsing error. Type: Parsing Rule
_PRODUCT: Displays the defined PRODUCT associated to the log (for data format errors) or rule (for compilation and runtime errors) that triggered this error. Type: Log Metadata or Parsing Rule
RAW_LOG: Displays the raw log for the Parsing Rule error or the parsed log for the Data Model Rule error. Type: Raw log
_REPORTING_DEVICE_IP: Displays the IP address of the device that the log that triggered this error originated from. Type: Log Metadata
_REPORTING_DEVICE_NAME: Displays the name of the device that the log that triggered this error originated from. Type: Log Metadata
RULE_TYPE: Displays the type of rule that triggered this error. Type: Parsing Rule
START_LINE: Displays the first line of the particular rule associated to this error. Type: Parsing Rule
TARGET_DATASET: Displays the target dataset associated to the rule that triggered this error. Type: Parsing Rule
_TIME: Displays the timestamp when the error was generated. Type: Raw log
_VENDOR: Displays the defined VENDOR associated to the log (for data format errors) or rule (for compilation and runtime errors) that triggered this error. Type: Raw log or Parsing Rule
XDRC_ID: Displays the ID of the XDR Collector associated to the log that triggered this error. Type: Log Metadata
XDRC_IP: Displays the IP address of the XDR Collector associated to the log that triggered this error. Type: Log Metadata
XDRC_NAME: Displays the name of the XDR Collector associated to the log that triggered this error. Type: Log Metadata
XQL_TEXT: Displays the specific section of the rule related to the error generated. Type: Parsing Rule
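For example, a hedged sketch of an XQL Search query over this dataset that surfaces recent compilation errors, assuming the columns are exposed under lowercase names:
dataset = parsing_rules_errors
| filter error_category = "Compile"
| fields _time, _vendor, _product, xql_text
| sort desc _time
| limit 50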
Abstract
Each vendor and product has its own raw dataset with its own default format that can be overridden in an INGEST section.
Parsing Rules requires a Cortex XDR Pro per GB license and a user with Cortex Account Administrator or Instance Administrator permissions.
Each vendor and product has its own raw dataset that uses the format <vendor>_<product>_raw. For example, for Palo Alto Networks Next-Generation
Firewall, the dataset is called panw_ngfw_raw. This raw dataset by default keeps all raw logs, whether ingested or dropped for other datasets.
You can override the default raw dataset by creating an INGEST section that refers to that dataset, as sketched below.
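As a hedged sketch only, an override of the panw_ngfw_raw dataset might keep everything except debug records; the filter condition is arbitrary:
[INGEST:vendor=panw, product=ngfw, dataset=panw_ngfw_raw, no_hit=keep]
filter _raw_log not contains "debug";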
Save your ingested, parsed data in an external location by exporting your event logs to a temporary GCP storage bucket.
This feature requires a Cortex XDR Pro license and an Event Forwarding add-on license. Only Administrators have access to this screen.
You can save your ingested, parsed data in an external location by exporting your event logs to a temporary storage bucket on Google Cloud Platform (GCP),
from where you can download them for up to 7 days.
Use the Event Forwarding page to activate your Event Forwarding licenses and to retrieve the path and credentials of your external storage destination on GCP.
Once this page is activated, Cortex XDR automatically creates the GCP bucket.
1. Under Settings → Configurations → Data Management → Event Forwarding, activate the licenses in the Activation section.
Enable GB Event Forwarding to export parsed logs for Cortex XDR Pro per GB to an external SIEM for storage. This enables you to keep data in
your own storage in addition to the Cortex XDR data layer, for compliance requirements and machine learning purposes. The exported logs are
raw data, without any stories. Cortex XDR exports all the data without filtering or configuration options.
Enable Endpoints Event Forwarding to export raw endpoint data for Cortex XDR Pro EP and Cloud Endpoints. The exported logs are raw data,
without any stories. Cortex XDR exports a subset of the endpoint data without filtering or configuration options.
The Destination section displays the details of the GCP bucket created by Cortex XDR, where your data is stored for 7 days. The data is compressed
and saved as a line-delimited JSON gzip file.
b. Generate and download the Service Account JSON WEB TOKEN, which contains the access key.
Save it in a secure location. If you need to regenerate the access token, Replace and download a new token. This action invalidates the previous
token.
The token provides access to all your data stored in this bucket and must be saved in a safe place.
Use the storage path and access key to manually retrieve your files or use an API for automated retrieval.
c. Using the storage path and the access key, retrieve your files manually or using an API.
4. (Optional) Use the Pub/Sub subscription to ensure reliable data retrieval without any loss.
b. Configure your application or system to receive messages from the Pub/Sub subscription.
Whenever a new file is added to the GCS bucket, a message is sent to the Pub/Sub subscription. The object path of the file in the bucket has the
prefix internal/.
c. Process the received message to initiate the download of the corresponding file.
Abstract
Learn more about the included/excluded fields by event type for Endpoint Event Forwarding in Cortex XDR.
Endpoints Event Forwarding exports ingested, parsed endpoint data for Cortex XDR Pro EP and Cloud Endpoints. The exported logs are raw data, without any stories. Cortex XDR exports the data without filtering or configuration options.
The table below lists the types of events exported for the endpoints and the fields that are included and excluded:
action_remote_ip action_proxy
action_remote_port action_network_app_ids
action_local_ip action_network_rule_ids
action_local_port action_network_dpi_fields
action_network_connection_id action_network_is_loopback
action_network_is_server action_upload
action_network_creation_time action_download
action_total_upload action_network_stats_seq
action_total_download action_network_is_ipv6
action_network_protocol
action_network_stats_is_last
action_process_os_pid action_process_is_causality_root
action_process_instance_id action_process_is_replay
action_process_image_md5 action_process_yara_file_scan_result
action_process_image_sha256 action_process_wf_verdict
action_process_image_path action_process_static_analysis_score
action_process_image_name execution_actor_causality_id
action_process_image_extension action_process_ns_pid
action_process_image_command_line action_process_container_id
action_process_signature_product action_process_is_container_root
action_process_signature_vendor action_process_image_command_line_indices
action_process_signature_is_embedded action_process_is_special
action_process_signature_status action_process_ns_user_sid
action_process_integrity_level action_process_ns_user_real_sid
action_process_username action_process_file_size
action_process_user_sid action_process_file_create_time
action_process_in_txn action_process_file_mod_time
action_process_pe_load_info action_process_remote_session_ip
action_process_peb action_process_file_info
action_process_peb32 action_process_device_info
action_process_last_writer_actor execution_actor_instance_id
action_process_token action_process_user_real_sid
action_process_privileges action_process_requested_parent_pid
action_process_fds action_process_requested_parent_iid
action_process_scheduled_task_name
action_process_termination_date
action_process_instance_execution_time
action_process_termination_code
action_file_name action_file_yara_file_scan_result
action_file_previous_file_path action_file_dir_query
action_file_previous_file_name action_file_previous_device_info
action_file_md5 action_file_device_info
action_file_sha256 action_file_reparse_path
action_file_size action_file_reparse_count
action_file_attributes action_file_dirty_reason
action_file_create_time action_file_remote_ip
action_file_mod_time action_file_remote_port
action_file_access_time action_file_remote_file_ip
action_file_type action_file_remote_file_host
action_file_operation_flags action_file_sec_desc
action_file_mode action_file_previous_file_extension
action_file_owner action_file_extension
action_file_owner_name action_file_archive_list
action_file_group action_file_contents
action_file_group_name
action_file_device_type
action_file_signature_product
action_file_signature_vendor
action_file_signature_is_embedded
action_file_signature_status
action_file_pe_info
action_file_prev_type
action_file_last_writer_actor
action_file_is_anonymous
Registry action_registry_value_type
action_registry_key_name
action_registry_data
action_registry_value_name
action_registry_old_key_name
action_registry_file_path
action_registry_return_val
action_remote_process_os_pid action_remote_process_is_causality_root
action_remote_process_instance_id action_remote_process_is_replay
action_remote_process_image_md5 action_remote_process_image_extension
action_remote_process_image_sha256 action_remote_process_image_command_line_indices
action_remote_process_image_path action_remote_process_is_special
action_remote_process_image_name action_remote_process_file_size
action_remote_process_image_command_line action_remote_process_file_create_time
action_remote_process_signature_product action_remote_process_file_mod_time
action_remote_process_signature_vendor action_remote_process_file_info
action_remote_process_signature_is_embedded
action_remote_process_signature_status
action_remote_process_thread_start_address
action_remote_process_integrity_level
action_remote_process_username
action_remote_process_user_sid
address_mapping
action_module_md5 action_module_yara_file_scan_result
action_module_sha256 action_module_file_size
action_module_base_address action_module_file_create_time
action_module_image_size action_module_file_mod_time
action_module_signature_product action_module_file_access_time
action_module_signature_vendor action_module_device_info
action_module_signature_is_embedded action_module_wf_verdict
action_module_signature_status
action_module_file_info
action_module_last_writer_actor
action_module_other_load_location
action_module_page_protection
action_module_system_properties
action_module_code_integrity
action_module_boot_code_integrity
action_username
action_user_status_sid
action_user_session_id
action_user_is_local_session
action_powered_off
agent_status_component
host_metadata_hostname
host_metadata_domain
The table below lists the common fields for all event types and the fields that are included and excluded.
Common Fields For All Event Types Included Field Excluded Field
agent_hostname event_utc_diff_minutes
agent_interface_map manifest_file_version
agent_os_sub_type source_message_id
agent_os_type zip_id
agent_version agent_request_time
agent_id server_request_time
agent_ip_addresses agent_id_hash
agent_ip_addresses_v6 agent_id_hash_bre
backtrace_identities
_product
_vendor
actor_fields
agent_is_vdi
event_type event_is_replay
event_sub_type event_impersonation_status
event_id event_is_simulated
event_timestamp event_user_presence
event_rpc_interface_uuid agent_host_boot_time
event_rpc_func_opnum agent_session_start_time
event_validity_enum
event_invalidity_field
event_rpc_inteface_version_major
event_rpc_inteface_version_minor
event_rpc_protocol
event_address_mapped
event_user_presence_status
os_actor_local_port actor_process_auth_id
os_actor_primary_user_sid actor_process_causality_id
os_actor_primary_username actor_process_ns_pid
os_actor_process_command_line actor_process_session_id
os_actor_process_image_md5 actor_process_signature_is_embedded
os_actor_process_image_name actor_process_signature_product
os_actor_process_image_path actor_process_signature_vendor
os_actor_process_image_sha256 actor_remote_host
os_actor_process_signature_status actor_remote_pipe_name
os_actor_process_logon_id actor_remote_port
os_actor_process_os_pid actor_rpc_interface_version_major
os_actor_remote_ip actor_rpc_interface_version_minor
os_actor_process_instance_id actor_rpc_protocol
os_actor_thread_thread_id actor_type
actor_rpc_func_opnum
actor_rpc_interface_uuid
actor_process_device_info
actor_process_execution_time
actor_process_file_create_time
actor_process_file_mod_time
actor_process_file_size
actor_process_image_extension
actor_process_instance_id
actor_process_command_line_indices
actor_process_integrity_level
actor_process_is_special
actor_process_last_writer_actor
actor_process_instance_id
actor_thread_thread_id
actor_is_injected_thread
actor_causality_id
actor_effective_username
actor_effective_user_sid
Learn more about managing and tracking your compute units usage for API and Cold Storage XQL queries.
Cortex XDR uses compute units (CU) for these types of queries:
Cold Storage Queries: Cold Storage is a data retention offering for cheaper storage, usually for long-term compliance needs with limited search options. You can perform queries on Cold Storage data using the dataset format cold_dataset = <dataset name>, which consumes CU according to the timeframe, complexity, and number of Cold Storage response results of each XQL Cold Storage query.
When you query Cold Storage data, the rewarmed data is saved in a temporary hot storage cache that is available for subsequent queries on the same
time-range at no additional cost. The rewarmed data is available in the cache for 24 hours and on each re-query the cached data is extended for 24
hours, for up to 7 days.
The CU consumption of Cold Storage queries is based on the number of days in the query time frame. For example, when querying 1 hour of a specific day, the CU of querying that entire day are consumed. When querying 1 hour that spans 2 days, such as from 23:50 to 00:50 of the following day, the CU of querying these two days are consumed.
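For example, a hedged sketch of a Cold Storage query kept to a narrow field set; the time range you select in the Query Builder determines how many days of CU are consumed:
cold_dataset = xdr_data
| fields agent_hostname, event_type, event_timestamp
| limit 100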
Abstract
Learn more about how compute units (CU) work according to your license and the available options after reaching your quota.
Cortex XDR provides a free daily quota of compute units (CU) allocated according to your license size. Queries called without enough quota will fail. To expand
your investigation capabilities, you can purchase additional CU by enabling the Compute Unit add-on.
The Compute Unit add-on provides an additional 1 compute unit per day, in addition to your free daily quota. For example, if you have allocated 5 free daily
CU, with the add-on you will have a total of 6 daily compute units. The CU are refreshed every 24 hours according to UTC time. You can purchase a minimum
of 50 compute units.
To gauge how many CU you require, Cortex XDR provides a 30-day free trial period with a total of three times your allocated CU to run XQL API and Cold Storage queries. You can then track the cost of each XQL API and Cold Storage query response on the Compute Units Usage page. In addition, Cortex XDR sends a notification when the Compute Units add-on has reached your daily threshold.
To enable the add-on, select Settings → Configurations → Cortex XDR License → Addons tile, and select the Compute Unit tile and Enable.
2. In the Daily Usage in Compute Units section, monitor the amount of quota units used over the past 24 hours and the amount of free daily quota allocated
according to your license size and the additional amount you have purchased. The time frame is calculated according to UTC time.
3. In the Compute Units over last 30 Days section, track your quota usage over the past 30 days. The red line represents your daily license quota. For Managed Security tenants, make sure you select the tenant for which you want to display the information from the MSSP Tenant Selection drop-down menu. To investigate further:
Hover over each bar to view the total number of query units used each day.
Select a bar to display in the Compute Units Usage table the list of queries executed on the selected day.
4. In the Compute Units Usage table, investigate all the queries that were executed on your tenant. For Managed Security tenants, make sure you select
from the MSSP Tenant Selection drop-down menu, the tenant for which you want to display the information. You can filter and sort according to the
following fields.
Timestamp
Compute Unit Usage: Displays how many query units were used to execute the query.
Tenant: Appears only in a Managed Security tenant. Displays which tenant executed an API query or Cold Storage query.
In the Compute Units Usage table, locate an XQL API or Cold Storage query, right-click and select Show Results.
The query is displayed in the query field of the Query Builder where you can view the query results. For more information, see How to build XQL queries.
XQL is the Palo Alto Networks Cortex Query Language used in Cortex XDR.
XQL is the Cortex Query Language. It allows you to form complex queries against data stored in Cortex XDR. This section introduces XQL, and it provides
reference information on the various stages, functions, and aggregates that XQL supports.
Abstract
Learn more about the Cortex Query Language features to query for raw network and endpoint data.
The Cortex Query Language (XQL) enables you to query for information contained in a wide variety of data sources in Cortex XDR for rigorous endpoint and
network event analysis. Queries require a dataset, or data source, to run against. Unless otherwise specified, the query runs against the xdr_data dataset,
which contains all raw log information that Cortex XDR collects from all Cortex product agents, including EDR data, and PAN NGFW data. You can also import
data from third parties and then query against those datasets as well.
You submit XQL queries to Cortex XDR using the Incident Response → Investigation → Query Builder → XQL Search user interface.
XQL is similar to other query languages, and it uses some of the same functions as can be found in many SQL implementations, but it is not SQL. XQL forms
queries in stages. Each stage performs a specific query operation and is separated by a pipe (|) character. To help you create an effective XQL query with the
proper syntax, the query field in the user interface provides suggestions and definitions as you type. For example, the following query uses three stages to
identify the dataset to query, identify the field to be retrieved from the dataset, and then set a filter that identifies which records should be retrieved as part of
the query:
dataset = xdr_data
| fields os_actor_process_file_size as osapfs
| filter to_string(osapfs) = "12345"
XQL supports:
Aggregations.
Queries against presets, which are collections of information that are specific to a given type of network or endpoint activity, such as authentication or file
transfers.
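For example, a minimal sketch of an aggregation over the xdr_data dataset (the agent_hostname and action_local_ip fields are taken from the xdr_data schema; adjust them to the data you want to summarize):
dataset = xdr_data
| comp count(action_local_ip) as event_count by agent_hostname
| sort desc event_count
| limit 10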
Abstract
Learn more about the Cortex Query Language structure when creating a query.
Cortex Query Language (XQL) queries usually begin by defining a data source, be it a dataset or a preset. After that, you use zero or more stages to form the
XQL query. Each stage is delimited using a pipe (|). The function performed by each stage is identified by the stage keyword that you provide.
Specifying a dataset is not required because Cortex XDR uses xdr_data as the default dataset. If you have more than one dataset or lookup, you can change
your default dataset by navigating to Settings → Configurations → Data Management → Dataset Management, right-click on the appropriate dataset, and
select Set as default.
In the simplest case, you can specify a dataset using one of the following formats.
Hot Storage queries are performed on a dataset using the format dataset = <dataset name>. This is the default option.
dataset = xdr_data
Cold Storage queries are performed using the format cold_dataset = <dataset name>.
cold_dataset = xdr_data
You can also build a query that investigates data in both a cold dataset and hot dataset in the same query. In addition, as the hot storage dataset format is the
default option and represents the fully searchable storage, this format is used throughout this guide. For more information on hot and cold storage, see Dataset
management.
When you use the hot storage default format without additional stages, the query returns every xdr_data record contained in your Cortex XDR instance over the time range that you provide to
the Query Builder user interface. This can be a large amount of data, which might take a long time to retrieve. You can use a limit stage to specify how many
records you want to retrieve.
The records resulting from this query, or the result set, are returned in unsorted order. Every time you run the query, it will probably return a different set of
records in no specific order. To create a predictable result set, use other stages to define the sort order, filter the result set to identify exactly which records you want
returned, create fields containing aggregations, and more.
There is no practical limit to the number of stages that you can specify. See Stages for information on all the supported stages.
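For example, a minimal sketch that combines a filter, sort, and limit to produce a predictable result set (action_file_size and agent_hostname are fields from the xdr_data schema):
dataset = xdr_data
| fields agent_hostname, action_file_size
| filter action_file_size != null
| sort desc action_file_size
| limit 20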
In the xdr_data dataset, every user field included in the raw data, for network, authentication, and login events, has an equivalent normalized user field
associated with it that displays the user information in the following standardized format:
<company domain>\<username>
For example, the login_data field has the login_data_dst_normalized_user field to display the content in the standardized format. We recommend
that you use these normalized_user fields when building your queries to ensure the most accurate results.
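For example, a minimal sketch that filters on the normalized user field mentioned above (the value "administrator" is a placeholder):
dataset = xdr_data
| filter login_data_dst_normalized_user contains "administrator"
| fields login_data_dst_normalized_user, agent_hostname
| limit 100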
Abstract
You can add comments in any section when building a query in Cortex Query Language (XQL).
To write a single-line comment, use the following syntax.
//<comments>
For example,
dataset = xdr_data
| filter event_type=1
//ENUM.process
and event_sub_type = 1
//ENUM.execution
To write a comment that extends over multiple lines use the following syntax.
/*multi-line <comments> */
For example,
dataset = xdr_data
| filter
/*multi-line Adding comments is a great thing.
Here is an example */
event_type=1
Abstract
Cortex Query Language supports specific comparison, boolean, and set operators in Cortex XDR.
Cortex Query Language (XQL) queries support the following comparison, boolean, string, range, and add operators.
Comparison operators
Boolean operators
or: Boolean OR.
IN, NOT IN: Returns true if the integer or string field value is one of the options specified. For example:
action_local_port in(5900,5999)
For string field values, wildcards are supported. For example, a wildcard (*) can be used to search for values that contain the string "word_1" or "word_2" anywhere in the output, or that exactly match the string "word", as shown in the sketch below.
In some cases, using an IN or NOT IN operator combined with a dataset and filter stage can be a better alternative to using a join
stage.
CONTAINS, NOT CONTAINS: Performs a search for an integer or string. Returns true if the specified string is contained in the field. CONTAINS and NOT CONTAINS are also supported within arrays for integers and strings.
Example 70.
lowercase(actor_process_image_name) contains "psexec"
Example 71.
action_process_image_name ~= ".*?\.(?:pdf|docx)\.exe"
INCIDR, NOT INCIDR: Performs a search for an IPv4 address or IPv4 range using CIDR notation, and returns true if the address is in range.
Example 72.
action_remote_ip incidr "192.1.1.1/24"
It is also possible to define multiple CIDRs with comma-separated syntax when building an XQL query with the Query Builder or in
Correlation Rules. When defining multiple CIDRs, a logical OR is used between the CIDRs listed, so as long as one address is in range
the entire statement returns true. The same logic is used by the incidr() function. For more information on how this logic
works to determine whether the incidr or not incidr operators return true or false, see incidr.
Example 73.
action_remote_ip incidr "192.168.0.0/24, 1.168.0.0/24"
Both the IPv4 address and CIDR ranges can be either an explicit string using quotes (""), such as "192.168.0.1", or a string field.
INCIDR6, NOT INCIDR6: Performs a search for an IPv6 address or IPv6 range using CIDR notation, and returns true if the address is in range.
Example 74.
action_remote_ip incidr6 "3031:3233:3435:3637:0000:0000:0000:0000/64"
It is also possible to define multiple CIDRs with comma-separated syntax when building an XQL query with the Query Builder or in
Correlation Rules. When defining multiple CIDRs, a logical OR is used between the CIDRs listed, so as long as one address is in range
the entire statement returns true. The same logic is used by the incidr6() function. For more information on how this logic
works to determine whether the incidr6 or not incidr6 operators return true or false, see incidr6.
Example 75.
action_remote_ip incidr6 "2001:0db8:85a3:0000:0000:8a2e:0000:0000/64, fe80::/10"
Both the IPv6 address and CIDR ranges can be either an explicit string using quotes (""), such
as "3031:3233:3435:3637:0000:0000:0000:0000/64", or a string field.
add: The add operator is used in combination with the tag command to add a single tag or list of tags to a field that you can easily query in the dataset.
Example 76.
dataset = xdr_data
| tag add "test"
dataset = xdr_data
| tag add "test1", "test2", "test3"
Abstract
The Cortex Query Language supports built-in datasets, custom datasets, and presets.
Every Cortex Query Language (XQL) dataset query begins by identifying a data source that the query will run against. Each data source has a unique name,
and a series of fields. Your query specifies the data source, and then provides stages that identify fields of interest and perform operations against those fields.
You can query against either datasets or Presets in a dataset query. XQL supports using different languages for dataset and field names. In addition, the
dataset formats supported are dependent on the data retention offerings available in Cortex XDR according to whether you want to query hot storage (default)
or cold storage. For more information, see XQL Language Structure.
Datasets
The standard, built-in data source that is available in every Cortex XDR instance is the xdr_data dataset. This is a very large dataset with many available
fields. For more information about this dataset, see Cortex XDR XQL Schema Reference.
This dataset comprises both raw Endpoint Detection and Response (EDR) events reported by the Cortex XDR agent and logs from different sources,
such as third-party logs. To help you investigate events more efficiently, Cortex XDR also stitches these logs and events together into common schemas called
stories. These stories are available using the Cortex XDR Presets.
When building queries in XQL, keep the following in mind with respect to datasets:
Dataset names can use uppercase characters, but in queries dataset names are always treated as if they are lowercase. In addition, dataset names
support different languages, numbers (0-9), and underscores (_). However, an underscore cannot be the first character of the name.
Upon ingestion, all fields are retained, even fields with a null value. You can also use XQL to query parsing rules for null values.
Available datasets
Depending on your integrations, you can have the following datasets available for queries:
Data Dataset
To have this Cloud Identity Engine (previously called Directory Sync Service (DSS)) dataset available, you need to
set up the Cloud Identity Engine. Otherwise, you will not have a pan_dss_raw dataset. For more information,
see Set up Cloud Identity Engine.
The alert fields included in this dataset are limited to certain fields available in the API. For the full list,
see Get Alerts Multi-Events v2 API.
Generic logs
<Vendor>_<Product>_raw
Normalize and enrich flow logs: xdr_data dataset with a preset called network_story
The fields contained in this dataset are a subset of the fields in the xdr_data dataset.
Normalize and enrich flow logs: xdr_data dataset with a preset called network_story
box_admin_logs_raw
box_shield_alerts_raw
Users
box_users_raw
Groups
box_groups_raw
cisco_asa_raw
CSV files in shared Windows directory Custom datasets: Select from pre-existing user-created datasets or add a new dataset.
metrics_source
Dropbox Events
dropbox_events_raw
Member Devices
dropbox_members_devices_raw
Users
dropbox_users_raw
Groups
dropbox_groups_raw
If the vendor and product are not specified in the Winlogbeat profile’s configuration file, Cortex XDR creates
a default dataset called microsoft_windows_raw.
To ensure GlobalProtect access authentication logs are sent to Cortex XDR, verify that your PANW firewall’s
Log Settings for GlobalProtect has the Cortex Data Lake checkbox selected.
Login: google_workspace_login_raw
Rules: google_workspace_rules_raw
Token: google_workspace_token_raw
SAML: google_workspace_saml_raw
Alerts: google_workspace_alerts_raw
Emails: google_gmail_raw
va_cves
va_endpoints
Presets
host_inventory
host_inventory_accessibility
host_inventory_applications
host_inventory_auto_runs
host_inventory_cpus
host_inventory_daemons
host_inventory_disks
host_inventory_drivers
host_inventory_endpoints
host_inventory_extensions
host_inventory_groups
host_inventory_kbs
host_inventory_mounts
host_inventory_services
host_inventory_shares
host_inventory_users
host_inventory_volumes
host_inventory_vss
The fields contained in this dataset are a subset of the fields in the xdr_data dataset.
msft_o365_users_raw
msft_o365_groups_raw
msft_o365_devices_raw
msft_o365_mailboxes_raw
msft_o365_rules_raw
Microsoft Office 365 Microsoft Office 365 audit events from Management Activity API:
DLP: msft_o365_dlp_raw
General: msft_o365_general_raw
Okta okta_sso_raw
onelogin_events_raw
Directory
onelogin_users_raw
onelogin_groups_raw
onelogin_apps_raw
panw_iot_security_alerts_raw
Devices
panw_iot_security_devices_raw
*These datasets use the query field names as described in the Cortex schema documentation.
PingFederate ping_identity_pingfederate_raw
ServiceNow CMDB A ServiceNow CMDB dataset is created for each table configured for data collection using the format
servicenow_cmdb_<table name>_raw.
Salesforce.com salesforce_connectedapplication_raw
salesforce_permissionset_raw
salesforce_profile_raw
salesforce_groupmember_raw
salesforce_group_raw
salesforce_user_raw
salesforce_userrole_raw
salesforce_document_raw
salesforce_contentfolder_raw
salesforce_attachment_raw
salesforce_contentdistribution_raw
salesforce_tenantsecuritylogin_raw
salesforce_useraccountteammember_raw
salesforce_tenantsecurityuserperm_raw
salesforce_account_raw
salesforce_audit_raw
salesforce_login_raw
salesforce_eventlogfile_raw
Syslog/CEF <CEFVendor>_<CEFProduct>_raw
To view these events in an XQL query, the Device Configuration of the endpoint profile must be set to
Block. Otherwise, the USB events are not captured. The events are also captured when a group of
device types is blocked on the endpoints with a permanent or temporary exception in place. For
more information, see Ingest Connect and Disconnect Events of USB Devices in Device control.
The fields contained in this dataset are a subset of the fields in the xdr_data dataset.
forensics_arp_cache
forensics_background_activity_monitor
forensics_chrome_history
forensics_cid_size_mru
forensics_command_history
forensics_dns_cache
forensics_edge_anaheim_history
forensics_edge_spartan_history
forensics_event_log
forensics_file_access
forensics_file_listing
forensics_firefox_history
forensics_handles
forensics_hosts_file
forensics_internet_explorer_history
forensics_jumplist
forensics_last_visited_pidl_mru
forensics_log_me_in
forensics_net_sessions
forensics_network
forensics_network_connectivity_usage
forensics_network_data_usage
forensics_open_save_pidl_mru
forensics_port_listing
forensics_prefetch
forensics_process_execution
forensics_process_listing
forensics_psreadline
forensics_recent_files
forensics_recentfilecache
forensics_recycle_bin
forensics_registry
forensics_remote_access
forensics_seven_zip_folder_history
forensics_shellbags
forensics_shimcache
forensics_team_viewer
forensics_typed_paths
forensics_typed_urls
forensics_user_access_logging
forensics_user_assist
forensics_windows_activities
forensics_winrar_arc_history
forensics_word_wheel_query
microsoft_windows_raw
Normalized Stories
Workday workday_workday_raw
ZPA
zscaler_zpa_raw
Presets
Presets offer groupings of xdr_data fields that are useful for analyzing specific areas of network and endpoint activity. All of the fields available for a preset
are also available on the larger xdr_data dataset, but by using the preset your query can run more efficiently. Preset results are returned in no particular
order and are limited to the first one million results found.
Two of the available presets are stories. These contain information stitched together from Cortex XDR agent events and log files to form a common schema.
They are authentication_story and network_story.
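For example, a minimal sketch of a preset query against the network_story preset (the field names are assumptions based on the xdr_data schema and may differ in your preset):
preset = network_story
| filter action_remote_port = 3389
| fields agent_hostname, action_local_ip, action_remote_ip, action_remote_port
| limit 50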
Abstract
Learn more about the Cortex Query Language (XQL) examples provided.
The examples included in topics are intended to illustrate the behavior or usage of a particular stage or function. While these examples can be based on real
data that you could use in real-world queries, you may need to tweak them to perform investigations or otherwise solve real-world problems.
Abstract
Learn more about how Cortex XDR treats JSON functions in the Cortex Query Language.
The Cortex Query Language (XQL) includes a number of JSON functions. Before using any of these functions, it's important to understand how Cortex XDR
treats a JSON object so you can accurately formulate your queries using the correct syntax.
JSON field names are case sensitive, so the key to field pairing must be identical in an XQL query for results to be found. For example, if a field value is
"TIMESTAMP" and your query is defined to look for "timestamp", no results will be found.
<json_path>
Each JSON function includes defining a <json_path> in both the regular syntax and the syntactic sugar format. The <json_path> argument
identifies the data of the JSON object you want to extract using dot-notation. When using the regular syntax, the beginning of the object is represented by a $.
This $ is not required when using the syntactic sugar format.
Example 77.
{
"a_field" : "This is a_field value",
"b_field" : {
"c_field" : "This is c_field value"
}
}
The path using the regular syntax:
$.a_field
Returns "This is a_field value", while the path:
$.b_field.c_field
Returns "This is c_field value".
When using the regular syntax to write your XQL queries and a field in the <json_path> contains special characters, such as a dot (.) or colon (:), the syntax needs
to be tweaked slightly to account for the <json_field>.
For example, when using the json_extract function, the previous regular syntax would need to be changed to an updated syntax to account for the field in
the <json_path> containing special characters.
json_extract(<json_object_formatted_string>, <json_path>)
Updated regular syntax for the json_extract function, where the <json_field> now includes single quotation marks as '<json_field>':
json_extract(<json_object_formatted_string>, "['<json_field>']")
For each JSON function, the regular syntax can change slightly, but the "['<json_field>']" format is the same. The "['<json_field>']" identifies the
data you want to extract using dot-notation, where the data extracted is dependent on your syntax.
Example 78.
{"a.b":
{"inn":
{"one":1}
}
}
To extract the data {"one":1}, the "['<json_field>']" would need to be defined as "$['a.b'].inn" for all JSON functions. For example, when using
the json_extract function, the regular syntax is:
json_extract(field_json_1, "$['a.b'].inn")
To extract the data {"inn": {"one":1}}, the "['<json_field>']" would need to be defined as "$['a.b']" for all JSON functions. For example, when using
the json_extract function, the regular syntax is:
json_extract(field_json_1, "$['a.b']")
Example 79.
{"a.b":
{"inn.inn":
{"one":1}
}
}
To extract the data {"one":1}, the "['<json_field>']" would need to be defined as "$['a.b']['inn.inn']" for all JSON functions. For example,
when using the json_extract function, the regular syntax is:
json_extract(json_field, "$['a.b']['inn.inn']")
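For example, a minimal sketch of a full query that applies this bracketed path (the dataset name my_custom_logs_raw and the field raw_json are hypothetical):
dataset = my_custom_logs_raw
| alter nested_value = json_extract(raw_json, "$['a.b']['inn.inn']")
| fields raw_json, nested_value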
To make it easier for you to write your XQL queries, each JSON function includes an optional syntactic sugar format that you can use instead of the regular syntax.
When defining the syntactic sugar format and a field in the <json_path> contains special characters, such as a dot (.) or colon (:), the syntax needs to be tweaked
slightly to account for the <json_field>.
For example, when using the json_extract function, the previous syntactic sugar format would need to be changed to an updated syntax to account for the
field in the <json_path> containing special characters.
Updated syntactic sugar format for the json_extract function, where the <json_field> now includes quotation marks as "<json_field>":
For each JSON function, the syntax of the syntactic sugar format can change slightly, but the ["<json_field>"] format is the same. The
["<json_field>"] identifies the data you want to extract using dot-notation, where the data extracted is dependent on your syntax.
Example 80.
{"a.b":
{"inn":
{"one":1}
}
}
To extract the data {"one":1}, the ["<json_field>"] would need to be defined as ["a.b"].inn for all JSON functions. For example, when using the
json_extract function, the syntatic sugar format is:
To extract the data {"inn": {"one":1}}, the ["<json_field>"] would need to be defined as ["a.b"] for all JSON functions. For example, when using
the json_extract function, the syntatic sugar format is:
Example 81.
{"a.b":
{"inn.inn":
{"one":1}
}
}
To extract the data {"one":1}, the ["<json_field>"] would need to be defined as ["a.b"]["inn.inn"] for all JSON functions. For example, when
using the json_extract function, the syntatic sugar format is:
Abstract
Learn how to filter for empty values in the results table in Cortex Query Language.
When building a query you can filter for empty values in the results table, which can include or exclude null or empty strings. In the query syntax, empty strings
are represented as "", while null fields are represented as null.
Example 82.
Below is an example of filtering your endpoint data in the results table to exclude all null values and any empty strings for a user.
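A minimal sketch of such a filter (actor_effective_username is an assumed user field from the xdr_data schema):
dataset = xdr_data
| filter actor_effective_username != null and actor_effective_username != ""
| fields actor_effective_username, agent_hostname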
Learn more about how to build Cortex Query Language (XQL) queries using the Query Builder.
To support investigation and analysis, you can search your data by creating queries in the Query Builder. You can create queries with the Cortex Query
Language (XQL) or by using the predefined queries for different types of entities.
Abstract
The Query Builder facilitates threat detection, incident expansion, and data analytics for suspected threats.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The Query Builder aids in the detection of threats by allowing you to search for indicators of compromise and suspicious patterns within data sources. It assists
in expanding incident investigations by identifying related events and entities, such as activities associated with specific user accounts or network lateral
movement. In addition, the Query Builder enables data analytics on suspected threats, helping organizations analyze large volumes of data to identify trends,
anomalies, and correlations that may indicate potential security issues.
To support investigation and analysis, you can search all of the data ingested by Cortex XDR by creating queries in the Query Builder. You can create queries
that investigate leads, expose the root cause of an alert, perform damage assessment, and hunt for threats from your data sources.
Cortex XDR provides different options in the Query Builder for creating queries:
You can use the Cortex Query Language (XQL) to build complex and flexible queries that search specific datasets or presets, or the entire xdr_data
dataset. With XQL Search you create queries based on stages, functions, and operators. To help you build your queries, Cortex XDR provides tools in
the interface that provide suggestions as you type, or you can look up predefined queries, common stages and examples.
Abstract
Learn more about how to build XQL queries in the Query Builder.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The Cortex Query Language (XQL) enables you to query data ingested into Cortex XDR for rigorous endpoint and network event analysis returning up to 1M
results. To help you create an effective XQL query with the proper syntax, the query field in the user interface provides suggestions and definitions as you type.
Forensic datasets are not included by default in XQL query results, unless the dataset query is explicitly defined to use a forensic dataset.
In a dataset query, unless otherwise specified, the query runs against the xdr_data dataset, which contains all log information that Cortex XDR collects from
all Cortex product agents, including EDR data, and PAN NGFW data. In a dataset query, if you are running your query against a dataset that has been set as
default, there is no need to specify a dataset. Otherwise, specify a dataset in your query. The Dataset Queries lists the available datasets, depending on
system configuration.
Users with different dataset permissions can receive different results for the same XQL query.
An administrator or a user with a predefined user role can create and view queries built with an unknown dataset that currently does not exist in Cortex
XDR. All other users can only create and view queries built with an existing dataset.
When you have more than one dataset or lookup, you can change your default dataset by navigating to Settings → Configurations → Data Management
→ Dataset Management, right-click on the appropriate dataset, and select Set as default. For more information about setting default datasets, see
Dataset management.
The basic syntax structure for querying datasets that are not mapped to the XDM is:
dataset = <dataset name> | <stage 1> | <stage 2> ...
or
cold_dataset = <dataset name> | <stage 1> | <stage 2> ...
You can specify a dataset using one of the following formats, which is based on the data retention offerings available in Cortex XDR.
Hot Storage queries use the format dataset = <dataset name>. This is the default option.
Example 23.
dataset = xdr_data
Cold Storage queries use the format cold_dataset = <dataset name>.
Example 24.
cold_dataset = xdr_data
You can build a query that investigates data in both a cold dataset and a hot dataset in the same query. In addition, as the hot storage dataset format is
the default option and represents the fully searchable storage, this format is used throughout this guide for investigation and threat hunting. For more
information on hot and cold storage, see Dataset management.
When using the hot storage default format, this returns every xdr_data record contained in your Cortex XDR instance over the time range that you provide to
the Query Builder user interface. This can be a large amount of data, which may take a long time to retrieve. You can use a limit stage to specify how many
records you want to retrieve.
There is no practical limit to the number of stages that you can specify. See Stages for information on all the supported stages.
In the xdr_data dataset, every user field included in the raw data for network, authentication, and login events has an equivalent normalized user field
associated with it that displays the user information in the following standardized format:
<company domain>\<username>
For example, the login_data field has the login_data_dst_normalized_user field to display the content in the standardized format. To ensure the most
accurate results, we recommend that you use these normalized_user fields when building your queries.
Additional components
XQL queries can contain different components, such as functions and stages, depending on the type of query you want to build.
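For example, a minimal sketch that combines a function with several stages (divide is an XQL function, and action_file_size and agent_hostname are fields from the xdr_data schema):
dataset = xdr_data
| alter file_size_kb = divide(action_file_size, 1024)
| fields agent_hostname, action_file_size, file_size_kb
| limit 50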
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Before you begin running XQL queries, consider the following information:
Cortex XDR offers features in the XQL search interface to help you build queries. For more information, see Useful XQL user interface features.
Before you run a query, review this list to better understand query behavior and results. For more information, see Expected results when querying fields.
If you have existing Splunk queries, you can translate them to XQL. For more information, see Translate to XQL.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The user interface contains several useful features for querying data, and for viewing results:
XQL query: The XQL query field is where you define the parameters of your query. To help you create an effective XQL query, the search field provides
suggestions and definitions as you type.
Translate to XQL: Converts your existing Splunk queries to the XQL syntax. When you enable Translate to XQL, both an SPL query field and an XQL
query field are displayed. You can easily add a Splunk query, which is converted automatically into XQL in the XQL query field. This option is disabled by
default.
Query Results: After you create and run an XQL query, you can view, filter, and visualize your Query Results.
XQL Helper: Describes common stage commands and provides examples that you can use to build a query.
Query Library: Contains common, predefined queries that you can use or modify to your liking. In addition, there is a personal query library for saving
and managing your own queries so that you can share with others, and queries can be shared with you. For more information, see Manage your personal
query library.
Schema: Contains schema information for every field found in the result set. This information includes the field name, data type, descriptive text (if
available), and the dataset that contains the field. It lists all the fields of all the datasets that were involved in the query.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Cortex XDR includes built-in mechanisms for mitigating long-running queries, such as default limits for the maximum number of allowed alerts. The following
suggestions can help you to streamline your queries:
By default, a query returns a maximum of 1,000,000 results when no limit is explicitly stated in the query. Queries based on XQL query entities
are limited to 10,000 results. Adding a smaller limit can greatly reduce the response time.
Example 25.
dataset = microsoft_windows_raw
| fields *host*
| limit 100
Use a small time frame for queries by specifying the specific date and time in the custom option, instead of picking the nearest larger option available.
Use filters that exclude data, along with other possible filters.
Select the specific fields that you would like to see in the query results.
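For example, a minimal sketch that combines these suggestions, using a narrow time frame, a filter that excludes data, specific fields, and a small limit (the config timeframe value and the field names from the xdr_data schema are illustrative):
config timeframe = 1d
| dataset = xdr_data
| filter action_local_port = 443
| fields agent_hostname, action_local_ip, action_remote_ip, action_local_port
| limit 100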
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
If specific fields are stated in the fields stage, those exact fields will be returned.
The _time system field will not be added to queries that contain the comp stage.
All current system fields will be returned, even if they are not stated in the query.
Each new column in the result set created by the alter stage will be added as the last column. You can specify a different column order by modifying the
field order in the fields stage of the query.
Each new column in the result set created by the comp stage will be added as the last column. Other fields that are not in the group by or
calculated columns will be removed from the result set, including the core fields and the _time system field.
When no limit is explicitly stated in a datamodel query, a maximum of 1,000,000 results are returned (default). When this limit is applied to results using
the limit stage, it will be indicated in the user interface.
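For example, in the following minimal sketch (max is a comp aggregation, and action_file_size and agent_hostname are fields from the xdr_data schema), the result set contains only the agent_hostname and largest_file columns:
dataset = xdr_data
| comp max(action_file_size) as largest_file by agent_hostname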
Abstract
Learn how to create queries using the Cortex Query Language (XQL).
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Build Cortex Query Language (XQL) queries to analyze raw log data stored in Cortex XDR. You can query the Cortex Data Model (XDM) or datasets using
specific syntax.
1. From Cortex XDR, select Incident Response → Investigation → Query Builder → XQL Search.
2. (Optional) To change the default time period against which to run your query, at the top right of the window, select the required time period, or create a
customized one.
Whenever the time period is changed in the query window, the config timeframe is automatically set to the time period defined, but this won't be
visible as part of the query. Only if you manually type in the config timeframe will this be seen in the query.
3. (Optional) To translate Splunk queries to XQL queries, enable Translate to XQL. If you choose to use this feature, enter your Splunk query in the Splunk
field, click the arrow icon ( ) to convert to XQL, and then go to Step 5.
4. Create your query by typing in the query field. Relevant commands, their definitions, and operators are suggested as you type. When multiple
suggestions are displayed, use the arrow keys to select a suggestion and to view an explanation for each one.
You only need to specify a dataset if you are running your query against a dataset that you have not set as default. Otherwise, the query runs
against the xdr_data dataset. For more information, see How to build XQL queries.
Example 26.
dataset = xdr_data
b. Press Enter, and then type the pipe character (|). Select a command, and complete the command using the suggested options.
Example 27.
dataset = xdr_data
| filter agent_os_type = ENUM.AGENT_OS_MAC
| limit 250
5. Run the query at the specified date and time, or on a specific date, by selecting the calendar icon ( ).
6. (Optional) The Save As options save your query for future use:
BIOC Rule: When compatible, saves the query as a BIOC rule. The XQL query must contain a filter for the event_type field.
Correlation Rule: When compatible, saves the query as a Correlation Rule. For more information, see What's a correlation rule?.
Query to Library: Saves the query to your personal query library. For more information, see Manage your personal query library.
Widget to Library: For more information, see Create custom XQL widgets.
While the query is running, you can navigate away from the page. A notification is sent when the query has finished. You can also Cancel the query or run a
new query, where you have the option to Run only new query (cancel previous) or Run both queries.
Abstract
Learn more about reviewing the results returned from an XQL query.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The results of a Cortex Query Language (XQL) query are displayed in a tab called Query Results.
It's also possible to graph the results displayed. For more information, see Graph query results.
Use the following options in the Query Results tab to investigate your query results:
Option Use
Table tab: Displays results in rows and columns according to the entity fields. Columns can be filtered using their filter icons.
More options (kebab icon) displays table layout options, which are divided into different sections:
In the Appearance section, you can Show line breaks for any text field in the Query Results. By default, the text in these fields is
wrapped unless the Show line breaks option is selected. In addition, you can change the way rows and columns are displayed.
In the Log Format section, you can change the way that logs are displayed:
JSON: Condensed JSON format with key value distinctions. NULL values are not displayed.
TREE: Dynamic view of the JSON hierarchy with the option to collapse and expand the different hierarchies.
In the Search column section, you can find a specific column; enable or disable display of columns using the checkboxes.
Show and hide rows according to a specific field in a specific event: select a cell, right-click it, and then select either Show rows with … or
Hide rows with …
Graph tab: Use the Chart Editor to visualize the query results.
Advanced tab: Displays results in a table format which aggregates the entity fields into one column. You can change the layout, decide whether to Show
line breaks for any text field in the results table, and change the log format from the menu.
Select Show more to pivot an Expanded View of the event results that include NULL values. You can toggle between the JSON and Tree
views, search, and Copy to clipboard.
More options ( ) works in a similar way to how it works on the Table tab.
Show more in the bottom left corner of each row opens the Expanded View of the event results that also include NULL values. Here,
you can toggle between the JSON and Tree views, search, and Copy to clipboard.
Log format options change the way that logs are displayed:
JSON: Condensed JSON format with key value distinctions. NULL values are not displayed.
TREE: Dynamic view of the JSON hierarchy with the option to collapse and expand the different hierarchies.
Free text search: Searches the query results for text that you specify in the free text search. Click the Free text search icon to reveal or hide the free text
search field.
Filter: Enables you to filter on a particular field in the interface that is displayed to specify your filter criteria.
For integer, boolean, and timestamp (such as _time) fields, we recommend that you use the Filter instead of the Free text search, in order
to retrieve the most accurate query results.
Fields menu: Filters query results. To quickly set a filter, Cortex XDR displays the top ten results from which you can choose to build your filter. This
option is only available in the Table and Advanced tabs.
From within the Fields menu, click on any field (excluding JSON and array fields) to see a histogram of all the values found in the result set
for that field. This histogram includes:
A count of the total number of times a value was found in the result set.
The value's frequency as a percentage of the total number of values found for the field.
In order for Cortex XDR to provide a histogram for a field, the field must not contain an array or a JSON object.
BIOC Rule: When compatible, saves the query as a BIOC rule. The XQL query must contain a filter for the event_type field.
Correlation Rule: When compatible, saves the query as a Correlation Rule. For more information, see What's a correlation rule?.
Query to Library: Saves the query to your personal query library. For more information, see personal query library.
Widget to Library: For more information, see Create custom XQL widgets.
You can continue investigating the query results in the Causality View or Timeline by right-clicking the event and selecting the desired view. This option is
available for the following types of events:
Network
File
Registry
Injection
Load image
System calls
For network stories, you can pivot to the Causality View only. For cloud Cortex XDR events and Cloud Audit Logs, you can only pivot to the Cloud Causality
View, while for software-as-a-service (SaaS) related alerts for audit stories, such as Office 365 audit logs and normalized logs, you can only pivot to the SaaS
Causality View.
Add a file path to your existing Malware Profile allow list by right-clicking a <path> field, such as target_process_path, and selecting Add <path type> to
malware profile allow list.
Abstract
Learn how to translate your Splunk queries to XQL queries in Cortex XDR.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
To help you easily convert your existing Splunk queries to the Cortex Query Language (XQL) syntax, Cortex XDR includes a toggle called Translate to XQL in
the query field in the user interface. When building your XQL query and this option is selected, both a SPL query field and XQL query field are displayed, so
you can easily add a Splunk query, which is converted to XQL in the XQL query field. This option is disabled by default, so only the XQL query field is
displayed.
This feature is still in a Beta state, and not all Splunk queries can be converted to XQL. This feature will be improved in upcoming
releases to support translation of more Splunk queries to XQL.
The following table details the supported functions in Splunk that can be converted to XQL in Cortex XDR with an example of a Splunk query and the resulting
XQL query. In each of these examples, the xdr_data dataset is used.
Splunk function: Splunk query → XQL query
bin: index = xdr_data | bin _time span=5m → dataset in (xdr_data) | bin _time span=5m
count: index=xdr_data | stats count(_product) BY _time → dataset in (xdr_data) | comp count(_product) by _time
ctime: index=xdr_data | convert ctime(field) as field → dataset in (xdr_data) | alter field = format_timestamp(…
earliest: index = xdr_data earliest=24d → dataset in (xdr_data) | filter _time >= to_timestamp(ad…
eval: index=xdr_data | eval field = "test" → dataset in (xdr_data) | alter field = "test"
floor: index=xdr_data | eval floor_test = floor(1.9) → dataset in (xdr_data) | alter floor_test = floor(1.9)
json_extract: index= xdr_data | eval London=json_extract(dfe_labels,"dfe_labels{0}") → dataset in (xdr_data) | alter London = dfe_labels -> df…
join: join agent_hostname [index = xdr_data] → join type=left conflict_strategy=right (dataset in (xdr…
len: index = xdr_data | where uri != null | eval length = len(agent_ip_address) → dataset in (xdr_data) | filter agent_ip_addresses != nu… len(agent_ip_addresses)
lower: index = xdr_data | eval field = lower("TEST") → dataset in (xdr_data) | alter field = lowercase("TEST")
md5: index=xdr_data | eval md5_test = md5("test") → dataset in (xdr_data) | alter md5_test = md5("test")
mvcount: index = xdr_data | where http_data != null | eval http_data_array_length = mvcount(http_data) → dataset in (xdr_data) | filter http_data != null | alte…
mvexpand: index = xdr_data | mvexpand dfe_labels limit = 100 → dataset in (xdr_data) | arrayexpand dfe_labels limit 10…
pow: index=xdr_data | eval pow_test = pow(2, 3) → dataset in (xdr_data) | alter pow_test = pow(2, 3)
relative_time(X,Y): index ="xdr_data" | where _time > relative_time(now(),"-7d@d") → dataset in (xdr_data) | filter _time > to_timestamp(add(to_epoch(date_floor(current_time(…
relative_time(X,Y): index ="xdr_data" | where _time > relative_time(now(),"+7d@d") → dataset in (xdr_data) | filter _time > to_timestamp(add(to_epoch(date_floor(current_time(…
replace: index= xdr_data | eval description = replace(agent_hostname,"\("."NEW") → dataset in (xdr_data) | alter description = replace(age…
round: index=xdr_data | eval round_num = round(3.5) → dataset in (xdr_data) | alter round_num = round(3.5)
search: index = xdr_data | eval ip="192.0.2.56" | search ip="192.0.2.0/24" → dataset in (xdr_data) | alter ip = "192.0.2.56" | filte…
sha256: index = xdr_data | eval sha256_test = sha256("test") → dataset in (xdr_data) | alter sha256_test = sha256("tes…
sort (ascending order): index = xdr_data | sort action_file_size → dataset in (xdr_data) | sort asc action_file_size | lim…
sort (descending order): index = xdr_data | sort -action_file_size → dataset in (xdr_data) | sort desc action_file_size | li…
spath: index = xdr_data | spath output=myfield input=action_network_http path=headers.User-Agent → dataset in (xdr_data) | alter myfield = json_extract(ac…
split: index = xdr_data | where mac != null | eval split_mac_address = split(mac, ":") → dataset in (xdr_data) | filter mac != null | alter…
stats dc: index = xdr_data | stats dc(_product) BY _time → dataset in (xdr_data) | comp count_distinct(_product) b…
sum: index=xdr_data | where action_file_size != null | stats sum(action_file_size) by _time → dataset in (xdr_data) | filter action_file_size != null…
table: index = xdr_data | table _time, agent_hostname, agent_ip_addresses, _product → dataset in (xdr_data) | fields _time, agent_hostname, a…
upper: index=xdr_data | eval field = upper("test") → dataset in (xdr_data) | alter field = uppercase("test")
var: index=xdr_data | stats var(event_type) by _time → dataset in (xdr_data) | comp var(event_type) by _time
2. Toggle Translate to XQL, where both an SPL query field and an XQL query field are displayed.
The XQL query field displays the equivalent Splunk query using the XQL syntax.
You can now decide what to do with this query based on the instructions explained in Create XQL query.
Abstract
Cortex XDR enables you to generate helpful visualizations of your XQL query results.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
To help you better understand your Cortex Query Language (XQL) query results and share your insights with others, Cortex XDR enables you to generate
graphs and outputs of your query data directly from the query results page.
Example 28.
dataset = xdr_data
| fields action_total_upload, _time
| limit 10
The query returns the action_total_upload, a number field, and _time, a string field, for up to 10 results.
Navigate to Query Results → Chart Editor ( ) to manually build and view the graph using the selected graph parameters:
Main
Graph Type: Type of graphs and output options available: Area, Bubble, Column, Funnel, Gauge, Line, Map, Pie, Scatter, Single Value, or
Word Cloud.
Subtype and Layout: Depending on the selected type of graph, choose from the available display options.
Data
Depending on the selected type of graph, customize the Color, Font, and Legend.
You can express any chart preferences in XQL. This is helpful when you want to save your chart preferences in a query and generate a chart every time
that you run it. To define the parameters, either type the view stage directly in the query, or insert it from the Chart Editor:
Example 29.
view graph type = column subtype = grouped header = "Test 1" xaxis = _time yaxis = _product,action_total_upload
Select ADD TO QUERY to insert your chart preferences into the query itself.
To easily track your query results, you can create custom widgets based on the query results. The custom widgets you create can be used in your
custom dashboards and reports. For more information, see Create custom XQL widgets.
Select Save to Widget Library to pivot to the Widget Library and generate a custom widget based on the query results.
Abstract
Learn more about the Cortex Query Language (XQL) entities available in the Query Builder.
With Query Builder, you can build complex queries for entities and entity attributes so that you can surface and identify connections between them. Cortex XDR
provides Cortex Query Language (XQL) queries for different types of entities in the Query Builder that search predefined datasets. The Query Builder searches
the raw data and logs stored in the Cortex XDR tenant for the entities and attributes you specify, and returns up to 1,000,000 results.
The Query Builder provides queries for the following types of entities:
File: Search on file creation and modification activity by file name and path. See Create file query.
Network: Search network activity by IP address, port, host name, protocol, and more. See Create network query.
Image Load: Search on module load into process events by module IDs and more. See Create image load query.
Registry: Search on registry creation and modification activity by key, key value, path, and data. See Create registry query.
Event Log: Search Windows event logs and Linux system authentication logs by username, log event ID (Windows only), log level, and message. See
Create event log query.
Network Connections: Search security events across firewall logs and endpoint raw data over your network. See Create network connections query.
Authentications: Search on authentication events by identity, target outcome, and more. See Create authentication query.
All Actions: Search across all network, registry, file, and process activity by endpoint or process. See Query across all entities.
The Query Builder also provides flexibility for both on-demand query generation and scheduled queries.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder, you can investigate authentication activity across all ingested authentication logs and data.
2. Select AUTHENTICATION.
By default, Cortex XDR will return the activity that matches all the criteria you specify. To exclude a value, toggle the = option to =!.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
5. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate Windows and Linux event log attributes and investigate event logs across endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder you can search Windows and Linux event log attributes and investigate event logs across endpoints with a Cortex XDR agent installed.
3. Enter the search criteria for your Windows or Linux event log query.
Define any event attributes for which you want to search. By default, Cortex XDR will return the events that match the attribute you specify. To exclude an
attribute value, toggle the = option to =!. Attributes are:
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
5. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date, or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
7. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate the connections between file activity and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder you can investigate connections between file activity and endpoints. The Query Builder searches your logs and endpoint data for the
file activity that you specify. To search for files on endpoints instead of file-related activity, build an XQL query. For more information, see How to build XQL
queries.
2. Select FILE.
File attributes: Define any additional process attributes for which you want to search. Use a pipe (|) to separate multiple values (for example
notepad.exe|chrome.exe). By default, Cortex XDR will return the events that match the attribute you specify. To exclude an attribute value,
toggle the = option to =!. Attributes are:
ACTION_IS_VFS: Denotes if the file is on a virtual file system on the disk. This is relevant only for files that are written to disk.
DEVICE TYPE: Type of device used to run the file: Unknown, Fixed, Removable Media, CD-ROM.
DEVICE SERIAL NUMBER: Serial number of the device type used to run the file.
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
Select +PROCESS and specify one or more of the following attributes for the acting (parent) process.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for process, Causality, and OS actors: The causality actor, also referred to as the causality group owner (CGO), is the parent
process in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent
process that creates an OS process on behalf of a different indicator. By default, this option is enabled to apply the same search criteria to initiating
processes. To configure different attributes for the parent or initiating process, clear this option.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate the connections between image load activity, acting processes, and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder, you can investigate connections between image load activity, acting processes, and endpoints.
3. Enter the search criteria for the image load activity query.
Identifying information about the image module: Full Module Path, Module MD5, or Module SHA256.
By default, Cortex XDR will return the activity that matches all the criteria you specify. To exclude a value, toggle the = option to =!.
4. (Optional) To limit the scope to a specific source, click the + to the right of the value and specify the exception value.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for both the process and the Causality actor: The causality actor—also referred to as the causality group owner (CGO)—is the parent
process in the execution chain that the app identified as being responsible for initiating the process tree. Select this option if you want to apply the same
search criteria to the causality actor. If you clear this option, you can then configure different attributes for the causality actor.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate the connections between firewall logs, endpoints, and network activity.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder, you can investigate network events stitched across endpoints and the Palo Alto Networks next-generation firewall logs.
Network attributes: Define any additional process attributes for which you want to search. Use a pipe (|) to separate multiple values (for example
80|8080). By default, Cortex XDR will return the events that match the attribute you specify. To exclude an attribute value, toggle the = option to
=!. Options are:
PROTOCOL: Network transport protocol over which the traffic was sent.
SESSION STATUS
PRODUCT
VENDOR
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
4. (Optional) To limit the scope to a specific source, click the + to the right of the value and specify the exception value.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for both the process and the Causality actor: The causality actor—also referred to as the causality group owner (CGO)—is the parent
process in the execution chain that the app identified as being responsible for initiating the process tree. Select this option if you want to apply the
same search criteria to the causality actor. If you clear this option, you can then configure different attributes for the causality actor.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
Destination TARGET HOST,NAME, PORT, HOST NAME, PROCESS USER NAME, HOST IP, CMD, HOST OS, MD5, PROCESS PATH, USER ID,
SHA256, SIGNATURE, or PID
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate the connections between network activity, acting processes, and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder, you can investigate connections between network activity, acting processes, and endpoints.
2. Select NETWORK.
Network traffic type: Select the type or types of network traffic alerts you want to search: Incoming, Outgoing, or Failed.
Network attributes: Define any additional network attributes for which you want to search. Use a pipe (|) to separate multiple values (for example,
80|8080). By default, Cortex XDR returns the events that match the attribute you specify. To exclude an attribute value, toggle the = operator to
!=. Options are:
LOCAL IP: Local IP address related to the communication. Matches can return additional data if a machine has more than one NIC.
PROTOCOL: Network transport protocol over which the traffic was sent.
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
4. (Optional) To limit the scope to a specific source, click the + to the right of the value and specify the exception value.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for process, Causality, and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process
in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process
that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating
processes. To configure different attributes for the parent or initiating process, clear this option.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate connections between processes, child processes, and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder you can investigate connections between processes, child processes, and endpoints.
For example, you can create a process query to search for processes executed on a specific endpoint.
2. Select PROCESS.
Process action: Select the type of process action you want to search: On process Execution or Injection into another process.
Process attributes—Define any additional process attributes for which you want to search.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
By default, Cortex XDR will return results that match the attribute you specify. To exclude an attribute value, toggle the operator from = to !=.
Attributes are:
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
PROCESS_FILE_INFO: Metadata of the process file, including file property details, file entropy, company name, encryption status, and
version number.
PROCESS_SCHEDULED_TASK_NAME: Name of the task scheduled by the process to run in the Task Scheduler.
DEVICE TYPE: Type of device used to run the process: Unknown, Fixed, Removable Media, CD-ROM.
DEVICE SERIAL NUMBER: Serial number of the device type used to run the process.
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
Select +PROCESS and specify one or more of the following attributes for the acting (parent) process.
CMD: Command-line used to initiate the parent process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signed, Unsigned, N/A, Invalid Signature, Weak Hash
Run search on process, Causality and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process
in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process
that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating
processes. To configure different attributes for the parent or initiating process, clear this option.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
INSTALLATION TYPE can be either Cortex XDR agent or Data Collector. For more information about the data collector applet, see Activate Pathfinder.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate connections between registry activity, processes, and endpoints.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
From the Query Builder you can investigate connections between registry activity, processes, and endpoints.
2. Select REGISTRY.
Registry attributes: Define any additional registry attributes for which you want to search. By default, Cortex XDR returns the events that match
the attribute you specify. To exclude an attribute value, toggle the = operator to !=. Attributes are:
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
4. (Optional) To limit the scope to a specific source, click the + to the right of the value and specify the exception value.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for process, Causality, and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process
in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process
that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating
processes. To configure different attributes for the parent or initiating process, clear this option.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
From the Cortex XDR management console, you can search for endpoints and processes across all endpoint activity.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
Some examples of queries you can run across all entities include:
Select Add Process to your search, and specify one or more of the following attributes for the acting (parent) process. Use a pipe (|) to separate multiple
values. Use an asterisk (*) to match any string of characters.
Field Description
CMD Command line used to initiate the parent process including any arguments, up to 128 characters.
SIGNATURE Signing status of the parent process: Signed, Unsigned, N/A, Invalid Signature, Weak Hash.
Run search on process, Causality and OS actors: The causality actor, also referred to as the causality group owner (CGO), is the parent process in the execution chain that the agent identified as being responsible for initiating the process tree. The OS actor is the parent process that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating processes. To configure different attributes for the parent or initiating process, clear this option.
Select Add Host to your search and specify one or more of the following attributes:
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
5. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date or Run the query immediately and view the results in the Query Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
Abstract
Learn more about viewing the results of a query, modifying a query, and rerunning queries from Query Center.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The Query Center displays information about all queries that were run in the Query Builder. From the Query Center you can manage your queries, view query
results, and adjust and rerun queries. Right-click a query to see the available options.
The Query Description column displays the parameters that were defined for a query. If necessary, use the Filter to reduce the number of queries that
Cortex XDR displays.
Queries that were created from a Query Builder template are prefixed with the template name.
4. (Optional) Export to file to export the results to a tab-separated values (TSV) file.
Right-click a value in the results table to see the options for further investigation.
Modify a query
After you run a query, you might need to change your search parameters to refine the search results or correct a search parameter. You can modify a query
from the Results page:
For queries created in XQL, the Results page includes the XQL query builder with the defined parameters. Modify the query and Run, schedule, or save
the query.
For queries created with a Query Builder template, the defined parameters are shown at the top of the Results page. Select Back to edit to modify the
query with the template format or Continue in XQL to open the query in XQL.
If you want to rerun a query, you can either schedule it to run on or before a specific date, or you can rerun it immediately. Cortex XDR creates a new query in
the Query Center, and when the query completes, it displays a notification in the notification bar.
To rerun a query immediately, right-click anywhere in the query and then select Rerun Query.
1. In the Query Center, right-click anywhere in the query and then select Schedule.
2. Choose a schedule option and the date and time that the query should run:
Cortex XDR creates a new query and schedules it to run on or by the selected date and time.
4. View the status of the scheduled query on the Scheduled Queries page.
You can also make changes to the query, edit the frequency, view when the query will next run, or disable the query. For more information, see Manage
scheduled queries.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The table below lists the common fields in the Query Center.
Certain fields are exposed and hidden by default. An asterisk (*) is beside every field that is exposed by default.
Field Description
Native search has been deprecated; this field allows you to view data for queries
performed before deprecation.
COMPUTE UNIT USAGE Number of query units that were used to execute the API query and Cold Storage
query.
EXECUTION ID Unique identifier of Cortex Query Language (XQL) queries in the tenant. The
identifier is generated for queries executed in Cortex XDR and the XQL query API.
PUBLIC API Whether the source executing the query was an XQL query API.
QUERY NAME* For saved queries, the Query Name identifies the query specified by the
administrator.
For scheduled queries, the Query Name identifies the auto-generated name
of the parent query. Scheduled queries also display an icon to the left of the
name to indicate that the query is recurring.
STATUS The status of the query:
Queued: The query is queued and will run when there is an available slot.
Running
Failed
Partially completed: The query was stopped after exceeding the maximum
number of permitted results. By default, a query returns a maximum of
1,000,000 results when no limit is explicitly stated in the query. Queries
based on XQL query entities are limited to 10,000 results. To reduce the
number of results returned, you can adjust the query settings and rerun the query.
Completed
SIMULATED COMPUTE UNITS Number of query units that were used to execute the Hot Storage query.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The Scheduled Queries page displays information about your scheduled and recurring queries. From this page, you can edit scheduled query parameters,
view previous executions, disable, and remove scheduled queries. Right-click a query to see the available options.
2. Locate the scheduled query for which you want to view previous executions.
3. Right-click anywhere in the query row, and select Show executed queries.
Abstract
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
The table below lists the common fields in the Scheduled Queries page.
Certain fields are exposed and hidden by default. An asterisk (*) is beside every field that is exposed by default.
Field Description
Native search has been deprecated; this field allows you to view data for queries performed before
deprecation.
NEXT EXECUTION For queries that are scheduled to run at a specific frequency, this displays the next execution time.
For queries that were scheduled to run at a specific time and date, this field will show None.
PUBLIC API Whether the source executing the query was an XQL query API.
QUERY NAME For saved queries, the Query Name identifies the query specified by the administrator.
For scheduled queries, the Query Name identifies the auto-generated name of the parent query.
Scheduled queries also display an icon to the left of the name to indicate that the query is
recurring.
SCHEDULE TIME Frequency or time at which the query was scheduled to run.
Abstract
As part of the Query Library, Cortex XDR provides a personal library for saving and managing your own queries.
Building Cortex Query Language (XQL) queries in the Query Builder requires a Cortex XDR Pro license.
As part of the Query Library, Cortex XDR provides a personal query library for saving and managing your own queries. When creating a query in XQL Search or
managing your queries from the Query Center, you can save queries to your personal library. You can also decide whether the query is shared with others (on the same tenant) or kept private.
The queries listed in your Query Library have different icons to help you identify the different states of the queries:
The Query Library contains a powerful search mechanism that enables you to search in any field related to the query, such as the query name, description,
creator, query text, and labels. In addition, adding a label to your query enables you to search for these queries using these labels in the Query Library.
2. Locate the query that you want to save to your personal query library.
3. Right-click anywhere in the query row, and select Save query to library.
Query Name: Specify a unique name for the query. Query names must be unique in both private and shared lists, which includes other people’s
queries.
Labels (Optional): Specify a label that is associated with your query. You can select a label from the list of predefined labels or add your label and
then select Create Label. Adding a label to your query enables you to search for queries using this label in the Query Library.
Share with others: You can either keep the query private and accessible only by you (default), or move the toggle to Share with others so that
other users on the same tenant can access the query in their Query Library.
3. Click Save.
A notification appears confirming that the query was saved successfully to the library, and closes on its own after a few seconds.
The query that you added is now listed as the first entry in the Query Library. The query editor is opened to the right of the query.
As needed, you can return to your queries in the Query Library to manage your queries. Here are the actions available to you.
Search query data and metadata: Use the Query Library’s powerful search mechanism that enables you to search in any field related to the query,
such as the query name, description, creator, query text, and label. The Search query data and metadata field is available at the top of your list of
queries in the Query Library.
Show: Filter the list of queries from the Show menu. You can filter by the Palo Alto Networks queries provided with Cortex XDR, filter by the queries
Created by Me, or filter by the queries Created by Others. To view the entire list, Select all (default).
Save as new: Duplicate the query and save it as a new query. This action is available from the query menu by selecting the 3 vertical dots.
Share with others: If your query is currently unshared, you can share it with other users on the same tenant, and it will be available in their
Query Library. This action is only available from the query menu by selecting the 3 vertical dots when your query is unshared.
Unshare: If your query is currently shared with other users, you can Unshare the query and remove it from their Query Library. This action is only
available from the query menu by selecting the 3 vertical dots when your query is shared with others. You can only Unshare a query that you
created. If another user created the query, this option is disabled in the query menu.
Delete: Delete the query. You can only delete queries that you created. If another user created the query, this option is disabled in the query menu when
selecting the 3 vertical dots.
17.3 | Stages
Abstract
Stages perform certain operations in evaluating queries. For example, the dataset stage specifies the dataset on which to run the query. Commonly used stages include
dataset, fields, filter, join, and sort. The stages supported in Cortex Query Language are detailed below.
17.3.1 | alter
Abstract
Syntax
alter <field> = <value or function> [, <field2> = <value or function>, ...]
Description
The alter stage is used to change the values of an existing field (column) or to create a new field (column) based on constant values or existing fields
(columns). The alter stage does this by assigning a value to a field name based on the returned value of the specified function. The field does not have to be
known to the dataset or preset schema that you are querying. Further, you can overwrite the current value for a known field using this stage.
After defining a field using the alter stage, you can apply other stages, such as filtering, to the new field or field value.
Examples
Given three username fields, use the coalesce function to return a username value in the default_username field, making sure to never have a
default_username that is root.
dataset = xdr_data
| fields actor_primary_username,
os_actor_primary_username,
causality_actor_primary_username
| alter default_username = coalesce(actor_primary_username,
os_actor_primary_username,
causality_actor_primary_username)
| filter default_username != "root"
17.3.2 | arrayexpand
Abstract
Syntax
arrayexpand <array field> [limit <limit number>]
Description
The arrayexpand stage expands the values of a multi-value array field into separate events and creates one record in the result set for each item in the array,
up to <limit number> records.
Example
For example, suppose my_dataset contains a single record whose multi-value field array_values holds the items 2, 1, and 3 (the other two columns shown below hold the values 123456 and ajohnson). If you run an arrayexpand stage on the array_values field with a limit of 3, the result set includes the following records:
dataset=my_dataset
| arrayexpand array_values limit 3
123456 ajohnson 2
123456 ajohnson 1
123456 ajohnson 3
The result records created by arrayexpand are in no particular order. However, you can use the sort stage to sort the results:
dataset=my_dataset
| arrayexpand array_values
| sort asc array_values
17.3.3 | bin
Abstract
Learn more about the Cortex Query Language bin stage to group events by quantity or time span.
Syntax
Quantity
bin <field> bins = <number of bins>
Time Span
bin <field> span = <time> [timeshift = <epoch time> [timezone = "<time zone>"]]
Description
The bin stage enables you to group events by quantity or time span. The most common use case is for timecharts.
You can add the bin stage to your queries using two different formats depending on whether you are grouping events by quantity or time span. Currently, the
bin stage is only supported using the equal sign (=) operator in your queries without any boolean operators (and, or).
When you group events of a particular field by quantity, the bin stage is used with bins to define how to divide the events.
When you group events of a particular field by time, the bin stage is used with span = <time>, where <time> is a combination of a number and time suffix.
Set one time suffix from the list of available options listed in the table below. In addition, you can define a particular start time for grouping the events in your
query according to the Unix epoch time by setting timeshift = <epoch time> timezone = "<time zone>", which are both optional. You can
configure the <time zone> offset using an hours offset, such as “+08:00”, or using a time zone name from the List of Supported Time Zones, such as
"America/Chicago". The query still runs without defining the epoch time or time zone. If no timeshift = <epoch time> timezone = "<time
zone>" is set, the query runs according to last time set in the log.
When you group events by quantity, the <field> in the bin stage must be a number, and when you group by time, the <field> must be a date type.
Otherwise, your query will fail.
MS milliseconds
S seconds
M minutes
H hours
D days
W weeks
MO months
Y years
Examples
Quantity Example
Return a maximum of 1,000 xdr_data records with the events of the action_total_upload field grouped by 50MB. Records with the
action_total_upload value set to 0 or null are not included in the results.
dataset = xdr_data
| filter action_total_upload != 0 and action_total_upload != null
| bin action_total_upload bins = 50
| limit 1000
Return a maximum of 1,000 xdr_data records with the events of the _time field grouped by 1-hour increments starting from the epoch time
1615353499, and includes a time zone using an hours offset of “+08:00”.
dataset = xdr_data
| bin _time span = 1h timeshift = 1615353499 timezone = "+08:00"
| limit 1000
Return a maximum of 1,000 xdr_data records with the events of the _time field grouped by 1-hour increments starting from the epoch time
1615353499, and includes an "America/Los_Angeles" time zone.
dataset = xdr_data
| bin _time span = 1h timeshift = 1615353499 timezone = "America/Los_Angeles"
| limit 1000
17.3.4 | call
Abstract
Learn more about the Cortex Query Language call stage to reference a predefined query from the Query Library.
Syntax
The call stage is used to reference a predefined query from the Query Library, including your Personal Query Library. In addition, if your query includes
parameters you can reference them in the call stage using the syntax <param_name1> = <value1> <param_name2> = <value2>.... When using
parameters in your call stage, you need to ensure that a query already exists that uses these parameters.
For the predefined query called "CreateRole operation parsed to fields", this example returns a maximum of 100 records where the accessKeyId equals "1234".
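A minimal sketch of such a query, assuming the call stage takes the library query name in quotes and that accessKeyId is a field in the saved query's result set:
call "CreateRole operation parsed to fields"
| filter accessKeyId = "1234"
| limit 100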
Using the same example above, this example shows how to use the same call stage with parameters. This example assumes that there is a query that is
already saved with a parameter $key_id = "1234".
Saved query:
dataset = dataset_name
| filter field_name = $key_id
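A minimal sketch of calling such a saved query, assuming a hypothetical saved-query name and passing the parameter by name as described in the syntax above:
call "my saved query" key_id = "1234"
| limit 100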
17.3.5 | comp
Abstract
Learn more about the Cortex Query Language comp stage that precedes functions calculating statistics.
Syntax
comp <aggregate function1> (<field>) [as <alias>][,<aggregate function2>(<field>) [as <alias>],...] [by <field1>[,<field2>...]]
[addrawdata = true|false [as <target field>]]
Description
The comp stage precedes functions calculating statistics for results to compute values over a group of rows and return a single result for a group of rows.
At least one of the comp aggregate functions or comp approximate aggregate functions must be used. However, it is also possible to define a comp stage with both
types of aggregate functions.
Use approximate aggregate functions to produce approximate results, instead of exact results used with regular aggregate functions, which are more scalable
in terms of memory usage and time.
Use the optional alias clause to provide a column label for the comp results.
The by clause identifies the rows in the result set that will be aggregated. This clause is optional. Provide one or more fields to this clause. All fields with
matching values are used to perform the aggregation. For example, if you had records such as:
number,id,product
100,"se1","A55"
50,"se1","A60"
50,"se1","A60"
25,"se2","A55"
25,"se2","A60"
Then you can aggregate on the number column, and perform aggregation based on matching values in the id and/or product column. So if you sum the number
column by the id column, you would get two results:
200 for "se1"
50 for "se2"
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Wildcard Aggregates
You can use a wildcard to perform an aggregate for every field contained in the result set, except for the field(s) specified in the by clause.
Wildcards are only supported with aggregate functions and not approximate aggregate functions.
For wildcards to work, all of the fields contained in the result set that are not identified in the by clause must be aggregatable.
Examples
Sum the action_total_download values for all records with matching pairs of values for the actor_process_image_path and
actor_process_command_line fields. The query calculates a maximum of 100 xdr_data records and includes a raw_data column listing the raw data
events used to display the final comp results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path,
actor_process_command_line as Process_CMD,
action_total_download as Download
| filter Download > 0
| limit 100
| comp sum(Download) as total by Process_Path, Process_CMD addrawdata = true as raw_data
Using the panw_ngfw_traffic_raw dataset, sum the bytes_total, bytes_received, and bytes_sent values for every record contained in the result
set with a matching value for source_ip. The query calculates a maximum of 1000 records and includes a raw_data column listing the raw data
events used to display the final comp results.
The comp stage runs on 1000 raw data events, but only 100 will be displayed in the raw_data column.
dataset = panw_ngfw_traffic_raw
| fields bytes_total, bytes_received, bytes_sent, source_ip
| limit 1000
| comp sum(*) as * by source_ip addrawdata = true as raw_data
The aggregate functions you can use with the comp stage are:
count
count_distinct
earliest
first
last
latest
list
max
median
min
stddev_population
stddev_sample
sum
values
var
The approximate aggregate functions you can use with the comp stage are:
approx_count
approx_quantiles
approx_top
17.3.6 | config
Abstract
Learn more about the Cortex Query Language config stage that configures the query behavior.
Syntax
config <function>
Description
The config stage configures the query behavior. It must be used with one of the config Functions. This stage must be presented as the first stage in the
query.
config Functions
case_sensitive
timeframe
17.3.6.1 | case_sensitive
Abstract
Learn more about the Cortex Query Language case_sensitive config stage.
Syntax
config case_sensitive = true | false
Description
The case_sensitive configuration identifies whether field values are evaluated as case sensitive or case insensitive. The config case_sensitive stage
must be added at the beginning of the query. You can also add another config case_sensitive stage when adding a join or union stage to a query.
If you do not provide this stage in your query, the default behavior is false, and case is not considered when evaluating field values.
The Settings → Configurations → XQL Configuration → Case Sensitivity (case_sensitive) setting can overwrite this case_sensitive configuration for
all fields in the application, except for BIOCs, which will remain case insensitive no matter what this setting is set to.
From Cortex XDR version 3.3, the default case sensitivity setting was changed to case insensitive (config case_sensitive = false). If you've
been using Cortex XDR before this version was released, the default case sensitivity setting is still configured to be case sensitive (config
case_sensitive = true).
The config case_sensitive stage can't be used to compare a field to an inner query. In this situation, ensure to use the lowercase or uppercase
functions on the field and inner query stages and functions syntax.
Example 90.
This query won't provide the correct results of comparing the agent_hostname field with the inner query:
The config case_sensitive stage can't be used to compare a field to an array that contains non-literal strings, for example a field name or function.
Example 91.
The results of this example are true: the left side (uppercase("a")) is lowercased because it's not an array, and the right side (("x", "A")) is an
array that contains only literal strings.
Example 92.
The results of this example are false: the left side (uppercase("a")) is lowercased because it's not an array, but the right side (("x",
uppercase("a"))) is an array that contains a function (uppercase("a")).
Examples
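A minimal sketch of a case-sensitive query, assuming a hypothetical username value:
config case_sensitive = true
| dataset = xdr_data
| filter actor_primary_username = "Administrator"
| limit 100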
17.3.6.2 | timeframe
Abstract
Cortex Query Language timeframe configuration enables performing searches within a specific time frame from the query execution.
Exact Time
config timeframe between "<Year-Month-Day H:M:S ±Timezone>" and "<Year-Month-Day H:M:S ±Timezone>"
Relative Time
Description
The timeframe configuration enables you to perform searches within a specific time frame from the query execution. The results for the time frame are based
on times listed in the _Time column in the results table.
You can add the timeframe configuration to your queries using different formats depending on whether the time frame you are setting is an exact time or
relative time.
When you set an exact time, include the config timeframe details: between "<Year-Month-Day H:M:S ±Timezone>" and "<Year-Month-Day
H:M:S ±Timezone>". The ±Timezone format is: ±xxxx. When you do not configure a timezone, the default is UTC. The exact time is based on a static time
frame according to when the query is sent.
When you set a relative time, you have a few options for setting the config timeframe, where the syntax <+|-> indicates whether to go back (-) or forward
(+) in time. The default is back (-).
<number><time unit>
Enables setting a static time frame according to when the query is sent, where you choose the <time unit> from the available time unit options listed
in the table below.
Enables setting a time frame between a defined start time, where you choose the <time unit> from the available time unit options listed in the table
below, and the end time as the time the query is run with the preset keyword "now".
Enables setting a time frame between a preset start time according to the Unix epoch time 00:00:00 UTC on 1 January 1970 with the "begin" keyword,
and a defined ending time, where you choose the <time unit> from the available time unit options listed in the table below.
Enables setting a time frame between a defined starting and ending time, where you choose the <time unit> from the available time unit options listed
in the table below.
When a query includes any inner queries, each inner query receives its time frame from the outer query unless the inner query has a separate time frame
defined.
When using the Query Builder to define a query, the time period can be set at the top right of the query window using the time picker, and the default is 24
hours. Whenever the time period is changed in the query window, the config timeframe is automatically set to the time period defined, but this won't be
visible as part of the query. Only if you manually type in the config timeframe will this be seen in the query.
S seconds
M minutes
H hours
D days
W weeks
MO months
Y years
Examples
Relative Time
For the last 10 hours from when the query is sent, return a maximum of 100 xdr_data records.
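A minimal sketch of this query, using the basic <number><time unit> relative form (the other relative forms described below follow the same pattern):
config timeframe = 10h
| dataset = xdr_data
| limit 100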
For the period from two days ago until now, when the query is run, return a maximum of 100 xdr_data records.
From the Unix epoch time 00:00:00 UTC on 1 January 1970 until two years ago, return a maximum of 100 xdr_data records.
From four days ago until five days from now, return a maximum of 100 xdr_data records.
Exact Time
From April 1, 2021 at 9:00 a.m. UTC -02:00 until April 2, 2021 at 10:00 a.m. UTC -02:00, return a maximum of 100 xdr_data records.
config timeframe between "2021-04-01 09:00:00 -0200" and "2021-04-02 10:00:00 -0200"
| dataset = xdr_data
| limit 100
17.3.7 | dedup
Abstract
Learn more about the Cortex Query Language dedup stage that removes duplicate occurrences of field values.
Syntax
dedup <field1> [, <field2>, ...] by asc|desc <field>
Description
The dedup stage removes all records that contain duplicate values (or duplicate sets of values) from the result set. The record that is returned is identified by
the by clause, which selects the record by either the first or last occurrence of the field specified in this clause.
The dedup stage can only be used with fields that contain numbers or strings.
Return unique values for the actor_primary_username field. For any given field value, return the first chronologically occurring record.
dataset = xdr_data
| fields actor_primary_username as apu
| filter apu != null
| dedup apu by asc _time
Return the last chronologically occurring record for any given actor_primary_username value.
dataset = xdr_data
| fields actor_primary_username as apu
| filter apu != null
| dedup apu by desc _time
Return the first occurrence seen for any given actor_primary_username field value.
dataset = xdr_data
| fields actor_primary_username as apu
| filter apu != null
| dedup apu by asc apu
Return unique groups of actor_primary_username and os_actor_primary_username field values. For each unique grouping, return the pair that first
appears on a record with a non-NULL action_file_size field.
dataset = xdr_data
| fields actor_primary_username as apu,
os_actor_primary_username as oapu,
action_file_size as afs
| filter apu != null and afs != null
| dedup apu, oapu by asc afs
17.3.8 | fields
Abstract
Learn more about the Cortex Query Language fields stage that defines the fields returned in the result set.
Syntax
fields <field1> [as <alias1>] [, <field2> [as <alias2>], ...]
Description
The fields stage declares which fields are returned in the result set, including name changes. If this stage is used, then subsequent stages can operate only
on the fields identified by this stage.
Use a wildcard (*) to include all fields that match the pattern. Use a minus character (-) to exclude a field from the result set. The following system fields
cannot be excluded and are always displayed:
_time
_insert_time
_raw_log
_product
_vendor
_tag
_snapshot_id
_snapshot_log_count
_snapshot_collection_ts
_id
Use the as clause to set an alias for a field. If you use the as clause, then subsequent stages must use that alias to refer to the field.
Examples
Return the action_country field from all xdr_data records where the action_country field is both not null and not "-". Also include all fields with names
that match event_* except for event_type.
dataset = xdr_data
| fields action_country as ac, event_*, -event_type
| filter ac != null and ac != "-"
17.3.9 | filter
Abstract
Learn more about the Cortex Query Language filter stage that narrows down the displayed results.
Syntax
filter <boolean expression>
Description
The filter stage identifies which data records should be returned by the query. Filters are boolean expressions that can use a wide range of functions and
operators to express the filter. If a record matches the filter (that is, the filter expression returns true when applied to the record), the record is returned in the
query's result set.
The functions you can use with a filter are described in Functions. For a list of supported operators, see Supported operators.
Cortex XDR enables you to use single or triple double quotes in a filter stage with or without wildcards.
Using single double quotes with the filter stage, returns the results that contain the <text> specified. Here are a few examples:
When using triple double quotes with the filter stage, the query results only include results that exactly match the <text> results. Here are a few examples:
Examples
Return xdr_data records where the event_type is NETWORK and the event_sub_type is NETWORK_HTTP_HEADER.
dataset = xdr_data
| filter event_type = NETWORK and event_sub_type = NETWORK_HTTP_HEADER
When entering filters to the XQL Search user interface, possible field values for fields of type enum are available using the auto-complete feature. However, the
autocomplete can only show enum values that are known to the schema. In some cases, on data import an enum value is included that is not known to the
defined schema. In this case, the value will appear in the result set as an unknown value, such as event_type_unknown_4. Be aware that even though this
value appears in the result set, you cannot create a filter using it. For example, this query will fail, even if you know the value appears in your result set:
dataset = xdr_data
| filter event_type = event_type_unknown_4
17.3.10 | getrole
Abstract
Learn more about the Cortex Query Language getrole stage that enriches events with specific roles associated with usernames or endpoints.
This stage requires an Identity Threat Module license to view the results.
Description
The getrole stage enriches events with specific roles associated with usernames or endpoints. The getrole stage receives as an input a string field that is
either a username in the NETBIOS\SAM format, such as mydomain\myuser, or the agent ID of a host. The agent ID can be found in the endpoints dataset
as endpoint_id or in the xdr_data dataset as agent_id.
The roles for this field are displayed in a column called asset_roles in the results table. If there are one or more roles associated with the field, the values are
represented as a string array, such as ['ADMIN', 'USER'], and are listed in the asset_roles column. If there are no roles, the resulting column is an empty
array.
You can also change the name of the column using as in the syntax to define an alias: getrole <field> as <alias>.
In addition, it is possible to use the filter stage with a ROLE prefix to display the results of a particular role, using the syntax filter <field or alias> = ROLE.<role name>, as shown in the second example below.
Examples
Return a maximum of 100 xdr_data records with the enriched events including specific roles associated with usernames. If there are one or more roles
associated with the value of the user_id string field column, the output is displayed in the asset_roles column in the results table. Otherwise, the field is
empty.
dataset = xdr_data
| limit 100
| getrole user_id
Return a maximum of 100 xdr_data records of all the powershell executions made by the SERVICE_ACCOUNTS user role in the organization. The first filter
stage indicates how to filter for the parent process, which is powershell.exe. The fields stage indicates the field columns to include in the results table and
which ones are renamed in the table: action_process_image_name to process_name and action_process_image_command_line to process_cmd.
The getrole stage indicates the enriched events to include for the specific roles associated with usernames. If the ROLE.SERVICE_ACCOUNTS role is
associated with any values in the actor_effective_username string field column, the row is displayed in the results table. Otherwise, the entire row is
excluded from the results table.
dataset = xdr_data
| filter event_type = ENUM.PROCESS and event_sub_type = ENUM.PROCESS_START and lowercase(actor_process_image_name) = "powershell.exe"
| fields action_process_image_name as process_name, action_process_image_command_line as process_cmd, event_id, actor_effective_username
| getrole actor_effective_username as user_roles
| filter user_roles = ROLE.SERVICE_ACCOUNTS
| limit 100
17.3.11 | iploc
Abstract
Learn more about the Cortex Query Language iploc stage that associates IPv4 addresses of fields to a list of predefined attributes related to the geolocation.
Syntax
iploc <field>
Description
The iploc stage associates the IPv4 address of any field to a list of predefined attributes related to the geolocation. By default, when using this stage in your
queries, the geolocation data is added to the results table in these predefined column names: LOC_ASN_ORG, LOC_ASN, LOC_CITY, LOC_CONTINENT,
LOC_COUNTRY, LOC_LATLON, LOC_REGION, and LOC_TIMEZONE.
The following options are available to you when using this stage in your queries:
You can specify the geolocation fields that you want added to the results table.
You can append a suffix to the name of the geolocation field column in the results table.
You can change the name of the geolocation field column in the results table.
You can also view the geolocation data on a graph type called map, where the xaxis is set to either loc_country or loc_latlon, and the yaxis is
a number field.
The iploc stage can only be used with fields that contain numbers or strings.
To improve your query performance, we recommend that you filter the data in your query before the iploc stage is run. In addition, limiting the
number of fields in the results table further improves the performance.
Examples
Return a maximum of 1000 xdr_data records with the specific geolocation data associated with the action_remote_ip field, where no record with a null
value for action_remote_ip is included, and displays the name of the city in a column called city and a combination of the latitude and longitude in a
column called loc_latlon with comma-separated values of latitude and longitude.
dataset = xdr_data
| limit 1000
| filter action_remote_ip != null
| iploc action_remote_ip loc_city as city, loc_latlon
Return a maximum of 1000 xdr_data records with all the available geolocation data with the predefined column names, and add the specified suffix
_remote_id to each predefined column name, where no record with a null value for action_remote_ip is included.
dataset = xdr_data
| limit 1000
| filter action_remote_ip != null
| iploc action_remote_ip suffix=_remote_id
Return a maximum of 1000 xdr_data records with the specific geolocation data associated with the action_remote_ip field that includes the name of the
country (contained in loc_country) in a column called country, where no record with a null value for either country or action_remote_ip is included.
The comp stage is used to count the number of IP addresses per country. The results are displayed in a graph of type map, where the x-axis
represents the country and the y-axis the IP address count (ip_count).
dataset = xdr_data
| limit 1000
| iploc action_remote_ip loc_country as country
| filter country != null and action_remote_ip != null
| comp count() as ip_count by country
| view graph type = map xaxis = country yaxis = ip_count
17.3.12 | join
Abstract
Learn more about the Cortex Query Language join stage that combines the results of two queries into a single result set.
Syntax
Description
The join() stage combines the results of two queries into a single result set. This stage is conceptually identical to a SQL join.
Parameter/Clause Description
conflict_strategy Identifies which column should be chosen when there is a conflict in the column names between the two result sets:
right: The column from the inner join query is used (default), which implements a right outer join.
left: The column from the original result set in the dataset is used, which implements a left outer join.
both: Both columns are used. The original result set column from the dataset keeps the current
name, while the inner join query result set column name includes a suffix added to the
current name, such as <original column name>_joined_10; depending on the number of
conflicting fields, the suffix increases to _joined_11, _joined_12, and so on.
inner
Returns all the records in common between the queries that are being joined. This is the default join
type.
right
Returns all records from the join result set, plus any records from the parent result set that intersect
with the join result set.
left
Returns all records from the parent result set, plus any records from the join result set that intersect
with the parent result set.
<xql query> Provides the XQL query to be joined with the parent query.
as <execution_name> Provides an alias for the join query's result set. For example, if you specify an execution name of join1,
and in the join query you return field agent_id, then you can subsequently refer to that field as
join1.agent_id.
<boolean_expr> Identifies the conditions that must be met in order to place a record in the join result set.
This stage does not preserve sort order. If you are combining this stage with a sort stage, specify the sort stage after the join.
Examples
Return microsoft_windows_raw records, which are combined with the xdr_data records to include a new column called edr. For the event_type set to
EVENT_LOG, the actor_process_image_name and event_id fields are returned from all xdr_data records, which are then compared to the fields inside
the microsoft_windows_raw dataset, where edr.event_id = edr_event_id, and the results are added to the new edr column.
dataset = microsoft_windows_raw
| join (dataset = xdr_data | filter event_type = EVENT_LOG | fields actor_process_image_name, event_id )
as edr edr.event_id = edr_event_id
Return a maximum of 100 xdr_data records with the events of the agent_id, event_id, and _product fields, where the _product field is displayed as
product. The agent_id, event_id, and _product fields are returned from all xdr_data records and are then compared to the fields inside the
panw_ngfw_filedata_raw dataset, where _time = panw._time, and the results are added to the new panw column. When there is a conflict in the
column names between the two result sets, both columns are used.
dataset = xdr_data
| fields agent_id, event_id, _product as product
| join conflict_strategy = both (dataset = panw_ngfw_filedata_raw | fields _product as product)
as panw _time = panw._time
| limit 100
17.3.13 | limit
Abstract
Learn more about the Cortex Query Language limit stage that sets the maximum number of records that can be returned in the result set.
limit <number>
Description
The limit stage sets the maximum number of records that can be returned in the result set. If this stage is not specified in the query, 1,000,000 is used.
Using a small limit can greatly increase the performance of your query by reducing the number of records that Cortex XDR can return in the result set.
Examples
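For example, return a maximum of 100 xdr_data records:
dataset = xdr_data
| limit 100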
17.3.14 | replacenull
Abstract
Learn more about the Cortex Query Language replacenull stage that replaces null field values with a text string.
Syntax
replacenull <field> = "<text string>"
Description
The replacenull stage replaces null field values with the specified text string. This guarantees that every field in your result set will contain a value.
If you use the replacenull stage, then all subsequent stages that refer to the field's null value must use the replacement text string.
Examples
Return the action_country field from every xdr_data record where the action_country field is null, using the text string N/A in place of an empty
field value.
dataset = xdr_data
| fields action_country as ac
| replacenull ac = "N/A"
| filter ac = "N/A"
17.3.15 | sort
Abstract
Learn more about the Cortex Query Language sort stage that identifies the sort order for records returned in the result set.
Syntax
sort asc|desc <field1> [, asc|desc <field2>, ...]
Description
The sort stage identifies the sort order for records returned in the result set. Records can be returned in ascending (asc) or descending (desc) order. If you
include more than one field in the sort stage, records are sorted in field specification order.
Keep the following points in mind before running a query with the sort stage:
To achieve the correct sorting results when a query includes strings representing numbers, it's recommended to sort by integer fields and to convert all
string fields to integers; for example, by using the to_integer function.
When sorting by multiple columns, the sort is saved correctly, but the user interface will only display the results according to the first sorted column.
Examples
Return the action_boot_time and event_timestamp fields from all xdr_data records. Sort the result set first by the action_boot_time field value in
descending order, then by event_timestamp field in ascending order.
dataset = xdr_data
| fields action_boot_time as abt, event_timestamp as et
| sort desc abt, asc et
| limit 1
17.3.16 | tag
Abstract
Learn more about the Cortex Query Language tag stage that adds a single tag or list of tags to the _tag system field.
Syntax
tag add "<tag1>" [, "<tag2>", ...]
Description
The tag stage is used in combination with the add operator to append a single tag or list of tags to the _tag system field, which you can easily query in the
dataset.
Examples
In the xdr_data dataset, add a single tag called "test" to the _tag system field.
dataset = xdr_data
| tag add "test"
In the xdr_data dataset, add a list of tags, "test1", "test2", and "test3", to the _tag system field.
dataset = xdr_data
| tag add "test1", "test2", "test3"
17.3.17 | target
Abstract
Learn more about the Cortex Query Language target stage that saves query results to a dataset or lookup dataset.
Syntax
target type = dataset|lookup [append = true|false] <dataset name>
Description
The target() stage saves query results to a named dataset or lookup. These are persistent and can be used in subsequent queries. This stage must be the
last stage specified in the query.
The type argument defines the type of dataset to create, when a new one needs to be created. The following types are supported:
dataset: A regular dataset of type USER. Use dataset if you are saving the query results for use in future queries.
lookup: A small lookup table with a 50MB limit. This lookup table can be used with parsing rules and downloaded as a JSON file. Use lookup if you
want to export the query results to a disk.
Optional Append
Use append to define whether the data from the current query should be appended to the dataset (true) or re-created as a new dataset (false). If no
append is included, the default is false. This means that after the query runs the data in an existing dataset is replaced with the new data.
Example 1
The following example saves non-null action_boot_time values (aliased as abt) from xdr_data to a new dataset named abt_dataset.
dataset = xdr_data
| fields action_boot_time as abt
| filter abt != null
| target type=dataset abt_dataset
Subsequently, you can query the new dataset. Notice that the field names used by the new dataset conform to the aliases that you used when you created the
dataset:
dataset = abt_dataset
| filter abt = 1603986614040
Example 2
The following example creates a dataset with the number of agents per country.
dataset = xdr_data
| fields agent_id, action_country
| comp count_distinct(agent_id) as count by action_country
| target type=dataset append=false agents_per_country
The structured representation of this query is:
{
"tables": [
"xdr_data"
],
"original_query": "\n
dataset=xdr_data\n
| fields agent_id, action_country \n
| comp count_distinct(agent_id) as count by action_country\n
| target type=dataset append=false agents_per_country\n
", "stages":
[
{
"FIELD_SELECT": {
"fields": [
{ "name": "agent_id", "as": None
},
{ "name": "action_country", "as": None
}
],
"exclude": []
}
},
{
"GROUP": {
"aggregations": [
{
"function": "count_distinct",
"parameters": [
"$agent_id"
],
"name": "count"
}
],
"key": [
"action_country"
]
}
}
],
"output": [
{
"TARGET": {
"type": "dataset",
"target": "agents_per_country",
"append": False
}
}
]
}
17.3.18 | top
Abstract
Learn more about the Cortex Query Language top stage that returns the approximate count of top elements for a field and percentage of the count results.
Syntax
top <integer> <field> [by <field1> ,<field2>...] [top_count as <column name>, top_percent as <column name>]
Description
The top stage returns the approximate count of top elements for a given field and the percentage of the count results relative to the total number of values for
the designated field. Use this top stage to produce approximate results, which are more scalable in terms of memory usage and time.
The <integer> in the syntax represents the number of top elements to return. If a number is not specified, up to 10 elements are returned by default. The
approximate count is listed in the results table in a column called TOP_COUNT and the percentage in a column called TOP_PERCENT. You can update the
names of both columns by defining top_count as <column name>, top_percent as <column name> in the syntax. If you only define one column
name to update in the syntax, the results table displays that column without displaying the other column.
Examples
Returns a table with 3 columns called EVENT_ID, TOP_COUNT, and TOP_PERCENT with up to 10 unique values for event_id with the corresponding counts
and percentages.
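A minimal sketch of this query, relying on the default of 10 elements since no integer is specified:
dataset = xdr_data
| top event_id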
Returns a table with 3 columns called ACTION_COUNTRY, EVENT_ID, and TOTAL with a single unique value for the event_id for each action_country
with the corresponding count in the TOTAL column.
dataset = xdr_data
| top 1 event_id by action_country top_count as total
17.3.19 | transaction
Abstract
Learn more about the Cortex Query Language transaction stage used to find transactions based on events that meet certain constraints.
Syntax
transaction <field_1, field_2, ...> [span = <time> [timeshift = <epoch time> [timezone = "<time zone>"]] | startswith = <condition> endswith = <condition>
allowunclosed= true|false] maxevents = <number of events per transaction>
Description
The transaction stage is used to find transactions based on events that meet certain constraints. This stage aggregates all fields in a JSON string array by
fields defined as transaction fields. For example, using the transaction stage to find transactions based on the user and user_ip fields will make the
aggregation of JSON strings of all fields by the user and user_ip fields. A maximum of 50 fields can be aggregated in a transaction stage.
You can also configure whether the transactions fall within a certain time frame, which is optional to define. You can set one of the following:
span=<time>: Use this command to set a time frame per transaction, where <time> is a combination of a number and time suffix. Set one time suffix
from the list of available options listed in the table below. In addition, you can define a particular start time for grouping the events in your query
according to the Unix epoch time by setting timeshift = <epoch time> timezone = "<time zone>", which are both optional. You can
configure the <time zone> offset using an hours offset, such as “+08:00”, or using a time zone name from the List of Supported Time Zones, such as
"America/Chicago". The query still runs without defining the epoch time or time zone. If no timeshift = <epoch time> timezone = "<time
zone>" is set, the query runs according to last time set in the log.
startswith and endswith: Use these commands to set a condition for the beginning or end of the transaction, where the condition can be a logical
expression or free text search.
Set the allowunclosed flag to true to include transactions that don't contain an ending event. In that case, the transaction end is taken as 12 hours after the starting event. By
default, this is set to true and transactions without an ending event are included.
Use the maxevents command to define the maximum number of events to include per transaction. If this command is not set, the default value is 100.
When using the transaction stage, 5 additional fields are added to the results displayed:
_duration: Displays the difference in seconds between the timestamps for the first and last events in the transaction.
The available time suffixes for span are:
MS: milliseconds
S: seconds
M: minutes
H: hours
D: days
W: weeks
MO: months
Y: years
Examples
Return a maximum of 10 events per transaction from the xdr_data records based on the user and agent_id fields, where the transaction time frame is 1
hour.
dataset=xdr_data
| transaction user, agent_id span=1h timeshift = 1615353499 timezone = "+08:00" maxevents=10
{'TRANSACTION': {'fields': ['user', 'agent_id'], 'maxevents': 10, 'span': {'amount': 1, 'units': 'h', 'timeshift': None}}}
Return a maximum of 99 events per transaction from the xdr_data records based on the f1 and f2 fields. The starting event of each transaction is an event,
where one of the fields contains a string "str_1", and the ending event of each transaction is an event, where one of the fields contains a string "str_2".
dataset=xdr_data
| transaction f1, f2 startswith="str_1" endswith="str_2" maxevents=99
{'TRANSACTION': {'fields': ['f1', 'f2'], 'search': {'startswith': {'filter': {'free_text': 'str_1'}}, 'endswith': {'filter': {'free_text': 'str_2'}}}, 'maxevents':
99}}
17.3.20 | union
Abstract
Learn more about the Cortex Query Language union stage that combines two result sets into a single result set.
Syntax
union <datasetname>
Description
The union() stage combines two result sets into one result. It can be used in two different ways.
If a dataset name is provided with no other arguments, the two datasets are combined for the duration of the query, and the fields in both datasets are
available to subsequent stages.
If a Cortex Query Language (XQL) query is provided to this stage, the result set from that XQL union query is combined with the result set from the rest of the
query. This is effectively an inner join statement.
Examples
First, create a dataset using the target stage. This results in a persistent dataset that we can use later with a union stage.
dataset = xdr_data
| filter event_type = FILE and event_sub_type = FILE_WRITE
| fields agent_id, action_file_sha256 as file_hash, agent_hostname
| target type=dataset file_event
Then run a second query, using union so that the query can access the contents of the file_event dataset. Notice that this second query uses the
file_hash alias that was defined for the file_event dataset.
dataset = xdr_data
| filter event_type = PROCESS and event_sub_type = PROCESS_START
| union file_event
| fields agent_id, agent_hostname, file_hash, actor_process_image_path as executed_by
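The second usage, passing an XQL query to union, can be sketched as follows, assuming the parenthesized inline-query form; the field choices mirror the file_event example above:
dataset = xdr_data
| filter event_type = PROCESS and event_sub_type = PROCESS_START
| union (dataset = xdr_data | filter event_type = FILE and event_sub_type = FILE_WRITE | fields agent_id, action_file_sha256 as file_hash)
| fields agent_id, file_hash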
17.3.21 | view
Abstract
Learn more about the Cortex Query Language view stage that configures the display of the result set.
Syntax
Description
The view() stage configures the display of the result set in the following ways:
highlight: Highlights specified strings that Cortex XDR finds in the specified fields. The highlight values that you provide are matched as a substring
search, so even a partial value can be highlighted in the final results table.
graph type: Creates a column, line, or pie chart based on the values found for the fields specified in the xaxis and yaxis parameters. In this mode,
view also offers a large number of parameters that allow you to control colors, decorations, and other behavior used for the final chart. You can also
define a graph subtype, when setting the graph type to either column or pie.
If you use graph type, the fields specified for xaxis and yaxis must be collatable or the query will fail.
column order: Enables you to list the query results by popularity, where the most non-null returned fields are displayed first using the syntax view
column order = populated. By default, if column order is not defined (or view column order=default), the original column order is used.
This option does not apply to Cortex Query Language (XQL) queries in widgets, Correlation Rules, public APIs, reports, and dashboards. If you include
the view column order syntax in these types of queries, Cortex XDR disregards the stage from the query and completes the rest of the query.
Examples
Use the dedup stage to collect unique combinations of event_type and event_sub_type values. Highlight the word "STREAM" when it appears in the result
set.
dataset = xdr_data
| fields event_type, event_sub_type
| dedup event_type, event_sub_type by asc _time
| view highlight fields = event_sub_type values = "STREAM"
Count the number of unique files accessed by each user, and show a column graph of the results. This query uses comp count_distinct to calculate the
number of unique files per username.
dataset = xdr_data
| fields actor_effective_username as username, action_file_path as file_path
| filter file_path != null and username != null
| comp count_distinct(file_path) as file_count by username
| view graph type = column xaxis = username yaxis = file_count
Count the number of unique files accessed by each user, and display the results by popularity, so that the fields with the most non-null values returned are
shown first. This query uses comp count_distinct to calculate the number of unique files per username.
dataset = xdr_data
| fields actor_effective_username as username, action_file_path as file_path
| filter file_path != null and username != null
| comp count_distinct(file_path) as file_count by username
| view column order = populated
17.3.22 | windowcomp
Abstract
Learn more about the Cortex Query Language windowcomp stage that precedes functions calculating statistics.
Syntax
windowcomp <analytic function> (<field>)[by <fieldA> [,<fieldB>,...]] [sort [asc|desc] <field1> [, [asc|desc] <field2>,...]] [between 0|null|<number>|-<number>
[and 0|null|<number>|-<number>] [frame_type=range]] [as <alias>]
Defining a field with an analytic function is optional when using a count function. For rank and row_number functions, it's not allowed.
Description
The windowcomp stage precedes functions that calculate statistics. The results compute values over a group of rows and return a single result for each row, for
all records that contain matching values for the fields identified using a combination of the by clause, sort, and between window frame clause. Only one function can be defined per
field, while the other parameters are optional. However, it's possible to define multiple fields.
Example 93.
| windowcomp sum(field_1) by field_2 sort field_3 as field_4, min(field_5) by field_6 sort field_7 as field_8
Supported functions
avg
count
first_value
lag
last_value
max
median
min
rank
row_number
stddev_population
sum
Optional parameters
The optional parameters available in the windowcomp stage are explained below:
By clause: [by <fieldA> [,<fieldB>,...]]
The by clause is used to break up the input field rows into separate partitions, over which the windowcomp function is independently evaluated. When this optional clause is omitted, all rows in the input table comprise a single partition.
Sort: [sort [asc|desc] <field1> [,[asc|desc] <field2>,...]]
Defines how field rows are ordered within a partition as either ascending (asc) or descending (desc). This clause is optional in most situations, but is required in some cases for navigation functions and the rank function.
Between window frame clause: [between 0|null|<number>|-<number> [and 0|null|<number>|-<number>] [frame_type=range]]
Sets the window frame around the current row within a partition, over which the window function is evaluated. Numbering functions and the lag function can't be used in the window frame clause. The clause creates a window frame with a lower and an upper boundary: the first boundary represents the lower boundary and the second boundary represents the upper boundary. Every boundary can be one of the following options:
null: Starts at the beginning or at the end of the partition, depending on the placement of the null.
0: The current row, so the window frame starts or ends at the current row.
positive/negative <number>: The start or end of the window frame relative to the current row. If a start <number> and end <number> are defined, the end <number> must be greater than the start <number>.
If the sort is included but the window frame clause isn't, a default window frame is used. The frame type is one of the following:
rows (default): Computes the window frame based on physical offsets from the current row. For example, you could include two rows before and after the current row. To apply the default frame_type=rows, nothing needs to be added to the windowcomp stage syntax, as it's automatically built into the query.
range: Computes the window frame based on a logical range of rows around the current row, based on the current row's sort key value. The provided range value is added to or subtracted from the current row's key value to define a starting or ending range boundary for the window frame. Setting the range with start or end numeric, nonzero boundaries requires using exactly one numeric sort field.
Alias clause: [as <alias>]
Use the alias clause to provide a column label (field name) for the windowcomp results. When the new field name already exists in the schema, it's replaced with the new name. For example, if the xdr_data dataset already has a field in the schema called existing_field, the new existing_field replaces the old one:
dataset = xdr_data
| windowcomp sum(field_a) as existing_field
The examples provided are based on the following data table for a dataset called ips:
Ip            Category  Logins
192.168.10.1  pc        23
192.168.10.2  server    2
192.168.20.1  pc        9
192.168.20.4  server    8
192.168.20.5  pc        2
192.168.30.1  pc        10
Compute the total number of logins over all rows. When the by clause is omitted, all rows form a single partition, so every row returns the same total in the TOTAL_LOGINS column:
Ip            Logins  Category  TOTAL_LOGINS
192.168.10.2  2       server    54
192.168.20.5  2       pc        54
192.168.20.4  8       server    54
192.168.20.1  9       pc        54
192.168.30.1  10      pc        54
192.168.10.1  23      pc        54
Compute the total number of logins per category. Each row returns the total for its category in the TOTAL_LOGINS column:
Ip            Logins  Category  TOTAL_LOGINS
192.168.10.2  2       server    10
192.168.20.4  8       server    10
192.168.20.5  2       pc        44
192.168.20.1  9       pc        44
192.168.30.1  10      pc        44
192.168.10.1  23      pc        44
The sum is computed with respect to the order defined using the sort clause. These two queries produce the same results:
dataset = ips
| windowcomp sum(logins) by category sort asc logins between null and 0 as total_logins
OR
dataset = ips
| windowcomp sum(logins) by category sort asc logins between null as total_logins
Ip            Logins  Category  TOTAL_LOGINS
192.168.10.2  2       server    2
192.168.20.4  8       server    10
192.168.20.5  2       pc        2
192.168.20.1  9       pc        11
192.168.30.1  10      pc        21
192.168.10.1  23      pc        44
Query 4: Compute a cumulative sum, where only preceding rows are analyzed.
The analysis starts two rows before the current row in the partition.
dataset = ips
| windowcomp sum(logins) sort asc logins between null and -2 as total_logins
Ip            Logins  Category  TOTAL_LOGINS
192.168.20.5  2       pc        NULL
192.168.20.4  8       server    2
192.168.20.1  9       pc        4
192.168.30.1  10      pc        12
192.168.10.1  23      pc        21
Compute a moving average of logins. The lower boundary is 1 row before the current row, and the upper boundary is 1 row after the current row.
dataset = ips
| windowcomp avg(logins) sort asc logins between -1 and 1 as avg_logins
Ip            Logins  Category  AVG_LOGINS
192.168.10.2  2       server    2
192.168.20.5  2       pc        4
192.168.20.1  9       pc        9
192.168.30.1  10      pc        14
192.168.10.1  23      pc        16.5
Retrieve the last value of ip in each category when rows are sorted in ascending order by logins (the IP with the most logins per category), evaluated over the full partition.
dataset = ips
| windowcomp last_value(ip) by category sort asc logins between null and null as most_popular
Ip            Logins  Category  MOST_POPULAR
192.168.20.5  2       pc        192.168.10.1
192.168.20.1  9       pc        192.168.10.1
192.168.30.1  10      pc        192.168.10.1
192.168.10.1  23      pc        192.168.10.1
Query 7: Calculate the rank of each IP within its category based on the number of logins.
dataset = ips
| windowcomp rank() by category sort asc logins as rank
Ip            Logins  Category  RANK
192.168.10.2  2       server    1
192.168.20.4  8       server    2
192.168.20.5  2       pc        1
192.168.20.1  9       pc        2
192.168.30.1  10      pc        3
192.168.10.1  23      pc        4
Query 8: Retrieve the most popular IP within a limited window frame around the current row, rather than across the entire category partition.
dataset = ips
| windowcomp last_value(ip) by category sort asc logins between -1 and 1 as most_popular
Ip            Logins  Category  MOST_POPULAR
192.168.20.5  2       pc        192.168.20.1
192.168.20.1  9       pc        192.168.30.1
192.168.30.1  10      pc        192.168.10.1
192.168.10.1  23      pc        192.168.10.1
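As a further sketch against the same ips dataset, frame_type=range computes the window frame from the sort key values rather than from row offsets; for each row, logins values within 1 of the current row's logins are summed. The alias nearby_logins is illustrative:
dataset = ips
| windowcomp sum(logins) sort asc logins between -1 and 1 frame_type=range as nearby_logins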
17.4 | Functions
Abstract
Learn more about the functions that can be used with Cortex Query Language (XQL) stages in Cortex XDR.
Some Cortex Query Language (XQL) stages can call XQL functions to convert the data to a desired format. For example, the current_time() function
returns the current timestamp, while the extract_time() function can obtain the hour information in the timestamp. Functions may or may not need input
parameters. The filter and alter stages are the two stages that can use functions for data transformations.
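For instance, a function result can be assigned to a new field with alter and then referenced in a later filter; a small illustrative sketch using functions documented below:
dataset = xdr_data
| alter event_hour = extract_time(current_time(), "HOUR")
| filter event_hour != null
| fields event_hour
| limit 1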
17.4.1 | add
Abstract
Learn more about the Cortex Query Language add() function that adds two integers.
Syntax
Description
The add() function adds two positive integers. Parameters can be either integer literals, or integers as a string type, such as might be contained in a data
field.
Example
dataset = xdr_data
| alter mynum = add(action_file_size, 3)
| fields action_file_size, mynum
| filter action_file_size > 0
| limit 1
17.4.2 | approx_count
Abstract
Learn more about the Cortex Query Language approx_count approximate aggregate comp function.
Syntax
comp approx_count(<field>) [as <alias>] [by <field1>[,<field2>...]] [addrawdata = true|false [as <target field>]]
The approx_count approximate aggregate is a comp function that counts the number of distinct values in the given field over a group of rows. For the group
of rows, the function returns an approximate result as a single integer value, for all records that contain matching values for the fields identified in the by
clause. Use this approximate aggregate function to produce approximate results, instead of exact results used with regular aggregate functions, which are
more scalable in terms of memory usage and time. This approximate aggregate function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to configure
the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Example
Returns a single integer value after approximately counting the number of distinct values in the event_id field over a group of rows.
dataset = xdr_data
| fields event_id
| comp approx_count(event_id)
17.4.3 | approx_quantiles
Abstract
Learn more about the Cortex Query Language approx_quantiles approximate aggregate comp function.
Syntax
comp approx_quantiles(<field>, <number>, <true|false>) [as <alias>] [by <field1>[,<field2>...]][addrawdata = true|false [as <target field>]]
Description
The approx_quantiles approximate aggregate is a comp function that returns the approximate boundaries as a single value for a group of distinct (true) or
non-distinct (false, the default) values for the specified field over a group of rows, for all records that contain matching values for the fields identified in the by clause.
This function returns an array of <number> + 1 elements, where the first element is the approximate minimum and the last element is the approximate
maximum. Use this approximate aggregate function to produce approximate results, instead of exact results used with regular aggregate functions, which are
more scalable in terms of memory usage and time. This approximate aggregate function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Examples
Returns the approximate boundaries for a group of distinct values in the event_id field.
dataset = xdr_data
| fields event_id
| comp approx_quantiles(event_id, 100, true)
Returns the approximate boundaries for a group of non-distinct values in the event_id field.
dataset = xdr_data
| fields event_id
| comp approx_quantiles(event_id, 100)
17.4.4 | approx_top
Abstract
Learn more about the Cortex Query Language approx_top approximate aggregate comp function.
Syntax
comp approx_top(<string field>, <number>) [as <alias>] [by <field1>[,<field2>...]][addrawdata = true|false [as <target field>]]
comp approx_top(<string field>, <number>, <weight string field>) [as <alias>] [by <field1>[,<field2>...]][addrawdata = true|false [as <target field>]]
The approx_top approximate aggregate is a comp function that, depending on the number of parameters, returns either an approximate count or sum of top
elements. This approximate aggregate function returns a single value for the given field over a group of rows, for all records that contain matching values for
the fields identified in the by clause. This function is used in combination with a comp stage. When a third parameter is specified, it references a field that
contains a numeric value (weight) that is used to calculate a sum. The return value is an array with up to <number> of JSON strings. Each string represents an
object (struct) containing 2 keys and corresponding values. The keys depend on whether a third parameter has been supplied or not.
When defining approx_top to count and the third parameter is omitted, each struct will have these keys: "value" and "count", where the "value" specifies a
unique field value and "count" specifies the number of occurrences. When the third parameter is specified in approx_top, it has to be a name of a field that
contains a numeric value that is used to calculate the final sum for each unique value in the first specified field. Each struct in this case will have these keys:
"value" and "sum".
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Use this approximate aggregate function to produce approximate results, instead of exact results used with regular aggregate functions, which are more
scalable in terms of memory usage and time.
Examples
Returns an approximate count of the top 10 agent IDs in the agent_id field that appear the most frequently. The return value is an array containing 10 JSON
strings with a "value" and "count".
dataset = xdr_data
| fields agent_id
| comp approx_top(agent_id, 10)
Returns an approximate sum of the top 10 agent IDs in the agent_id field by their action_session_duration. The return value is an array containing 10
JSON strings with a "value" and "sum" for each agent_id.
dataset = xdr_data
| fields agent_id, action_session_duration
| comp approx_top(agent_id, 10, action_session_duration)
17.4.5 | array_all
Abstract
Syntax
array_all(<array>, <condition>)
The <operator> can be any of the ones supported, such as = and !=.
Description
The array_all() function returns true when all the elements in a particular array match the condition in the specified array element. Otherwise, the function
returns false.
Example
When the dfe_labels array is not empty, use the alter stage to create a new column called x that returns true when all the elements in the dfe_labels
array is equal to network; otherwise, the function returns false.
dataset = xdr_data
| filter dfe_labels != null
| alter x = array_all(dfe_labels , "@element" = "network")
| fields x, dfe_labels
| limit 100
17.4.6 | array_any
Abstract
Syntax
array_any(<array>, <condition>)
The <operator> can be any of the ones supported, such as = and !=.
Description
The array_any() function returns true when at least 1 element in a particular array matches the condition in the specified array element. Otherwise, the
function returns false.
Example
When the dfe_labels array is not empty, use the alter stage to create a new column called x that returns true when at least 1 element in the dfe_labels
array is equal to network; otherwise, the function returns false.
dataset = xdr_data
| filter dfe_labels != null
| alter x = array_any(dfe_labels , "@element" = "network")
| fields x, dfe_labels
| limit 100
17.4.7 | arrayconcat
Abstract
Learn more about the Cortex Query Language arrayconcat() function that joins two or more arrays into a single array.
Syntax
arrayconcat (<array1>,<array2>[,<array3>...])
Description
The arrayconcat() function accepts two or more arrays, and it joins them into a single array.
Example
Given the following arrays:
first_array : [1,2,3]
second_array : [44,55]
third_array : [4,5,6]
arrayconcat(first_array, second_array, third_array) returns:
[1,2,3,44,55,4,5,6]
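A runnable sketch of the same idea, building the input arrays with arraycreate (which produces string elements here):
dataset = xdr_data
| alter first_array = arraycreate("1", "2", "3"), second_array = arraycreate("44", "55"), third_array = arraycreate("4", "5", "6")
| alter combined = arrayconcat(first_array, second_array, third_array)
| fields combined
| limit 1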
17.4.8 | arraycreate
Abstract
Learn more about the Cortex Query Language arraycreate() function that returns an array based on the given parameters defined for the array elements.
Syntax
Description
The arraycreate() function returns an array based on the given parameters defined for the array elements.
Example
Returns a final array to a field called x that is comprised of the elements [1,2].
dataset = xdr_data
| alter x = arraycreate("1", "2")
| fields x
17.4.9 | arraydistinct
Abstract
Learn more about the Cortex Query Language arraydistinct() function that returns an array containing unique values found in the original array.
Syntax
arraydistinct (<array>)
Description
The arraydistinct() function accepts an array, and it returns a new array containing only unique elements found in the original array. That is, given the
array:
[0,1,1,1,4,5,5]
the function returns:
[0,1,4,5]
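A runnable sketch of the same idea, building the input array with arraycreate (string elements):
dataset = xdr_data
| alter my_array = arraycreate("0", "1", "1", "1", "4", "5", "5")
| alter unique_elements = arraydistinct(my_array)
| fields unique_elements
| limit 1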
17.4.10 | arrayfilter
Abstract
Syntax
arrayfilter(<array>, <condition>)
The <operator> can be any of the ones supported, such as = and !=.
Description
The arrayfilter() function returns a new array with the elements which meet the given condition. The function does this by filtering the results of an array in
one of the following ways:
Returns the results when a particular array is set to a specified array element.
Though it's possible to define the arrayfilter() function with any condition, the examples below focus on conditions using the @element that are based
on the current element being tested.
Basic Example
When the dfe_labels array is not empty, use the alter stage to assign a value to a field called x that returns the value of the arrayfilter function. The
arrayfilter function filters the dfe_labels array for the array element set to network.
dataset = xdr_data
| filter dfe_labels != null
| alter x = arrayfilter(dfe_labels , "@element" = "network")
| fields x, dfe_labels
| limit 100
Advanced Example
The queries below illustrate how to check whether any IPs are included or not included in the blocked list called CIDRS. The Query Results tables are also
included to help explain what happens as the arrayfilter() function is slightly modified.
This query returns results for each IP that don't match anything in the CIDRS array blocked list:
dataset = xdr_data
| limit 1
| alter cidrs = arraycreate("10.0.0.0/8","172.16.0.0/16"), ip = arraycreate("192.168.1.1", "172.16.20.18")
| fields cidrs, ip
| arrayexpand ip
| alter non_matching_cidrs = arrayfilter(cidrs, ip not incidr "@element")
Results:
The following table details for each IP the logic that is first performed before the final results for the query are displayed:
IP            Statement                              TRUE/FALSE
192.168.1.1   192.168.1.1 not incidr 10.0.0.0/8      TRUE
192.168.1.1   192.168.1.1 not incidr 172.16.0.0/16   TRUE
172.16.20.18  172.16.20.18 not incidr 10.0.0.0/8     TRUE
172.16.20.18  172.16.20.18 not incidr 172.16.0.0/16  FALSE
For each IP, an array of the CIDRs that the IP doesn't match is returned in the NON_MATCHING_CIDRS column. As the table above shows,
arrayfilter() only returns the elements whose condition resolves as TRUE. This explains the query results displayed in the following table:
IP            CIDRS                            NON_MATCHING_CIDRS
192.168.1.1   ["10.0.0.0/8","172.16.0.0/16"]   ["10.0.0.0/8","172.16.0.0/16"]
172.16.20.18  ["10.0.0.0/8","172.16.0.0/16"]   ["10.0.0.0/8"]
Now, let's update the query to return results for each IP that match anything in the CIDRS array:
dataset = xdr_data
| limit 1
| alter cidrs = arraycreate("10.0.0.0/8","172.16.0.0/16"), ip = arraycreate("192.168.1.1", "172.16.20.18")
| fields cidrs, ip
| arrayexpand ip
| alter matching_cidrs = arrayfilter(cidrs, ip incidr "@element")
Results:
The following table details for each IP the logic that is first performed before the final results for the query are displayed:
IP            Statement                          TRUE/FALSE
192.168.1.1   192.168.1.1 incidr 10.0.0.0/8      FALSE
192.168.1.1   192.168.1.1 incidr 172.16.0.0/16   FALSE
172.16.20.18  172.16.20.18 incidr 10.0.0.0/8     FALSE
172.16.20.18  172.16.20.18 incidr 172.16.0.0/16  TRUE
For each IP, an array of the CIDRs that the IP matches is returned in the MATCHING_CIDRS column. As the table above shows,
arrayfilter() only returns the elements whose condition resolves as TRUE. This explains the query results displayed in the following table:
IP            CIDRS                            MATCHING_CIDRS
192.168.1.1   ["10.0.0.0/8","172.16.0.0/16"]   []
172.16.20.18  ["10.0.0.0/8","172.16.0.0/16"]   ["172.16.0.0/16"]
17.4.11 | arrayindex
Abstract
Learn more about the Cortex Query Language arrayindex() function that returns the array element contained at the specified index.
Syntax
arrayindex(<array>, <index>)
Description
The arrayindex() function returns the value contained in the specified array position. Arrays are 0-based, and negative indexing is supported.
Examples
Use the split function to split IP addresses into an array of octets. Return the 3rd octet contained in the IP address.
dataset = xdr_data
| fields action_local_ip as alii
| alter ip_third_octet = arrayindex(split(alii, "."), 2)
| filter alii != null and alii != "0.0.0.0"
| limit 10
17.4.12 | arrayindexof
Abstract
Learn more about the Cortex Query Language arrayindexof() function that returns the index value of an array.
Syntax
arrayindexof(<array>, <condition>)
The <operator> can be any of the ones supported, such as = and !=.
Description
The arrayindexof() function enables you to return a value related to an array in one of the following ways.
Returns 0 if a particular array is not empty and the specified condition is true. If the condition is not met, a NULL value is returned.
Returns the 0-based index of a particular array element if a particular array is not empty and the specified condition using an @element is true. If the
condition is not met, a NULL value is returned.
Examples
Condition
Use the alter stage to assign a value returned by the arrayindexof function to a field called x. The arrayindexof function reviews the dfe_labels array
and returns 0 if the array is not empty and the backtrace_identities array contains more than 1 element. Otherwise, a NULL value is assigned to the x
field.
dataset in (xdr_data)
| alter x = arrayindexof(dfe_labels , array_length(backtrace_identities) > 1)
| fields x, dfe_labels
| limit 100
@Element
When the dfe_labels array is not empty, use the alter stage to assign the 0-based index value returned by the arrayindexof function to a field called x.
The arrayindexof function reviews the dfe_labels array and looks for the array element set to network. Otherwise, a NULL value is assigned to the x
field.
dataset = xdr_data
| filter dfe_labels != null
| alter x = arrayindexof(dfe_labels , "@element" = "network")
| fields x, dfe_labels
17.4.13 | array_length
Abstract
Learn more about the Cortex Query Language array_length() function that returns the length of an array.
Syntax
array_length (<array>)
Description
The array_length() function returns the number of elements contained in the given array.
Example
dataset = xdr_data
| fields action_local_ip as alii
| alter ip_len = array_length(split(alii, "."))
| filter alii != null and alii != "0.0.0.0"
| limit 1
17.4.14 | arraymap
Abstract
Learn more about the Cortex Query Language arraymap() function that applies a callable function to every element of an array.
Syntax
Description
The arraymap() function applies a specified function to every element of an array. For functions that require a fieldname, use "@element".
Examples
Extract the MAC address from the agent_interface_map field. This example uses the json_extract_scalar, to_json_string, json_extract_array, and
arraystring functions to extract the desired information.
dataset = xdr_data
| alter mac =
arraystring (
arraymap (
json_extract_array (to_json_string(agent_interface_map),"$."),
json_extract_scalar ("@element", "$.mac")
), ",")
17.4.15 | arraymerge
Abstract
Learn more about the Cortex Query Language arraymerge() function that returns an array created from a merge of the inner json-string arrays.
Syntax
arraymerge(<field>)
Description
The arraymerge() function returns an array, which is created from a merge of the inner json-string arrays, including merging a number of arraymap() function
arrays. This function accepts a single array of json-strings, which is the <field> in the syntax.
Example 1
Returns a final array called result that is created from a merge of the inner json-string arrays from array x and array y with the values ["a", "b", "c", "d"].
dataset = xdr_data
| alter x = to_json_string(arraycreate("a","b")), y = to_json_string(arraycreate("c","d"))
| alter result = arraymerge(arraycreate(x, y))
| fields result
Example 2
Returns a final array that is created from a merge of the arraymap by extracting the IP address from the agent_interface_map field and the first IPV4 address
found in the first element of the agent_interface_map array. This example uses the to_json_string and json_extract_array functions to extract the desired
information.
dataset = xdr_data
| alter a =
arraymerge (arraymap (agent_interface_map, to_json_string (json_extract_array (to_json_string("@element"), "$.ipv4") ) ) )
17.4.16 | arrayrange
Abstract
Learn more about the Cortex Query Language arrayrange() function that returns a portion of an array based on specified array indices.
Syntax
Description
The arrayrange() function returns a portion, or a slice, of an array given a start and end range. Indices are 0-based, and the start range is inclusive, but the
end range is exclusive.
Example
Given the array:
[0,1,2,3,4,5,6]
the call arrayrange(<array>, 2, 4) returns:
[2,3]
If you specify an end index that is higher than the last element in the array, the resulting array contains the elements from the starting index through the end of the array. For example, arrayrange(<array>, 2, 8) returns:
[2,3,4,5,6]
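A runnable sketch of the same idea, building the input array with arraycreate (string elements):
dataset = xdr_data
| alter my_array = arraycreate("0", "1", "2", "3", "4", "5", "6")
| alter sliced = arrayrange(my_array, 2, 4)
| fields sliced
| limit 1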
17.4.17 | arraystring
Abstract
Learn more about the Cortex Query Language arraystring() function that returns a string from an array, where each array element is joined by a defined
delimiter.
Syntax
Description
The arraystring() function returns a string from an array, where each array element is joined by a defined delimiter.
Examples
Retrieve all action_app_id_transitions that are not null, combine each array into a string where array elements are delimited by " : ", and then
dedup the resulting string.
dataset = xdr_data
| fields action_app_id_transitions as aait
| alter transitions_string = arraystring(aait, " : ")
| dedup transitions_string by asc _time
| filter aait != null
17.4.18 | avg
Abstract
Learn more about the Cortex Query Language avg function used with both comp and windowcomp stages.
Syntax
comp stage
comp avg(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
windowcomp stage
windowcomp avg(<field>) [by <field> [,<field>,...]] [sort [asc|desc] <field1> [, [asc|desc] <field2>,...]] [between 0|null|<number>|-<number> [and 0|null|
<number>|-<number>] [frame_type=range]] [as <alias>]
Description
The avg() function is used to return the average value of an integer field over a group of rows. The function syntax and application are based on the preceding
stage:
comp stage
When the avg aggregation function is used with a comp stage, the function returns a single average value of an integer field for a group of rows, for all records
that contain matching values for the fields identified in the by clause.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
windowcomp stage
When the avg aggregate function is used with a windowcomp stage, the function returns a single average value of an integer field for each row in the group of
rows, for all records that contain matching values for the fields identified using a combination of the by clause, sort, and between window frame clause. The
results are provided in a new column in the results table.
Examples
comp example
Return a single average value of the action_total_download field for a group of rows, for all records that have matching values for their
actor_process_image_path and actor_process_command_line values. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing a single value for the results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp avg(Download) as average_download by Process_Path, Process_CMD
addrawdata = true as raw_data
windowcomp example
Return the events that are above average per Process_Path and Process_CMD. The query returns a maximum of 100 xdr_data records in a column called
avg_download.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| windowcomp avg(Download) by Process_Path, Process_CMD as avg_download
| filter Download > avg_download
17.4.19 | coalesce
Abstract
Learn more about the Cortex Query Language coalesce() function that returns the first value that is not null from a defined list of fields.
Syntax
Description
The coalesce() function takes an arbitrary number of arguments and returns the first value that is not NULL.
Given a list of fields that contain usernames, select the first one that is not null and display it in the username column.
dataset = xdr_data
| fields actor_primary_username,
os_actor_primary_username,
causality_actor_primary_username
| alter username = coalesce(actor_primary_username,
os_actor_primary_username,
causality_actor_primary_username)
17.4.20 | concat
Abstract
Learn more about the Cortex Query Language concat() function that joins multiple strings into a single string.
Syntax
Description
The concat() function joins multiple strings into a single string.
Example
Display the first non-NULL action_boot_time field value. In a second column called abt_string, use the concat() function to prepend "str: " to the
value, and then display it.
dataset = xdr_data
| fields action_boot_time as abt
| filter abt != null
| alter abt_string = concat("str: ", to_string(abt))
| limit 1
17.4.21 | convert_from_base_64
Abstract
Syntax
convert_from_base_64("<base64-encoded input>")
Description
The convert_from_base_64() function converts the base64-encoded input to the decoded string format.
Example
Returns the decoded string format Hello world from the base64-encoded input "SGVsbG8gd29ybGQ=".
convert_from_base_64("SGVsbG8gd29ybGQ=")
17.4.22 | count
Abstract
Learn more about the Cortex Query Language count function used with both comp and windowcomp stages.
Syntax
comp stage
comp count([<field>]) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
windowcomp stage
windowcomp count([<field>]) [by <field> [,<field>,...]] [sort [asc|desc] <field1> [, [asc|desc] <field2>,...]] [between 0|null|<number>|-<number> [and 0|null|
<number>|-<number>] [frame_type=range]] [as <alias>]
Description
The count() function returns a single count of the number of rows. With a field, only rows with non-null values for that field are counted over the group of rows;
without a field, all rows are counted, including null values. The function syntax and application are based on the preceding stage:
comp stage
When the count aggregation function is used with a comp stage, the function returns one of the following:
With a field: Returns a single count for the number of non-null rows, for all records that contain matching values for the fields identified in the by clause.
Without a field: Counts the number of rows and includes null values.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Use count_distinct to retrieve the number of unique values in the result set.
windowcomp stage
When the count aggregate function is used with a windowcomp stage, the function returns one of the following:
With a field: Returns a single count for the number of non-null rows for all records that contain matching values for the fields identified using a
combination of the by clause, sort, and between window frame clause. The results are provided in a new column in the results table.
Without a field: Counts the number of rows and includes null values.
Examples
comp example
Return a single count of all values found for the actor_process_image_path field in the group of rows, for all records that have matching values for their
actor_process_image_path and actor_process_command_line values. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing a single value for the results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp count(Process_Path) as num_process_path by process_path, process_cmd addrawdata = true as raw_data
| sort desc process_path
windowcomp example
Return a single count for the number of values found in the dns_query_name field for each row in the group of rows, for all records that contain matching
values in the agent_ip_addresses field. The query returns a maximum of 100 xdr_data records. The results are provided in the
count_dns_query_name column.
dataset = xdr_data
| limit 100
| windowcomp count(dns_query_name) by agent_ip_addresses as count_dns_query_name
17.4.23 | count_distinct
Abstract
Learn more about the Cortex Query Language count_distinct aggregate comp function that counts the number of unique values found for a field in the
result set.
Syntax
comp count_distinct(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata= true|false [as <target field>]]
Description
The count_distinct aggregation is a comp function that returns a single value for the number of unique values found for a field over a group of rows, for all
records that contain matching values for the fields identified in the by clause. This aggregate function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Use count to retrieve the total number of values in the result set.
Examples
Return a single count of the number of unique values found for the actor_process_image_path field over a group of rows, for all records that have
matching values for their actor_process_image_path and actor_process_command_line values. The query calculates a maximum of 100
xdr_data records and includes a raw_data column listing the raw data events used to display the final comp results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp count_distinct(Process_Path) as num_process_path by process_path, process_cmd addrawdata = true as raw_data
| sort desc process_path
17.4.24 | current_time
Abstract
Learn more about the Cortex Query Language current_time() function that returns the current time as a timestamp.
Syntax
current_time()
Description
The current_time() function returns a timestamp value representing the current time in the format MMM dd YYYY HH:mm:ss, such as Jul 12th 2023
20:51:34.
Example
From the xdr_data dataset, returns the events of the last 24 hours whose actor process started running more than 30 days ago.
dataset = xdr_data
| filter timestamp_diff(current_time(),to_timestamp(actor_process_execution_time, "MILLIS"), "DAY") > 30
17.4.25 | date_floor
Abstract
Syntax
Description
The date_floor() function converts a timestamp value for a particular field or function result that contains a number, and returns a timestamp rounded down
to the nearest whole value of a specified <time unit>, including a year (y), month (mo), week (w), day (d), or hour (h). The <time zone> offset is optional
to configure using an hours offset, such as “+08:00”, or using a time zone name from the List of Supported Time Zones, such as "America/Chicago". When you
do not configure a time zone, the default is UTC.
Example
Returns a maximum of 100 xdr_data records whose _time field values are less than a timestamp value. The timestamp value
undergoes a number of different function manipulations. The current time is first rounded to the nearest whole value for the week according to the
America/Los_Angeles time zone. This timestamp value is then converted to the Unix epoch timestamp format in seconds and is added to the -2073600 Unix
epoch time. This Unix epoch time value in seconds is then converted to the final timestamp value that is used to filter the _time fields and return the resulting
records.
dataset = xdr_data
| filter _time < to_timestamp(add(to_epoch(date_floor(current_time(),"w", "America/Los_Angeles")),-2073600))
| limit 100
17.4.26 | divide
Abstract
Learn more about the Cortex Query Language divide() function that divides two integers.
Syntax
Description
The divide() function divides two positive integers. Parameters can be either integer literals, or integers as a string type, such as might be contained in a
data field.
Example
dataset = xdr_data
| alter mynum = divide(action_file_size, 3)
| fields action_file_size, mynum
| filter action_file_size > 3
| limit 1
17.4.27 | earliest
Abstract
Learn more about the Cortex Query Language earliest aggregate comp function that returns the earliest field value found with the matching criteria.
Syntax
comp earliest(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
Description
The earliest aggregation is a comp function that returns the chronologically earliest value found for a field over a group of rows that has matching values for
the fields identified in the by clause. This function is dependent on a time-related field, so for your query to be considered valid, ensure that the dataset
running this query contains a time-related field. This function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Examples
Return the chronologically earliest timestamp found for any given action_total_download value for all records that have matching values for their
actor_process_image_path and actor_process_command_line fields. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing the raw data events used to display the final comp results.
dataset = xdr_data
| fields _time, actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp earliest(_time) as download_time by Process_Path, Process_CMD addrawdata = true as raw_data
17.4.28 | extract_time
Abstract
Learn more about the Cortex Query Language extract_time() function that returns a specified portion of a timestamp.
Syntax
Description
The extract_time() function returns a specified part of a timestamp. The part parameter must be one of the following keywords:
DAYOFWEEK
DAYOFYEAR
HOUR
MICROSECOND
MILLISECOND
MINUTE
MONTH
QUARTER
SECOND
YEAR
Example
dataset = xdr_data
| alter timepart = extract_time(current_time(), "HOUR")
| fields timepart
| limit 1
17.4.29 | extract_url_host
Abstract
Syntax
extract_url_host ("<URL>")
Description
The extract_url_host() function returns the host of the URL. The function always returns a value in lowercase characters even if the URL provided
contains uppercase characters.
Example
extract_url_host ("https://www.paloaltonetworks.com")
extract_url_host ("//user:password@a.b:80/path?query")
Returns www.example.co.uk in lowercase for the complete URL: www.Example.Co.UK, which includes uppercase characters.
extract_url_host ("www.Example.Co.UK")
extract_url_host ("https://www.test.paloaltonetworks.com/suffix/another_suffix")
Returns one xdr_data record in the results table where the host of the URL https://www.test.paloaltonetworks.com is listed in the URL_HOST
column as www.test.paloaltonetworks.com.
dataset = xdr_data
| alter url_host = extract_url_host("https://www.test.paloaltonetworks.com")
| fields url_host
| limit 1
17.4.30 | extract_url_pub_suffix
Abstract
Syntax
extract_url_pub_suffix ("<URL>")
Description
The extract_url_pub_suffix() function returns the public suffix of the URL, such as com, org, or net. The function always returns a value in lowercase
characters even if the URL provided contains uppercase characters.
Example
extract_url_pub_suffix ("https://paloaltonetworks.com")
extract_url_pub_suffix ("https://www.test.paloaltonetworks.com/suffix/another_suffix")
Returns one xdr_data record in the results table where the public suffix of the URL https://www.paloaltonetworks.com is listed in the
URL_PUB_SUFFIX column as com.
dataset = xdr_data
| alter url_pub_suffix = extract_url_pub_suffix("https://paloaltonetworks.com")
| fields url_pub_suffix
| limit 1
17.4.31 | extract_url_registered_domain
Abstract
Syntax
extract_url_registered_domain ("<URL>")
Description
The extract_url_registered_domain() function returns the registered domain or registerable domain, the public suffix plus one preceding label, of a
URL. The function always returns a value in lowercase characters even if the URL provided contains uppercase characters.
Examples
extract_url_registered_domain ("https://www.paloaltonetworks.com")
extract_url_registered_domain ("//user:password@a.b:80/path?query")
Returns example.co.uk in lowercase for the complete URL: www.Example.Co.UK, which includes uppercase characters.
extract_url_registered_domain ("www.Example.Co.UK")
extract_url_registered_domain ("https://www.test.paloaltonetworks.com/suffix/another_suffix")
Returns one xdr_data record in the results table where the registered domain of the URL https://www.test.paloaltonetworks.com is listed in the
REGISTERED_DOMAIN column as paloaltonetworks.com.
dataset = xdr_data
| alter registered_domain = extract_url_registered_domain("https://www.test.paloaltonetworks.com")
17.4.32 | first
Abstract
Learn more about the Cortex Query Language first aggregate comp function that returns the first field value found in the dataset with the matching criteria.
Syntax
comp first(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
Description
The first aggregation is a comp function that returns a single first value found for a field in the dataset over a group of rows that has matching values for the
fields identified in the by clause. This function is dependent on a time-related field, so for your query to be considered valid, ensure that the dataset running
this query contains a time-related field. This function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Examples
Return the first timestamp found in the dataset for any given action_total_download value for all records that have matching values for their
actor_process_image_path and actor_process_command_line fields. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing the raw data events used to display the final comp results.
dataset = xdr_data
| fields _time,actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp first(_time) as download_time by Process_Path, Process_CMD addrawdata = true as raw_data
17.4.33 | first_value
Abstract
Learn more about the Cortex Query Language first_value() navigation function that is used with a windowcomp stage.
Syntax
windowcomp first_value(<field>) [by <field> [,<field>,...]] sort [asc|desc] <field1> [, [asc|desc] <field2>,...] [between 0|null|<number>|-<number> [and 0|null|
<number>|-<number>] [frame_type=range]] [as <alias>]
Description
The first_value() function is a navigation function that is used in combination with a windowcomp stage. For each row in the group of rows, it returns the
value of the field from the first row of the current window frame, for all records that contain matching values for the fields identified using a
combination of the by clause, sort (mandatory), and between window frame clause.
Example
preset = authentication_story
| filter auth_identity not in (null, """""") and auth_outcome = """SUCCESS""" and action_country != UNKNOWN
| alter et = to_epoch(_time), t = _time
| bin t span = 1d
| limit 100
| windowcomp first_value(action_local_ip) by auth_identity, t sort asc et between null and null as first_action_local_ip
| fields auth_identity , *action_local_ip
17.4.34 | floor
Abstract
Learn more about the Cortex Query Language floor() function that rounds a field that contains a number down to the nearest whole integer.
Syntax
floor (<number>)
The floor() function converts a field that contains a number, and returns an integer rounded down to the nearest whole number.
Example
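A minimal sketch, assuming a numeric literal is accepted as the parameter; the alias rounded_down is illustrative:
dataset = xdr_data
| alter rounded_down = floor(3.7)
| fields rounded_down
| limit 1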
17.4.35 | format_string
Abstract
Syntax
Description
The format_string() function returns a string from a format string that contains zero or more format specifiers, along with a variable length list of additional
arguments that matches the format specifiers. A format specifier is initiated by the % symbol, and must map to one or more of the remaining arguments.
Usually, this is a one-to-one mapping, except when the * specifier is used.
Examples
STRING
dataset = xdr_data
| alter stylished_action_category_appID = format_string("-%s-", action_category_of_app_id )
| fields stylished_action_category_appID
| limit 100
Simple integer
dataset = xdr_data
| filter action_remote_ip_int != null
| alter simple_int = format_string("%d", action_remote_ip_int)
| fields simple_int
| limit 100
Integer with left blank padding
dataset = xdr_data
| filter action_remote_ip_int != null
| alter int_with_left_blank = format_string("|%100d|", action_remote_ip_int)
| fields int_with_left_blank
| limit 100
Integer with left zero padding
dataset = xdr_data
| filter action_remote_ip_int != null
| alter int_with_left_zero_padding = format_string("+%0100d+", action_remote_ip_int)
| fields int_with_left_zero_padding
| limit 100
17.4.36 | format_timestamp
Abstract
Learn more about the Cortex Query Language format_timestamp() function that returns a string after formatting a timestamp according to a specified
string format.
Syntax
Description
The format_timestamp() function returns a string after formatting a timestamp according to a specified string format. The <time zone> is optional to
configure using an hours offset, such as “+08:00”, or using a time zone name from the List of Supported Time Zones, such as "America/Chicago". The
format_timestamp() function should be used within an alter stage. For more information, see the examples below.
Examples
Returns a maximum of 100 xdr_data records, which includes a string field called new_time in the format YYYY/MM/dd HH:mm:ss, such as
2021/11/12 12:10:30. This format is detailed in the format_timestamp function, which defines retrieving the new_time (%Y/%m/%d %H:%M:%S) from
the _time field.
dataset = xdr_data
| alter new_time = format_timestamp("%Y/%m/%d %H:%M:%S", _time)
| fields new_time
| limit 100
Returns a maximum of 100 xdr_data records, which includes a string field called new_time in the format YYYY/MM/dd HH:mm:ss, such as 2021/11/12
01:53:35. This format is detailed in the format_timestamp function, which defines retrieving the new_time (%Y/%m/%d %H:%M:%S) from the _time
field and adding +03:00 hours as the time zone format.
dataset = xdr_data
| alter new_time = format_timestamp("%Y/%m/%d %H:%M:%S", _time, "+03:00")
| fields new_time
| limit 100
Returns a maximum of 100 xdr_data records, which includes a string field called new_time in the format YYYY/MM/dd HH:mm:ss, such as
2021/11/12 01:53:35. This format is detailed in the format_timestamp function, which defines retrieving the new_time (%Y/%m/%d
%H:%M:%S) from the _time field, and includes an "America/Chicago" time zone.
dataset = xdr_data
| fields _time
| alter new_time = format_timestamp("%Y/%m/%d %H:%M:%S", _time, "America/Chicago")
| fields new_time
| limit 100
17.4.37 | if
Abstract
Learn more about the Cortex Query Language if() function that returns a result after evaluating a condition.
Syntax
Regular if statement
if (<boolean expression>, <true return expression>[, <false return expression>])
if(<boolean expression1>, <true return expression1>, <boolean expression2>, <true return expression2>[, <boolean expression3>, <true return
expression3>,...][, <false return expression>])
if(<boolean expression1>, if(<boolean expression2>, <true return expression2> [,<false return expression2>])...[,<false return expression1>])
In the above syntax, if(<boolean expression2>, <true return expression2> [,<false return expression2>]) represents
the <true return expression1>.
Description
The if() function evaluates a single expression or group of expressions depending on the syntax used to define the function. The syntax can be set up in the
following ways:
Regular if statement: A single boolean expression is evaluated. If the expression evaluates as true, the function returns the results defined in the
second function argument. If the expression evaluates as false and a false return expression is defined, the function returns the results of the third
function argument; otherwise, if no false return expression is set, returns null.
Nested if/else statement: At least two boolean expressions and two true return expressions are required when using this option. The first boolean
expression is evaluated. If the first expression evaluates as true, the function returns the results defined in the second function argument. The second
boolean expression is evaluated. If the second expression evaluates as true, the function returns the results defined in the fourth function argument. If
there are any other boolean expressions defined, they are evaluated following the same pattern when evaluated as true. If any of the expressions
evaluates as false and a false return expression is defined, the function returns the results defined in the last function argument for the false return
expression; otherwise, if no false return expression is set, returns null.
Examples
Regular if statement
If '.exe' is present in the action_process_image_name field value, replace that substring with an empty string. This example uses the replace and
lowercase functions, as well as the contains operator to perform the conditional check. When '.exe' is not present, the value is returned as is.
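A sketch of that check, assuming the lowercase() function and a replace(<string>, <old substring>, <new substring>) signature; the alias process_name is illustrative:
dataset = xdr_data
| alter process_name = if(lowercase(action_process_image_name) contains ".exe", replace(lowercase(action_process_image_name), ".exe", ""), action_process_image_name)
| fields action_process_image_name, process_name
| limit 10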
Return a maximum of 1 xdr_data record from the past 7 days. The table results include a new column called check_ip, which evaluates and returns the
following (see the sketch after this list):
If the action_local_ip contains an IP address that begins with 10, return Local 10.
If the action_local_ip contains an IP address that begins with 172, return Local 172.
If the action_local_ip contains an IP address that begins with 192.168, return Local 192.
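A minimal sketch of such a query, using the multi-condition form of if() together with incidr() (the RFC 1918 CIDR ranges 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16, and the fallback value "Other", are illustrative assumptions):
config timeframe = 7d
| dataset = xdr_data
| alter check_ip = if(incidr(action_local_ip, "10.0.0.0/8"), "Local 10",
incidr(action_local_ip, "172.16.0.0/12"), "Local 172",
incidr(action_local_ip, "192.168.0.0/16"), "Local 192",
"Other")
| fields action_local_ip, check_ip
| limit 1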
17.4.38 | incidr
Abstract
Learn more about the Cortex Query Language incidr() function that returns true if an IPv4 address is within specified CIDR ranges.
Syntax
incidr(<IPv4_address>, <CIDR_ranges>)
Description
The incidr() function accepts an IPv4 address, and an IPv4 range or comma separated IPv4 ranges using CIDR notation, and returns true if the address is
in range. Both the IPv4 address and CIDR ranges can be either an explicit string using quotes (""), such as "192.168.0.1", or a string field.
The first parameter must contain an IPv4 address. For production purposes, this IPv4 address is normally carried in a field that
you retrieve from a dataset. For manual usage, assign the IPv4 address to a field, and then use that field with this function.
Multiple CIDRs are defined with comma-separated syntax when building an XQL query with the Query Builder or in Correlation Rules. When defining multiple
CIDRs, a logical OR is applied between the CIDRs listed, so as long as the address is in at least one of the ranges the entire statement returns true. Here are a few examples of
how this logic works to determine whether the incidr() function returns true and displays results, or false, where no results are displayed:
dataset = test
| alter ip_address = "192.168.0.1"
| filter incidr(ip_address, "192.168.0.0/24, 1.168.0.0/24") = true
dataset = test
| alter ip_address = "192.168.0.1"
| filter incidr(ip_address, "2.168.0.0/24, 1.168.0.0/24") = true
dataset = test
| alter ip_address = "192.168.0.1"
| filter incidr(ip_address, "192.168.0.0/24, 1.168.0.0/24") = false
dataset = test
| alter ip_address = "192.168.0.1"
| filter incidr(ip_address, "2.168.0.0/24, 1.168.0.0/24") = false
The same logic is used when using the incidr and not incidr operators. For more information, see Supported operators.
Examples
Return a maximum of 10 xdr_data records, if the IPv4 address (192.168.10.14) is in range by verifying against a single CIDR (192.168.10.0/24):
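A minimal sketch of this single-CIDR query (mirroring the multiple-CIDR example below):
dataset = xdr_data
| alter ip_address = "192.168.10.14"
| filter incidr(ip_address, "192.168.10.0/24") = true
| limit 10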
Return a maximum of 10 xdr_data records, if the IPv4 address (192.168.0.1) is in range by verifying against multiple CIDRs (192.168.0.0/24 or
1.168.0.0/24):
dataset = xdr_data
| alter ip_address = "192.168.0.1"
| filter incidr(ip_address, "192.168.0.0/24, 1.168.0.0/24") = true
| limit 10
17.4.39 | incidr6
Abstract
Learn more about the Cortex Query Language incidr6() function that returns true if an IPv6 address is within specified CIDR ranges.
Syntax
incidr6(<IPv6_address>, <CIDR_ranges>)
Description
The incidr6() function accepts an IPv6 address, and an IPv6 range or comma separated IPv6 ranges using CIDR notation, and returns true if the address
is in range. Both the IPv6 address and CIDR ranges can be either an explicit string using quotes (""), such as
"3031:3233:3435:3637:3839:4041:4243:4445", or a string field.
The first parameter must contain an IPv6 address. For production purposes, this IPv6 address is normally carried in a field that
you retrieve from a dataset. For manual usage, assign the IPv6 address to a field, and then use that field with this function.
Multiple CIDRs are defined with comma-separated syntax when building an XQL query with the Query Builder or in Correlation Rules. When defining multiple
CIDRs, a logical OR is applied between the CIDRs listed, so as long as the address is in at least one of the ranges the entire statement returns true. Here are a few examples of
how this logic works to determine whether the incidr6() function returns true and displays results, or false, where no results are displayed:
dataset = test
| alter ip_address = "3031:3233:3435:3637:3839:4041:4243:4445"
| filter incidr6(ip_address, "3031:3233:3435:3637:0000:0000:0000:0000/64, 6081:6233:6435:6637:0000:0000:0000:0000/64") = true
dataset = test
| alter ip_address = "3031:3233:3435:3637:3839:4041:4243:4445"
| filter incidr6(ip_address, "6081:6233:6435:6637:0000:0000:0000:0000/64, 7081:7234:7435:7737:0000:0000:0000:0000/64, fe80::/10") = true
dataset = test
| alter ip_address = "3031:3233:3435:3637:3839:4041:4243:4445"
| filter incidr6(ip_address, "3031:3233:3435:3637:0000:0000:0000:0000/64, 7081:7234:7435:7737:0000:0000:0000:0000/64, fe80::/10") = false
dataset = test
| alter ip_address = "3031:3233:3435:3637:3839:4041:4243:4445"
| filter incidr6(ip_address, "6081:6233:6435:6637:0000:0000:0000:0000/64, 7081:7234:7435:7737:0000:0000:0000:0000/64, fe80::/10") = false
The same logic is used when using the incidr6 and not incidr6 operators. For more information, see Supported operators.
Example
Return a maximum of 10 xdr_data records, if the IPv6 address (3031:3233:3435:3637:3839:4041:4243:4445) is in range by verifying against a single
CIDR (3031:3233:3435:3637:0000:0000:0000:0000/64):
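A minimal sketch of this single-CIDR query (mirroring the multiple-CIDR example below):
dataset = xdr_data
| alter ip_address = "3031:3233:3435:3637:3839:4041:4243:4445"
| filter incidr6(ip_address, "3031:3233:3435:3637:0000:0000:0000:0000/64") = true
| limit 10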
Return a maximum of 10 xdr_data records, if the IPv6 address (fe80::1) is in range by verifying against multiple
CIDRs (2001:0db8:85a3:0000:0000:8a2e:0000:0000/64 or fe80::/10):
dataset = xdr_data
| alter ip_address = "fe80::1"
| filter incidr6(ip_address, "2001:0db8:85a3:0000:0000:8a2e:0000:0000/64, fe80::/10") = true
| limit 10
17.4.40 | incidrlist
Abstract
Learn more about the Cortex Query Language incidrlist() function that returns true if all IP addresses in a comma-separated list are within a specified CIDR range.
Syntax
incidrlist(<IP_address_list>, <CIDR_range>)
Description
The incidrlist() function accepts a string containing a comma-separated list of IP addresses, and an IP range using CIDR notation, and returns true if all
the addresses are in range.
Examples
Return true if all of the IP addresses in the list fall within the specified IP range, as in the sketch below. Note that the input type is a comma-separated list of IP addresses, and not an array of
IP addresses.
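A minimal sketch (the two addresses and the 192.168.10.0/24 range are illustrative assumptions):
dataset = xdr_data
| alter ip_list = "192.168.10.12, 192.168.10.15"
| alter inrange = incidrlist(ip_list, "192.168.10.0/24")
| fields ip_list, inrange
| limit 10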
If you want to evaluate a true array of IP addresses, convert the array to a comma-separated list using arraystring(). For example, using the
panw_ngfw_traffic_raw dataset:
dataset = panw_ngfw_traffic_raw
| filter dest_ip != null
| comp values(dest_ip) as dips by source_ip,action
| alter dips = arraystring(dips, ", ")
| alter inrange = incidrlist(dips, "192.168.10.0/24")
| fields source_ip, action, dips, inrange
| limit 100
17.4.41 | int_to_ip
Abstract
Learn more about the Cortex Query Language int_to_ip() function that safely converts a signed integer representation of an IPv4 address to a string
equivalent.
Syntax
int_to_ip(<IPv4_integer>)
Description
The int_to_ip() function tries to safely convert a signed integer representation of an IPv4 address into its string equivalent.
Examples
Returns the IPv4 address "4.130.58.140" from the integer representation of the IPv4 address provided as 75643532.
int_to_ip(75643532)
Returns the IPv4 address "251.125.197.116" from the integer representation of the IPv4 address provided as -75643532.
int_to_ip(-75643532)
17.4.42 | ip_to_int
Abstract
Learn more about the Cortex Query Language ip_to_int() function that safely converts a string representation of an IPv4 address to an integer equivalent.
Syntax
ip_to_int(<IPv4_address>)
This function was previously called safe_ip_to_int() and was renamed to ip_to_int().
Description
The ip_to_int() function tries to safely convert a string representation of an IPv4 address into its integer equivalent.
Example
Returns the integer 808530483 from the string representation of the IPv4 address provided as "48.49.50.51".
ip_to_int("48.49.50.51")
17.4.43 | json_extract
Abstract
Learn more about the Cortex Query Language json_extract() function that accepts a string representing a JSON object, and returns a field value from that
object.
Before using this JSON function, it's important that you understand how Cortex XDR treats a JSON in the Cortex Query Language. For more information, see
JSON functions.
Syntax
Regular Syntax
json_extract(<json_object_formatted_string>, <json_path>)
When a field in the <json_path> contains characters, such as a dot (.) or colon (:), use the syntax:
json_extract(<json_object_formatted_string>, "['<json_field>']")
To make it easier to write your XQL queries, you can also use the syntactic sugar format shown in the second example below, where the JSON path is appended to the string with the -> operator and a trailing {}. When a field in the <json_path> contains characters, such as a dot (.) or colon (:), wrap the field name as "['<json_field>']".
Description
The json_extract() function extracts inner JSON objects by retrieving the value from the identified field. The returned datatype is always a string. If the
input string does not represent a JSON object, this function fails to parse. To convert a string field to a JSON object, use the to_json_string function.
JSON field names are case sensitive, so the key to field pairing must be identical in an XQL query for results to be found. For example, if a field value is
"TIMESTAMP" and your query is defined to look for "timestamp", no results will be found.
The field value is always returned as a string. To return the scalar values, which are not an object or an array, use json_extract_scalar.
Examples
dataset = xdr_data
| fields action_file_device_info as afdi
| alter sdn = json_extract(to_json_string(afdi), "$.storage_device_name")
| filter afdi != null
dataset = xdr_data
| fields action_file_device_info as afdi
| alter sdn = to_json_string(afdi)->storage_device_name{}
| filter afdi != null
17.4.44 | json_extract_array
Abstract
Learn more about the Cortex Query Language json_extract_array() function that accepts a string representing a JSON array, and returns an XQL-native
array.
Before using this JSON function, it's important that you understand how Cortex XDR treats a JSON in the Cortex Query Language. For more information, see
JSON functions.
Syntax
Regular Syntax
json_extract_array(<json_array_string>, <json_path>)
When a field in the <json_path> contains characters, such as a dot (.) or colon (:), use the syntax:
json_extract_array(<json_array_string>, "['<json_field>']")
To make it easier to write your XQL queries, you can also use the syntactic sugar format shown in the second example below, where the JSON path is appended to the string with the -> operator. When a field in the <json_path> contains characters, such as a dot (.) or colon (:), wrap the field name as "['<json_field>']".
Description
The json_extract_array() function accepts a string representing a JSON array, and returns an XQL-native array. To convert a string field to a JSON
object, use the to_json_string function.
JSON field names are case sensitive, so the key to field pairing must be identical in an XQL query for results to be found. For example, if a field value is
"TIMESTAMP" and your query is defined to look for "timestamp", no results will be found.
Examples
Regular Syntax
Extract the first IPV4 address found in the first element of the agent_interface_map array.
dataset = xdr_data
| fields agent_interface_map as aim
| alter ipv4 = json_extract_array(to_json_string(arrayindex(aim, 0)) , "$.ipv4")
| filter aim != null
| limit 10
dataset = xdr_data
| fields agent_interface_map as aim
| alter ipv4 = to_json_string(aim)->[0].ipv4[0]
| filter aim != null
| limit 10
17.4.45 | json_extract_scalar
Abstract
Learn more about the Cortex Query Language json_extract_scalar() function that accepts a string representing a JSON object, and returns the value of the identified field as a string.
Before using this JSON function, it's important that you understand how Cortex XDR treats a JSON in the Cortex Query Language. For more information, see
JSON functions.
Syntax
Regular Syntax
json_extract_scalar(<json_object_formatted_string>, <field_path>)
When a field in the <json_path> contains characters, such as a dot (.) or colon (:), use the syntax:
json_extract_scalar(<json_object_formatted_string>, "['<json_field>']")
To make it easier to write your XQL queries, you can also use the syntactic sugar format shown in the third example below, where the JSON path is appended to the string with the -> operator. When a field in the <json_path> contains characters, such as a dot (.) or colon (:), wrap the field name as "['<json_field>']".
Description
The json_extract_scalar() function accepts a string representing a JSON object, and it retrieves the value from the identified field as a string. This
function always returns a string. If the JSON field is an object or array, it will return a null value. To retrieve an XQL-native datatype, use an appropriate conversion function, such as to_integer in the examples below.
JSON field names are case sensitive, so the key to field pairing must be identical in an XQL query for results to be found. For example, if a field value is
"TIMESTAMP" and your query is defined to look for "timestamp", no results will be found.
Examples
Return the storage_device_drive_type value from the action_file_device_info field, and return the record if it is 1.
You can build this query in several ways: by filtering on an XQL-native datatype, by filtering on the string value, or by using the syntactic sugar format.
dataset = xdr_data
| fields action_file_device_info as afdi
| alter sdn = to_integer(json_extract_scalar(to_json_string(afdi), "$.storage_device_drive_type"))
| filter sdn = 1
| limit 10
dataset = xdr_data
| fields action_file_device_info as afdi
| alter sdn = json_extract_scalar(to_json_string(afdi), "$.storage_device_drive_type")
| filter sdn = "1"
| limit 10
dataset = xdr_data
| fields action_file_device_info as afdi
| alter sdn = to_integer(to_json_string(afdi)->storage_device_drive_type)
| filter sdn = 1
| limit 10
17.4.46 | json_extract_scalar_array
Abstract
Before using this JSON function, it's important that you understand how Cortex XDR treats a JSON in the Cortex Query Language. This function doesn't have a
syntactic sugar format. For more information, see JSON functions.
Syntax
json_extract_scalar_array(<json_array_string>, <json_path>)
A field in the <json_path> that contains characters such as a dot (.) or colon (:), which would need to be escaped because it forms an invalid JSON path, is currently
unsupported. This is expected to be fixed in an upcoming release.
Description
The json_extract_scalar_array() function accepts a string representing a JSON array, and returns an XQL-native array. This function is equivalent to
the json_extract_array except that the final output isn't displayed in double quotes ("..."). To convert a string field to a JSON object, use the to_json_string
function.
JSON field names are case sensitive, so the key to field pairing must be identical in an XQL query for results to be found. For example, if a field value is
"TIMESTAMP" and your query is defined to look for "timestamp", no results will be found.
Example
Extract the first IPV4 address found in the first element of the agent_interface_map array. The values of the IPv4 addresses in the array will not contain any
double quotes.
dataset = xdr_data
| fields agent_interface_map as aim
| alter ipv4 = json_extract_scalar_array(to_json_string(arrayindex(aim, 0)) , "$.ipv4")
| filter aim != null
| limit 10
Final output with 1 row from the results table. Notice that the IPV4 column doesn't contain any double quotes (" ") around the IP address 172.16.15.42:
_TIME: Aug 9th 2023 10:04:39 | AIM: [{"ipv4":["172.16.15.42"], "ipv6": [], "mac": "00:50:56:9f:30:a9"}] | XDR agent | PANW | Aug 17th 2023 19:25:48 | IPV4: 172.16.15.42
In contrast, compare the above results to the same query using the json_extract_array() function. The final output with the same row from the results
table has in the IPV4 column the IP address in double quotes "172.16.15.42".
_TIME: Aug 9th 2023 10:04:39 | AIM: [{"ipv4":["172.16.15.42"], "ipv6": [], "mac": "00:50:56:9f:30:a9"}] | XDR agent | PANW | Aug 17th 2023 19:25:48 | IPV4: "172.16.15.42"
17.4.47 | lag
Abstract
Learn more about the Cortex Query Language lag() navigation function that is used with a windowcomp stage.
Syntax
windowcomp lag(<field>) [by <field> [,<field>,...]] sort [asc|desc] <field1> [, [asc|desc] <field2>,...] [as <alias>]
Description
The lag() function is a navigation function that is used in combination with a windowcomp stage. This function is used to return a single value of a field on a
preceding row for each row in the group of rows using a combination of the by clause and sort (mandatory).
Example
Retrieve, for each event, the timestamp of the previous successful login.
preset = authentication_story
| filter auth_identity not in (null, """""") and auth_outcome = """SUCCESS"""
| alter ep = to_epoch(_time)
| limit 100
| windowcomp lag(_time) by auth_identity sort asc ep as previous_login
17.4.48 | last
Abstract
Learn more about the Cortex Query Language last aggregate comp function that returns the last field value found in the dataset with the matching criteria.
Syntax
comp last(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
Description
The last aggregation is a comp function that returns the last value found for a field in the dataset over a group of rows that has matching values for the fields
identified in the by clause. This function is dependent on a time-related field, so for your query to be considered valid, ensure that the dataset running this
query contains a time-related field. This function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Examples
Return the last timestamp found in the dataset for any given action_total_download value for all records that have matching values for their
actor_process_image_path and actor_process_command_line fields. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing the raw data events used to display the final comp results.
dataset = xdr_data
| fields _time, actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp last(_time) as last_download_time by Process_Path, Process_CMD addrawdata = true as raw_data
17.4.49 | last_value
Abstract
Learn more about the Cortex Query Language last_value() navigation function that is used with a windowcomp stage.
Syntax
windowcomp last_value(<field>) [by <field> [,<field>,...]] sort [asc|desc] <field1> [, [asc|desc] <field2>,...] [between 0|null|<number>|-<number> [and 0|null|
<number>|-<number>] [frame_type=range]] [as <alias>]
Description
The last_value() function is a navigation function that is used in combination with a windowcomp stage. This function is used to return a single value of a
field for the last row of each row in the group of rows in the current window frame, for all records that contain matching values for the fields identified using a
combination of the by clause, sort (mandatory), and between window frame clause.
Example
preset = authentication_story
| filter auth_identity not in (null, """""") and auth_outcome = """SUCCESS""" and action_country != UNKNOWN
| alter et = to_epoch(_time), t = _time
| bin t span = 1d
| limit 100
| windowcomp last_value(action_local_ip) by auth_identity, t sort asc et between null and null as first_action_local_ip
| fields auth_identity , *action_local_ip
17.4.50 | latest
Abstract
Learn more about the Cortex Query Language latest aggregate comp function that returns the latest field value found with the matching criteria.
Syntax
comp latest(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
Description
The latest aggregation is a comp function that returns a single chronologically latest value found for a field over a group of rows that has matching values for
the fields identified in the by clause. This function is dependent on a time-related field, so for your query to be considered valid, ensure that the dataset
running this query contains a time-related field. This function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Examples
Return the chronologically latest timestamp found for any given action_total_download value for all records that have matching values for their
actor_process_image_path and actor_process_command_line fields. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing the raw data events used to display the final comp results.
dataset = xdr_data
| fields _time, actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp latest(_time) as download_time by Process_Path, Process_CMD addrawdata = true as raw_data
17.4.51 | len
Abstract
Learn more about the Cortex Query Language len function that returns the number of characters contained in a string.
Syntax
len (<string>)
Examples
Show domain names that are more than 100 characters in length.
dataset = xdr_data
| fields dns_query_name
| filter len(dns_query_name) > 100
| limit 10
17.4.52 | list
Abstract
Learn more about the Cortex Query Language list aggregate comp function that returns an array for up to 100 values for a field in the result set.
Syntax
comp list(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
Description
The list aggregation is a comp function that returns a single array of up to 100 values found for a given field over a group of rows, for all records that contain
matching values for the fields identified in the by clause. The array values are all non-null, so null values are filtered out. The values returned in the array are
non-unique, so if a value repeats multiple times it is included as part of the list of up to 100 values. This function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Examples
Return an array containing up to 100 values seen for the action_total_download field over a group of rows, for all records that have matching values for
their actor_process_image_path and actor_process_command_line values. The query calculates a maximum of 100 xdr_data records and
includes a raw_data column listing the raw data events used to display the final comp results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp list(Download) as list_download by Process_Path, Process_CMD addrawdata = true as raw_data
17.4.53 | lowercase
Abstract
Learn more about the Cortex Query Language lowercase() function that converts a string field to all lowercase letters.
Syntax
lowercase (<string>)
Description
The lowercase() function converts a string field value to all lowercase letters.
Examples
Convert all actor_process_image_name field values that are not null to lowercase, and return a list of unique values.
dataset = xdr_data
| fields actor_process_image_name as apin
| dedup apin by asc _time
| filter apin != null
| alter apin = lowercase(apin)
17.4.54 | ltrim
Abstract
Learn more about the Cortex Query Language ltrim(), rtrim(), and trim() functions that remove spaces or characters from the beginning or end of a
string.
Syntax
trim (<string>,[trim_characters])
rtrim (<string>,[trim_characters])
ltrim (<string>,[trim_characters])
Description
The trim() function removes specified characters from the beginning and end of a string. The rtrim() removes specific characters from the end of a string.
The ltrim() function removes specific characters from the beginning of a string.
If you do not specify trim characters, then whitespace (spaces and tabs) are removed.
Examples
dataset = xdr_data
| fields action_process_image_name as apin
| filter apin != null
| alter remove_exe_process = rtrim(apin, ".exe")
| limit 10
17.4.55 | max
Abstract
Learn more about the Cortex Query Language max function used with both comp and windowcomp stages.
Syntax
comp stage
comp max(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
windowcomp stage
windowcomp max(<field>) [by <field> [,<field>,...]] [sort [asc|desc] <field1> [, [asc|desc] <field2>,...]] [between 0|null|<number>|-<number> [and 0|null|
<number>|-<number>] [frame_type=range]] [as <alias>]
Description
The max() function is used to return the maximum value of an integer field over a group of rows. The function syntax and application is based on the
preceding stage:
comp stage
When the max aggregation function is used with a comp stage, the function returns a single maximum value of an integer field for a group of rows, for all
records that contain matching values for the fields identified in the by clause.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
windowcomp stage
When the max aggregate function is used with a windowcomp stage, the function returns a single maximum value of an integer field for each row in the group
of rows, for all records that contain matching values for the fields identified using a combination of the by clause, sort, and between window frame clause.
The results are provided in a new column in the results table.
Examples
comp example
Return a single maximum value of the action_total_download field for a group of rows, for all records that have matching values for their
actor_process_image_path and actor_process_command_line values. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing a single value for the results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp max(Download) as max_download by Process_Path, Process_CMD addrawdata = true as raw_data
windowcomp example
Return the last login time. The query returns a maximum of 100 authentication_story records in a column called action_user_agent.
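A sketch of this query, paralleling the min() windowcomp example later in this section:
preset = authentication_story
| limit 100
| windowcomp max(_time) by action_user_agent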
17.4.56 | median
Abstract
Learn more about the Cortex Query Language median function used with both comp and windowcomp stages.
Syntax
comp stage
comp median(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
windowcomp stage
windowcomp median(<field>) [by <field> [,<field>,...]] [as <alias>]
Description
The median() function is used to return the median value of a field over a group of rows. The function syntax and application is based on the preceding
stage:
comp stage
When the median aggregation function is used with a comp stage, the function returns a single median value of a field for a group of rows, for all records that
contain matching values for the fields identified in the by clause.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
windowcomp stage
When the median aggregate function is used with a windowcomp stage, the function returns a single median value of a field for each row in the group of rows,
for all records that contain matching values for the fields identified in the by clause. In a median function, the sort and between window frame clause are not
used. The results are provided in a new column in the results table.
Examples
comp example
Return a single median value of the action_total_download field over a group of rows, for all records that have matching values for their
actor_process_image_path and actor_process_command_line values. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing a single value for the results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp median(Download) as median_download by Process_Path, Process_CMD
addrawdata = true as raw_data
windowcomp example
Return all events where the Download field is greater than the median by reviewing each individual event and how it compares to the median. The query
returns a maximum of 100 xdr_data records in a column called median_download.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| windowcomp median(Download) by Process_Path, Process_CMD as median_download
| filter Download > median_download
17.4.57 | min
Abstract
Learn more about the Cortex Query Language min function used with both comp and windowcomp stages.
Syntax
comp stage
comp min(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
windowcomp stage
windowcomp min(<field>) [by <field> [,<field>,...]] [sort [asc|desc] <field1> [, [asc|desc] <field2>,...]] [between 0|null|<number>|-<number> [and 0|null|
<number>|-<number>] [frame_type=range]] [as <alias>]
Description
The min() function is used to return the minimum value of an integer field over a group of rows. The function syntax and application is based on the preceding
stage:
comp stage
When the min aggregation function is used with a comp stage, the function returns a single minimum value of an integer field for a group of rows, for all
records that contain matching values for the fields identified in the by clause.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
windowcomp stage
When the min aggregate function is used with a windowcomp stage, the function returns a single minimum value of an integer field for each row in the group of
rows, for all records that contain matching values for the fields identified using a combination of the by clause, sort, and between window frame clause. The
results are provided in a new column in the results table.
Examples
comp example
Return a single minimum value of the action_total_download field for a group of rows, for all records that have matching values for their
actor_process_image_path and actor_process_command_line values. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing a single value for the results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp min(Download) as min_download by Process_Path, Process_CMD addrawdata = true as raw_data
windowcomp example
Return the first login time. The query returns a maximum of 100 authentication_story records in a column called action_user_agent.
preset = authentication_story
| limit 100
| windowcomp min(_time) by action_user_agent
17.4.58 | multiply
Abstract
Learn more about the Cortex Query Language multiply() function that multiplies two integers.
Syntax
multiply(<integer1>, <integer2>)
Description
The multiply() function multiplies two positive integers. Parameters can be either integer literals, or integers as a string type such as might be contained in
a data field.
Example
dataset = xdr_data
| alter mynum = multiply(action_file_size, 3)
| fields action_file_size, mynum
| filter action_file_size > 0
| limit 1
17.4.59 | object_merge
Abstract
Learn more about the Cortex Query Language object_merge() function that merges a number of objects into a new object.
Syntax
object_merge(<object1>, <object2> [, <object3>, ...])
Description
The object_merge() function returns a new object created by merging a number of objects. When a key name is duplicated in
any of the objects, the value in the new object is taken from the later argument.
Example
Two objects are created and merged, where some key names are duplicated, including name, last_name, and age. Since the name value is the same for
both objects, the same name is used in the new object. Yet, the last_name and age key values differ, so the values from the second object are used in the
new object.
dataset = xdr_data
| alter
obj1 = object_create("name", "jane", "last_name", "doe", "age", 33),
obj2 = object_create("name", "jane", "last_name", "simon", "age", 34, "city", "new-york")
| alter result = object_merge(obj1, obj2)
| fields result
The function returns the following new object in the RESULT column of the results table:
{"name": "jane", "last_name": "simon", "age": 34, "city": "new-york"}
17.4.60 | object_create
Abstract
Learn more about the Cortex Query Language object_create() function that returns an object built from the given key and value pairs.
Syntax
object_create("<key1>", <value1> [, "<key2>", <value2>, ...])
Description
The object_create() function returns an object based on the given parameters, which are defined as key and value pairs. It accepts an even number of
parameters (at least two).
Example
Returns a final object to a field called a that contains the key and value pair {"2": "password"}, where the "password" value is composed by joining two strings
together.
dataset = xdr_data
| alter a = object_create("2", concat("pass", "word"))
| fields a
17.4.61 | parse_epoch
Abstract
Learn more about the Cortex Query Language parse_epoch() function that returns a Unix epoch TIMESTAMP object.
Syntax
parse_epoch("<format time string>", "<time string>" | <time string field> [, "<time zone>"] [, "<time unit>"])
Description
The parse_epoch() function returns a Unix epoch TIMESTAMP object after converting a string representation of a timestamp. The <time zone> offset is
optional and can be configured using an hours offset, such as "+08:00", or a time zone name from the List of Supported Time Zones, such as "America/Chicago".
When you do not configure a time zone, the default is UTC. The <time unit> is also optional and indicates whether the Unix epoch integer value
represents seconds, milliseconds, or microseconds. The following values are supported, and the default is used when none is configured:
SECONDS (default)
MILLIS
MICROS
The order of the <time zone> and <time unit> matters. The <time zone> must be defined first followed by the <time unit>. If the <time zone> is
set after the <time unit>, the default time zone is used and the configured value is ignored.
Examples
Returns a maximum of 100 xdr_data records, which includes a timestamp field called new_time in the format MMM dd YYYY HH:mm:ss, such as
Dec 25th 2008 04:30:00. This new_time field is produced by taking a character string representation of a timestamp, "Thu Dec 25 07:30:00 2008",
and applying a +03:00 hours time zone offset. This string timestamp is then converted to a Unix epoch TIMESTAMP object in milliseconds using
the parse_epoch function, and the resulting value is converted to the final timestamp using the to_timestamp function.
dataset = xdr_data
| alter new_time = to_timestamp(parse_epoch("%c", "Thu Dec 25 07:30:00 2008", "+3", "millis"))
| fields new_time
| limit 100
Returns a maximum of 100 xdr_data records, which includes a timestamp field called new_time in the format MMM dd YYYY HH:mm:ss, such as
Dec 25th 2008 04:30:00. This new_time field is produced by taking a character string representation of a timestamp, "Thu Dec 25 07:30:00 2008",
and applying the UTC time zone (the default when none is configured). This string timestamp is then converted to a Unix epoch TIMESTAMP object in
seconds (the default when none is configured) using the parse_epoch function, and the resulting value is converted to the final timestamp using the
to_timestamp function.
dataset = xdr_data
| alter new_time = to_timestamp(parse_epoch("%c", "Thu Dec 25 07:30:00 2008"))
| fields new_time
| limit 100
17.4.62 | parse_timestamp
Abstract
Learn more about the Cortex Query Language parse_timestamp() function that returns a TIMESTAMP object.
Syntax
parse_timestamp("<format time string>", "<time string>" | format_string(<time field>) | <time string field>)
parse_timestamp("<format time string>", "<time string>" | format_string(<time field>) | <time string field>, "<time zone>")
Description
The parse_timestamp() function returns a TIMESTAMP object after converting a string representation of a timestamp. The <time zone> offset is optional
and can be configured using an hours offset, such as "+08:00", or a time zone name from the List of Supported Time Zones, such as "America/Chicago". The
parse_timestamp() function can include both an alter stage and format_string function. For more information, see the examples below. The
format_string function contains the format elements that define how the parse_timestamp string is formatted. Each element in the parse_timestamp
string must have a corresponding element in format_string. The location of each element in the format_string must match the location of each element
in parse_timestamp.
Examples
Returns a maximum of 100 microsoft_dhcp_raw records, which includes a TIMESTAMP object in the p_t_test field in the format MMM dd YYYY
HH:mm:ss, such as Jun 25th 2021 18:31:25. This format is detailed in the format_string function, which includes merging both the date and time
fields.
dataset = microsoft_dhcp_raw
| alter p_t_test = parse_timestamp("%m/%d/%Y %H:%M:%S", format_string("%s %s", date, time))
| fields p_t_test
| limit 100
Returns a maximum of 100 microsoft_dhcp_raw records, which includes a TIMESTAMP object in the p_t_test field in the format MMM dd YYYY
HH:mm:ss, such as Jun 25th 2021 18:31:25. This format is detailed in the format_string function, which includes merging both the date and time
fields, and includes an "Asia/Singapore" time zone.
dataset = microsoft_dhcp_raw
| alter p_t_test = parse_timestamp("%m/%d/%Y %H:%M:%S", format_string("%s %s", date, time), "Asia/Singapore")
| fields p_t_test
| limit 100
Returns a maximum of 100 microsoft_dhcp_raw records, which includes a TIMESTAMP object in the p_t_test field in the format MMM dd YYYY
HH:mm:ss, such as Jun 25th 2021 18:31:25. This format is detailed in the format_string function, which includes merging both the date and time
fields, and includes a time zone using an hours offset of "+08:00".
dataset = microsoft_dhcp_raw
| alter p_t_test = parse_timestamp("%m/%d/%Y %H:%M:%S", format_string("%s %s", date, time), "+08:00")
| fields p_t_test
| limit 100
17.4.63 | pow
Abstract
Learn more about the Cortex Query Language pow() function that returns the value of a number raised to the power of another number.
Syntax
pow (<x>, <n>)
Description
The pow() function returns the value of a number (x) raised to the power of another number (n).
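Example
A minimal sketch, assuming pow() accepts a field and a literal exponent in the same way as the other arithmetic functions in this chapter (the field and alias are illustrative):
dataset = xdr_data
| alter size_squared = pow(action_file_size, 2)
| fields action_file_size, size_squared
| filter action_file_size > 0
| limit 1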
17.4.64 | rank
Abstract
Learn more about the Cortex Query Language rank() numbering function that is used with a windowcomp stage.
Syntax
windowcomp rank() [by <field> [,<field>,...]] sort [asc|desc] <field1> [, [asc|desc] <field2>,...] [as <alias>]
Description
The rank() function is a numbering function that is used in combination with a windowcomp stage. This function is used to return a single value for the ordinal
(1-based) rank for each row in the group of rows using a combination of the by clause and sort (mandatory).
Example
Return an average ranking for the average CPU usage on metric_type=HOST. This allows you to see changes in the CPU usage compared to all hosts in the
environment. The query returns a maximum of 100 it_metrics records. The results are ordered by ft in descending order in the rank column.
dataset = it_metrics
| filter metric_type = HOST
| alter cpu_avg_str = to_string(cpu_avg)
| alter ft = date_floor(_time, "w")
| alter dt = date_floor(_time, "d")
| limit 100
| windowcomp rank() by ft sort desc cpu_avg_str as rank
| filter (agent_hostname contains $host_name)
| comp avg(rank) by dt
17.4.65 | regexcapture
Abstract
Learn more about the Cortex Query Language regexcapture() function used in Parsing Rules to extract data from fields using regular expression named
groups from a given string.
The regexcapture() function is only supported in the XQL syntax for Parsing Rules.
Syntax
regexcapture(<field>, "<pattern>")
Description
In Parsing Rules, the regexcapture() function is used to extract data from fields using regular expression named groups from a given string and returns a
JSON object with captured groups. This function can be used in any section of a Parsing Rule. The regexcapture() function is useful when the regex
pattern is not identical throughout the log, which is required when using the regextract function.
XQL uses RE2 for its regular expression implementation. When using the (?i) syntax for case-insensitive mode in your query, this syntax should be added
only once at the beginning of the inline regular expression.
Example
Parsing Rule to create a dataset called my_regexcapture_test, where the vendor and product that the specified Parsing Rule applies to are called
regexcapture_vendor and regexcapture_product. The output results include a new field called regexcaptureResult, which extracts data from the
_raw_log field using regular expression named groups as defined and returns the captured groups.
Parsing Rule:
Log:
XQL Query:
dataset = my_regexcapture_test
| fields regexcaptureResult
{
"ip": "192.168.1.1",
"user": "john",
"timestamp": "10/Mar/2024:12:34:56 +0000",
"request": "GET /index.html HTTP/1.1",
"status": "200",
"bytes": "1234"
}
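As a rough, hypothetical illustration only (the log line, field names, and pattern below are assumptions, not the original rule), an access-log line such as 192.168.1.1 - john [10/Mar/2024:12:34:56 +0000] "GET /index.html HTTP/1.1" 200 1234 could be captured with RE2 named groups along these lines:
alter regexcaptureResult = regexcapture(_raw_log, "(?P<ip>\S+) - (?P<user>\S+) \[(?P<timestamp>[^\]]+)\] \"(?P<request>[^\"]+)\" (?P<status>\d+) (?P<bytes>\d+)")
Each named group becomes a key in the returned JSON object, which is how the ip, user, timestamp, request, status, and bytes keys shown above are produced.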
17.4.66 | regextract
Abstract
Learn more about the Cortex Query Language regextract() function that uses regular expressions to assemble an array of matching substrings from a
string.
Syntax
regextract(<string>, "<regex pattern>")
Description
The regextract() function accepts a string and a regular expression, and it returns an array containing substrings that match the expression.
Cortex Query Language (XQL) uses RE2 for its regular expression implementation. While capturing multiple groups is unsupported, capturing one group in
queries is supported.
When using the (?i) syntax for case-insensitive mode in your query, this syntax should be added only once at the beginning of the inline regular expression.
Examples
Extract the Account Name from the action_evtlog_message. Use the arrayindex and split functions to extract the actual account name from the array
created by regextract.
dataset = xdr_data
| fields action_evtlog_message as aem
| filter aem != null
| alter account_name =
arrayindex(
split(
arrayindex(
regextract(aem, "Account Name:\t\t.*\r\n")
,0)
, ":")
,1)
| filter account_name != null
| limit 10
Extract from the log_example field all of the values included for the id objects.
dataset = xdr_data
| limit 1
| alter
log_example = "{\"events\":[{\"id\": \"1\", \"type\": \"process\", \"size\": 123, \"processID\": 40540},{\"id\": \"2\", \"type\": \"request\", \"size\": 456,
\"srcOS\": \"MAC\"}],\"host\": \"LocalHost\",\"date\": {\"day\": 4, \"month\": 7, \"year\": 2024},\"tags\":[\"agent\", \"auth\", \"low\"]}"
| alter
one_capture_group_usage = regextract(log_example, "\"id\":\s*\"([^\"]+)\"")
| fields log_example, one_capture_group_usage
17.4.67 | replace
Abstract
Learn more about the Cortex Query Language replace() function that performs a substring replacement.
Syntax
replace(<string>, <substring>, <replacement string>)
Description
The replace() function accepts a string field, and replaces all occurrences of a substring with a replacement string.
Examples
If '.exe' is present in the action_process_image_name field value, replace that substring with an empty string. This example uses the if and lowercase
functions, as well as the contains operator to perform the conditional check.
dataset = xdr_data
| fields action_process_image_name as apin
| filter apin != null
| alter remove_exe_process = if(lowercase(apin) contains ".exe",
replace(lowercase(apin),".exe",""),
lowercase(apin))
| limit 10
17.4.68 | replex
Abstract
Learn more about the Cortex Query Language replex() function that uses a regular expression to identify and replace substrings.
Syntax
replex(<string>, "<regex pattern>", "<replacement string>")
Description
The replex() function accepts a string, and then uses a regular expression to identify a substring, and then replaces matching substrings with a new string.
Examples
For any agent_id that contains a dotted decimal IP address, mask the IP address. Use the dedup stage to reduce the result set to first-seen agent_id
values.
dataset = xdr_data
| fields agent_id
| alter clean_agent_id = replex(agent_id,
"[\d]+\.[\d]+\.[\d]+\.[\d]+",
"xxx.xxx.xx.xx")
| dedup agent_id by asc _time
17.4.69 | round
Abstract
Learn more about the Cortex Query Language round() function that returns the input value rounded to the nearest integer.
Syntax
round(<number>)
Description
The round() function accepts either a float or an integer as an input value, and it returns the input value rounded to the nearest integer.
Example
dataset = xdr_data
| alter mynum = divide(action_file_size, 7)
| alter mynum2 = round(mynum)
| fields action_file_size, mynum, mynum2
| filter action_file_size > 3
| limit 1
17.4.70 | row_number
Abstract
Learn more about the Cortex Query Language row_number() numbering function that is used with a windowcomp stage.
Syntax
windowcomp row_number() [by <field> [,<field>,...]] [sort [asc|desc] <field1> [, [asc|desc] <field2>,...]] [as <alias>]
Description
The row_number() function is a numbering function that is used in combination with a windowcomp stage. This function is used to return a single value for
the sequential row ordinal (1-based) for each row from a group of rows using a combination of the by clause and sort.
Example
Return a single value for the sequential row ordinal (1-based) for each row in the group of rows. The query returns a maximum of 100 xdr_data records. The
results are ordered by the source_ip in ascending order in the row_number_dns_query_name column.
dataset = xdr_data
| limit 100
| windowcomp row_number() sort source_ip as row_number_dns_query_name
17.4.71 | split
Abstract
Learn more about the Cortex Query Language split() function that splits a string and returns an array of string parts.
Syntax
split(<string> [, <delimiter>])
Description
The split() function splits a string using an optional delimiter, and returns the resulting substrings in an array. If no delimiter is specified, a space (' ') is used.
Examples
Split IP addresses into an array, each element of the array containing an IP octet.
dataset = xdr_data
| fields action_local_ip as alii
| alter ip_octets = split(alii, ".")
| limit 10
17.4.72 | stddev_population
Abstract
Learn more about the Cortex Query Language stddev_population() function used with both comp and windowcomp stages.
Syntax
comp stage
comp stddev_population(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
windowcomp stage
windowcomp stddev_population(<field>) [by <field> [,<field>,...]] [sort [asc|desc] <field1> [, [asc|desc] <field2>,...]] [between 0|null|<number>|-<number> [and
0|null|<number>|-<number>] [frame_type=range]] [as <alias>]
Description
The stddev_population() function is used to return a single population (biased) variance value of a field for a group of rows. The function syntax and
application is based on the preceding stage:
comp stage
When the stddev_population aggregation function is used with a comp stage, the function returns a single population (biased) variance value of a field
over a group of rows, for all records that contain matching values for the fields identified in the by clause.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
windowcomp stage
When the stddev_population statistical aggregate function is used with a windowcomp stage, the function returns a single population (biased) variance
value of a field for each row in the group of rows, for all records that contain matching values for the fields identified using a combination of the by clause,
sort, and between window frame clause. The results are provided in a new column in the results table.
Examples
comp example
Calculates a maximum of 100 metrics_source records, where the _broker_device_id is 655AYUWF, and include a single population (biased) variance
value of the total_size_rate field for a group of rows.
dataset = metrics_source
| filter _broker_device_id = "655AYUWF"
| comp stddev_population(total_size_rate)
| limit 100
windowcomp example
Return maximum of 100 metrics_source records and include a single population (biased) variance value of the total_size_rate field for each row in the
group of rows, for all records that contain matching values in the _broker_device_id field. The results are provided in the stddev_population column.
dataset = metrics_source
| limit 100
| windowcomp stddev_population(total_size_rate) by _broker_device_id as `stddev_population`
17.4.73 | stddev_sample
Abstract
Learn more about the Cortex Query Language stddev_sample() function used with both comp and windowcomp stages.
Syntax
comp stage
comp stddev_sample(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
windowcomp stage
windowcomp stddev_sample(<field>) [by <field> [,<field>,...]] [sort [asc|desc] <field1> [, [asc|desc] <field2>,...]] [between 0|null|<number>|-<number> [and
0|null|<number>|-<number>] [frame_type=range]] [as <alias>]
Description
The stddev_sample() function is used to return a single sample (unbiased) standard deviation value of a field for a group of rows. The function syntax and
application is based on the preceding stage:
comp stage
When the stddev_sample aggregation function is used with a comp stage, the function returns a single sample (unbiased) standard deviation value of a field
over a group of rows, for all records that contain matching values for the fields identified in the by clause.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
windowcomp stage
When the stddev_sample statistical aggregate function is used with a windowcomp stage, the function returns a single sample (unbiased) standard
deviation value of a field for each row in the group of rows, for all records that contain matching values for the fields identified using a combination of the by
clause, sort, and between window frame clause. The results are provided in a new column in the results table.
Examples
Calculate a maximum of 100 metrics_source records, where the _broker_device_ip is 172.16.1.25, and include a single sample (unbiased) standard
deviation value of the total_size_bytes field for a group of rows.
dataset = metrics_source
| filter _broker_device_ip = "172.16.1.25"
| comp stddev_sample(total_size_bytes)
| limit 100
Return a maximum of 100 metrics_source records and include a single sample (unbiased) standard deviation value of the total_size_rate field for
each row in the group of rows, for all records that contain matching values in the _broker_device_id field. The results are provided in the stddev_sample
column.
dataset = metrics_source
| limit 100
| windowcomp stddev_sample(total_size_rate) by _broker_device_id as `stddev_sample`
17.4.74 | string_count
Abstract
Learn more about the Cortex Query Language string_count() function that returns the number of times a substring appears in a string.
Syntax
string_count(<string>, <substring>)
Description
The string_count() function returns the number of times a substring appears in a string.
Example
dataset = xdr_data
| fields actor_primary_username as apu
| filter string_count(apu, "e") > 1
17.4.75 | subtract
Abstract
Learn more about the Cortex Query Language subtract() function that subtracts two integers.
Syntax
subtract(<integer1>, <integer2>)
Description
The subtract() function subtracts two positive integers by subtracting the second argument from the first argument. Parameters may be either integer
literals, or integers as a string type such as might be contained in a data field.
Example
dataset = xdr_data
| alter mynum = subtract(action_file_size, 3)
| fields action_file_size, mynum
| filter action_file_size > 3
| limit 1
17.4.76 | sum
Abstract
Learn more about the Cortex Query Language sum function used with both comp and windowcomp stages.
Syntax
comp stage
comp sum(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
windowcomp
windowcomp sum(<field>) [by <field> [,<field>,...]] [sort [asc|desc] <field1> [, [asc|desc] <field2>,...]] [between 0|null|<number>|-<number> [and 0|null|
<number>|-<number>] [frame_type=range]] [as <alias>]
Description
The sum() function is used to return the sum of an integer field over a group of rows. The function syntax and application is based on the preceding stage:
comp stage
When the sum aggregation function is used with a comp stage, the function returns a single sum of an integer field for a group of rows, for all records that
contain matching values for the fields identified in the by clause.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
windowcomp stage
When the sum aggregate function is used with a windowcomp stage, the function returns a single sum of an integer field for each row in the group of rows, for
all records that contain matching values for the fields identified using a combination of the by clause, sort, and between window frame clause. The results
are provided in a new column in the results table.
Examples
comp example
Return a single sum of the action_total_download field for a group of rows, for all records that have matching values for their
actor_process_image_path and actor_process_command_line values. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing a single value for the results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp sum(Download) as total_download by Process_Path, Process_CMD addrawdata = true as raw_data
windowcomp
Return the download to upload ratio per process. The query returns a maximum of 100 xdr_data records in new columns called sum_upload and
sum_download.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download, action_total_upload as Upload
| filter Download > 0
| limit 100
| windowcomp sum(Download) by Process_Path, Process_CMD as sum_download
| windowcomp sum(Upload) by Process_Path, Process_CMD as sum_upload
| fields - Download ,Upload
| dedup Process_CMD, Process_Path, sum_download ,sum_upload
| alter ratio = divide(sum_download, sum_upload)
17.4.77 | time_frame_end
Abstract
Learn more about the Cortex Query Language time_frame_end() function that returns the end time of the time range specified for the query.
Syntax
time_frame_end(<time frame>)
Description
The time_frame_end() function returns the timestamp object for the string representation of the end of the time frame configured for the query in the format
MMM dd YYYY HH:mm:ss, such as Jun 8th 2022 15:20:06. You can configure the time frame using the config timeframe function, where the range can be
relative or exact.
If the time frame is relative, for example last 24H, the function returns the current_time. This function is useful when the query uses a custom time frame whose
end time is in the past.
Examples
For the last 5 days from when the query is sent, returns a maximum of 100 xdr_data records with the events of the _time field with a new field called "x". The
"x" field lists the final timestamp at the end of 5 days from when the query was sent for the events in descending order. For more information on this relative
timeframe range, see the config timeframe function.
config timeframe = 5d
| dataset = xdr_data
| alter x = time_frame_end()
| fields x
| sort desc x
For the last 5 days from when the query is run until now, returns a maximum of 100 xdr_data records with the events of the _time field with a new field called "x".
The "x" field lists the final timestamp at the end of 5 days from when the query runs for the events in descending order. For more information on this relative time
frame range, see the config timeframe function.
17.4.78 | timestamp_diff
Abstract
Learn more about the Cortex Query Language timestamp_diff() function that returns the difference between two timestamp objects.
Syntax
timestamp_diff(<timestamp1>, <timestamp2>, "<part>")
Description
The timestamp_diff() function returns the difference between two timestamp objects. The unit used to express the difference is identified by the part
parameter. The second timestamp is subtracted from the first timestamp. If the first timestamp is greater than the second, a positive value is returned. If the
result of this function is between 0 and 1, 0 is returned.
The supported <part> values are:
DAY
HOUR
MINUTE
SECOND
MILLISECOND
MICROSECOND
Example
dataset = xdr_data
| filter story_publish_timestamp != null
| alter ts = to_timestamp(story_publish_timestamp, "MILLIS")
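A minimal sketch that extends the query above with a timestamp_diff() call (the comparison against current_time(), the "DAY" part, and the alias are assumptions):
dataset = xdr_data
| filter story_publish_timestamp != null
| alter ts = to_timestamp(story_publish_timestamp, "MILLIS")
| alter days_since_publish = timestamp_diff(current_time(), ts, "DAY")
| fields story_publish_timestamp, ts, days_since_publish
| limit 10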
17.4.79 | timestamp_seconds
Abstract
Learn more about the Cortex Query Language timestamp_seconds() function that converts an epoch time integer value in seconds to a TIMESTAMP value.
Syntax
timestamp_seconds (<integer>)
Description
The timestamp_seconds() function converts an epoch time integer value in seconds to a TIMESTAMP compatible value.
Endpoint Detection and Response (EDR) columns store epoch milliseconds values so this function is more useful for values that you insert.
Example
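A minimal sketch, assuming an inserted epoch-seconds literal (1609459200 corresponds to Jan 1st 2021 00:00:00 UTC; the alias is illustrative):
dataset = xdr_data
| alter ts = timestamp_seconds(1609459200)
| fields ts
| limit 1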
17.4.80 | to_boolean
Abstract
Learn more about the Cortex Query Language to_boolean() function that converts a string to a boolean.
Syntax
to_boolean(<string>)
Description
The to_boolean() function converts a string that represents a boolean to a boolean value.
The input string must be either TRUE or FALSE, case insensitive.
Example
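A minimal sketch (the literal string and alias are illustrative):
dataset = xdr_data
| alter is_enabled = to_boolean("TRUE")
| fields is_enabled
| limit 1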
17.4.81 | to_epoch
Abstract
Learn more about the Cortex Query Language to_epoch() function that converts a timestamp value for a field or function to the Unix epoch timestamp
format.
Syntax
to_epoch(<timestamp> [, "<time unit>"])
Description
The to_epoch() function converts a timestamp value for a particular field or function to the Unix epoch timestamp format. This function accepts a <time
unit> value, which indicates whether the integer value for the Unix epoch timestamp format represents seconds (default), milliseconds, or microseconds. If
no <time unit> is configured, the default is used. Supported values are:
SECONDS
MILLIS
MICROS
Example
Returns a maximum of 100 xdr_data records with the events of the _time field, which includes a timestamp field in the Unix epoch format called ts. The ts
field contains the equivalent Unix epoch values in milliseconds for the timestamps listed in the _time field.
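A sketch matching this description, assuming the two-argument form shown above:
dataset = xdr_data
| alter ts = to_epoch(_time, "MILLIS")
| fields _time, ts
| limit 100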
17.4.82 | to_float
Abstract
Learn more about the Cortex Query Language to_float() function that converts a string to a floating point number.
Syntax
to_float(<string>)
Description
The to_float() function converts a string that represents a number to a floating point number. This function is identical to the to_number function.
Examples
Display the first 10 IP addresses that begin with a value greater than 192. Use the split function to split the IP address by '.', and then use the arrayindex
function to retrieve the first value in the resulting array. Convert this to a number and perform an arithmetic compare to arrive at a result set.
dataset = xdr_data
| fields action_local_ip as alii
| filter to_float(arrayindex(split(alii, "."),0)) > 192
| limit 10
17.4.83 | to_integer
Abstract
Learn more about the Cortex Query Language to_integer() function that converts a string field to an integer.
Syntax
to_integer(<string>)
Description
The to_integer() function converts a string value that represents a number to an integer. A good application of the to_integer
function is querying for USB vendor IDs and USB product IDs, which are usually provided in hex format.
It is an error to pass this function a string that contains a floating point number.
Examples
Display the first 10 IP addresses that begin with a value greater than 192. Use the split function to split the IP address by '.', and then use the arrayindex
function to retrieve the first value in the resulting array. Convert this to a number and perform an arithmetic compare to arrive at a result set.
dataset = xdr_data
| fields action_local_ip as alii
| filter to_integer(arrayindex(split(alii, "."),0)) > 192
| limit 10
17.4.84 | to_json_string
Abstract
Learn more about the Cortex Query Language to_json_string() function that accepts all data types and returns its contents as a JSON formatted string.
Syntax
to_json_string(<data type>)
Description
The to_json_string() function accepts any data type, such as integers, booleans, and strings, and returns its contents as a JSON formatted string. This function always
returns a string. When the input is an object or an array, the function returns a JSON formatted string of the input. When the input is already a string, it returns the
string as is. You can then use the value returned by this function with the json_extract, json_extract_array, and json_extract_scalar
functions.
Example
dataset = xdr_data
| fields action_file_device_info as afdi
| filter afdi != null
| alter the_json_string = to_json_string(afdi)
| limit 10
17.4.85 | to_number
Abstract
Learn more about the Cortex Query Language to_number() function that converts a string to a number.
Syntax
to_number (<string>)
Description
The to_number() function converts a string that represents a number to a floating point number. This function is identical to the to_float function.
Examples
Display the first 10 IP addresses that begin with a value greater than 192. Use the split function to split the IP address by '.', and then use the arrayindex
function to retrieve the first value in the resulting array. Convert this to a number and perform an arithmetic compare to arrive at a result set.
dataset = xdr_data
| fields action_local_ip as alii
| filter to_number(arrayindex(split(alii, "."),0)) > 192
| limit 10
17.4.86 | to_string
Abstract
Learn more about the Cortex Query Language to_string function that converts a number value to a string.
Syntax
to_string (<field>)
Description
The to_string() function converts a number value to a string.
Examples
Display the first non-NULL action_boot_time field value. In a second column called abt_string, use the concat function to prepend "str: " to the value,
and then display it.
dataset = xdr_data
| fields action_boot_time as abt
| filter abt != null
| alter abt_string = concat("str: ", to_string(abt))
| limit 1
17.4.87 | to_timestamp
Abstract
Learn more about the Cortex Query Language to_timestamp() function that converts an integer to a timestamp.
Syntax
to_timestamp (<integer>, <time unit>)
Description
The to_timestamp() function converts an integer to a timestamp. This function requires a units value, which indicates whether the integer represents
seconds, milliseconds, or microseconds since the Unix epoch. Supported values are:
SECONDS
MILLIS
MICROS
Example
dataset = xdr_data
| filter story_publish_timestamp != null
| alter ts = to_timestamp(story_publish_timestamp, "MILLIS")
| fields ts
17.4.88 | uppercase
Abstract
Learn more about the Cortex Query Language uppercase() function that converts a string field to all uppercase letters.
Syntax
uppercase (<string>)
Description
The uppercase() function converts all the letters in a string to uppercase.
Examples
Convert all actor_process_image_name field values that are not null to uppercase, and return a list of unique values.
dataset = xdr_data
| fields actor_process_image_name as apin
| dedup apin by asc _time
| filter apin != null
| alter apin = uppercase(apin)
17.4.89 | values
Abstract
Learn more about the Cortex Query Language values() comp aggregation, which returns an array of all the values seen for a field in the result set.
Syntax
comp values(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
Description
The values aggregation is a comp function that returns an array of all the values found for a given field over a group of rows, for all records that contain
matching values for the fields identified in the by clause. The array values are all non-null. Each value appears in the array only once, even if a given value
repeats multiple times in the result set. This function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Example
Return an array containing all the values seen for the action_total_download field for all records that have matching values for their
actor_process_image_path and actor_process_command_line values. The query calculates a maximum of 100 xdr_data records and includes a
raw_data column listing the raw data events used to display the final comp results. In addition, this example contains a number of fields defined as aliases:
actor_process_image_path uses the alias Process_Path, actor_process_command_line uses the alias Process_CMD,
action_total_download uses the alias Download, and Download uses the alias values_download.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp values(Download) as values_download by Process_Path, Process_CMD addrawdata = true as raw_data
17.4.90 | var
Abstract
Learn more about the Cortex Query Language var() comp aggregation that returns the variance of a field's values over a group of rows.
Syntax
comp var(<field>) [as <alias>] by <field_1>,<field_2> [addrawdata = true|false [as <target field>]]
Description
The var aggregation is a comp function that returns a single variance value of a field over a group of rows, for all records that contain matching values for the
fields identified in the by clause. This function is used in combination with a comp stage.
In addition, you can configure whether the raw data events are displayed by setting addrawdata to either true or false (default), which are used to
configure the final comp results. When including raw data events in your query, the query runs for up to 50 fields that you define and displays up to 100 events.
Example
Return the variance of the action_total_download field for all records that have matching values for their actor_process_image_path and
actor_process_command_line values. The query calculates a maximum of 100 xdr_data records and includes a raw_data column listing the raw data
events used to display the final comp results.
dataset = xdr_data
| fields actor_process_image_path as Process_Path, actor_process_command_line as Process_CMD, action_total_download as Download
| filter Download > 0
| limit 100
| comp var(Download) as variance_download by Process_Path, Process_CMD addrawdata = true as raw_data
18 | Reference
18.1 | RBAC permissions
Abstract
Learn more about RBAC permissions, specifically role permissions by component and the default Palo Alto Networks roles.
Cortex XDR uses role-based access control (RBAC) to manage roles with specific permissions for controlling user access. This section provides reference
information related to role permissions by component and the default Palo Alto Networks roles.
18.1.1 | Role permissions by components
Abstract
You can manage role permissions in Cortex XDR, which are listed by the various components according to the sidebar navigation in Cortex XDR. Dataset
permissions are also included for custom roles. Some components include additional action permissions, such as pivot (right-click) options, to which you can
also assign access, but only when you’ve given the user View/Edit permissions to the applicable component. Whenever you create a new role or edit an
existing role, these role permissions are configurable for all Cortex XDR apps and services in the Components tab of the Create Role window on the Roles
page. For more information, see Manage user roles.
Cortex XDR provides predefined Palo Alto Networks roles, which have set role permissions. For more information, see Default PANW roles.
The following table describes, for each Cortex XDR component and its additional action permissions (listed according to the sidebar navigation headings),
the pages that can be accessed with the role permission, the detailed edit permissions available on each page, and any additional information you should
know about the role permissions for that component.
Components Additional Action Permissions With View/Edit Permissions Access Permissions To These Pages With Detailed View/Edit Permissions Add
New
Edit
Save as new
Set as default
Save as a report
Private/public
Disable
Delete
Delete
Edit
Generate Report
Save as new
Incident Response
Incident & Alerts
All actions
All actions
Investigation
Remove queries
Edit
Remove
Rename
Disable
Edit
Labels
Description
Private/public
Can open in asset view without being able to perform any actions.
Exclude
Add comment
Asset View from Quick Launcher → IP View, and pivot (right-click) from a host with a Cortex XDR agent installed.
Response
All actions
Terminate Process ✓ Causality chain view is available from the Alerts table (Incident Response → Incidents → Alerts Table), or from the Query Results after running a query on the related data. From both of these places, you can pivot (right-click) to the causality chain view from any row in the table and select:
Quarantine ✓ Causality chain view is available from the Alerts table (Incident Response → Incidents → Alerts Table), or from the Query Results after running a query on the related data. From both of these places, you can pivot (right-click) to the causality chain view from any row in the table and select:
File Retrieval ✓ Incident Response → Response → Action Center → All Actions → New Action, and from the Define an Action page, select Files Retrieval.
Endpoints → All Endpoints, pivot (right-click) from a host with a Cortex XDR agent installed, and select Security Operations → Retrieve Endpoint Files.
File Search ✓ Incident Response → Incidents → Key Assets & Artifacts tab, and search for a file.
Destroy Files ✓ Incident Response → Response → Action Center → All Actions → New Action, and from the Define an Action page, select Destroy file.
All actions
Allow List/Block List ✓ Incident Response → Response → Action Center → All Actions → New Action, and from the Define an Action page, select either:
Add to block list
Disable
Edit Comment
Delete
Disable
Edit Comment
Delete
Disable Response Actions ✓ Endpoints → All Endpoints, pivot (right-click) an endpoint that isn't an iOS endpoint, and select Endpoint Control → Disable Capabilities.
Remediation
Delete Quarantined Files Incident Response → Response → Action Center → Currently Applied
Actions → File Quarantine
Delete
Add
Delete
Agent Scripts Library ✓ Incident Response → Response → Action Center → Agent Script Library
Run Standard Script ✓ Incident Response → Response → Action Center → Agent Script Library, and for any script in the Scripts Library table where the Outcome column is set to Standard, you can select:
Run
The standard scripts are available in the Scripts list to select from.
Run High-Risk Script ✓ Incident Response → Response → Action Center → Agent Script Library, and for any script in the Scripts Library table where the Outcome column is set to High Risk, you can select:
Run
The high-risk scripts are available in the Scripts list to select from.
Script Configurations ✓ Incident Response → Response → Action Center → Agent Script Library
New Script
Edit
Save as new
Delete
Rules ✓ Detection & Threat Intel → Detection Rules → IOC → IOC Rules
Add IOC
Edit
Disable
Delete
Add to EDL
Detection & Threat Intel → Detection Rules → BIOC → BIOC Rules
Add BIOC
Import Rules
Disable
Save as new
Add Correlation
Exclude Rule
Disable
Edit
Save as new
Delete
New Exception
Import Exceptions
Edit
Delete
Export
Prevention Rules Detection & Threat Intel → Threat Intel Management → Indicator Rules
Detection Rules → BIOC, select one or more BIOC rules, and right-click:
Request WildFire Verdict Change ✓ From a WildFire report, you can click Report Verdict as Incorrect, and under Suggested Verdict, suggest a new verdict. Open a WildFire report from:
Incident Response → Incidents → Key Assets & Artifacts tab, and under Artifacts, identify a file with a WildFire verdict and click WildFire Analysis Report
Assets
Edit
All actions
All actions
Endpoints
Select one or more endpoints that you want to force the check-in of, and
right-click + Alt to open the options menu in advanced mode, and
select Endpoint Control → Force Check-in.
Select one or more agents that you want to restart, and right-click + Alt
to open the options menu in advanced mode, and select Endpoint
Control → Restart Agent.
✓ Select one or more agents that you want to move to the target server,
and right-click + Alt to open the options menu in advanced mode, and
select Endpoint Control → Change managing server.
✓ Select the endpoints you want to pause protection on, right-click and
select Endpoint Control → Pause Endpoint Protection.
✓ On the top right corner of the screen, the Tokens and Passwords icon is
displayed, which you can left-click and select:
Retrieve Token
View endpoints
Edit
Delete
Save as new
Export group
Edit
Save As New
Disable
Delete
Edit Profile
Save As New
Delete
Edit
Save As New
Disable
Delete
Profiles
Add Profile
Edit Profile
Save As New
Delete
All actions
All actions
Edit
Delete
64 bit installer
32 bit installer
Show rows
Save As New
Disable Group
Delete Group
Show rows
Hide rows
Device Control ✓
Configurations
General Settings
Alert — Notifications
Notification
Edit timezone
Timestamp Format
Email Contacts
Add
Impersonation Role
Identity Analytics
Data Broker
Pathfinder Applet ✓ Settings → Configurations → Data Broker → Broker VMs, and in the APPS column of the Broker VMs page, the Pathfinder applet is displayed.
All actions
Data Collection
Uninstall Collector
Delete Collector
Add Group
Edit
Delete
Save as new
Create
Delete
Hide
Add Profile
Edit
Save As New
Delete
Add Policy
Disable
Delete
Save As New
Edit
All actions
All actions
Disable
Data Management
All actions
All actions
Integrations
New Key
Edit
Delete
Save as new
Delete
Edit
Long-Running HTTP Integrations configuration
Help
18.1.2 | Default PANW roles
Abstract
Learn more about the default Palo Alto Networks user roles included in Cortex XDR.
The default Palo Alto Networks roles provide a specific set of access rights to each role. You cannot edit the default roles directly, but you can save them as
new roles and edit the permissions of the new roles. To view the predefined permissions for each default role, go to Settings → Configurations → Access
Management → Roles. For more information, see Assign user roles and groups.
18.1.2.1 | Account Admin
Abstract
Learn more about the Cortex XDR predefined user role called Account Admin.
The Account Admin is a Super User role that is assigned directly to the user in Cortex Gateway and has full access to all Cortex products in your account,
including all tenants added in the future. The Account Admin can assign roles for Cortex instances and activate Cortex tenants specific to the product.
The user who activated the Cortex product is assigned the Account Admin role. You cannot create additional Account Admin roles in the Cortex XDR tenant. If
you do not want the user to have Account Admin permission, you need to remove the Account Admin role in Cortex Gateway.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics — — ✓ —
Host Insights — — ✓ —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL — N/A ✓ —
Script Configurations
Automation Rules — — ✓ —
Rules — — ✓ ✓
Prevention Rules
Assets
Network Configuration — — ✓ —
Compliance — ✓ N/A —
Asset Inventory — — ✓ —
Endpoints
Endpoint Administrations — — ✓ ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups — — ✓ —
Global Exceptions — — ✓ —
Endpoint Profiles — — ✓ —
Endpoint Installations — — ✓ —
Host Firewall — — ✓ —
Device Control — — ✓ ✓
Configurations
General Settings
Auditing — ✓ N/A —
Alert Notifications — — ✓ —
General Configuration — — ✓ —
On-demand Analytics — — ✓ —
Data Broker
Broker Services — — ✓ ✓
Pathfinder Applet
Data Management
Integrations
Public API — — ✓ —
Threat Intelligence — — ✓ —
Help
Support — N/A ✓ —
18.1.2.2 | Deployment Admin
Abstract
Learn more about the Cortex XDR predefined user role called Deployment Admin.
The Deployment Admin role is used to manage and control endpoints and installations, and configure Broker VMs.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics ✓ — — —
Host Insights ✓ — — —
Response
Action Center ✓ — — ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL ✓ N/A — —
Script Configurations
Automation Rules ✓ — — —
Rules ✓ — — ✓
Prevention Rules
Assets
Network Configuration ✓ — — —
Compliance ✓ — N/A —
Asset Inventory — ✓ — —
Endpoints
Endpoint Administrations — — ✓ ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups — — ✓ —
Global Exceptions — ✓ — —
Endpoint Profiles ✓ — — —
Endpoint Installations — — ✓ —
Host Firewall ✓ — — —
Device Control ✓ — — ✓
Configurations
General Settings
Auditing — ✓ N/A —
Alert Notifications ✓ — — —
General Configuration ✓ — — —
On-demand Analytics ✓ — — —
Data Broker
Broker Services — — ✓ ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support — N/A ✓ —
18.1.2.3 | Instance Administrator
Abstract
Learn more about the Cortex XDR predefined user role called Instance Administrator.
The Instance Administrator role provides view and edit permissions for all components and can access all pages in the Cortex XDR tenant. The Instance
Administrator can also make other users an Instance Administrator for the tenant. If the tenant has predefined or custom roles, the Instance Administrator can
assign those roles to other users.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics — — ✓ —
Host Insights — — ✓ —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL — N/A ✓ —
Script Configurations
Automation Rules — — ✓ —
Rules — — ✓ ✓
Prevention Rules
Assets
Network Configuration — — ✓ —
Compliance — ✓ N/A —
Asset Inventory — — ✓ —
Endpoints
Endpoint Administrations — — ✓ ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups — — ✓ —
Global Exceptions — — ✓ —
Endpoint Profiles — — ✓ —
Endpoint Installations — — ✓ —
Host Firewall — — ✓ —
Device Control — — ✓ ✓
Configurations
General Settings
Auditing — ✓ N/A —
Alert Notifications — — ✓ —
General Configuration — — ✓ —
On-demand Analytics — — ✓ —
Data Broker
Broker Services — — ✓ ✓
Pathfinder Applet
Data Management
Integrations
Public API — — ✓ —
Threat Intelligence — — ✓ —
Help
Support — N/A ✓ —
18.1.2.4 | Investigation Admin
Abstract
Learn more about the Cortex XDR predefined user role called Investigation Admin.
The Investigation Admin role is used to view and triage alerts and incidents, configure rules, and view endpoint profiles, policies, and Analytics management
screens.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics — — ✓ —
Host Insights — — ✓ —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL — N/A ✓ —
Script Configurations
Automation Rules ✓ — — —
Rules — — ✓ ✓
Prevention Rules
Assets
Network Configuration ✓ — — —
Compliance ✓ — N/A —
Asset Inventory ✓ — — —
Endpoints
Endpoint Administrations ✓ — — ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups ✓ — — —
Global Exceptions ✓ — — —
Endpoint Profiles — ✓ — —
Endpoint Installations ✓ — — —
Host Firewall — — ✓ —
Device Control — — ✓ ✓
Configurations
General Settings
Auditing ✓ — N/A —
Alert Notifications ✓ — — —
General Configuration ✓ — — —
On-demand Analytics ✓ — — —
Data Broker
Broker Services ✓ — — ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support — N/A ✓ —
18.1.2.5 | Investigator
Abstract
Learn more about the Cortex XDR predefined user role called Investigator.
The Investigator role is used to view and triage alerts and incidents.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics — ✓ — —
Host Insights ✓ — — —
Response
Action Center ✓ — — ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL ✓ N/A — —
Script Configurations
Automation Rules ✓ — — —
Rules ✓ — — ✓
Prevention Rules
Assets
Network Configuration — ✓ — —
Compliance — ✓ N/A —
Asset Inventory — ✓ — —
Endpoints
Endpoint Administrations ✓ — — ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups ✓ — — —
Global Exceptions ✓ — — —
Endpoint Profiles ✓ — — —
Endpoint Installations ✓ — — —
Host Firewall ✓ — — —
Device Control ✓ — — ✓
Configurations
General Settings
Auditing ✓ — N/A —
Alert Notifications ✓ — — —
General Configuration ✓ — — —
On-demand Analytics ✓ — — —
Data Broker
Broker Services ✓ — — ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support — N/A ✓ —
18.1.2.6 | IT Admin
Abstract
Learn more about the Cortex XDR predefined user role called IT Admin.
The IT Admin role is used to manage and control endpoints and installations, configure Broker VMs, view endpoint profiles and policies, and view alerts.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics ✓ — — —
Host Insights — — ✓ —
Response
Action Center — ✓ — ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL ✓ N/A — —
Script Configurations
Automation Rules ✓ — — —
Rules ✓ — — ✓
Prevention Rules
Assets
Network Configuration ✓ — — —
Compliance — ✓ N/A —
Asset Inventory — ✓ — —
Endpoints
Endpoint Administrations — — ✓ ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups — — ✓ —
Global Exceptions — — ✓ —
Endpoint Profiles — ✓ — —
Endpoint Installations — — ✓ —
Host Firewall — ✓ — —
Device Control — ✓ — ✓
Configurations
General Settings
Auditing ✓ — N/A —
Alert Notifications ✓ — — —
General Configuration — — ✓ —
On-demand Analytics ✓ — — —
Data Broker
Broker Services — — ✓ ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support — N/A ✓ —
18.1.2.7 | Privileged Investigator
Abstract
Learn more about the Cortex XDR predefined user role called Privileged Investigator.
The Privileged Investigator role is used to view and triage alerts, incidents, and rules, and to view endpoint profiles, policies, and Analytics management screens.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics — — ✓ —
Host Insights — — ✓ —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL — N/A ✓ —
Script Configurations
Automation Rules ✓ — — —
Rules — ✓ — ✓
Prevention Rules
Assets
Network Configuration — — ✓ —
Compliance — ✓ N/A —
Asset Inventory — ✓ — —
Endpoints
Endpoint Administrations ✓ — — ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups ✓ — — —
Global Exceptions ✓ — — —
Endpoint Profiles — ✓ — —
Endpoint Installations ✓ — — —
Host Firewall — ✓ — —
Device Control — ✓ — ✓
Configurations
General Settings
Auditing ✓ — N/A —
Alert Notifications ✓ — — —
General Configuration ✓ — — —
On-demand Analytics ✓ — — —
Data Broker
Broker Services ✓ — — ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support — N/A ✓ —
18.1.2.8 | Privileged IT Admin
Abstract
Learn more about the Cortex XDR predefined user role called Privileged IT Admin.
The Privileged IT Admin role is used to manage and control endpoints and installations, configure Broker VMs, create profiles and policies, view alerts, and
initiate Live Terminal.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics ✓ — — —
Host Insights — — ✓ —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL ✓ N/A — —
Script Configurations
Automation Rules ✓ — — —
Rules — — ✓ ✓
Prevention Rules
Assets
Network Configuration — — ✓ —
Compliance — ✓ N/A —
Asset Inventory — ✓ — —
Endpoints
Endpoint Administrations — — ✓ ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups — — ✓ —
Global Exceptions — — ✓ —
Endpoint Profiles — ✓ — —
Endpoint Installations — — ✓ —
Host Firewall — — ✓ —
Device Control — — ✓ ✓
Configurations
General Settings
Auditing ✓ — N/A —
Alert Notifications ✓ — — —
General Configuration — — ✓ —
On-demand Analytics ✓ — — —
Data Broker
Broker Services — — ✓ ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support — N/A ✓ —
18.1.2.9 | Privileged Responder
Abstract
Learn more about the Cortex XDR predefined user role called Privileged Responder.
The Privileged Responder role is used to view and triage alerts and incidents, access all response capabilities, and configure rules, policies, and profiles.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics — — ✓ —
Host Insights — — ✓ —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL — N/A ✓ —
Script Configurations
Automation Rules ✓ — — —
Rules — — ✓ ✓
Prevention Rules
Assets
Network Configuration — — ✓ —
Compliance — ✓ N/A —
Asset Inventory — ✓ — —
Endpoints
Endpoint Administrations — — ✓ ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups — ✓ — —
Global Exceptions ✓ — — —
Endpoint Profiles — ✓ — —
Endpoint Installations ✓ — — —
Host Firewall — — ✓ —
Device Control — — ✓ ✓
Configurations
General Settings
Auditing ✓ — N/A —
Alert Notifications ✓ — — —
General Configuration — — ✓ —
On-demand Analytics ✓ — — —
Data Broker
Broker Services ✓ — — ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support — N/A ✓ —
18.1.2.10 | Privileged Security Admin
Abstract
Learn more about the predefined user role called Privileged Security Admin.
The Privileged Security Admin role is used to triage and investigate alerts and incidents, and respond to and edit profiles and policies.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics ✓ — — —
Host Insights — — ✓ —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL — N/A ✓ —
Script Configurations
Automation Rules ✓ — — —
Rules — — ✓ ✓
Prevention Rules
Assets
Network Configuration — — ✓ —
Compliance — ✓ N/A —
Asset Inventory — ✓ ✓ —
Endpoints
Endpoint Administrations — — ✓ ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups — ✓ — —
Global Exceptions — — ✓ —
Endpoint Profiles — — ✓ —
Endpoint Installations — ✓ — —
Host Firewall — — ✓ —
Device Control — — ✓ ✓
Configurations
General Settings
Auditing — ✓ N/A —
Alert Notifications — — ✓ —
General Configuration — — ✓ —
On-demand Analytics — — ✓ —
Data Broker
Broker Services — — ✓ ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence — — ✓ —
Help
Support — N/A ✓ —
18.1.2.11 | Responder
Abstract
Learn more about the Cortex XDR predefined user role called Responder.
The Responder role is used to view and triage alerts, and access all response capabilities excluding Live Terminal.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics — ✓ — —
Host Insights ✓ — — —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL — N/A ✓ —
Script Configurations
Automation Rules ✓ — — —
Rules — — ✓ ✓
Prevention Rules
Assets
Network Configuration ✓ — — —
Compliance ✓ — N/A —
Asset Inventory ✓ — — —
Endpoints
Endpoint Administrations ✓ — — ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups ✓ — — —
Global Exceptions ✓ — — —
Endpoint Profiles ✓ — — —
Endpoint Installations ✓ — — —
Host Firewall ✓ — — —
Device Control ✓ — — ✓
Configurations
General Settings
Auditing ✓ — N/A —
Alert Notifications ✓ — — —
General Configuration ✓ — — —
On-demand Analytics ✓ — — —
Data Broker
Broker Services ✓ — — ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support ✓ N/A — —
18.1.2.12 | Scoped Endpoint Admin
Abstract
Learn more about the Cortex XDR predefined user role called Scoped Endpoint Admin.
The Scoped Endpoint Admin role can only access product areas that support endpoint scope-based access control (SBAC): Endpoint Administration, Action
Center, Response, Dashboards, and Reports.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center ✓ — — —
Forensics ✓ — — —
Host Insights ✓ — — —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL ✓ N/A — —
Script Configurations
Automation Rules ✓ — — —
Rules ✓ — — ✓
Prevention Rules
Assets
Network Configuration ✓ — — —
Compliance ✓ — N/A —
Asset Inventory ✓ — — —
Endpoints
Endpoint Administrations — — ✓ ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups ✓ — — —
Global Exceptions ✓ — — —
Endpoint Profiles ✓ — — —
Endpoint Installations ✓ — — —
Host Firewall ✓ — — —
Device Control ✓ — — ✓
Configurations
General Settings
Auditing ✓ — N/A —
Alert Notifications ✓ — — —
General Configuration ✓ — — —
On-demand Analytics ✓ — — —
Data Broker
Broker Services ✓ — — ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support — N/A ✓ —
18.1.2.13 | Security Admin
Abstract
Learn more about the Cortex XDR predefined user role called Security Admin.
The Security Admin role can triage and investigate alerts and incidents, respond (excluding Live Terminal), and edit profiles and policies.
Dashboards — — ✓ —
Reports — — ✓ —
Incident Response
Incidents & Alerts
Investigation
Query Center — — ✓ —
Forensics ✓ — — —
Host Insights — — ✓ —
Response
Action Center — — ✓ ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL — N/A ✓ —
Script Configurations
Automation Rules ✓ — — —
Rules — — ✓ ✓
Prevention Rules
Assets
Network Configuration — — ✓ —
Compliance — ✓ N/A —
Asset Inventory — ✓ — —
Endpoints
Endpoint Administrations — — ✓ ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups — ✓ — —
Global Exceptions — ✓ — —
Endpoint Profiles — — ✓ —
Endpoint Installations — ✓ — —
Host Firewall — ✓ — —
Device Control — ✓ — ✓
Configurations
General Settings
Auditing ✓ — N/A —
Alert Notifications ✓ — — —
General Configuration — — ✓ —
On-demand Analytics ✓ — — —
Data Broker
Broker Services ✓ — — ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support — N/A ✓ —
18.1.2.14 | Viewer
Abstract
Learn more about the Cortex XDR predefined user role called Viewer.
The Viewer role can view the majority of the features for this instance and edit reports.
Dashboards — ✓ — —
Reports — ✓ — —
Incident Response
Incidents & Alerts
Investigation
Query Center — ✓ — —
Forensics — ✓ — —
Host Insights — ✓ — —
Response
Action Center — ✓ — ✓
Isolate
Terminate Process
Quarantine
File Retrieval
File Search
Destroy Files
Remediation
EDL ✓ N/A — —
Script Configurations
Automation Rules ✓ — — —
Rules — ✓ — ✓
Prevention Rules
Assets
Network Configuration ✓ — — —
Compliance ✓ — N/A —
Asset Inventory — ✓ — —
Endpoints
Endpoint Administrations — ✓ — ✓
Endpoint Management
Endpoint Scan
Pause Protection
Endpoint Groups — ✓ — —
Global Exceptions — ✓ — —
Endpoint Profiles — ✓ — —
Endpoint Installations — ✓ — —
Host Firewall — ✓ — —
Device Control — ✓ — ✓
Configurations
General Settings
Auditing — ✓ N/A —
Alert Notifications ✓ — — —
General Configuration — ✓ — —
On-demand Analytics ✓ — — —
Data Broker
Broker Services ✓ — — ✓
Pathfinder Applet
Data Management
Integrations
Public API ✓ — — —
Threat Intelligence ✓ — — —
Help
Support ✓ N/A — —
You can enable security auditing events using GPO or set them up on a local server. Active Directory Certificate Services (ADCS) events require additional
setup.
We recommend you configure security auditing using Group Policy Object (GPO). Using GPO simplifies audit management and ensures that auditing settings
are uniformly applied across your network, reducing the risk of misconfigurations on individual machines.
Use the Group Policy Management Editor to configure security auditing policies across domain controllers or other target machines.
We recommend that you configure the Group Policy Object (GPO) to apply to all endpoints and not just Domain Controllers. This ensures comprehensive
auditing across your entire network.
2. Open the Group Policy Management Editor using one of the following methods:
Create a new GPO and link it to an Organizational Unit (OU) containing the computers where you want to apply the changes.
Use an existing GPO. For example, to apply changes to domain controllers, expand the Domain Controllers OU, right-click Default Domain
Controllers Policy, and select Edit.
4. In the Group Policy Management Editor, navigate to Computer Configuration → Policies → Windows Settings → Security Settings → Advanced Audit
Policy Configuration → Audit Policies.
5. In the Audit Policies settings, enable logging for both successful and failed attempts for the following events.
4768, 4771, 4824 Account Logon Audit Kerberos Authentication Service DCs only
4769, 4770, 4821 Account Logon Audit Kerberos Service Ticket Operations DCs only
4741, 4742, 4743 Account Management Audit Computer Account Management DCs only
4662 DS Access Audit Directory Service Access Additional setup for Active Directory Certificate Services (ADCS) events; DCs only
4880, 4881, 4885, 4886, 4887, 4888, 4896, 4898, 4899, 4900 Object Access Audit Certification Services Additional setup for Active Directory Certificate Services (ADCS) events
To enable collection of event logs on a local machine without GPO, use the following command in an administrator command prompt:
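For example, a sketch using the built-in auditpol utility (subcategory names drop the "Audit" prefix shown in the table below), run from an elevated command prompt:
auditpol /set /subcategory:"Kerberos Authentication Service" /success:enable /failure:enable
auditpol /set /subcategory:"Kerberos Service Ticket Operations" /success:enable /failure:enable
auditpol /set /subcategory:"Computer Account Management" /success:enable /failure:enable
auditpol /set /subcategory:"Security Group Management" /success:enable /failure:enable
auditpol /set /subcategory:"User Account Management" /success:enable /failure:enable
auditpol /set /subcategory:"Directory Service Access" /success:enable /failure:enable
auditpol /set /subcategory:"Certification Services" /success:enable /failure:enable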
4768, 4771, 4824 Account Logon Audit Kerberos Authentication Service DCs only
4769, 4770, 4821 Account Logon Audit Kerberos Service Ticket Operations DCs only
4741, 4742, 4743 Account Management Audit Computer Account Management DCs only
4727, 4728, 4729, 4731, 4732, 4733, 4735, 4737, 4754, 4755, 4756, 4757, 4764, 4799 Account Management Audit Security Group Management
4720, 4722, 4723, 4724, 4725, 4726, 4738, 4740, 4765, 4766, 4767, 4780, 4781 Account Management Audit User Account Management
4662 DS Access Audit Directory Service Access Additional setup for Active Directory Certificate Services (ADCS) events; DCs only
4880, 4881, 4885, 4886, 4887, 4888, 4896, 4898, 4899, 4900 Object Access Audit Certification Services Additional setup for Active Directory Certificate Services (ADCS) events
18.2.1.3 | Additional setup for Active Directory Certificate Services (ADCS) events
ADCS events with IDs 4880, 4881, 4886, 4887, 4896, 4898, 4899, 4900 require additional setup.
Enabling auditing for Active Directory Certificate Services (ADCS) restarts (events 4880 and 4881) can significantly slow down the service if you have a large
database. To prevent delays:
Clean up the database: Remove any unnecessary entries to reduce its size.
Skip this audit: If restart speed is critical, consider not enabling auditing for ADCS starts and stops (event IDs 4880 and 4881).
2. In the Start menu, under Administrative Tools, open Active Directory Users and Computers.
5. To view detailed information about your domain, right-click its name and select Properties.
6. Click the Security tab, usually located near the top of the Properties window.
7. Click Advanced which is located within the Security tab or near the bottom of the window.
8. In the Advanced Security Settings window that opens, select the Auditing tab and click Add.
10. In the window that opens, under Enter the object name to select, type Everyone, click Check Names, and then OK.
Applies to: To monitor actions by users within this group and any subgroups, select Descendant User objects.
Permissions: To remove any existing permissions from this audit entry, click Clear all.
Scroll up to Permissions to view the list of permissions. Click the checkbox next to Full Control, which automatically selects all the individual
permissions below it.
List contents
Read permissions
12. Repeat step 11, with the following values in Applies to:
For the following event IDs, the auditing setup is configured using the Windows Event Viewer. Access the Event Viewer through the search box in the Start
menu.
In Event Viewer, expand Applications and Services Logs → Microsoft → Windows → User Profile Service, right-click Operational and select Enable Log.
In Event Viewer, expand Applications and Services Logs → Microsoft → Windows → CAPI2, right-click Operational and select Enable Log.
In Event Viewer, expand Applications and Services Logs → Microsoft → Windows → DNS Client Events, right-click Operational and select Enable Log.
Event ID 2004
In Event Viewer, expand Applications and Services Logs → Microsoft → Windows → DriverFrameworks-UserMode, right-click Operational and select Enable Log.
In Event Viewer, expand Applications and Services Logs → Microsoft → Windows → PowerShell, right-click Operational and select Enable Log.
In Event Viewer, expand Applications and Services Logs → Microsoft → Windows → Windows Defender, right-click Operational and select Enable Log.
Event ID 1024
In Event Viewer, expand Applications and Services Logs → Microsoft → Windows → TerminalServices-ClientActiveXCore → Microsoft-Windows-TerminalServices-RDPClient, right-click Operational and select Enable Log.
In Event Viewer, expand Applications and Services Logs → Microsoft → Windows → Windows Firewall With Advanced Security → Firewall, right-click Operational and select Enable Log.
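These Operational logs can also be enabled from an elevated command prompt; for example, a sketch using the built-in wevtutil utility, assuming the standard Windows channel names for three of the logs listed above:
wevtutil sl "Microsoft-Windows-PowerShell/Operational" /e:true
wevtutil sl "Microsoft-Windows-CAPI2/Operational" /e:true
wevtutil sl "Microsoft-Windows-DNS-Client/Operational" /e:true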
You can enable LDAP server event logging using RegEdit or GPO.
Make the following changes on all LDAP servers in the domain for which you want to configure auditing.
1. On a domain controller or a system with Remote Server Administration Tools (RSAT) installed, open the Group Policy Management Console (GPMC).
2. Create a new Group Policy Object (GPO): Right-click on the domain or organizational unit (OU) where your domain controllers reside, then select Create
a GPO in this domain, and Link it here.... Give it a descriptive name, e.g. Domain Controller Registry Settings.
2. In the Group Policy Management Editor, navigate to Computer Configuration → Preferences → Windows Settings → Registry.
3. Add Registry Items: Right-click on Registry and select New → Registry Item.
4. Configure Registry Keys: For each of the registry keys you want to set, create a new Registry Item.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics]
Value name: 15 Field Engineering
Action: Update
Hive: HKEY_LOCAL_MACHINE
Value data: 5
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters]
Under this key, create three Registry Items, each with Action: Update, Hive: HKEY_LOCAL_MACHINE, and Value data: 1.
5. To link the GPO to the OU where your domain controllers reside, in Group Policy Management, right-click the OU, select Link an Existing GPO, then
select the GPO you just created.
6. Force Group Policy Update: Force a Group Policy update using the gpupdate /force command on each domain controller or by restarting them.
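To apply the same change on a single LDAP server with RegEdit rather than GPO, the documented 15 Field Engineering value can be set directly; a minimal sketch, assuming a REG_DWORD value type, run from an elevated command prompt:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics" /v "15 Field Engineering" /t REG_DWORD /d 5 /f
The remaining values under NTDS\Parameters can be set the same way.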
19 | Glossary
This glossary includes information about key terms for Cortex products.
19.1 | Alert
When you identify a threat, you can define specific rules for which you want Cortex XDR/Cortex XSIAM to raise alerts. Non-informational alerts are consolidated
from your detection sources to enable you to efficiently and effectively triage the events you see each day on the Alerts page. By analyzing the alert, you can
better understand the cause of what happened and the full story with context to validate whether an alert requires additional action. Cortex XDR/Cortex XSIAM
supports saving 2M alerts per 4,000 agents or 20 terabytes; half of the alerts are allocated to informational alerts and half to severity alerts.
19.14 | Dataset
A dataset is a collection of column:value sets and has a unique name. Every Cortex Query Language query begins by identifying a dataset that the query will
run against. If you do not specify a dataset in your query, the query runs against the default datasets configured.
19.22 | Filebeat
Elasticsearch Filebeat, also called Filebeat, is a type of log source that can be ingested by Cortex XDR/Cortex XSIAM. Depending on the type of Elasticsearch
Filebeat logs that you want to ingest, a different data source is used.
19.23 | Forensics
Allows you to perform forensic analysis easily by collecting all the artifacts you need and displaying them in an intuitive forensics console.
19.26 | Incident
An attack can affect several hosts or users and raises different alert types stemming from a single event. All artifacts, assets, and alerts from a threat event are
gathered into an Incident. The Incidents page displays all incidents to help you prioritize, track, triage, investigate and take remedial action.
19.33 | Notebooks
Cortex XSIAM Notebooks enable you to analyze and visualize the extensive data collected by Cortex XSIAM. Using Jupyter tools, you can build machine
learning models to visualize clusters, identify anomalies, and then feed your findings back into the Cortex XSIAM environment to generate security insights.
19.35 | Playbook
A playbook is a high-level automation tool that coordinates multiple tasks or actions in a workflow. Playbooks are a series of tasks, conditions, automations,
commands, and loops that run in a predefined flow to save time and improve the efficiency and results of the investigation and response process.
19.36 | Prisma
Prisma is another Palo Alto Networks product family that can be integrated with Cortex XDR/Cortex XSIAM. For example, you can receive alerts from Prisma Cloud Compute and
forwarded data from Prisma Access.
19.37 | Script
A script performs specific actions and can be composed of integration commands, which are used in playbook tasks and when running automation on
demand in the War Room.