
AI SPECIALIST EXAM PREP – NOTES

Exam
60 questions
105 minutes
73% to pass

Contents

Einstein Trust Layer – 15%
Generative AI in CRM Applications – 17%
Prompt Builder – 37%
Agentforce Tools – 23%
Model Builder – 8%

1. Einstein Trust Layer – 15%


Einstein Generative AI & Trust
Salesforce’s Einstein generative AI solutions are designed, developed, and delivered based on our five
principles for trusted generative AI.
• Accuracy: We back up model responses with explanations and sources whenever possible. We recommend that a human check model responses before sharing them with end users for most use cases.
• Safety: We work to detect and mitigate bias, toxicity, and harmful responses from models used in our products through industry-leading detection and mitigation techniques.
• Transparency: We ensure that our models and features respect data provenance and are grounded in your data whenever possible.
• Empowerment: We believe our products should augment people’s capabilities and make them more efficient and purposeful in their work.
• Sustainability: We strive to build right-sized models that prioritize accuracy while reducing our carbon footprint.

Problems that can arise with generative AI because its output is LLM-generated:

• Accuracy: Generative AI can sometimes “hallucinate”, fabricating responses that aren’t grounded in fact or existing sources.
• Bias and Toxicity: Because AI is created by humans and trained on data created by humans, it can contain bias against historically marginalized groups. Rarely, some responses can contain harmful language.

Einstein Trust Layer: Designed for Trust


Einstein Trust Layer: a collection of features, processes, and policies designed to safeguard data privacy, enhance AI accuracy, and promote responsible use of AI across the Salesforce ecosystem. In short, a secure AI architecture.

How data flows through the Trust Layer:


• The data, in the form of a prompt, flows from the CRM apps, through the Einstein Trust Layer, to the large language model (LLM). We call this the prompt journey.
• The LLM generates a response using the prompt. We call this response generation.
• The generated response then flows back through the Einstein Trust Layer to the CRM apps. We call this the response journey.
The flow is divided into two parts: the prompt journey and the response journey.
Prompt Journey:

Prompt → A set of instructions to an LLM (Large Language Model) to generate a specific output or response. The prompt can come from any of the CRM apps. You can create it in Prompt Builder and invoke it from Apex or a Flow. This step is mandatory. For the LLM to generate personalized email content, it needs context about your customer, their preferences, and other relevant data.
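
For example, a prompt template can be invoked from Apex through the Connect API. Here is a minimal sketch, assuming a prompt template with the API name Personalized_Email that takes a Contact as input (the template name and input key are illustrative, not from these notes):

// Anonymous Apex sketch: invoke a prompt template through the Trust Layer.
// Grab any Contact to ground the template with (illustrative).
Id contactId = [SELECT Id FROM Contact LIMIT 1].Id;

// Wrap the record ID that the template's merge fields resolve against.
ConnectApi.WrappedValue contactValue = new ConnectApi.WrappedValue();
contactValue.value = new Map<String, String>{ 'id' => contactId };

ConnectApi.EinsteinPromptTemplateGenerationsInput input =
    new ConnectApi.EinsteinPromptTemplateGenerationsInput();
input.inputParams = new Map<String, ConnectApi.WrappedValue>{
    'Input:Contact' => contactValue // key must match the template's input name
};
input.isPreview = false; // false = send the resolved prompt to the LLM

// The request flows through the Einstein Trust Layer (grounding, masking, defense).
ConnectApi.EinsteinPromptTemplateGenerationsRepresentation result =
    ConnectApi.EinsteinLLM.generateMessagesForPromptTemplate('Personalized_Email', input);

System.debug(result.generations[0].text); // the generated, demasked response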

Grounding → The process of adding additional context to the prompt, which the LLM needs to produce a relevant response. You can ground your prompt using merge fields with CRM data (for example: record fields, flows, Apex, Data Cloud DMOs, and related lists). Grounding is dynamic because it happens at run time and depends on the executing user's access.
Secure data retrieval → The first step in the Trust Layer. It means the prompt is grounded only with data that the executing user has access to, because the data retrieval process respects existing access controls and permissions.
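
As an illustration, the body of a prompt template in Prompt Builder might ground its instructions with record merge fields like this (the input name and fields are illustrative):

You are a service agent. Write a short, friendly follow-up email.
Customer name: {!$Input:Contact.Name}
Account: {!$Input:Contact.Account.Name}
Email: {!$Input:Contact.Email}

At run time the merge fields resolve against the selected record, and secure data retrieval ensures that only values the executing user can see are filled in.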

The Einstein Trust Layer uses data masking to prevent sensitive data from being exposed to the LLM (Large
Language Model). Data masking involves replacing the sensitive data with placeholder text based on what it
represents. We identify sensitive data using two methods:
• Pattern-based: We use patterns and context to identify sensitive data in the prompt text.
• Field-based: We use the metadata of fields classified with Shield Platform Encryption or data classification to identify sensitive fields. Field-based masking supports only merge fields that are referenced in record merge fields and related lists.
Once identified, the data is masked with placeholder text to prevent it from being exposed to external models.
Einstein Trust Layer temporarily stores the relationship between the original entities and their respective
placeholders. The relationship is used later to demask the data in the generated response.
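
For instance, masking might transform a grounded prompt like this (the placeholder token format shown is illustrative, not the exact tokens the Trust Layer uses):

Before masking: Follow up with Jane Doe at jane.doe@example.com or 415-555-0182.
After masking: Follow up with [PERSON_0] at [EMAIL_0] or [PHONE_0].

The stored mapping ([PERSON_0] → Jane Doe, and so on) is what lets the Trust Layer demask the generated response before showing it to the user.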

Prompt Defense  refers to system policies that help limit hallucinations and decrease the likelihood of
harmful outputs. To help decrease the likelihood of the LLM generating something unintended or harmful,
Prompt Builder and Prompt Template Connect API use system policies (a set of instructions to the LLM for
how to behave in a certain manner).
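
Illustrative examples of the kind of instructions such system policies add (paraphrased, not the verbatim policies):
• Only answer using the context provided in the prompt.
• If you don't have enough information to answer, say so rather than guessing.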

Models built or fine-tuned by Salesforce are hosted inside the Salesforce trust boundary. External models built and maintained by third-party providers, such as OpenAI, sit in a shared trust boundary. Models that you build and maintain are hosted on your own infrastructure.

When the prompt is ready, it’s sent through the LLM gateway, which generates a response and sends it back
to Salesforce.

Zero-data retention policy → In place with external partner model providers, such as OpenAI or Azure OpenAI. The policy states that data sent to the LLM from Salesforce isn't retained and is deleted after a response is sent back to Salesforce.

Einstein Trust Layer: Response Journey

Next, a set of instructions and policies is applied to the prompt to reduce inaccuracies and prevent unintended consequences.

But before sending the response back to you, Einstein scans it for toxicity (toxicity detection) and then demasks the data: the placeholders created during the prompt journey are replaced with the actual data behind them. The toxicity detection process includes a toxicity confidence score, which reflects the probability of the response containing harmful or inappropriate content.
Finally, you can edit the response further or send it as is, and you can give feedback to improve the tool. Data and feedback are stored in Data Cloud for 30 days (including the toxicity score). Einstein generative AI audit and feedback data is stored in Data Cloud.

The audit trail also includes the original prompt, the masked prompt, the scores logged during toxicity detection, the original output from the LLM, and the demasked output.

Use reports in Data Cloud to check how the Einstein Trust Layer is performing, and use the feedback to improve prompt design.

Einstein Trust Layer Limits


The Einstein Trust Layer is included in sandboxes, but the features that require Data Cloud can't be tested there because Data Cloud isn't supported in sandboxes. These Einstein Trust Layer features aren't available for testing:
• LLM data masking configuration in Einstein Trust Layer Setup.
• Grounding on objects in Data Cloud.
• Logging and reviewing audit and feedback data in Data Cloud.

How to set up Einstein Trust Layer


Before you can set up the Einstein Trust Layer, you must enable Einstein Generative AI and configure Data
Cloud in your org. Data Cloud is required to ensure that the Einstein Trust Layer functions correctly and
protects your data.
These settings are applied to your Salesforce org.
1. From Setup, in the Quick Find box, enter Einstein, and then select Einstein Trust Layer.
2. Data masking is enabled by default. If it's turned off, enable it to allow the Einstein Trust Layer to detect and mask sensitive data.
3. To change individual settings, click Configure Data Masking, then save your changes.

Select what data to mask


Permissions needed: View Setup and Configuration, and Customize Application.
At initial setup, the most commonly used entries are turned on, and less frequently used entries are off.

1. From Setup, in the Quick Find box, enter Einstein, and then select Einstein Trust Layer.
2. Select Go to Einstein Trust Layer.
3. Turn on large language model data masking.
4. Review the list of data types included in the pattern-based masking section and make changes as
needed. Some data types are turned on for data masking by default.
5. Turn on data masking for Shield Platform Encryption, compliance categories, and data sensitivity levels.
Confirm that the sensitive fields that must be masked are tagged with the correct compliance categories
and data sensitivity levels in Object Manager.
You see the Shield Platform Encryption option in Einstein Trust Layer setup only if you enabled it in your org.

Verify masked data


After you configure data masking in Einstein Trust Layer Setup, verify that your sensitive data is actually masked.
Permissions: to create and manage prompt templates in Prompt Builder (Prompt Template Manager permission set) and to view and access reports and dashboards (Data Cloud User).
There are two ways to verify what data is masked:
• In Prompt Builder, you can preview what data will be masked at run time.
• To monitor data masking activity, build a standard Data Cloud report to confirm that sensitive data, such as credit card or phone numbers, is properly masked in your LLM prompts. The following steps walk you through creating a Data Cloud report.
Before you can view data in Data Cloud, you must turn on Einstein Generative AI Data Collection.
1. From Data Cloud, on the Reports tab, click New Report.
2. In the Data Cloud report category, select the GenAIGatewayRequest report. This report uses the Generative
AI Request data model object (DMO).
3. Click Start Report.
4. Add columns to the report.
Here’s a list of columns to add:
• Timestamp
• Model
• # promptTokens
• Prompt
• MaskedPrompt
5. Run the report. The report lists all prompts with the data and their corresponding masked text.

Review toxicity scores


Build a standard Data Cloud report to review toxicity scores in the responses generated by the large language model (LLM). You can also use the prebuilt dashboards and reports to review toxicity trends. The Einstein Generative AI Audit and Feedback Data report package must be installed.
1. From Data Cloud, on the Reports tab, click Reports, then New Report.
2. In the Data Cloud report category, select the GenAIGatewayResponse with
GenAIContentCategory report.
3. Click Start Report.
4. Add columns to the report. Here’s a list of columns to add:
• Timestamp
• ResponseText
• DetectorType
• Category
• Value
5. Select the Filters panel. Choose Detector Type for the field, select Equals for the operator, and
select toxicity for the value.
6. Run the report.

When the isToxicityDetected field is true, it indicates a high level of confidence that the content contains toxic language. When the field is false, it doesn't necessarily mean there's no toxicity; rather, the model didn't detect toxicity in the content. The model is trained to provide scores from 0 through 1.
• The score for the safety category ranges from 0 through 1, with 1 being the safest. We consider a safety score between 0.5 and 1 as safe.
• The scores for all other categories indicate the toxicity in each category and range from 0 through 1, with 1 being the most toxic. We consider toxicity scores of 0.5 and above as toxic for that category.
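
A minimal Apex sketch of how those cutoffs could be applied when reviewing scores pulled from the audit data (the helper class is hypothetical; only the 0.5 thresholds come from these notes):

// Interpret a Trust Layer detector score using the documented 0.5 cutoffs.
public class DetectorScoreHelper {
    // For 'safety', 1 is safest and a score of 0.5 or above counts as safe.
    // For all other categories, 1 is most toxic and 0.5 or above counts as toxic.
    public static String interpret(String category, Decimal score) {
        if (category == 'safety') {
            return score >= 0.5 ? 'safe' : 'unsafe';
        }
        return score >= 0.5 ? 'toxic' : 'not flagged';
    }
}

// Example: DetectorScoreHelper.interpret('safety', 0.82) returns 'safe';
// DetectorScoreHelper.interpret('hate', 0.61) returns 'toxic'.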

Data Model for Generative AI Audit trail and Feedback

Generative AI audit data (also known as audit trail) includes data about the Einstein Trust Layer features such
as data masking and toxicity scores.
Audit trail along with feedback data is stored in Data Cloud.
DMO: Data Model Object.
• Request Tag: captures any custom data points as key-value pairs.
• Gateway Request: captures prompt input, request parameters, and model details and parameters.
• Gateway Response: enables joins between the GenAIRequest DMO and the GenAIGenerations DMO.
• Content Category: captures Einstein Trust Layer detector result values by detector type and subcategory. This DMO includes the safety and toxicity scores of output from the LLM.
• Content Quality: captures whether a request or response is safe or unsafe.
• Generation: captures generated responses returned for generation requests. This DMO also includes masked prompts if masking is enabled and may also include sensitive user data.
• Feedback Detail: captures the details of user feedback.
• Feedback: captures feedback for a specific generation or part of a generation.
• App Generation: captures feature-specific changes made to the original generated text.

Exam Practice questions for this section:


1. Which feature of Einstein Trust Layer helps limit hallucinations and decrease the likelihood of
unintended output?
a) Dynamic Grounding with Secure Data Retrieval
b) Prompt Defense
c) Toxicity Scoring
B. Correct. Prompt Defense refers to system policies that help limit hallucinations and decrease the likelihood
of harmful outputs.
2. What is one way Einstein Trust Layer ensures data privacy?
a) The Einstein Trust Layer detects and masks sensitive information before sending it to the large
language model (LLM)
b) The Einstein Trust Layer assigns role-based access controls to regulate data access
c) The Einstein Trust Layer enhances firewall protections to prevent unauthorized access
A. Correct. The Einstein Trust Layer detects and masks sensitive data before the prompt is sent to the LLM, which protects data privacy.
3. A healthcare company is implementing Salesforce Einstein to enhance its customer service
operations but is highly concerned about data privacy and healthcare regulation compliance. The
company requires that no patient data is used for LLM model training or product improvements.
What feature of the Einstein Trust Layer addresses the organization's data privacy concerns?
a) Zero-Data Retention policy
b) Dynamic Grounding
c) Prompt Defense
A. Correct. Under the Zero-Data Retention Policy, data sent to the LLM isn't retained, so no patient data is used for model training or product improvements.
4. From where is Einstein Generative AI Audit and Feedback Data Report Package accessed?
a) Data Cloud
b) Marketing Cloud
c) Sales Cloud
A. Correct. Einstein generative AI audit and feedback data is stored in Data Cloud.

2. Generative AI in CRM Applications – 17%


3. Prompt Builder – 37%
4. Agentforce Tools – 23%
5. Model Builder – 8%
