
AIF-C01 Exam Valid Dumps

The document provides an overview of AIF-C01 AWS Certified AI Practitioner exam dumps, highlighting their features such as instant download, free updates, and customer support. It includes sample questions and answers related to AI and machine learning concepts, emphasizing the importance of data privacy and model evaluation metrics. Additionally, it discusses various AWS services relevant to AI applications, including Amazon Bedrock and SageMaker.

Uploaded by

Zabrocki Archie

AIF-C01 AWS Certified AI Practitioner exam dumps questions are the best
material for testing yourself on all the related Amazon exam topics. By working
through the AIF-C01 exam dumps questions and practicing your skills, you can
increase your confidence and your chances of passing the AIF-C01 exam.

Features of Dumpsinfo’s products

Instant Download
Free Updates for 3 Months
Money back guarantee
PDF and Software
24/7 Customer Support

Besides, Dumpsinfo also provides unlimited access. You can get all
Dumpsinfo files at the lowest price.

AWS Certified AI Practitioner AIF-C01 exam free dumps questions are
available below for you to study.

Full version: AIF-C01 Exam Dumps Questions

1.An AI practitioner has built a deep learning model to classify the types of materials in images. The
AI practitioner now wants to measure the model performance.
Which metric will help the AI practitioner evaluate the performance of the model?
A. Confusion matrix
B. Correlation matrix
C. R2 score
D. Mean squared error (MSE)
Answer: A
Explanation:
A confusion matrix is the correct metric for evaluating the performance of a classification model, such
as the deep learning model built to classify types of materials in images.
Confusion Matrix:
It is a table used to describe the performance of a classification model by comparing the actual and
predicted classifications.
Provides detailed insights into the model’s performance, including true positives, true negatives,
false positives, and false negatives.
Why Option A is Correct:
Performance Measurement: Helps measure various performance metrics like accuracy, precision,
recall, and F1-score, which are critical for evaluating a classification model.
Comprehensive Evaluation: Allows for a thorough analysis of where the model is making errors and
the types of errors being made.
Why Other Options are Incorrect:
B. Correlation matrix: Used to identify relationships between variables, not for evaluating classification
performance.
C. R2 score: Used for regression models, not classification.
D. Mean squared error (MSE): Also a metric for regression, measuring the average of the squares of
the errors.
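To make the confusion-matrix idea concrete, here is a minimal sketch in plain Python that builds the four cells of a binary confusion matrix and derives accuracy, precision, and recall from them. The label lists are made-up illustration data, not from any real model.

```python
# Build a 2x2 confusion matrix from actual vs. predicted labels,
# then derive the common metrics from its cells.

def confusion_matrix(actual, predicted, positive=1):
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    return tp, tn, fp, fn

actual    = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth classes
predicted = [1, 0, 0, 1, 0, 1, 1, 0]   # model outputs

tp, tn, fp, fn = confusion_matrix(actual, predicted)
accuracy  = (tp + tn) / len(actual)    # fraction of all correct predictions
precision = tp / (tp + fp)             # of predicted positives, how many were right
recall    = tp / (tp + fn)             # of actual positives, how many were found
```

Inspecting the individual cells (here one false positive and one false negative) is exactly what lets the practitioner see *which kinds* of errors the classifier makes, which a single aggregate score like MSE cannot show.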

2.A bank is fine-tuning a large language model (LLM) on Amazon Bedrock to assist customers with
questions about their loans. The bank wants to ensure that the model does not reveal any private
customer data.
Which solution meets these requirements?
A. Use Amazon Bedrock Guardrails.
B. Remove personally identifiable information (PII) from the customer data before fine-tuning the LLM.
C. Increase the Top-K parameter of the LLM.
D. Store customer data in Amazon S3. Encrypt the data before fine-tuning the LLM.
Answer: B
Explanation:
The goal is to prevent a fine-tuned large language model (LLM) on Amazon Bedrock from revealing
private customer data.
Let’s analyze the options:
A. Amazon Bedrock Guardrails: Guardrails in Amazon Bedrock allow users to define policies to filter
harmful or sensitive content in model inputs and outputs. While useful for real-time content
moderation, they do not address the risk of private data being embedded in the model during fine-
tuning, as the model could still memorize sensitive information.
B. Remove personally identifiable information (PII) from the customer data before fine-tuning the LLM:
Removing PII (e.g., names, addresses, account numbers) from the training dataset ensures that the
model does not learn or memorize sensitive customer data, reducing the risk of data leakage. This is
a proactive and effective approach to data privacy during model training.
C. Increase the Top-K parameter of the LLM: The Top-K parameter controls the randomness of the
model’s output by limiting the number of tokens considered during generation. Adjusting this
parameter affects output diversity but does not address the privacy of customer data embedded in the
model.
D. Store customer data in Amazon S3. Encrypt the data before fine-tuning the LLM: Encrypting data
in Amazon S3 protects data at rest and in transit, but during fine-tuning, the data is decrypted and
used to train the model. If PII is present, the model could still learn and potentially expose it, so
encryption alone does not solve the problem.
Exact Extract Reference: AWS emphasizes data privacy in AI/ML workflows, stating, “To protect sensitive data, you
can preprocess datasets to remove personally identifiable information (PII) before using them for
model training. This reduces the risk of models inadvertently learning or exposing sensitive
information.” (Source: AWS Best Practices for Responsible AI, https://aws.amazon.com/machine-
learning/responsible-ai/). Additionally, the Amazon Bedrock documentation notes that users are
responsible for ensuring compliance with data privacy regulations during fine-tuning
(https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization.html).
Removing PII before fine-tuning is the most direct and effective way to prevent the model from
revealing private customer data, making B the correct answer.
Reference: AWS Bedrock Documentation: Model Customization
(https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization.html)
AWS Responsible AI Best Practices (https://aws.amazon.com/machine-learning/responsible-ai/)
AWS AI Practitioner Study Guide (emphasis on data privacy in LLM fine-tuning)
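As an illustrative sketch only, the snippet below masks a few common PII patterns with regular expressions before a dataset would be used for fine-tuning. The patterns and the sample record are invented for illustration; a production pipeline would typically use a dedicated PII-detection service (for example, Amazon Comprehend) rather than hand-written regexes.

```python
# Toy PII-scrubbing pass: replace recognizable patterns with placeholder tokens.
import re

PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # US SSN-style numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{12,16}\b"), "[ACCOUNT]"),            # long digit runs
]

def scrub(text):
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

record = "Customer jane.doe@example.com (SSN 123-45-6789) asked about loan 123456789012."
print(scrub(record))
```

Running the scrubber over every training record before fine-tuning is the preprocessing step option B describes: the model never sees the raw identifiers, so it cannot memorize or reveal them.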

3.Which AWS service makes foundation models (FMs) available to help users build and scale
generative AI applications?
A. Amazon Q Developer
B. Amazon Bedrock
C. Amazon Kendra
D. Amazon Comprehend
Answer: B
Explanation:
Amazon Bedrock is a fully managed service that provides access to foundation models (FMs) from
various providers, enabling users to build and scale generative AI applications. It simplifies the
process of integrating FMs into applications for tasks like text generation, chatbots, and more.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI
providers available through a single API, enabling developers to build and scale generative AI
applications with ease."
(Source: AWS Bedrock User Guide, Introduction to Amazon Bedrock)
Detailed Explanation:
Option A: Amazon Q Developer. Amazon Q Developer is an AI-powered assistant for coding and AWS service guidance, not a service for hosting or providing foundation models.
Option B: Amazon Bedrock. This is the correct answer. Amazon Bedrock provides access to foundation models, making it the primary service for building and scaling generative AI applications.
Option C: Amazon Kendra. Amazon Kendra is an intelligent search service powered by machine learning, not a service for providing foundation models or building generative AI applications.
Option D: Amazon Comprehend. Amazon Comprehend is an NLP service for text analysis tasks like sentiment analysis, not for providing foundation models or supporting generative AI.
Reference:
AWS Bedrock User Guide: Introduction to Amazon Bedrock (https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html)
AWS AI Practitioner Learning Path: Module on Generative AI Services
AWS Documentation: Generative AI on AWS (https://aws.amazon.com/generative-ai/)
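A hedged sketch of what invoking a foundation model through Amazon Bedrock's runtime API looks like. The request body format is model-specific; the structure below follows the Anthropic Claude "messages" schema as one example, and the model ID is a placeholder. The actual boto3 call is shown commented out so the snippet stays self-contained.

```python
# Build a model-specific request body for Amazon Bedrock's InvokeModel API.
import json

def build_request(prompt, max_tokens=256):
    # Anthropic messages-style body; other providers use different schemas.
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_request("Summarize our return policy in one sentence.")

# The call itself (requires AWS credentials and model access):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
#     body=body,
# )
# print(json.loads(response["body"].read())["content"][0]["text"])
```

The single-API design is the point of Bedrock: the application code stays the same while the `modelId` and body schema select among foundation models from different providers.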

4.A company has a foundation model (FM) that was customized by using Amazon Bedrock to answer
customer queries about products. The company wants to validate the model's responses to new types
of queries. The company needs to upload a new dataset that Amazon Bedrock can use for validation.
Which AWS service meets these requirements?
A. Amazon S3
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon Elastic File System (Amazon EFS)
D. AWS Snowcone
Answer: A
Explanation:
Amazon S3 is the optimal choice for storing and uploading datasets used for machine learning model
validation and training. It offers scalable, durable, and secure storage, making it ideal for holding
datasets required by Amazon Bedrock for validation purposes.
Option A (Correct): "Amazon S3": This is the correct answer because Amazon S3 is widely used for
storing large datasets that are accessed by machine learning models, including those in Amazon
Bedrock.
Option B: "Amazon Elastic Block Store (Amazon EBS)" is incorrect because EBS is a block storage
service for use with Amazon EC2, not for directly storing datasets for Amazon Bedrock.
Option C: "Amazon Elastic File System (Amazon EFS)" is incorrect as it is primarily used for file
storage with shared access by multiple instances.
Option D: "AWS Snowcone" is incorrect because it is a physical device for offline data transfer, not
suitable for directly providing data to Amazon Bedrock.
AWS AI Practitioner Reference: Storing and Managing Datasets on AWS for Machine Learning: AWS recommends using
S3 for storing and managing datasets required for ML model training and validation.

5.A company has developed an ML model for image classification. The company wants to deploy the
model to production so that a web application can use the model.
The company needs to implement a solution to host the model and serve predictions without
managing any of the underlying infrastructure.
Which solution will meet these requirements?
A. Use Amazon SageMaker Serverless Inference to deploy the model.
B. Use Amazon CloudFront to deploy the model.
C. Use Amazon API Gateway to host the model and serve predictions.
D. Use AWS Batch to host the model and serve predictions.
Answer: A
Explanation:
Amazon SageMaker Serverless Inference is the correct solution for deploying an ML model to
production in a way that allows a web application to use the model without the need to manage the
underlying infrastructure.
Amazon SageMaker Serverless Inference provides a fully managed environment for deploying
machine learning models. It automatically provisions, scales, and manages the infrastructure required
to host the model, removing the need for the company to manage servers or other underlying
infrastructure.
Why Option A is Correct:
No Infrastructure Management: SageMaker Serverless Inference handles the infrastructure
management for deploying and serving ML models. The company can simply provide the model and
specify the required compute capacity, and SageMaker will handle the rest.
Cost-Effectiveness: The serverless inference option is ideal for applications with intermittent or
unpredictable traffic, as the company only pays for the compute time consumed while handling
requests.
Integration with Web Applications: This solution allows the model to be easily accessed by web
applications via RESTful APIs, making it an ideal choice for hosting the model and serving
predictions.
Why Other Options are Incorrect:
B. Use Amazon CloudFront to deploy the model: CloudFront is a content delivery network (CDN)
service for distributing content, not for deploying ML models or serving predictions.
C. Use Amazon API Gateway to host the model and serve predictions: API Gateway is used for
creating, deploying, and managing APIs, but it does not provide the infrastructure or the required
environment to host and run ML models.
D. Use AWS Batch to host the model and serve predictions: AWS Batch is designed for running batch
computing workloads and is not optimized for real-time inference or hosting machine learning models.
Thus, A is the correct answer, as it aligns with the requirement of deploying an ML model without
managing any underlying infrastructure.

6.A company wants to build a lead prioritization application for its employees to contact potential
customers. The application must give employees the ability to view and adjust the weights assigned
to different variables in the model based on domain knowledge and expertise.
Which ML model type meets these requirements?
A. Logistic regression model
B. Deep learning model built on principal components
C. K-nearest neighbors (k-NN) model
D. Neural network
Answer: A
Explanation:
The company needs an ML model for a lead prioritization application where employees can view and
adjust the weights assigned to different variables based on domain knowledge. Logistic regression is
a linear model that assigns interpretable weights to input features, making it easy for users to
understand and modify these weights. This interpretability and adjustability make it suitable for the
requirements.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"Logistic regression is a supervised learning algorithm used for classification tasks. It is highly
interpretable, as it assigns weights to each feature, allowing users to understand and adjust the
importance of different variables based on domain expertise."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Algorithms)
Detailed Explanation:
Option A: Logistic regression model. This is the correct answer. Logistic regression provides interpretable coefficients (weights) for each feature, enabling employees to view and adjust them based on domain knowledge, meeting the application’s requirements.
Option B: Deep learning model built on principal components. Deep learning models, even when using principal components, are complex and lack interpretability. The weights in such models are not easily adjustable by users, making this option unsuitable.
Option C: K-nearest neighbors (k-NN) model. k-NN is a non-parametric model that does not assign explicit weights to features. It relies on distance metrics, which are not easily adjustable based on domain knowledge, so it does not meet the requirements.
Option D: Neural network. Neural networks are highly complex and lack interpretability, as their weights are not directly tied to input features in a human-understandable way. Adjusting weights based on domain knowledge is impractical, making this option incorrect.
Reference: AWS AI Practitioner Learning Path: Module on Machine Learning Algorithms
Amazon SageMaker Developer Guide: Logistic Regression
(https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html)
AWS Documentation: Interpretable Machine Learning Models (https://aws.amazon.com/machine-
learning/)
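The interpretability argument can be shown directly: a logistic regression score is just a weighted sum passed through a sigmoid, so a domain expert can read each feature's weight and change it by hand. The sketch below uses hypothetical feature names and weights for a lead-scoring model.

```python
# Logistic regression scoring with human-adjustable per-feature weights.
import math

def lead_score(features, weights, bias=0.0):
    # Weighted sum of features, squashed to a probability via the sigmoid.
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {"past_purchases": 0.8, "email_opens": 0.3, "days_inactive": -0.5}
lead = {"past_purchases": 2.0, "email_opens": 3.0, "days_inactive": 1.0}

base = lead_score(lead, weights)

# A domain expert decides inactivity should count against a lead more
# strongly; one weight is edited and the lead is re-scored.
weights["days_inactive"] = -1.5
adjusted = lead_score(lead, weights)
```

This direct weight-to-feature mapping is exactly what a k-NN model or neural network cannot offer, which is why the question's other options fail the "view and adjust the weights" requirement.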

7.An ML research team develops custom ML models. The model artifacts are shared with other
teams for integration into products and services. The ML team retains the model training code and
data. The ML team wants to build a mechanism that it can use to audit models.
Which solution should the ML team use when publishing the custom ML models?
A. Create documents with the relevant information. Store the documents in Amazon S3.
B. Use AWS AI Service Cards for transparency and understanding models.
C. Create Amazon SageMaker Model Cards with intended uses and training and inference details.
D. Create model training scripts. Commit the model training scripts to a Git repository.
Answer: C
Explanation:
The ML research team needs a mechanism to audit custom ML models while sharing model artifacts
with other teams. Amazon SageMaker Model Cards provide a structured way to document model
details, including intended uses, training data, and inference performance, making them ideal for
auditing and ensuring transparency when publishing models.
Exact Extract from AWS AI Documents:
From the Amazon SageMaker Developer Guide:
"Amazon SageMaker Model Cards enable you to document critical details about your machine
learning models, such as intended uses, training data, evaluation metrics, and inference details.
Model Cards support auditing by providing a centralized record that can be reviewed by teams to
understand model behavior and limitations."
(Source: Amazon SageMaker Developer Guide, SageMaker Model Cards)
Detailed Explanation:
Option A: Create documents with the relevant information. Store the documents in Amazon S3. While storing documents in S3 is feasible, it lacks the structured format and integration with SageMaker that Model Cards provide, making it less suitable for auditing purposes.
Option B: Use AWS AI Service Cards for transparency and understanding models. AWS AI Service Cards document AWS’s own AI services for transparency; they are not a mechanism for documenting a team’s custom models, so this option does not meet the requirement.
Option C: Create Amazon SageMaker Model Cards with intended uses and training and inference details. This is the correct answer. SageMaker Model Cards are specifically designed to document model details for auditing, transparency, and collaboration, meeting the team’s requirements.
Option D: Create model training scripts. Commit the model training scripts to a Git repository. Sharing training scripts in a Git repository provides access to code but does not offer a structured auditing mechanism for model details like intended uses or inference performance.
Reference: Amazon SageMaker Developer Guide: SageMaker Model Cards
(https://docs.aws.amazon.com/sagemaker/latest/dg/model-cards.html)
AWS AI Practitioner Learning Path: Module on Model Governance and Auditing
AWS Documentation: Responsible AI with SageMaker (https://aws.amazon.com/sagemaker/)

8.A company wants to deploy a conversational chatbot to answer customer questions. The chatbot is
based on a fine-tuned Amazon SageMaker JumpStart model. The application must comply with
multiple regulatory frameworks.
Which capabilities can the company show compliance for? (Select TWO.)
A. Auto scaling inference endpoints
B. Threat detection
C. Data protection
D. Cost optimization
E. Loosely coupled microservices
Answer: B,C
Explanation:
To comply with multiple regulatory frameworks, the company must ensure data protection and threat
detection. Data protection involves safeguarding sensitive customer information, while threat
detection identifies and mitigates security threats to the application.
Option C (Correct): "Data protection": This is correct because data protection is critical for compliance
with privacy and security regulations.
Option B (Correct): "Threat detection": This is correct because detecting and mitigating threats is
essential to maintaining the security posture required for regulatory compliance.
Option A: "Auto scaling inference endpoints" is incorrect because auto-scaling does not directly relate
to regulatory compliance.
Option D: "Cost optimization" is incorrect because it is focused on managing expenses, not
compliance.
Option E: "Loosely coupled microservices" is incorrect because this architectural approach does not
directly address compliance requirements.
AWS AI Practitioner Reference: AWS Compliance Capabilities: AWS offers services and tools, such as data protection
and threat detection, to help companies meet regulatory requirements for security and privacy.

9.A company has a database of petabytes of unstructured data from internal sources. The company
wants to transform this data into a structured format so that its data scientists can perform machine
learning (ML) tasks.
Which service will meet these requirements?
A. Amazon Lex
B. Amazon Rekognition
C. Amazon Kinesis Data Streams
D. AWS Glue
Answer: D
Explanation:
AWS Glue is the correct service for transforming petabytes of unstructured data into a structured
format suitable for machine learning tasks.
AWS Glue:
A fully managed extract, transform, and load (ETL) service that makes it easy to prepare and
transform unstructured data into a structured format.
Provides a range of tools for cleaning, enriching, and cataloging data, making it ready for data
scientists to use in ML models.
Why Option D is Correct:
Data Transformation: AWS Glue can handle large volumes of data and transform unstructured data
into structured formats efficiently.
Integrated ML Support: Glue integrates with other AWS services to support ML workflows.
Why Other Options are Incorrect:
A. Amazon Lex: Used for building chatbots, not for data transformation.
B. Amazon Rekognition: Used for image and video analysis, not for data transformation.
C. Amazon Kinesis Data Streams: Handles real-time data streaming, not suitable for batch
transformation of large volumes of unstructured data.
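As a toy illustration of the kind of transformation an AWS Glue ETL job performs, the snippet below parses semi-structured text records into structured rows. A real Glue job would do this at petabyte scale with PySpark DynamicFrames; the record format here is invented for the example.

```python
# Mini-ETL: extract fields from semi-structured lines, transform them into
# typed rows, and drop records that do not parse (cleaning step).
import re

LINE = re.compile(r"(?P<date>\d{4}-\d{2}-\d{2}) user=(?P<user>\w+) amount=(?P<amount>[\d.]+)")

def transform(raw_lines):
    rows = []
    for line in raw_lines:
        m = LINE.search(line)
        if m:  # keep only records that match the expected shape
            rows.append({"date": m["date"], "user": m["user"], "amount": float(m["amount"])})
    return rows

raw = [
    "2024-05-01 user=alice amount=19.99 note=first order",
    "corrupted entry",
    "2024-05-02 user=bob amount=5.00",
]
structured = transform(raw)
```

The output rows are the structured format data scientists need for ML tasks; in Glue the equivalent result would typically be written to Amazon S3 and cataloged in the Glue Data Catalog.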

10.In which stage of the generative AI model lifecycle are tests performed to examine the model's
accuracy?
A. Deployment
B. Data selection
C. Fine-tuning
D. Evaluation
Answer: D
Explanation:
The evaluation stage of the generative AI model lifecycle involves testing the model to assess its
performance, including accuracy, coherence, and other metrics. This stage ensures the model meets
the desired quality standards before deployment.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"The evaluation phase in the machine learning lifecycle involves testing the model against validation
or test datasets to measure its performance metrics, such as accuracy, precision, recall, or task-
specific metrics for generative AI models."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Lifecycle)
Detailed Explanation:
Option A: Deployment. Deployment involves making the model available for use in production. While monitoring occurs post-deployment, accuracy testing is performed earlier, in the evaluation stage.
Option B: Data selection. Data selection involves choosing and preparing data for training, not testing the model’s accuracy.
Option C: Fine-tuning. Fine-tuning adjusts a pre-trained model to improve performance for a specific task, but it is not the stage where accuracy is formally tested.
Option D: Evaluation. This is the correct answer. The evaluation stage is where tests are conducted to examine the model’s accuracy and other performance metrics, ensuring it meets requirements.
Reference:
AWS AI Practitioner Learning Path: Module on Machine Learning Lifecycle
Amazon SageMaker Developer Guide: Model Evaluation (https://docs.aws.amazon.com/sagemaker/latest/dg/model-evaluation.html)
AWS Documentation: Generative AI Lifecycle (https://aws.amazon.com/machine-learning/)
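A small sketch of what the evaluation stage does in practice: score a held-out test set, compute a metric such as accuracy, and gate deployment on the metric meeting a threshold. The "model" here is a trivial stand-in rule and the data is invented, purely for illustration.

```python
# Evaluation stage in miniature: predict on a held-out set, measure accuracy,
# and decide whether the model is good enough to deploy.

def model(x):
    return 1 if x >= 0.5 else 0   # stand-in classifier

test_inputs = [0.9, 0.1, 0.7, 0.4, 0.8, 0.2]   # held-out examples
test_labels = [1,   0,   1,   1,   1,   0]      # ground truth

predictions = [model(x) for x in test_inputs]
accuracy = sum(p == y for p, y in zip(predictions, test_labels)) / len(test_labels)

# Only promote the model to the deployment stage if it clears the bar.
ready_to_deploy = accuracy >= 0.8
```

The same pattern scales up directly: in a real workflow the predictions come from the trained generative or discriminative model, and the metric may be precision, recall, or a task-specific score rather than plain accuracy.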

11.HOTSPOT
A company wants to build an ML application.
Select and order the correct steps from the following list to develop a well-architected ML workload.
Each step should be selected one time. (Select and order FOUR.)
• Deploy model
• Develop model
• Monitor model
• Define business goal and frame ML problem
Answer:
1. Define business goal and frame ML problem
2. Develop model
3. Deploy model
4. Monitor model

Explanation:
Building a well-architected ML workload follows a structured lifecycle as outlined in AWS best
practices. The process begins with defining the business goal and framing the ML problem to ensure
the project aligns with organizational objectives. Next, the model is developed, which includes data
preparation, training, and evaluation. Once the model is ready, it is deployed to make predictions in a
production environment. Finally, the model is monitored to ensure it performs as expected and to
address any issues like drift or degradation over time. This order ensures a systematic approach to
ML development.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"The machine learning lifecycle typically follows these stages: 1) Define the business goal and frame
the ML problem, 2) Develop the model (including data preparation, training, and evaluation), 3)
Deploy the model to production, and 4) Monitor the model for performance and drift to ensure it
continues to meet business needs."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Lifecycle)
Detailed Explanation:
Step 1: Define business goal and frame ML problem. This is the first step in any ML project. It involves understanding the business objective (e.g., reducing churn) and framing the ML problem (e.g., classification or regression). Without this step, the project lacks direction. The hotspot lists this option as "Define business goal and frame ML problem," which matches this stage.
Step 2: Develop model. After defining the problem, the next step is to develop the model. This includes collecting and preparing data, selecting an algorithm, training the model, and evaluating its performance. The hotspot lists "Develop model" as an option, aligning with this stage.
Step 3: Deploy model. Once the model is developed and meets performance requirements, it is deployed to a production environment to make predictions or automate decisions. The hotspot includes "Deploy model" as an option, which fits this stage.
Step 4: Monitor model. After deployment, the model must be monitored to ensure it performs well over time, addressing issues like data drift or performance degradation. The hotspot lists "Monitor model" as an option, completing the lifecycle.
Hotspot Selection Analysis:
The hotspot provides four steps, each with the same dropdown options: "Select...," "Deploy model,"
"Develop model," "Monitor model," and "Define business goal and frame ML problem."
The correct selections are:
Step 1: Define business goal and frame ML problem
Step 2: Develop model
Step 3: Deploy model
Step 4: Monitor model
Each option is used exactly once, as required, and follows the logical order of the ML lifecycle.
Reference:
AWS AI Practitioner Learning Path: Module on Machine Learning Lifecycle
Amazon SageMaker Developer Guide: Machine Learning Workflow (https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-mlconcepts.html)
AWS Well-Architected Framework: Machine Learning Lens (https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/)

12.A company is using a large language model (LLM) on Amazon Bedrock to build a chatbot. The
chatbot processes customer support requests. To resolve a request, the customer and the chatbot
must interact a few times.
Which solution gives the LLM the ability to use content from previous customer messages?
A. Turn on model invocation logging to collect messages.
B. Add messages to the model prompt.
C. Use Amazon Personalize to save conversation history.
D. Use Provisioned Throughput for the LLM.
Answer: B
Explanation:
The company is building a chatbot using an LLM on Amazon Bedrock, and the chatbot needs to use
content from previous customer messages to resolve requests. Adding previous messages to the
model prompt (also known as providing conversation history) enables the LLM to maintain context
across interactions, allowing it to respond coherently based on the ongoing conversation.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"To enable a large language model (LLM) to maintain context in a conversation, you can include
previous messages in the model prompt. This approach, often referred to as providing conversation
history, allows the LLM to generate responses that are contextually relevant to prior interactions."
(Source: AWS Bedrock User Guide, Building Conversational Applications)
Detailed Explanation:
Option A: Turn on model invocation logging to collect messages. Model invocation logging records interactions for auditing or debugging but does not provide the LLM with access to previous messages during inference to maintain conversation context.
Option B: Add messages to the model prompt. This is the correct answer. Including previous messages in the prompt gives the LLM the conversation history it needs to respond appropriately, a common practice for chatbots on Amazon Bedrock.
Option C: Use Amazon Personalize to save conversation history. Amazon Personalize is for building recommendation systems, not for managing conversation history in a chatbot. This option is irrelevant.
Option D: Use Provisioned Throughput for the LLM. Provisioned Throughput in Amazon Bedrock ensures consistent performance for model inference but does not address the need to use previous messages in the conversation.
Reference:
AWS Bedrock User Guide: Building Conversational Applications (https://docs.aws.amazon.com/bedrock/latest/userguide/conversational-apps.html)
AWS AI Practitioner Learning Path: Module on Generative AI and Chatbots
Amazon Bedrock Developer Guide: Managing Conversation Context (https://aws.amazon.com/bedrock/)
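A minimal sketch of option B: keep the running conversation and fold it into each new prompt so the LLM sees the prior turns. The message format and the simple turn-count truncation policy are simplified assumptions; real applications often trim by token count instead.

```python
# Fold conversation history into the prompt so the model keeps context.

def build_prompt(history, new_message, max_turns=10):
    turns = history[-max_turns:]  # keep the prompt within context limits
    lines = [f"{role}: {text}" for role, text in turns]
    lines.append(f"Customer: {new_message}")
    lines.append("Assistant:")   # cue the model to produce the next reply
    return "\n".join(lines)

history = [
    ("Customer", "I want to check my order #1234."),
    ("Assistant", "Order #1234 shipped yesterday."),
]
prompt = build_prompt(history, "When will it arrive?")
```

Because the order number from the first turn is carried into the new prompt, the model can resolve "it" in the follow-up question; without the history, each request would arrive context-free.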

13.A company wants to create a new solution by using AWS Glue. The company has minimal
programming experience with AWS Glue.
Which AWS service can help the company use AWS Glue?
A. Amazon Q Developer
B. AWS Config
C. Amazon Personalize
D. Amazon Comprehend
Answer: A
Explanation:
AWS Glue is a serverless data integration service that enables users to extract, transform, and load
(ETL) data. For a company with minimal programming experience, Amazon Q Developer provides an
AI-powered assistant that can generate code, explain AWS services, and guide users through tasks
like creating AWS Glue jobs. This makes it an ideal tool to help the company use AWS Glue
effectively.
Exact Extract from AWS AI Documents:
From the AWS Documentation on Amazon Q Developer:
"Amazon Q Developer is an AI-powered assistant that helps developers by generating code,
answering questions about AWS services, and providing step-by-step guidance for tasks such as
building ETL pipelines with AWS Glue. It is designed to assist users with varying levels of expertise,
including those with minimal programming experience."
(Source: AWS Documentation, Amazon Q Developer Overview)
Detailed Explanation:
Option A: Amazon Q Developer
This is the correct answer. Amazon Q Developer can assist the company by generating AWS Glue
scripts, explaining Glue concepts, and providing guidance on setting up ETL jobs, which is particularly
helpful for users with limited programming experience.
Option B: AWS Config
AWS Config is used for tracking and managing resource configurations and compliance, not for
assisting with coding or using services like AWS Glue.
This option is incorrect.
Option C: Amazon Personalize
Amazon Personalize is a machine learning service for building recommendation systems, not for
assisting with data integration or AWS Glue. This option is irrelevant.
Option D: Amazon Comprehend
Amazon Comprehend is an NLP service for analyzing text, not for helping users write code or use
AWS Glue. This option does not meet the requirements.
Reference: AWS Documentation: Amazon Q Developer Overview
(https://aws.amazon.com/q/developer/)
AWS Glue Developer Guide: Introduction to AWS Glue
(https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html)
AWS AI Practitioner Learning Path: Module on AWS Developer Tools and Services

14.A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat
interface for the company's product manuals. The manuals are stored as PDF files.
Which solution meets these requirements MOST cost-effectively?
A. Use prompt engineering to add one PDF file as context to the user prompt when the prompt is
submitted to Amazon Bedrock.
B. Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is
submitted to Amazon Bedrock.
C. Use all the PDF documents to fine-tune a model with Amazon Bedrock. Use the fine-tuned model
to process user prompts.
D. Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to
provide context when users submit prompts to Amazon Bedrock.
Answer: D
Explanation:
Amazon Bedrock Knowledge Bases provide a managed retrieval-augmented generation (RAG) capability: documents are ingested, chunked, and converted to embeddings, and only the passages relevant to each user prompt are retrieved and supplied as context to the model.
Option D (Correct): "Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock": This is the most cost-effective solution. A knowledge base retrieves only the relevant excerpts from the manuals for each query, keeping the context size (and therefore the token cost) small, and it avoids the expense of fine-tuning.
Option A: "Use prompt engineering to add one PDF file as context to the user prompt" is incorrect. A single PDF cannot answer questions that span multiple manuals, and manually choosing the right file for every prompt does not scale.
Option B: "Use prompt engineering to add all the PDF files as context to the user prompt" is incorrect. Sending every manual with every prompt inflates the context size, which increases costs significantly and can exceed the model's context window.
Option C: "Use all the PDF documents to fine-tune a model with Amazon Bedrock" is incorrect. Fine-tuning is more expensive than retrieval-based approaches and must be repeated whenever the manuals change.
AWS AI Practitioner
Reference: Amazon Bedrock User Guide: Knowledge Bases for Amazon Bedrock. AWS recommends knowledge bases (RAG) as the cost-effective way to ground model responses in an organization's documents without fine-tuning.
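The knowledge-base approach in option D corresponds to Amazon Bedrock's retrieve-and-generate API. As an illustration only, the sketch below builds the request payload such a call takes; the field names follow the boto3 bedrock-agent-runtime `retrieve_and_generate` operation, and the knowledge base ID and model ARN are placeholder values, not real resources:

```python
# Sketch of a RetrieveAndGenerate request for an Amazon Bedrock knowledge base.
# In practice the payload would be sent with:
#   boto3.client("bedrock-agent-runtime").retrieve_and_generate(**request)
# The IDs below are placeholders for illustration.

def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Build a payload asking Bedrock to retrieve relevant manual passages
    from the knowledge base and generate a grounded answer."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

request = build_rag_request(
    "How do I reset the device to factory settings?",
    kb_id="KB123EXAMPLE",
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
)
print(request["retrieveAndGenerateConfiguration"]["type"])  # KNOWLEDGE_BASE
```

Because retrieval selects only the passages relevant to each question, the context sent to the model stays small regardless of how many manuals are in the knowledge base.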

15.A company is developing a new model to predict the prices of specific items. The model performed
well on the training dataset. When the company deployed the model to production, the model's
performance decreased significantly.
What should the company do to mitigate this problem?
A. Reduce the volume of data that is used in training.
B. Add hyperparameters to the model.
C. Increase the volume of data that is used in training.
D. Increase the model training time.
Answer: C
Explanation:
When a model performs well on the training data but poorly in production, it is often due to overfitting.
Overfitting occurs when a model learns patterns and noise specific to the training data, which does
not generalize well to new, unseen data in production. Increasing the volume of data used in training
can help mitigate this problem by providing a more diverse and representative dataset, which helps
the model generalize better.
Option C (Correct): "Increase the volume of data that is used in training": Increasing the data volume
can help the model learn more generalized patterns rather than specific features of the training
dataset, reducing overfitting and improving performance in production.
Option A: "Reduce the volume of data that is used in training" is incorrect, as reducing data volume
would likely worsen the overfitting problem.
Option B: "Add hyperparameters to the model" is incorrect because adding hyperparameters alone
does not address the issue of data diversity or model generalization.
Option D: "Increase the model training time" is incorrect because simply increasing training time does not prevent overfitting; the model needs more diverse data.
AWS AI Practitioner
Reference: Best Practices for Model Training on AWS: AWS recommends using a larger and more
diverse training dataset to improve a model's generalization capability and reduce the risk of
overfitting.
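The gap between training and production performance can be illustrated with a toy example (pure Python, not an AWS API): a "model" that memorizes its training set scores perfectly on training data but fails on unseen data, which is exactly the symptom described in the question. The item names and prices below are made up:

```python
# Toy overfitting illustration: memorizing the training set gives zero
# training error but large error on unseen ("production") data, while even
# a crude average-based model generalizes better. More diverse training
# data narrows this gap.

train = {"item_a": 10.0, "item_b": 12.0, "item_c": 11.0}
test = {"item_d": 11.5, "item_e": 10.5}

def memorizer(item):
    # Overfit: returns the exact training price; has no answer for new items.
    return train.get(item, 0.0)

mean_price = sum(train.values()) / len(train)

def mean_model(item):
    # Simple baseline: predicts the average training price for any item.
    return mean_price

def mae(model, data):
    # Mean absolute error of a model over a labeled dataset.
    return sum(abs(model(k) - v) for k, v in data.items()) / len(data)

print(mae(memorizer, train))  # 0.0  -> perfect on training data
print(mae(memorizer, test))   # 11.0 -> collapses in production
print(mae(mean_model, test))  # 0.5  -> generalizes better
```

Increasing the volume and diversity of training data gives a real model more representative patterns to learn, reducing this memorization effect.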

16.Which technique should be used to fine-tune a pre-trained large language model (LLM) to improve
its performance on a specific task?
A. Zero-shot learning
B. Few-shot learning
C. Transfer learning
D. Unsupervised learning
Answer: C

17.A company is developing an ML model to predict customer churn.


Which evaluation metric will assess the model's performance on a binary classification task such as
predicting churn?
A. F1 score
B. Mean squared error (MSE)
C. R-squared
D. Time used to train the model
Answer: A
Explanation:
The company is developing an ML model to predict customer churn, a binary classification task
(churn or no churn). The F1 score is an evaluation metric that balances precision and recall, making it
suitable for assessing the performance of binary classification models, especially when dealing with
imbalanced datasets, which is common in churn prediction.
Exact Extract from AWS AI Documents:
From the Amazon SageMaker Developer Guide:
"The F1 score is a metric for evaluating binary classification models, combining precision and recall
into a single value. It is particularly useful for tasks like churn prediction, where class imbalance may
exist, ensuring the model performs well on both positive and negative classes." (Source: Amazon
SageMaker Developer Guide, Model Evaluation Metrics)
Detailed Explanation:
Option A: F1 score
This is the correct answer. The F1 score is ideal for binary classification tasks like churn prediction, as it measures the model’s ability to correctly identify both churners and non-churners.
Option B: Mean squared error (MSE)
MSE is used for regression tasks to measure the average squared difference between predicted and actual values, not for binary classification.
Option C: R-squared
R-squared is a metric for regression models, indicating how well the model explains the variability of the target variable. It is not applicable to classification tasks.
Option D: Time used to train the model
Training time is not an evaluation metric for model performance; it measures the duration of training, not the model’s accuracy or effectiveness.
Reference: Amazon SageMaker Developer Guide: Model Evaluation Metrics
(https://docs.aws.amazon.com/sagemaker/latest/dg/model-evaluation.html)
AWS AI Practitioner Learning Path: Module on Model Performance and Evaluation
AWS Documentation: Metrics for Classification (https://aws.amazon.com/machine-learning/)
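The F1 score combines precision and recall into a single number. A minimal sketch of the computation from confusion-matrix counts, using made-up counts for an imbalanced churn dataset:

```python
# F1 score for a binary churn classifier, computed from confusion-matrix
# counts: F1 = 2 * (precision * recall) / (precision + recall).
# The counts below are illustrative example values.

def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)  # of predicted churners, how many really churned
    recall = tp / (tp + fn)     # of actual churners, how many were caught
    return 2 * precision * recall / (precision + recall)

# 80 churners correctly flagged, 20 false alarms, 40 churners missed.
print(round(f1_score(tp=80, fp=20, fn=40), 3))  # 0.727
```

Because the harmonic mean punishes a low precision or a low recall, the F1 score stays honest on imbalanced data, where plain accuracy can look high even for a model that never predicts churn.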

18.A company is building a chatbot to improve user experience. The company is using a large
language model (LLM) from Amazon Bedrock for intent detection. The company wants to use few-shot learning to improve intent detection accuracy.
Which additional data does the company need to meet these requirements?
A. Pairs of chatbot responses and correct user intents
B. Pairs of user messages and correct chatbot responses
C. Pairs of user messages and correct user intents
D. Pairs of user intents and correct chatbot responses
Answer: C
Explanation:
Few-shot learning involves providing a model with a few examples (shots) to learn from. For
improving intent detection accuracy in a chatbot using a large language model (LLM), the data should
consist of pairs of user messages and their corresponding correct intents.
Few-shot Learning for Intent Detection:
Few-shot learning aims to enable the model to learn from a small number of examples. For intent
detection, the model needs to understand the relationship between user messages and the intended
action or meaning.
Providing examples of user messages and the correct user intents allows the model to learn patterns
in the phrasing or language that corresponds to each intent.
Why Option C is Correct:
User Messages and Intents: These examples directly teach the model how to map a user’s input to
the appropriate intent, which is the goal of intent detection in chatbots.
Improves Accuracy: By using few-shot learning with these examples, the model can generalize better
from limited data, improving intent detection.
Why Other Options are Incorrect:
A. Pairs of chatbot responses and correct user intents: Incorrect because it does not focus on user
input but rather on outputs.
B. Pairs of user messages and correct chatbot responses: This would be useful for response
generation, not intent detection.
D. Pairs of user intents and correct chatbot responses: Again, this is not aligned with detecting intents
but with generating responses.

19.An AI company periodically evaluates its systems and processes with the help of independent
software vendors (ISVs). The company needs to receive email message notifications when an ISV's
compliance reports become available.
Which AWS service can the company use to meet this requirement?
A. AWS Audit Manager
B. AWS Artifact
C. AWS Trusted Advisor
D. AWS Data Exchange
Answer: D
Explanation:
AWS Data Exchange is a service that allows companies to securely exchange data with third parties,
such as independent software vendors (ISVs). AWS Data Exchange can be configured to provide
notifications, including email notifications, when new datasets or compliance reports become
available.
Option D (Correct): "AWS Data Exchange": This is the correct answer because it enables the
company to receive notifications, including email messages, when ISVs' compliance reports are
available.
Option A: "AWS Audit Manager" is incorrect because it focuses on assessing an organization's own
compliance, not receiving third-party compliance reports.
Option B: "AWS Artifact" is incorrect as it provides access to AWS’s compliance reports, not ISVs'.
Option C: "AWS Trusted Advisor" is incorrect as it offers optimization and best practices guidance, not compliance report notifications.
AWS AI Practitioner
Reference: AWS Data Exchange Documentation: AWS explains how Data Exchange allows
organizations to subscribe to third-party data and receive notifications when updates are available.

20.Which metric measures the runtime efficiency of operating AI models?


A. Customer satisfaction score (CSAT)
B. Training time for each epoch
C. Average response time
D. Number of training instances
Answer: C
Explanation:
The average response time is the correct metric for measuring the runtime efficiency of operating AI
models.
Average Response Time:
Refers to the time taken by the model to generate an output after receiving an input. It is a key metric
for evaluating the performance and efficiency of AI models in production.
A lower average response time indicates a more efficient model that can handle queries quickly.
Why Option C is Correct:
Measures Runtime Efficiency: Directly indicates how fast the model processes inputs and delivers
outputs, which is critical for real-time applications.
Performance Indicator: Helps identify potential bottlenecks and optimize model performance.
Why Other Options are Incorrect:
A. Customer satisfaction score (CSAT): Measures customer satisfaction, not model runtime
efficiency.
B. Training time for each epoch: Measures training efficiency, not runtime efficiency during model
operation.
D. Number of training instances: Refers to data used during training, not operational efficiency.
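Average response time is simply the mean wall-clock latency per inference request. A minimal sketch of how it could be measured against an endpoint; the `invoke_model` stub below is a placeholder standing in for a real inference call:

```python
# Measuring average response time (mean per-request latency) for a model
# endpoint. `invoke_model` is a stub simulating ~5 ms of inference work.

import time

def invoke_model(payload):
    # Placeholder for a real endpoint call.
    time.sleep(0.005)
    return {"ok": True}

def average_response_time(requests):
    latencies = []
    for payload in requests:
        start = time.perf_counter()
        invoke_model(payload)
        latencies.append(time.perf_counter() - start)
    return sum(latencies) / len(latencies)

avg = average_response_time([{"q": i} for i in range(10)])
print(f"average response time: {avg * 1000:.1f} ms")
```

In production this metric is usually tracked continuously (for example as a latency metric on the serving endpoint) rather than computed ad hoc, but the definition is the same.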

21.A loan company is building a generative AI-based solution to offer new applicants discounts based
on specific business criteria. The company wants to build and use an AI model responsibly to
minimize bias that could negatively affect some customers.
Which actions should the company take to meet these requirements? (Select TWO.)
A. Detect imbalances or disparities in the data.
B. Ensure that the model runs frequently.
C. Evaluate the model's behavior so that the company can provide transparency to stakeholders.
D. Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the
model is 100% accurate.
E. Ensure that the model's inference time is within the accepted limits.
Answer: A,C
Explanation:
To build an AI model responsibly and minimize bias, it is essential to ensure fairness and
transparency throughout the model development and deployment process. This involves detecting
and mitigating data imbalances and thoroughly evaluating the model's behavior to understand its
impact on different groups.
Option A (Correct): "Detect imbalances or disparities in the data": This is correct because identifying
and addressing data imbalances or disparities is a critical step in reducing bias. AWS provides tools
like Amazon SageMaker Clarify to detect bias during data preprocessing and model training.
Option C (Correct): "Evaluate the model's behavior so that the company can provide transparency to
stakeholders": This is correct because evaluating the model's behavior for fairness and accuracy is
key to ensuring that stakeholders understand how the model makes decisions. Transparency is a
crucial aspect of responsible AI.
Option B: "Ensure that the model runs frequently" is incorrect because the frequency of model runs
does not address bias.
Option D: "Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure
that the model is 100% accurate" is incorrect because ROUGE is a metric for evaluating the quality of
text summarization models, not for minimizing bias.
Option E: "Ensure that the model's inference time is within the accepted limits" is incorrect as it
relates to performance, not bias reduction.
AWS AI Practitioner
Reference: Amazon SageMaker Clarify: AWS offers tools such as SageMaker Clarify for detecting
bias in datasets and models, and for understanding model behavior to ensure fairness and
transparency. Responsible AI Practices: AWS promotes responsible AI by advocating for fairness,
transparency, and inclusivity in model development and deployment.
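Detecting imbalances (option A) can start with something as simple as comparing positive-outcome rates across groups. The sketch below is a hand-rolled illustration of that idea, not SageMaker Clarify; the groups, records, and threshold are hypothetical:

```python
# Simple disparity check: compute the positive-label rate per group to flag
# imbalances in the data before training. Records are hypothetical
# (group, received_discount) pairs.

from collections import Counter

records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

def positive_rate_by_group(rows):
    totals, positives = Counter(), Counter()
    for group, label in rows:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rate_by_group(records)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

disparity = max(rates.values()) - min(rates.values())
print(f"disparity: {disparity:.2f}")  # 0.50 -> worth investigating
```

In practice SageMaker Clarify computes richer pre-training and post-training bias metrics, but the underlying question is the same: do outcomes differ across groups more than the business criteria justify?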

22.HOTSPOT
A company is using a generative AI model to develop a digital assistant. The model's responses
occasionally include undesirable and potentially harmful content. Select the correct Amazon Bedrock
filter policy from the following list for each mitigation action.
Each filter policy should be selected one time. (Select FOUR.)
• Content filters
• Contextual grounding check
• Denied topics
• Word filters

Answer:

Explanation:
Block input prompts or model responses that contain harmful content such as hate, insults, violence,
or misconduct: Content filters
Avoid subjects related to illegal investment advice or legal advice: Denied topics
Detect and block specific offensive terms: Word filters
Detect and filter out information in the model’s responses that is not grounded in the provided source
information: Contextual grounding check
The company is using a generative AI model on Amazon Bedrock and needs to mitigate undesirable
and potentially harmful content in the model’s responses. Amazon Bedrock provides several
guardrail mechanisms, including content filters, denied topics, word filters, and contextual grounding
checks, to ensure safe and accurate outputs. Each mitigation action in the hotspot aligns with a
specific Bedrock filter policy, and each policy must be used exactly once.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
*"Amazon Bedrock guardrails provide mechanisms to control model outputs, including:
Content filters: Block harmful content such as hate speech, violence, or misconduct.
Denied topics: Prevent the model from generating responses on specific subjects, such as illegal
activities or advice.
Word filters: Detect and block specific offensive or inappropriate terms.
Contextual grounding check: Ensure responses are grounded in the provided source information,
filtering out ungrounded or hallucinated content."*(Source: AWS Bedrock User Guide, Guardrails for
Responsible AI)
Detailed Explanation:
Block input prompts or model responses that contain harmful content such as hate, insults, violence, or misconduct: Content filters
Content filters in Amazon Bedrock are designed to detect and block harmful content, such as hate speech, insults, violence, or misconduct, ensuring the model’s outputs are safe and appropriate. This matches the first mitigation action.
Avoid subjects related to illegal investment advice or legal advice: Denied topics
Denied topics allow users to specify subjects the model should avoid, such as illegal investment advice or legal advice, which could have regulatory implications. This policy aligns with the second mitigation action.
Detect and block specific offensive terms: Word filters
Word filters enable the detection and blocking of specific offensive or inappropriate terms defined by the user, making them ideal for this mitigation action focused on specific terms.
Detect and filter out information in the model’s responses that is not grounded in the provided source information: Contextual grounding check
The contextual grounding check ensures that the model’s responses are based on the provided source information, filtering out ungrounded or hallucinated content. This matches the fourth mitigation action.
Hotspot Selection Analysis:
The hotspot lists four mitigation actions, each with the same dropdown options: "Select...," "Content
filters," "Contextual grounding check," "Denied topics," and "Word filters." The correct selections are:
First action: Content filters
Second action: Denied topics
Third action: Word filters
Fourth action: Contextual grounding check
Each filter policy is used exactly once, as required, and aligns with Amazon Bedrock’s guardrail
capabilities.
Reference: AWS Bedrock User Guide: Guardrails for Responsible AI
(https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html)
AWS AI Practitioner Learning Path: Module on Responsible AI and Model Safety
Amazon Bedrock Developer Guide: Configuring Guardrails (https://aws.amazon.com/bedrock/)
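Of the four policies, word filters are the easiest to picture. The sketch below is a purely illustrative stand-in (Amazon Bedrock applies its guardrails server-side as part of the managed service); the denied terms are placeholders:

```python
# Minimal word-filter sketch: block any text containing a denied term.
# Illustrative only; Bedrock guardrails implement this server-side.

import re

DENIED_TERMS = {"badword1", "badword2"}  # placeholder terms

def word_filter(text: str) -> bool:
    """Return True if the text should be blocked."""
    tokens = set(re.findall(r"[a-z0-9']+", text.lower()))
    return bool(tokens & DENIED_TERMS)

print(word_filter("This response contains badword1."))  # True
print(word_filter("This response is clean."))           # False
```

Content filters, denied topics, and contextual grounding checks require model-based classification rather than simple token matching, which is why they are configured as separate guardrail policies rather than word lists.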
