CLF-C02_Exam_Guide_Slides
This guide provides material for you to test all the related Amazon exam topics. By using the CLF-
C02 exam dumps questions and practicing your skills, you can increase your
confidence and chances of passing the CLF-C02 exam.
Instant Download
Free Update in 3 Months
Money back guarantee
PDF and Software
24/7 Customer Support
In addition, Dumpsinfo provides unlimited access, so you can get all
Dumpsinfo files at the lowest price.
AWS Certified Cloud Practitioner CLF-C02 exam free dumps questions are
available below for you to study.
2.A company wants its Amazon EC2 instances to share the same geographic area but use redundant
underlying power sources.
Which solution will meet these requirements?
A. Use EC2 instances across multiple Availability Zones in the same AWS Region.
B. Use Amazon CloudFront as the database for the EC2 instances.
C. Use EC2 instances in the same edge location and the same Availability Zone.
D. Use EC2 instances in AWS OpsWorks stacks in different AWS Regions.
Answer: A
Explanation:
Using EC2 instances across multiple Availability Zones in the same AWS Region is a solution that
meets the requirements of sharing the same geographic area but using redundant underlying power
sources. Availability Zones are isolated locations within an AWS Region that have independent
power, cooling, and physical security. They are connected through low-latency, high-throughput, and
highly redundant networking. By launching EC2 instances in different Availability Zones, users can
increase the fault tolerance and availability of their applications. Amazon CloudFront is a content
delivery network (CDN) service that speeds up the delivery of web content and media to end users by
caching it at the edge locations closer to them. It is not a database service and cannot be used to
store operational data for EC2 instances. Edge locations are sites that are part of the Amazon
CloudFront network and are located in many cities around the world. They are not the same as
Availability Zones and do not provide redundancy for EC2 instances. AWS OpsWorks is a
configuration management service that allows users to automate the deployment and management of
applications using Chef or Puppet. It can be used to create stacks that span multiple AWS Regions,
but this would not meet the requirement of sharing the same geographic area.
3.A company has an Amazon S3 bucket containing images of scanned financial invoices. The
company is building an artificial intelligence (AI)-based application on AWS. The company wants the
application to identify and read total balance amounts on the invoices.
Which AWS service will meet these requirements?
A. Amazon Forecast
B. Amazon Textract
C. Amazon Rekognition
D. Amazon Lex
Answer: B
Explanation:
Amazon Textract is a service that automatically extracts text and data from scanned documents.
Amazon Textract goes beyond simple optical character recognition (OCR) to also identify the
contents of fields in forms and information stored in tables. Amazon Textract can analyze images of
scanned financial invoices and extract the total balance amounts, as well as other relevant
information, such as invoice number, date, vendor name, etc.
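As a conceptual sketch, the snippet below extracts a total balance from a Textract AnalyzeExpense-style response. The response shape shown here is a simplified assumption modeled on the real API output, and the sample invoice values are made up.

```python
# Simplified sketch of parsing a Textract AnalyzeExpense-style response.
# The dictionary shape is an assumption based on the documented output;
# a real response contains many more fields (confidence scores, geometry).

def find_total(response):
    """Return the text of the first summary field typed as TOTAL, or None."""
    for doc in response.get("ExpenseDocuments", []):
        for field in doc.get("SummaryFields", []):
            if field.get("Type", {}).get("Text") == "TOTAL":
                return field.get("ValueDetection", {}).get("Text")
    return None

sample = {
    "ExpenseDocuments": [{
        "SummaryFields": [
            {"Type": {"Text": "VENDOR_NAME"},
             "ValueDetection": {"Text": "Acme Corp"}},
            {"Type": {"Text": "TOTAL"},
             "ValueDetection": {"Text": "$1,234.56"}},
        ]
    }]
}

print(find_total(sample))  # $1,234.56
```

In practice the response would come from a call to the AnalyzeExpense API against the images stored in the S3 bucket, rather than a hard-coded dictionary.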
4.A company wants to ensure that two Amazon EC2 instances are in separate data centers with
minimal communication latency between the data centers.
How can the company meet this requirement?
A. Place the EC2 instances in two separate AWS Regions connected with a VPC peering connection.
B. Place the EC2 instances in two separate Availability Zones within the same AWS Region.
C. Place one EC2 instance on premises and the other in an AWS Region. Then connect them by
using an AWS VPN connection.
D. Place both EC2 instances in a placement group for dedicated bandwidth.
Answer: B
Explanation:
The correct answer is B because placing the EC2 instances in two separate Availability Zones within
the same AWS Region is the best way to meet the requirement. Availability Zones are isolated
locations within an AWS Region that have independent power, cooling, and networking. Users can
launch their resources, such as Amazon EC2 instances, in multiple Availability Zones to increase the
fault tolerance and resilience of their applications. Availability Zones within the same AWS Region are
connected with low-latency, high-throughput, and highly redundant networking. The other options are
incorrect because they are not the best ways to meet the requirement. Placing the EC2 instances in
two separate AWS Regions connected with a VPC peering connection is not the best way to meet the
requirement because AWS Regions are geographically dispersed and may have higher
communication latency between them than Availability Zones within the same AWS Region. VPC
peering connection is a networking connection between two VPCs that enables users to route traffic
between them using private IP addresses. Placing one EC2 instance on premises and the other in an
AWS Region, and then connecting them by using an AWS VPN connection is not the best way to
meet the requirement because on-premises and AWS Region are geographically dispersed and may
have higher communication latency between them than Availability Zones within the same AWS
Region. AWS VPN connection is a secure and encrypted connection between a user’s network and
their VPC. Placing both EC2 instances in a placement group for dedicated bandwidth is not the best
way to meet the requirement because a placement group is a logical grouping of instances within a
single Availability Zone that enables users to launch instances with specific performance
characteristics. A placement group does not ensure that the instances are in separate data centers,
and it does not provide low-latency communication between instances in different Availability Zones.
Reference: [Regions, Availability Zones, and Local Zones], [VPC Peering], [AWS VPN], [Placement
Groups]
5.Which AWS feature or resource is a deployable Amazon EC2 instance template that is
prepackaged with software and security requirements?
A. Amazon Elastic Block Store (Amazon EBS) volume
B. AWS CloudFormation template
C. Amazon Elastic Block Store (Amazon EBS) snapshot
D. Amazon Machine Image (AMI)
Answer: D
Explanation:
An Amazon Machine Image (AMI) is a deployable Amazon EC2 instance template that is
prepackaged with software and security requirements. It provides the information required to launch
an instance, which is a virtual server in the cloud. You can use an AMI to launch as many instances
as you need. You can also create your own custom AMIs or use AMIs shared by other AWS users.
6.For which AWS service is the customer responsible for maintaining the underlying operating
system?
A. Amazon DynamoDB
B. Amazon S3
C. Amazon EC2
D. AWS Lambda
Answer: C
Explanation:
Amazon EC2 is a service that provides resizable compute capacity in the cloud. Users can launch
and manage virtual servers, known as instances, that run on the AWS infrastructure. Users are
responsible for maintaining the underlying operating system of the instances, as well as any
applications or software that run on them. Amazon DynamoDB is a service that provides a fully
managed NoSQL database that delivers fast and consistent performance at any scale. Users do not
need to manage the underlying operating system or the database software. Amazon S3 is a service
that provides scalable and durable object storage in the cloud. Users do not need to manage the
underlying operating system or the storage infrastructure. AWS Lambda is a service that allows users
to run code without provisioning or managing servers. Users only need to upload their code and
configure the triggers and parameters. AWS Lambda takes care of the underlying operating system
and the execution environment.
8.A company wants to launch its web application in a second AWS Region. The company needs to
determine which services must be regionally configured for this launch.
Which AWS services can be configured at the Region level? (Select TWO.)
A. Amazon EC2
B. Amazon Route 53
C. Amazon CloudFront
D. AWS WAF
E. Amazon DynamoDB
Answer: A, E
Explanation:
Amazon EC2 and Amazon DynamoDB are AWS services that are configured at the Region level.
EC2 instances are launched into a specific AWS Region (and an Availability Zone within it), and
DynamoDB tables are created in a specific Region, so both must be provisioned again for the
second Region. Amazon Route 53 is a global Domain Name System (DNS) service and Amazon
CloudFront is a global content delivery network (CDN); their configurations are not tied to a single
Region. AWS WAF rules are associated either with a global resource (an Amazon CloudFront
distribution) or with a regional resource such as an Application Load Balancer, so it is not a purely
Region-level service in the way that EC2 and DynamoDB are.
10.Which AWS service or tool can be used to consolidate payments for a company with multiple AWS
accounts?
A. AWS Cost and Usage Report
B. AWS Organizations
C. Cost Explorer
D. AWS Budgets
Answer: B
Explanation:
AWS Organizations is an account management service that enables you to consolidate multiple AWS
accounts into an organization that you create and centrally manage. AWS Organizations includes
consolidated billing and account management capabilities that enable you to better meet the
budgetary, security, and compliance needs of your business.
11.Which AWS service provides protection against DDoS attacks for applications that run in the AWS
Cloud?
A. Amazon VPC
B. AWS Shield
C. AWS Audit Manager
D. AWS Config
Answer: B
Explanation:
AWS Shield is an AWS service that provides protection against distributed denial of service (DDoS)
attacks for applications that run in the AWS Cloud. DDoS attacks are attempts to make an online
service unavailable by overwhelming it with traffic from multiple sources. AWS Shield provides two
tiers of protection: AWS Shield Standard and AWS Shield Advanced. AWS Shield Standard is
automatically enabled for all AWS customers at no additional charge. It provides protection against
common and frequently occurring network and transport layer DDoS attacks. AWS Shield Advanced
is an optional paid service that provides additional protection against larger and more sophisticated
DDoS attacks. AWS Shield Advanced also provides access to a 24/7 DDoS response team, cost
protection, and enhanced detection and mitigation capabilities.
12.Which AWS Support plan assigns an AWS concierge agent to a company's account?
A. AWS Basic Support
B. AWS Developer Support
C. AWS Business Support
D. AWS Enterprise Support
Answer: D
Explanation:
AWS Enterprise Support is the AWS Support plan that assigns an AWS concierge agent to a
company’s account. AWS Enterprise Support is the highest level of support that AWS offers, and it
provides the most comprehensive and personalized assistance. The Concierge Support Team
consists of AWS billing and account experts who act as a primary point of contact for billing and
account inquiries; Enterprise Support also includes a designated technical account manager (TAM)
who helps to optimize the AWS environment, resolve issues, and access AWS experts. For more
information, see [AWS Support Plans] and [AWS Concierge Support].
13.A company needs to categorize and track AWS usage cost based on business categories.
Which AWS service or feature should the company use to meet these requirements?
A. Cost allocation tags
B. AWS Organizations
C. AWS Security Hub
D. AWS Cost and Usage Report
Answer: A
Explanation:
The AWS service or feature that the company should use to categorize and track AWS usage cost
based on business categories is cost allocation tags. Cost allocation tags are key-value pairs that
users can attach to AWS resources to organize and track their AWS costs. Users can use cost
allocation tags to filter and group their AWS costs by categories such as project, department,
environment, or application. Users can also use cost allocation tags to generate detailed billing
reports that show the costs associated with each tag. AWS Organizations, AWS Security Hub, and
AWS Cost and Usage Report are other AWS services or features that can help users with different
aspects of their AWS usage, such as managing multiple accounts, monitoring security issues, or
analyzing billing data, but they do not enable users to categorize and track AWS costs based on
business categories.
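The grouping behavior that cost allocation tags enable can be sketched in a few lines. The line items below are made-up records shaped loosely like entries in a Cost and Usage Report; the tag key `Department` and all amounts are illustrative assumptions.

```python
# Illustration of grouping spend by a cost allocation tag. The records
# are hypothetical billing line items, not real AWS report output.
from collections import defaultdict

line_items = [
    {"service": "AmazonEC2", "cost": 120.0, "tags": {"Department": "Marketing"}},
    {"service": "AmazonS3",  "cost": 30.0,  "tags": {"Department": "Marketing"}},
    {"service": "AmazonRDS", "cost": 200.0, "tags": {"Department": "Finance"}},
]

def cost_by_tag(items, tag_key):
    """Sum costs per value of the given tag key; untagged items are grouped."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(tag_key, "untagged")] += item["cost"]
    return dict(totals)

print(cost_by_tag(line_items, "Department"))
# {'Marketing': 150.0, 'Finance': 200.0}
```

This is exactly the kind of roll-up that activating a cost allocation tag makes available in the billing reports, without writing any code yourself.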
14.Which feature of the AWS Cloud gives users the ability to pay based on current needs rather than
forecasted needs?
A. AWS Budgets
B. Pay-as-you-go pricing
C. Volume discounts
D. Savings Plans
Answer: B
Explanation:
Pay-as-you-go pricing is the feature of the AWS Cloud that gives users the ability to pay based on
current needs rather than forecasted needs. Pay-as-you-go pricing means that users only pay for the
AWS services and resources they use, without any upfront or long-term commitments. This allows
users to scale up or down their usage depending on their changing business requirements, and avoid
paying for idle or unused capacity. Pay-as-you-go pricing also enables users to benefit from the
economies of scale and lower costs of AWS as they grow their business.
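A small worked comparison makes the difference concrete. All rates and usage figures below are illustrative assumptions, not real AWS prices.

```python
# Worked example: pay-as-you-go vs provisioning for a forecasted peak.
# Rates and instance counts are hypothetical.

hourly_rate = 0.10        # $/instance-hour (assumed)
forecast_instances = 10   # capacity sized for the forecasted peak
hours_in_month = 730

# Actual demand fluctuates over three months: 4, 6, then 10 instances.
actual_instance_hours = [4 * hours_in_month,
                         6 * hours_in_month,
                         10 * hours_in_month]

# Pay-as-you-go: pay only for the instance-hours that actually ran.
pay_as_you_go = sum(h * hourly_rate for h in actual_instance_hours)

# Forecast-based: pay for peak capacity all three months, used or not.
forecasted = 3 * forecast_instances * hours_in_month * hourly_rate

print(f"pay-as-you-go: ${pay_as_you_go:.2f}")  # $1460.00
print(f"forecasted:    ${forecasted:.2f}")     # $2190.00
```

The gap between the two figures is the cost of the idle capacity that pay-as-you-go pricing avoids.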
15.A company has an AWS-hosted website located behind an Application Load Balancer. The
company wants to safeguard the website from SQL injection or cross-site scripting.
Which AWS service should the company use?
A. Amazon GuardDuty
B. AWS WAF
C. AWS Trusted Advisor
D. Amazon Inspector
Answer: B
Explanation:
The company should use AWS WAF to safeguard the website from SQL injection or cross-site
scripting. AWS WAF is a web application firewall that helps protect web applications from common
web exploits that could affect availability, compromise security, or consume excessive resources. The
company can use AWS WAF to create custom rules that block malicious requests that match certain
patterns, such as SQL injection or cross-site scripting. AWS WAF can be applied to web applications
that are behind an Application Load Balancer, Amazon CloudFront, or Amazon API Gateway.
Amazon GuardDuty, AWS Trusted Advisor, and Amazon Inspector are not the best services to use
for this purpose. Amazon GuardDuty is a threat detection service that monitors for malicious activity
and unauthorized behavior across the AWS accounts and resources. AWS Trusted Advisor is a
service that provides best practice recommendations for cost optimization, performance, security, and
fault tolerance. Amazon Inspector is a service that assesses the security and compliance of
applications running on Amazon EC2 instances.
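To see the kind of request pattern a WAF rule is designed to block, consider the toy check below. This is a conceptual illustration only: AWS WAF evaluates managed and custom rules for you, and real SQL injection and XSS detection is far more sophisticated than two regular expressions.

```python
# Conceptual illustration of the request patterns WAF rules target.
# Not a substitute for AWS WAF's managed rule groups.
import re

SUSPICIOUS = [
    re.compile(r"(?i)\b(union\s+select|or\s+1\s*=\s*1)\b"),  # SQL injection
    re.compile(r"(?i)<\s*script\b"),                         # cross-site scripting
]

def looks_malicious(query_string: str) -> bool:
    """Flag a query string matching a known-bad pattern."""
    return any(p.search(query_string) for p in SUSPICIOUS)

print(looks_malicious("id=1 OR 1=1"))                 # True
print(looks_malicious("<script>alert(1)</script>"))   # True
print(looks_malicious("id=42&sort=asc"))              # False
```

With AWS WAF, equivalent protection comes from attaching a web ACL with the SQL injection and XSS rule statements to the Application Load Balancer, with no inspection code to write or operate.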
17.A company runs business applications in an on-premises data center and in the AWS Cloud. The
company needs a shared file system that can be available to both environments.
Which AWS service meets these requirements?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon S3
C. Amazon ElastiCache
D. Amazon Elastic File System (Amazon EFS)
Answer: D
Explanation:
Amazon Elastic File System (Amazon EFS) is a service that provides a simple, scalable, fully
managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is
built to scale on demand to petabytes without disrupting applications, growing and shrinking
automatically as you add and remove files, eliminating the need to provision and manage capacity to
accommodate growth. You can use Amazon EFS to create a shared file system that can be available
to both your on-premises data center and your AWS Cloud environment. Amazon Elastic Block Store
(Amazon EBS) is a service that provides persistent block storage volumes for use with Amazon EC2
instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its
Availability Zone to protect you from component failure, offering high availability and durability.
However, Amazon EBS volumes are not shared file systems, and they cannot be available to both
your on-premises data center and your AWS Cloud environment. Amazon S3 is a service that
provides object storage through a web services interface. You can use Amazon S3 to store and
protect any amount of data for a range of use cases, such as data lakes, websites, mobile
applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.
However, Amazon S3 is not a shared file system, and it cannot be available to both your on-premises
data center and your AWS Cloud environment without additional configuration. Amazon ElastiCache
is a service that enables you to seamlessly set up, run, and scale popular open-source compatible in-
memory data stores in the cloud. You can use Amazon ElastiCache to improve the performance of
your applications by allowing you to retrieve information from fast, managed, in-memory data stores,
instead of relying entirely on slower disk-based databases. However, Amazon ElastiCache is not a
shared file system, and it cannot be available to both your on-premises data center and your AWS
Cloud environment.
18.A company runs a database on Amazon Aurora in the us-east-1 Region. The company has a
disaster recovery requirement that the database be available in another Region.
Which solution meets this requirement with minimal disruption to the database operations?
A. Perform an Aurora Multi-AZ deployment.
B. Deploy Aurora cross-Region read replicas.
C. Create Amazon Elastic Block Store (Amazon EBS) volume snapshots for Aurora and copy them to
another Region.
D. Deploy Aurora Replicas.
Answer: B
Explanation:
The solution that meets the requirement of the company that runs a database on Amazon Aurora in
the us-east-1 Region and has a disaster recovery requirement that the database be available in
another Region with minimal disruption to the database operations is to deploy Aurora cross-Region
read replicas. Aurora cross-Region read replicas are secondary Aurora clusters that are created in a
different AWS Region from the primary Aurora cluster, and are kept in sync with the primary cluster
using physical replication. The company can use Aurora cross-Region read replicas to improve the
availability and durability of the database, as well as to reduce the recovery time objective (RTO) and
recovery point objective (RPO) in case of a regional disaster. Performing an Aurora Multi-AZ
deployment, creating Amazon EBS volume snapshots for Aurora and copying them to another
Region, and deploying Aurora Replicas are not the best solutions for this requirement. An Aurora
Multi-AZ deployment is a configuration that creates one or more Aurora Replicas within the same
AWS Region as the primary Aurora cluster, and provides automatic failover in case of an Availability
Zone outage. However, this does not provide cross-Region disaster recovery. Creating Amazon EBS
volume snapshots for Aurora and copying them to another Region is a manual process that requires
stopping the database, creating the snapshots, copying them to the target Region, and restoring them
to a new Aurora cluster. This process can cause significant downtime and data loss. Deploying
Aurora Replicas is a configuration that creates one or more secondary Aurora clusters within the
same AWS Region as the primary Aurora cluster, and provides read scaling and high availability.
However, this does not provide cross-Region disaster recovery.
19.A company wants an AWS service to provide product recommendations based on its customer
data.
Which AWS service will meet this requirement?
A. Amazon Polly
B. Amazon Personalize
C. Amazon Comprehend
D. Amazon Rekognition
Answer: B
Explanation:
Amazon Personalize is an AWS service that helps developers quickly build and deploy a custom
recommendation engine with real-time personalization and user segmentation1. It uses machine
learning (ML) to analyze customer data and provide relevant recommendations based on their
preferences, behavior, and context. Amazon Personalize can be used for various use cases such as
optimizing recommendations, targeting customers more accurately, maximizing the value of
unstructured text, and promoting items using business rules1.
The other options are not suitable for providing product recommendations based on customer data.
Amazon Polly is a service that converts text into lifelike speech. Amazon Comprehend is a service
that uses natural language processing (NLP) to extract insights from text and documents. Amazon
Rekognition is a service that uses computer vision (CV) to analyze images and videos for faces,
objects, scenes, and activities.
Reference:
1: Cloud Products - Amazon Web Services (AWS)
2: Recommender System – Amazon Personalize – Amazon Web Services
3: Top 25 AWS Services List 2023 - GeeksforGeeks
4: AWS to Azure services comparison - Azure Architecture Center
5: The 25+ Best AWS Cost Optimization Tools (Updated 2023) - CloudZero
6: Amazon Polly – Text-to-Speech Service - AWS
7: Natural Language Processing - Amazon Comprehend - AWS
8: Image and Video Analysis - Amazon Rekognition - AWS
20.A company migrated its core application onto multiple workloads in the AWS Cloud. The company
wants to improve the application's reliability.
Which cloud design principle should the company implement to achieve this goal?
A. Maximize utilization.
B. Decouple the components.
C. Rightsize the resources.
D. Adopt a consumption model.
Answer: B
Explanation:
Decoupling the components of an application means reducing the dependencies and interactions
between them, which can improve the application’s reliability, scalability, and performance.
Decoupling can be achieved by using services such as Amazon Simple Queue Service (Amazon
SQS), Amazon Simple Notification Service (Amazon SNS), and AWS Lambda.
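The pattern can be sketched with an in-process queue standing in for the managed service. This is a minimal illustration of the decoupling idea, not SQS itself: the producer and consumer never call each other directly, so either side can fail, scale, or restart independently.

```python
# Minimal decoupling sketch: a queue stands in for Amazon SQS.
# The web tier and worker tier share no direct dependency.
import queue

order_queue = queue.Queue()  # stand-in for a managed SQS queue

def web_tier_submit(order_id: str) -> None:
    """Producer: enqueue and return immediately (fire-and-forget)."""
    order_queue.put(order_id)

def worker_tier_drain() -> list:
    """Consumer: process whatever is waiting, at its own pace."""
    processed = []
    while not order_queue.empty():
        processed.append(order_queue.get())
    return processed

web_tier_submit("order-1")
web_tier_submit("order-2")
print(worker_tier_drain())  # ['order-1', 'order-2']
```

If the worker tier goes down, orders simply accumulate in the queue instead of being lost, which is the reliability gain the answer describes.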
21.A company seeks cost savings in exchange for a commitment to use a specific amount of an AWS
service or category of AWS services for 1 year or 3 years.
Which AWS pricing model or offering will meet these requirements?
A. Pay-as-you-go pricing
B. Savings Plans
C. AWS Free Tier
D. Volume discounts
Answer: B
Explanation:
Savings Plans are an AWS pricing model or offering that can meet the requirements of seeking cost
savings in exchange for a commitment to use a specific amount of an AWS service or category of
AWS services for 1 year or 3 years. Savings Plans are flexible plans that offer significant discounts on
AWS compute usage, such as EC2, Lambda, and Fargate. The company can choose from two types
of Savings Plans: Compute Savings Plans and EC2 Instance Savings Plans. Compute Savings Plans
provide the most flexibility and apply to any eligible compute usage, regardless of instance family,
size, region, operating system, or tenancy. EC2 Instance Savings Plans provide more savings and
apply to a specific instance family within a region. The company can select the amount of compute
usage per hour (e.g., $10/hour) that they want to commit to for the duration of the plan (1 year or 3
years). The company will pay the discounted Savings Plan rate for the amount of usage that matches
their commitment, and the regular on-demand rate for any usage beyond that commitment.
23.A company is looking for a managed machine learning (ML) service that can recommend products
based on a customer's previous behaviors.
Which AWS service meets this requirement?
A. Amazon Personalize
B. Amazon SageMaker
C. Amazon Pinpoint
D. Amazon Comprehend
Answer: A
Explanation:
The AWS service that meets the requirement of providing a managed machine learning (ML) service
that can recommend products based on a customer’s previous behaviors is Amazon Personalize.
Amazon Personalize is a fully managed service that enables developers to create personalized
recommendations for customers using their own data. Amazon Personalize can automatically process
and examine the data, identify what is meaningful, select the right algorithms, and train and optimize
a personalized recommendation model. Amazon SageMaker, Amazon Pinpoint, and Amazon
Comprehend are other AWS services related to machine learning, but they do not provide the specific
functionality of product recommendation.
24.A company is building a mobile app to provide shopping recommendations to its customers. The
company wants to use a graph database as part of the shopping recommendation engine.
Which AWS database service should the company choose?
A. Amazon DynamoDB
B. Amazon Aurora
C. Amazon Neptune
D. Amazon DocumentDB (with MongoDB compatibility)
Answer: C
Explanation:
Amazon Neptune is a service that provides a fully managed graph database that supports property
graphs and RDF graphs. It can be used to build applications that work with highly connected
datasets, such as shopping recommendations, social networks, fraud detection, and knowledge
graphs. Amazon DynamoDB is a service that provides a fully managed NoSQL database that
delivers fast and consistent performance at any scale. Amazon Aurora is a service that provides a
fully managed relational database that is compatible with MySQL and PostgreSQL. Amazon
DocumentDB (with MongoDB compatibility) is a service that provides a fully managed document
database that is compatible with MongoDB.
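A toy traversal shows why a graph model fits recommendations, which is the kind of query Amazon Neptune runs at scale. The customers and items below are made up, and a real engine would use a graph query language such as Gremlin or SPARQL rather than plain Python.

```python
# Toy co-purchase recommendation over a bipartite customer-item graph.
# Illustrates the traversal pattern a graph database optimizes for.

purchases = {                 # customer -> set of purchased items (edges)
    "alice": {"book", "lamp"},
    "bob":   {"book", "desk"},
    "carol": {"lamp", "chair"},
}

def recommend(customer: str) -> set:
    """Suggest items bought by customers who share a purchase with us."""
    mine = purchases[customer]
    recs = set()
    for other, items in purchases.items():
        if other != customer and mine & items:  # at least one shared item
            recs |= items - mine                # their items we don't own yet
    return recs

print(sorted(recommend("alice")))  # ['chair', 'desk']
```

Each recommendation is a two-hop walk (customer, then item, then customer, then item); graph databases make such multi-hop traversals efficient even on highly connected datasets.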
25.Which AWS service gives users the ability to provision a dedicated and private network connection
from their internal network to AWS?
A. AWS CloudHSM
B. AWS Direct Connect
C. AWS VPN
D. Amazon Connect
Answer: B
Explanation:
AWS Direct Connect gives users the ability to provision a dedicated and private network connection
from their internal network to AWS. AWS Direct Connect links the user’s internal network to an AWS
Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected
to the user’s router, the other to an AWS Direct Connect router. With this connection in place, the
user can create virtual interfaces directly to the AWS cloud and Amazon Virtual Private Cloud
(Amazon VPC), bypassing internet service providers in the network path.
26.Which AWS service or tool provides on-demand access to AWS security and compliance reports
and AWS online agreements?
A. AWS Artifact
B. AWS Trusted Advisor
C. Amazon Inspector
D. AWS Billing console
Answer: A
Explanation:
AWS Artifact is the AWS service or tool that provides on-demand access to AWS security and
compliance reports and AWS online agreements. AWS Trusted Advisor is a tool that provides real-
time guidance to help users provision their resources following AWS best practices. Amazon
Inspector is a service that helps users improve the security and compliance of their applications. AWS
Billing console is a tool that helps users manage their AWS costs and usage. These concepts are
explained in the AWS Cloud Practitioner Essentials course.
27.A company is planning a migration to the AWS Cloud and wants to examine the costs that are
associated with different workloads.
Which AWS tool will meet these requirements?
A. AWS Budgets
B. AWS Cost Explorer
C. AWS Pricing Calculator
D. AWS Cost and Usage Report
Answer: C
Explanation:
The AWS tool that will meet the requirements of the company that is planning a migration to the AWS
Cloud and wants to examine the costs that are associated with different workloads is AWS Pricing
Calculator. AWS Pricing Calculator is a tool that helps customers estimate the cost of using AWS
services based on their requirements and preferences. The company can use AWS Pricing Calculator
to compare the costs of different AWS services and configurations, such as Amazon EC2, Amazon
S3, Amazon RDS, and more. AWS Pricing Calculator also provides detailed breakdowns of the cost
components, such as compute, storage, network, and data transfer. AWS Pricing Calculator helps
customers plan and optimize their cloud budget and migration strategy. AWS Budgets, AWS Cost
Explorer, and AWS Cost and Usage Report are not the best tools to use for this purpose. AWS
Budgets is a tool that helps customers monitor and manage their AWS spending and usage against
predefined budget limits and thresholds. AWS Cost Explorer is a tool that helps customers analyze
and visualize their AWS spending and usage trends over time. AWS Cost and Usage Report is a tool
that helps customers access comprehensive and granular information about their AWS costs and
usage in a CSV or Parquet file. These tools are more useful for tracking and optimizing the existing
AWS costs and usage, rather than estimating the costs of different workloads.
28.Which AWS service uses AWS Compute Optimizer to provide sizing recommendations based on
workload metrics?
A. Amazon EC2
B. Amazon RDS
C. Amazon Lightsail
D. AWS Step Functions
Answer: A
Explanation:
Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. It allows
you to launch virtual servers, called instances, with different configurations of CPU, memory, storage,
and networking resources. AWS Compute Optimizer analyzes the specifications and utilization
metrics of your Amazon EC2 instances and generates recommendations for optimal instance types
that can reduce costs and improve performance. You can view the recommendations on the AWS
Compute Optimizer console or the Amazon EC2 console.
Amazon RDS, Amazon Lightsail, and AWS Step Functions are not supported by AWS Compute
Optimizer. Amazon RDS is a managed relational database service that lets you set up, operate, and
scale a relational database in the cloud. Amazon Lightsail is an easy-to-use cloud platform that offers
everything you need to build an application or website, plus a cost-effective, monthly plan. AWS Step
Functions lets you coordinate multiple AWS services into serverless workflows so you can build and
update apps quickly.
29.Which options does AWS make available for customers who want to learn about security in the
cloud in an instructor-led setting? (Select TWO.)
A. AWS Trusted Advisor
B. AWS Online Tech Talks
C. AWS Blog
D. AWS Forums
E. AWS Classroom Training
Answer: B, E
Explanation:
The correct answers are B and E because AWS Online Tech Talks and AWS Classroom Training are
options that AWS makes available for customers who want to learn about security in the cloud in an
instructor-led setting. AWS Online Tech Talks are live, online presentations that cover a broad range
of topics at varying technical levels. AWS Online Tech Talks are delivered by AWS experts and
feature live Q&A sessions with the audience. AWS Classroom Training consists of in-person or
virtual courses that are led by accredited AWS instructors and offers hands-on labs, exercises, and
best practices to help customers gain confidence and skills on AWS. The other options
are incorrect because they are not options that AWS makes available for customers who want to
learn about security in the cloud in an instructor-led setting. AWS Trusted Advisor is an AWS service
that provides real-time guidance to help customers follow AWS best practices for security,
performance, cost optimization, and fault tolerance. AWS Blog is an AWS resource that provides
news, announcements, and insights from AWS experts and customers. AWS Forums are AWS
resources that enable customers to interact with other AWS users and get feedback and support.
Reference: AWS Online Tech Talks, AWS Classroom Training
30.Which AWS Support plan provides customers with access to an AWS technical account manager
(TAM)?
A. AWS Basic Support
B. AWS Developer Support
C. AWS Business Support
D. AWS Enterprise Support
Answer: D
Explanation:
The correct answer is D because AWS Enterprise Support is the support plan that provides
customers with access to an AWS technical account manager (TAM). AWS Enterprise Support is the
highest level of support plan offered by AWS, and it provides customers with the most comprehensive
and personalized support experience. An AWS TAM is a dedicated technical resource who works
closely with customers to understand their business and technical needs, provide proactive guidance,
and coordinate support across AWS teams. The other options are incorrect because they are not
support plans that provide customers with access to an AWS TAM. AWS Basic Support is the default
and free support plan that provides customers with access to online documentation, forums, and
account information. AWS Developer Support is the lowest level of paid support plan that provides
customers with access to technical support during business hours, general guidance, and best
practice recommendations. AWS Business Support is the intermediate level of paid support plan that
provides customers with access to technical support 24/7, system health checks, architectural
guidance, and case management.
Reference: AWS Support Plans
32.A company has 5 TB of data stored in Amazon S3. The company plans to occasionally run queries
on the data for analysis.
Which AWS service should the company use to run these queries in the MOST cost-effective
manner?
A. Amazon Redshift
B. Amazon Athena
C. Amazon Kinesis
D. Amazon RDS
Answer: B
Explanation:
Amazon Athena is a serverless, interactive analytics service that allows users to run SQL queries on
data stored in Amazon S3. It is ideal for occasional queries on large datasets, as it does not require
any server provisioning, configuration, or management. Users only pay for the queries they run,
based on the amount of data scanned. Amazon Athena supports various data formats, such as CSV,
JSON, Parquet, ORC, and Avro, and integrates with AWS Glue Data Catalog to create and manage
schemas. Amazon Athena also supports querying data from other sources, such as on-premises or
other cloud systems, using data connectors1.
Amazon Redshift is a fully managed data warehouse service that allows users to run complex
analytical queries on petabyte-scale data. However, it requires users to provision and maintain
clusters of nodes, and pay for the storage and compute capacity they use. Amazon Redshift is more
suitable for frequent and consistent queries on structured or semi-structured data2.
Amazon Kinesis is a platform for streaming data on AWS, enabling users to collect, process, and
analyze real-time data. It is not designed for querying data stored in Amazon S3. Amazon Kinesis
consists of four services: Kinesis Data Streams, Kinesis Data Firehose, Kinesis Data Analytics, and
Kinesis Video Streams3.
Amazon RDS is a relational database service that provides six database engines: Amazon Aurora,
PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. It simplifies database
administration
tasks such as backup, patching, scaling, and replication. However, it is not optimized for querying
data stored in Amazon S3. Amazon RDS is more suitable for transactional workloads that require
high
performance and availability4.
Reference:
Interactive SQL - Serverless Query Service - Amazon Athena - AWS
Amazon Redshift - Data Warehouse Solution - AWS
Amazon Kinesis - Streaming Data Platform - AWS
Amazon Relational Database Service (RDS) - AWS
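As a sketch of what "pay only for the queries you run" looks like in practice, the snippet below builds the request parameters for a single ad hoc Athena query. The database, table, and result-bucket names are hypothetical, and building the dictionary makes no AWS API call.

```python
# Hypothetical ad hoc query over data in S3 registered in the Glue Data
# Catalog. Athena bills per byte scanned, so columnar formats such as
# Parquet/ORC and partition filters keep occasional queries inexpensive.
query = """
SELECT region, COUNT(*) AS order_count
FROM orders              -- hypothetical Glue Data Catalog table
WHERE year = 2024
GROUP BY region
"""

# These are the parameters boto3's athena.start_query_execution(**params)
# would take; no API call is made and no credentials are needed here.
params = {
    "QueryString": query,
    "QueryExecutionContext": {"Database": "sales_db"},
    "ResultConfiguration": {"OutputLocation": "s3://example-athena-results/"},
}
```

Because there are no clusters to size or manage, this one dictionary is essentially the entire operational footprint of an occasional query.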
33.A company is preparing to launch a redesigned website on AWS. Users from around the world will
download digital handbooks from the website.
Which AWS solution should the company use to provide these static files securely?
A. Amazon Kinesis Data Streams
B. Amazon CloudFront with Amazon S3
C. Amazon EC2 instances with an Application Load Balancer
D. Amazon Elastic File System (Amazon EFS)
Answer: B
Explanation:
Amazon CloudFront with Amazon S3 is a solution that allows you to provide static files securely to
users from around the world. Amazon CloudFront is a fast content delivery network (CDN) service
that securely delivers data, videos, applications, and APIs to customers globally with low latency, high
transfer speeds, all within a developer-friendly environment. Amazon S3 is an object storage service
that offers industry-leading scalability, data availability, security, and performance. You can use
Amazon S3 to store and retrieve any amount of data from anywhere. You can also configure Amazon
S3 to work with Amazon CloudFront to distribute your content to edge locations near your users for
faster delivery and lower latency. Amazon Kinesis Data Streams is a service that enables you to build
custom applications that process or analyze streaming data for specialized needs. This option is not
relevant for providing static files securely. Amazon EC2 instances with an Application Load Balancer
is a solution that allows you to distribute incoming traffic across multiple targets, such as EC2
instances, in multiple Availability Zones. This option is suitable for dynamic web applications, but not
necessary for static files. Amazon Elastic File System (Amazon EFS) is a service that provides a
simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-
premises resources. This option is not relevant for providing static files securely.
34.A company hosts an application on an Amazon EC2 instance. The EC2 instance needs to access
several AWS resources, including Amazon S3 and Amazon DynamoDB.
What is the MOST operationally efficient solution to delegate permissions?
A. Create an IAM role with the required permissions. Attach the role to the EC2 instance.
B. Create an IAM user and use its access key and secret access key in the application.
C. Create an IAM user and use its access key and secret access key to create a CLI profile in the
EC2 instance.
D. Create an IAM role with the required permissions. Attach the role to the administrative IAM user.
Answer: A
Explanation:
Creating an IAM role with the required permissions and attaching the role to the EC2 instance is the
most operationally efficient solution to delegate permissions. An IAM role is an entity that defines a
set of permissions for making AWS service requests. An IAM role can be assumed by an EC2
instance to access other AWS resources, such as Amazon S3 and Amazon DynamoDB, without
having to store any credentials on the instance. This solution is more secure and scalable than using
IAM users and their access keys. For more information, see [IAM Roles for Amazon EC2] and [Using
an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances].
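The mechanism that lets an EC2 instance assume a role is a trust policy on the role. The sketch below builds that hypothetical JSON document, which would be passed as AssumeRolePolicyDocument to iam.create_role(); permissions policies for S3 and DynamoDB would then be attached to the role, and the role attached to the instance through an instance profile. No API call is made here.

```python
import json

# Hypothetical trust policy: it states that the EC2 service is allowed
# to assume this role, which is what lets the instance obtain temporary
# credentials without any keys stored on it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

document = json.dumps(trust_policy)
```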
35.Which AWS service provides DNS resolution?
A. Amazon CloudFront
B. Amazon VPC
C. Amazon Route 53
D. AWS Direct Connect
Answer: C
Explanation:
Amazon Route 53 is the AWS service that provides DNS resolution. DNS (Domain Name System) is
a service that translates domain names into IP addresses. Amazon Route 53 is a highly available and
scalable cloud DNS service that offers domain name registration, DNS routing, and health checking.
Amazon Route 53 can route the traffic to various AWS services, such as Amazon EC2, Amazon S3,
and Amazon CloudFront. Amazon Route 53 can also integrate with other AWS services, such as
AWS Certificate Manager, AWS Shield, and AWS WAF. For more information, see [What is Amazon
Route 53?] and [Amazon Route 53 Features].
36.Which AWS services can limit manual errors by consistently provisioning AWS resources in
multiple environments? (Select TWO.)
A. AWS Config
B. AWS CodeStar
C. AWS CloudFormation
D. AWS Cloud Development Kit (AWS CDK)
E. AWS CodeBuild
Answer: C, D
Explanation:
AWS CloudFormation and AWS Cloud Development Kit (AWS CDK) are AWS services that can limit
manual errors by consistently provisioning AWS resources in multiple environments. AWS
CloudFormation is a service that enables you to model and provision AWS resources using
templates. You can use AWS CloudFormation to define the AWS resources and their dependencies
that you need for your applications, and to automate the creation and update of those resources
across multiple environments, such as development, testing, and production. AWS CloudFormation
helps you ensure that your AWS resources are configured consistently and correctly, and that you
can easily replicate or modify them as needed. AWS Cloud Development Kit (AWS CDK) is a service
that enables you to use familiar programming languages, such as Python, TypeScript, Java, and C#,
to define and provision AWS resources. You can use AWS CDK to write code that synthesizes into
AWS CloudFormation templates, and to leverage the existing libraries and tools of your preferred
language. AWS CDK helps you reduce the complexity and errors of writing and maintaining AWS
CloudFormation templates, and to apply the best practices and standards of software development to
your AWS infrastructure.
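To make "consistent provisioning across environments" concrete, the sketch below expresses a minimal CloudFormation-style template as JSON (CloudFormation accepts JSON or YAML). The bucket naming scheme is hypothetical; the same template, deployed with a different Environment parameter value, produces identically configured resources in dev, test, and prod.

```python
import json

# Minimal hypothetical template: one parameterized S3 bucket that is
# provisioned the same way in every environment.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Provision the same S3 bucket identically per environment",
    "Parameters": {
        "Environment": {"Type": "String", "AllowedValues": ["dev", "test", "prod"]}
    },
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                # Hypothetical naming scheme, resolved by Fn::Sub at deploy time
                "BucketName": {"Fn::Sub": "example-app-${Environment}"}
            },
        }
    },
}

body = json.dumps(template, indent=2)
```

AWS CDK code would synthesize into a template of exactly this shape, which is why both services earn the "limit manual errors" description.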
37.Which duties are the responsibility of a company that is using AWS Lambda? (Select TWO.)
A. Security inside of code
B. Selection of CPU resources
C. Patching of operating system
D. Writing and updating of code
E. Security of underlying infrastructure
Answer: A, D
Explanation:
The duties that are the responsibility of a company that is using AWS Lambda are security inside of
code and writing and updating of code. AWS Lambda is a serverless compute service that allows you
to run code without provisioning or managing servers, scaling, or patching. AWS Lambda takes care
of the security of the underlying infrastructure, such as the operating system, the network, and the
firewall. However, the company is still responsible for the security of the code itself, such as
encrypting sensitive data, validating input, and handling errors. The company is also responsible for
writing and updating the code that defines the Lambda function, and choosing the runtime
environment, such as Node.js, Python, or Java. AWS Lambda does not require the selection of CPU
resources, as it automatically allocates them based on the memory configuration34
38.Which AWS service or feature enables users to encrypt data at rest in Amazon S3?
A. IAM policies
B. Server-side encryption
C. Amazon GuardDuty
D. Client-side encryption
Answer: B
Explanation:
Server-side encryption is an encryption option that Amazon S3 provides to encrypt data at rest in
Amazon S3. With server-side encryption, Amazon S3 encrypts an object before saving it to disk in its
data centers and decrypts it when you download the objects. You have three server-side encryption
options to choose from: SSE-S3, SSE-C, and SSE-KMS. SSE-S3 uses keys that are managed by
Amazon S3. SSE-C allows you to manage your own encryption keys. SSE-KMS uses keys that are
managed by AWS Key Management Service (AWS KMS)5.
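The sketch below shows the hypothetical parameters for s3.put_object(**params) with SSE-S3 enabled; swapping "AES256" for "aws:kms" (plus an SSEKMSKeyId) would select SSE-KMS instead. Building the dictionary makes no API call.

```python
# Hypothetical upload request with server-side encryption. S3 encrypts
# the object before writing it to disk and decrypts it on download.
params = {
    "Bucket": "example-bucket",          # hypothetical bucket
    "Key": "reports/q1-summary.csv",     # hypothetical object key
    "Body": b"quarter,revenue\nQ1,100\n",
    "ServerSideEncryption": "AES256",    # SSE-S3: keys managed by Amazon S3
}
```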
39.A company wants to migrate a database from an on-premises environment to Amazon RDS.
After the migration is complete, which management task will the company still be responsible for?
A. Hardware lifecycle management
B. Application optimization
C. Server maintenance
D. Power, network, and cooling provisioning
Answer: B
Explanation:
Amazon RDS is a managed database service that handles most of the common database
administration tasks, such as hardware provisioning, server maintenance, backup and recovery,
patching, scaling, and replication. However, Amazon RDS does not optimize the application that
interacts with the database. The company is still responsible for tuning the performance, security, and
availability of the application according to its business requirements and best practices12.
Reference:
What is Amazon Relational Database Service (Amazon RDS)?
Perform common DBA tasks for Amazon RDS DB instances
40.A company plans to migrate to AWS and wants to create cost estimates for its AWS use cases.
Which AWS service or tool can the company use to meet these requirements?
A. AWS Pricing Calculator
B. Amazon CloudWatch
C. AWS Cost Explorer
D. AWS Budgets
Answer: A
Explanation:
AWS Pricing Calculator is a web-based planning tool that customers can use to create estimates for
their AWS use cases. They can use it to model their solutions before building them, explore the AWS
service price points, and review the calculations behind their estimates. Therefore, the correct answer
is A.
41.Which option is AWS responsible for under the AWS shared responsibility model?
A. Network and firewall configuration
B. Client-side data encryption
C. Management of user permissions
D. Hardware and infrastructure
Answer: D
Explanation:
Hardware and infrastructure is the option that AWS is responsible for under the AWS shared
responsibility model. The AWS shared responsibility model describes how AWS and customers share
responsibilities for security and compliance in the cloud. AWS is responsible for security of the cloud,
which means protecting the infrastructure that runs all the services offered in the AWS Cloud. This
infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud
services. Customers are responsible for security in the cloud, which means taking care of the security
of their own applications, data, and operating systems. This includes network and firewall
configuration, client-side data encryption, management of user permissions, and more.
42.A company wants to migrate its on-premises application to the AWS Cloud. The company is
legally obligated to retain certain data in its on-premises data center.
Which AWS service or feature will support this requirement?
A. AWS Wavelength
B. AWS Local Zones
C. VMware Cloud on AWS
D. AWS Outposts
Answer: D
Explanation:
AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and
tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid
experience. AWS Outposts enables you to run AWS services in your on-premises data center, which
can support the requirement of retaining certain data on-premises due to legal obligations5.
43.Which statements explain the business value of migration to the AWS Cloud? (Select TWO.)
A. The migration of enterprise applications to the AWS Cloud makes these applications automatically
available on mobile devices.
B. AWS availability and security provide the ability to improve service level agreements (SLAs) while
reducing risk and unplanned downtime.
C. Companies that migrate to the AWS Cloud eliminate the need to plan for high availability and
disaster recovery.
D. Companies that migrate to the AWS Cloud reduce IT costs related to infrastructure, freeing budget
for reinvestment in other areas.
E. Applications are modernized because migration to the AWS Cloud requires companies to
rearchitect and rewrite all
enterprise applications.
Answer: B, D
Explanation:
B and D are correct because AWS availability and security enable customers to improve their SLAs
while reducing risk and unplanned downtime1, and AWS reduces IT costs related to infrastructure,
allowing customers to reinvest in other areas2. A is incorrect because migrating to the AWS Cloud
does not automatically make applications available on mobile devices, as it depends on the
application design and compatibility. C is incorrect because companies that migrate to the AWS
Cloud still need to plan for high availability and disaster recovery, as AWS is a shared responsibility
model3. E is incorrect because migrating to the AWS Cloud does not require companies to rearchitect
and rewrite all enterprise applications, as AWS offers different migration strategies depending on the
application complexity and business objectives4.
44.A company is moving an on-premises data center to the AWS Cloud. The company must migrate
50 petabytes of file storage data to AWS with the least possible operational overhead.
Which AWS service or resource should the company use to meet these requirements?
A. AWS Snowmobile
B. AWS Snowball Edge
C. AWS Data Exchange
D. AWS Database Migration Service (AWS DMS)
Answer: A
Explanation:
The AWS service that the company should use to meet these requirements is A. AWS Snowmobile.
AWS Snowmobile is a service that allows you to migrate large amounts of data to AWS using a
45-foot long ruggedized shipping container that can store up to 100 petabytes of data. AWS
Snowmobile is designed for situations where you need to move massive amounts of data to the cloud
in a fast, secure, and cost-effective way. AWS Snowmobile has the least possible operational
overhead because it eliminates the need to buy, configure, or manage hundreds or thousands of
storage devices12.
AWS Snowball Edge is a service that allows you to migrate data to AWS using a physical device that
can store up to 80 terabytes of data and has compute and storage capabilities to run applications on
the device. AWS Snowball Edge is suitable for situations where you have limited or intermittent
network connectivity, or where bandwidth costs are high. However, AWS Snowball Edge has more
operational overhead than AWS Snowmobile because you need to request multiple devices and
transfer your data onto them using the client3.
AWS Data Exchange is a service that allows you to find, subscribe to, and use third-party data in the
cloud. AWS Data Exchange is not a data migration service, but rather a data marketplace that
enables data providers and data consumers to exchange data sets securely and efficiently4.
AWS Database Migration Service (AWS DMS) is a service that helps migrate databases to AWS.
AWS
DMS does not migrate file storage data, but rather supports various database platforms and engines
as sources and targets5.
Reference:
1: AWS Snowmobile - Move Exabytes of Data to the Cloud in Weeks
2: AWS Snowmobile - Amazon Web Services
3: AWS Snowball Edge - Amazon Web Services
4: AWS Data Exchange - Find, subscribe to, and use third-party data in …
5: AWS Database Migration Service - Amazon Web Services
45.A company wants to define a central data protection policy that works across AWS services for
compute, storage, and database resources.
Which AWS service will meet this requirement?
A. AWS Batch
B. AWS Elastic Disaster Recovery
C. AWS Backup
D. Amazon FSx
Answer: C
Explanation:
The AWS service that will meet this requirement is C. AWS Backup.
AWS Backup is a service that allows you to define a central data protection policy that works across
AWS services for compute, storage, and database resources. You can use AWS Backup to create
backup plans that specify the frequency, retention, and lifecycle of your backups, and apply them to
your AWS resources using tags or resource IDs. AWS Backup supports various AWS services, such
as Amazon EC2, Amazon EBS, Amazon RDS, Amazon DynamoDB, Amazon EFS, Amazon FSx, and
AWS Storage Gateway12.
AWS Batch is a service that allows you to run batch computing workloads on AWS. AWS Batch does
not provide a central data protection policy, but rather enables you to optimize the allocation and
utilization of your compute resources3.
AWS Elastic Disaster Recovery is a service that allows you to prepare for and recover from disasters
using AWS. AWS Elastic Disaster Recovery does not provide a central data protection policy, but
rather helps you minimize downtime and data loss by replicating your applications and data to AWS4.
Amazon FSx is a service that provides fully managed file storage for Windows and Linux applications.
Amazon FSx does not provide a central data protection policy, but rather offers features such as
encryption, snapshots, backups, and replication to protect your file systems5.
Reference:
1: AWS Backup - Centralized backup across AWS services
2: Data Protection Reference Architectures with AWS Backup
3: AWS Batch - Run Batch Computing Jobs on AWS
4: AWS Elastic Disaster Recovery - Prepare for and recover from disasters using AWS
5: Amazon FSx - Fully managed file storage for Windows and Linux applications
46.A company needs to apply security rules to a subnet for Amazon EC2 instances.
Which AWS service or feature provides this functionality?
A. Network ACLs
B. Security groups
C. AWS Certificate Manager (ACM)
D. AWS Config
Answer: A
Explanation:
Network ACLs (network access control lists) are an AWS service or feature that provides the
functionality of applying security rules to a subnet for EC2 instances. A subnet is a logical partition of
an IP network within a VPC (virtual private cloud). A VPC is a logically isolated section of the AWS
Cloud where the company can launch AWS resources in a virtual network that they define. A network
ACL is a virtual firewall that controls the inbound and outbound traffic for one or more subnets. The
company can use network ACLs to allow or deny traffic based on protocol, port, or source and
destination IP address. Network ACLs are stateless, meaning that they do not track the traffic that
flows through them. Therefore, the company must create rules for both inbound and outbound traffic4
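The statelessness is the practical gotcha: allowing inbound HTTPS does nothing unless the return traffic on ephemeral ports is also allowed outbound. The sketch below builds hypothetical rule dictionaries mirroring the parameters of ec2.create_network_acl_entry(); no API call is made.

```python
# Hypothetical inbound rule allowing HTTPS into the subnet.
inbound_https = {
    "NetworkAclId": "acl-0123456789abcdef0",  # hypothetical network ACL ID
    "RuleNumber": 100,
    "Protocol": "6",                          # TCP
    "RuleAction": "allow",
    "Egress": False,                          # inbound rule
    "CidrBlock": "0.0.0.0/0",
    "PortRange": {"From": 443, "To": 443},
}

# Because network ACLs are stateless, the response traffic needs its own
# outbound rule on the ephemeral port range.
outbound_ephemeral = {
    **inbound_https,
    "Egress": True,                           # outbound rule
    "PortRange": {"From": 1024, "To": 65535},
}
```

A stateful security group, by contrast, would allow the return traffic automatically.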
47.A company wants to create a chatbot and integrate the chatbot with its current web application.
Which AWS service will meet these requirements?
A. Amazon Kendra
B. Amazon Lex
C. Amazon Textract
D. Amazon Polly
Answer: B
Explanation:
The AWS service that will meet the requirements of the company that wants to create a chatbot and
integrate the chatbot with its current web application is Amazon Lex. Amazon Lex is a service that
helps customers build conversational interfaces using voice and text. The company can use Amazon
Lex to create a chatbot that can understand natural language and respond to user requests, using the
same deep learning technologies that power Amazon Alexa. Amazon Lex also provides easy
integration with other AWS services, such as Amazon Comprehend, Amazon Polly, and AWS
Lambda, as well as popular platforms, such as Facebook Messenger, Slack, and Twilio. Amazon Lex
helps customers create engaging and interactive chatbots for their web applications. Amazon Kendra,
Amazon Textract, and Amazon Polly are not the best services to use for this purpose. Amazon
Kendra is a service that helps customers provide accurate and natural answers to natural language
queries using machine learning. Amazon Textract is a service that helps customers extract text and
data from scanned documents using optical character recognition (OCR) and machine learning.
Amazon Polly is a service that helps customers convert text into lifelike speech using deep learning.
These services are more useful for different types of natural language processing and generation
tasks, rather than creating and integrating chatbots.
48.A company has two AWS accounts in an organization in AWS Organizations for consolidated
billing.
All of the company's AWS resources are hosted in one AWS Region.
Account A has purchased five Amazon EC2 Standard Reserved Instances (RIs) and has four EC2
instances running. Account B has not purchased any RIs and also has four EC2 instances running.
Which statement is true regarding pricing for these eight instances?
A. The eight instances will be charged as regular instances.
B. Four instances will be charged as RIs, and four will be charged as regular instances.
C. Five instances will be charged as RIs, and three will be charged as regular instances.
D. The eight instances will be charged as RIs.
Answer: B
Explanation:
The statement that is true regarding pricing for these eight instances is that four instances will be
charged as RIs and four will be charged as regular instances. Amazon EC2 Reserved Instances (RIs)
are a pricing model that allows users to reserve EC2 instances for a specific term and benefit from
discounted hourly rates and capacity reservation. An RI discount is applied first to matching
instances in the account that purchased the RI. In this case, Account A has purchased five RIs and
has four instances running, so all four of its instances are charged at the RI rate. Account B has not
purchased any RIs and also has four instances running; an unused RI discount is shared under
consolidated billing only when instances match the RI's attributes, which is not guaranteed, so
Account B's four instances are charged at the regular On-Demand rate. The remaining RI in
Account A goes unused.
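The billing split can be worked through with hypothetical rates (the actual rates depend on instance type, Region, and RI term):

```python
# Hypothetical hourly rates: $0.10 On-Demand versus a $0.06 effective RI
# rate, over a 730-hour month. The fifth RI in Account A is paid for but
# matches no running instance.
on_demand_rate = 0.10
ri_rate = 0.06
hours = 730

account_a = 4 * ri_rate * hours         # four instances billed at the RI rate
account_b = 4 * on_demand_rate * hours  # four instances billed at the regular rate
total = account_a + account_b
```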
49.A company wants to create multiple isolated networks in the same AWS account.
Which AWS service or component will provide this functionality?
A. AWS Transit Gateway
B. Internet gateway
C. Amazon VPC
D. Amazon EC2
Answer: C
Explanation:
Amazon Virtual Private Cloud (Amazon VPC) is the AWS service that allows customers to create
multiple isolated networks in the same AWS account. A VPC is a logically isolated section of the AWS
Cloud where customers can launch AWS resources in a virtual network that they define. Customers
can create multiple VPCs within an AWS account, each with its own IP address range, subnets, route
tables, security groups, network access control lists, gateways, and other components. AWS Transit
Gateway, Internet gateway, and Amazon EC2 are not services or components that provide the
functionality of creating multiple isolated networks in the same AWS account. AWS Transit Gateway
is a service that enables customers to connect their Amazon VPCs and their on-premises networks to
a single gateway. An Internet gateway is a component that enables communication between
instances in a VPC and the Internet. Amazon EC2 is a service that provides scalable compute
capacity in the cloud34
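A quick sketch of what "multiple isolated networks" means in practice: two hypothetical VPC definitions in one account, each mirroring the parameters of ec2.create_vpc(). Non-overlapping CIDR blocks keep the address spaces distinct and leave the door open for peering later; no API call is made here.

```python
import ipaddress

# Two hypothetical VPCs in the same account with separate address ranges.
vpc_a = {"CidrBlock": "10.0.0.0/16"}  # e.g. the application network
vpc_b = {"CidrBlock": "10.1.0.0/16"}  # e.g. the analytics network

# Verify the two ranges do not overlap, i.e. the networks are isolated.
overlap = ipaddress.ip_network(vpc_a["CidrBlock"]).overlaps(
    ipaddress.ip_network(vpc_b["CidrBlock"])
)
print(overlap)  # prints False: the two address ranges are disjoint
```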
50.A company is running an application on AWS. The company wants to identify and prevent the
accidental
Which AWS service or feature will meet these requirements?
A. Amazon GuardDuty
B. Network ACL
C. AWS WAF
D. AWS Network Firewall
Answer: A
Explanation:
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and
unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With
the cloud, the collection and aggregation of account and network activities is simplified, but it can be
time consuming for security teams to continuously analyze event log data for potential threats. With
GuardDuty, you can automate anomaly detection and get actionable findings to help you protect your
AWS resources4.
51.A company wants to improve its security and audit posture by limiting Amazon EC2 inbound
access.
According to the AWS shared responsibility model, which task is the responsibility of the customer?
A. Protect the global infrastructure that runs all of the services offered in the AWS Cloud.
B. Configure logical access controls for resources, and protect account credentials.
C. Configure the security used by managed services.
D. Patch and back up Amazon Aurora.
Answer: B
Explanation:
According to the AWS shared responsibility model, the customer is responsible for configuring logical
access controls for resources, and protecting account credentials. This includes managing IAM user
permissions, security group rules, network ACLs, encryption keys, and other aspects of access
management1. AWS is responsible for protecting the global infrastructure that runs all of the services
offered in the AWS Cloud, such as the hardware, software, networking, and facilities. AWS is also
responsible for configuring the security used by managed services, such as Amazon RDS, Amazon
DynamoDB, and Amazon Aurora2.
53.A company wants to implement controls (guardrails) in a newly created AWS Control Tower
landing zone.
Which AWS services or features can the company use to create and define these controls
(guardrails)? (Select TWO.)
A. AWS Config
B. Service control policies (SCPs)
C. Amazon GuardDuty
D. AWS Identity and Access Management (IAM)
E. Security groups
Answer: A, B
Explanation:
AWS Config and service control policies (SCPs) are AWS services or features that the company can
use to create and define controls (guardrails) in a newly created AWS Control Tower landing zone.
AWS Config is a service that enables users to assess, audit, and evaluate the configurations of their
AWS resources. It can be used to create rules that check for compliance with the desired
configurations and report any deviations. AWS Control Tower provides a set of predefined AWS
Config rules that can be enabled as guardrails to enforce compliance across the landing zone1.
Service control policies (SCPs) are a type of policy that can be used to manage permissions in AWS
Organizations. They can be used to restrict the actions that the users and roles in the member
accounts can perform on the AWS resources. AWS Control Tower provides a set of predefined SCPs
that can be enabled as guardrails to prevent access to certain services or regions across the landing
zone2. Amazon GuardDuty is a service that provides intelligent threat detection and continuous
monitoring for AWS accounts and resources. It is not a feature that can be used to create and define
controls (guardrails) in a landing zone. AWS Identity and Access Management (IAM) is a service that
allows users to manage access to AWS resources and services. It can be used to create users,
groups, roles, and policies that control who can do what in AWS. It is not a feature that can be used to
create and define controls (guardrails) in a landing zone. Security groups are virtual firewalls that
control the inbound and outbound traffic for Amazon EC2 instances. They can be used to allow or
deny access to an EC2 instance based on the port, protocol, and source or destination. They are not
a feature that can be used to create and define controls (guardrails) in a landing zone.
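To make the SCP half of the answer concrete, the sketch below builds a hypothetical policy in the spirit of a Control Tower Region-deny guardrail: it denies actions requested outside two approved Regions, with a few global services exempted. This is the JSON document that would be attached through AWS Organizations; no API call is made here.

```python
import json

# Hypothetical Region-deny SCP. NotAction exempts global services whose
# calls are always signed in us-east-1.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

policy_document = json.dumps(scp)
```

Because an SCP sets the maximum available permissions for member accounts, even an administrator in a member account cannot act outside the listed Regions while this guardrail is attached.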
54.A company deployed an application on an Amazon EC2 instance. The application ran as expected
for 6 months. In the past week, users have reported latency issues. A system administrator found that
the CPU utilization was at 100% during business hours. The company wants a scalable solution to
meet demand.
Which AWS service or feature should the company use to handle the load for its application during
periods of high demand?
A. Auto Scaling groups
B. AWS Global Accelerator
C. Amazon Route 53
D. An Elastic IP address
Answer: A
Explanation:
Auto Scaling groups are a feature that allows users to automatically scale the number of Amazon
EC2 instances up or down based on demand or a predefined schedule. Auto Scaling groups can help
improve the performance and availability of applications by adjusting the capacity in response to
traffic fluctuations1. AWS Global Accelerator is a service that improves the availability and
performance of applications by routing traffic through AWS edge locations2. Amazon Route 53 is a
service that provides scalable and reliable domain name system (DNS) service3. An Elastic IP
address is a static IPv4 address that can be associated with an Amazon EC2 instance4.
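For the CPU-saturation scenario above, the usual approach is a target-tracking policy on the Auto Scaling group. The sketch below mirrors the parameters of autoscaling.put_scaling_policy() with hypothetical names; it keeps average CPU near 60%, so sustained 100% utilization during business hours triggers scale-out and quiet periods scale back in. No API call is made here.

```python
# Hypothetical target-tracking scaling policy for an Auto Scaling group.
policy = {
    "AutoScalingGroupName": "web-asg",   # hypothetical group name
    "PolicyName": "cpu-target-60",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,             # keep average CPU near 60%
    },
}
```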
55.In the AWS shared responsibility model, which tasks are the responsibility of AWS? (Select TWO.)
A. Patch an Amazon EC2 instance operating system.
B. Configure a security group.
C. Monitor the health of an Availability Zone.
D. Protect the infrastructure that runs Amazon EC2 instances.
E. Manage access to the data in an Amazon S3 bucket
Answer: C, D
Explanation:
According to the AWS shared responsibility model, AWS is responsible for the security of the cloud,
which includes the tasks of monitoring the health of an Availability Zone and protecting the
infrastructure that runs Amazon EC2 instances. An Availability Zone is a physically isolated location
within an AWS Region that has its own power, cooling, and network connectivity. AWS monitors the
health and performance of each Availability Zone and notifies customers of any issues or disruptions.
AWS also protects the infrastructure that runs AWS services, such as Amazon EC2, by implementing
physical, environmental, and operational security measures. AWS is not responsible for patching an
Amazon EC2 instance operating system, configuring a security group, or managing access to the
data in an Amazon S3 bucket. These are the customer’s responsibilities for security in the cloud. The
customer must ensure that the operating system and applications on their EC2 instances are up to
date and secure. The customer must also configure the security group rules that control the inbound
and outbound traffic for their EC2 instances. The customer must also manage the access permissions
and encryption settings for their S3 buckets and objects2
56.A company wants to host its relational databases on AWS. The databases have predefined
schemas that the company needs to replicate on AWS.
Which AWS services could the company use for the databases? (Select TWO.)
A. Amazon Aurora
B. Amazon RDS
C. Amazon DocumentDB (with MongoDB compatibility)
D. Amazon Neptune
E. Amazon DynamoDB
Answer: A, B
Explanation:
The correct answers are A and B because Amazon Aurora and Amazon RDS are AWS services that
the company could use for the relational databases. Amazon Aurora is a relational database that is
compatible with MySQL and PostgreSQL. Amazon Aurora is a fully managed, scalable, and high-
performance service that offers up to five times the throughput of standard MySQL and up to three
times the throughput of standard PostgreSQL. Amazon RDS is a service that enables users to set up,
operate, and scale relational databases in the cloud. Amazon RDS supports six popular database
engines: MySQL, PostgreSQL, Oracle, SQL Server, MariaDB, and Amazon Aurora. The other options
are incorrect because they are not AWS services that the company could use for the relational
databases. Amazon DocumentDB (with MongoDB compatibility) is a document database that is
compatible with MongoDB. Amazon Neptune is a graph database that supports property graph and
RDF models. Amazon DynamoDB is a key-value and document database.
Reference: Amazon Aurora, Amazon RDS
58.A company wants to move its iOS application development and build activities to AWS.
Which AWS service or resource should the company use for these activities?
A. AWS CodeCommit
B. Amazon EC2 M1 Mac instances
C. AWS Amplify
D. AWS App Runner
Answer: B
Explanation:
Amazon EC2 M1 Mac instances are the AWS service or resource that the company should use for its
iOS application development and build activities, as they enable users to run macOS on AWS and
access a broad and growing set of AWS services. AWS CodeCommit is a service that provides a fully
managed source control service that hosts secure Git-based repositories. AWS Amplify is a set of
tools and services that enable developers to build full-stack web and mobile applications using AWS.
AWS App Runner is a service that makes it easy for developers to quickly deploy containerized web
applications and APIs. These concepts are explained in the AWS Developer Tools page.
60.Which of the following are AWS Cloud design principles? (Select TWO.)
A. Pay for compute resources in advance.
B. Make data-driven decisions to determine cloud architectural design.
C. Emphasize manual processes to allow for changes.
D. Test systems at production scale.
E. Refine operational procedures infrequently.
Answer: B, D
Explanation:
The correct answers are B and D because making data-driven decisions to determine cloud
architectural design and testing systems at production scale are AWS Cloud design principles.
Making data-driven decisions to determine cloud architectural design means that users should collect
and analyze data from their AWS resources and applications to optimize their performance,
availability, security, and cost. Testing systems at production scale means that users should simulate
real-world scenarios and load conditions to validate the functionality, reliability, and scalability of
their systems.
The other options are incorrect because they are not AWS Cloud design principles. Paying for
compute resources in advance means that users have to invest heavily in data centers and servers
before they know how they will use them. This is not a cloud design principle, but rather a traditional
IT model. Emphasizing manual processes to allow for changes means that users have to rely on
human intervention and coordination to perform operational tasks and updates. This is not a cloud
design principle, but rather a source of inefficiency and error. Refining operational procedures
infrequently means that users have to stick to the same methods and practices without adapting to
the changing needs and feedback. This is not a cloud design principle, but rather a hindrance to
innovation and improvement.
Reference: AWS Well-Architected Framework
61.A retail company is migrating its IT infrastructure applications from on premises to the AWS Cloud.
Which costs will the company eliminate with this migration? (Select TWO.)
A. Cost of data center operations
B. Cost of application licensing
C. Cost of marketing campaigns
D. Cost of physical server hardware
E. Cost of network management
Answer: A, D
Explanation:
The costs that the company will eliminate with this migration are the cost of data center operations
and the cost of physical server hardware. The cost of data center operations includes the expenses of
running on-premises facilities, such as power, cooling, and physical security. The cost of physical
server hardware is the expense that the company has to incur to purchase, maintain, and upgrade the
servers and related equipment. By migrating to the AWS Cloud, the company shifts these
responsibilities to AWS, which operates the data centers and owns the underlying hardware.
Application licensing and network management costs can still apply in the cloud, and marketing costs
are unrelated to the migration. For more information, see [Cloud Economics] and [AWS Total Cost of
Ownership (TCO) Calculator].
63.Which AWS service or tool provides recommendations to help users get rightsized Amazon EC2
instances based on historical workload usage data?
A. AWS Pricing Calculator
B. AWS Compute Optimizer
C. AWS App Runner
D. AWS Systems Manager
Answer: B
Explanation:
AWS Compute Optimizer is the AWS service or tool that provides recommendations to help users get
rightsized Amazon EC2 instances based on historical workload usage data. AWS Compute Optimizer
analyzes the configuration and performance characteristics of the EC2 instances and delivers
recommendations for optimal instance types, sizes, and configurations. AWS Compute Optimizer
helps users improve performance, reduce costs, and eliminate underutilized resources.
64.Which task must a user perform by using the AWS account root user credentials?
A. Make changes to AWS production resources.
B. Change AWS Support plans.
C. Access AWS Cost and Usage Reports.
D. Grant auditors access to an AWS account for a compliance audit.
Answer: B
Explanation:
Changing AWS Support plans is a task that must be performed by using the AWS account root user
credentials. The root user is the identity created when you sign up for AWS, accessed by signing in
with the account's email address. It has complete access to all AWS services and resources in the
account. You should use the root user only to
perform a few account and service management tasks, such as changing AWS Support plans, closing
the account, or changing the account name or email address. Making changes to AWS production
resources, accessing AWS Cost and Usage Reports, and granting auditors access to an AWS
account for a compliance audit are tasks that can be performed by using IAM users or roles, which
are entities that you create in AWS to delegate permissions to access AWS services and resources.
66.Which AWS service or feature offers security for a VPC by acting as a firewall to control traffic in
and out of subnets?
A. AWS Security Hub
B. Security groups
C. Network ACL
D. AWS WAF
Answer: C
Explanation:
A network access control list (network ACL) is a feature that acts as a firewall for controlling traffic in
and out of one or more subnets in a virtual private cloud (VPC). AWS Security Hub is a service that
provides a comprehensive view of the security posture of AWS accounts and resources. Security
groups are features that act as firewalls for controlling traffic at the instance level. AWS WAF is a web
application firewall that helps protect web applications from common web exploits.
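The stateless, ordered evaluation of network ACL rules can be sketched in a few lines of Python. This is an illustration of the documented behavior (rules are evaluated in ascending rule-number order, the first match applies, and an implicit final "*" rule denies unmatched traffic), not AWS code; the rule numbers and ports below are assumptions:

```python
# Sketch (not AWS code): how a stateless network ACL evaluates traffic.
# Rules are checked in ascending rule-number order; the first rule whose
# port range matches decides the outcome, and the implicit final "*" rule
# denies anything that matched no earlier rule.

def evaluate_acl(rules, port):
    """Return 'ALLOW' or 'DENY' for a packet on the given port."""
    for rule_number, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action  # first match wins
    return "DENY"  # the implicit "*" rule: deny unmatched traffic

# Illustrative rule set: allow HTTPS, explicitly deny SSH.
acl = [
    (100, (443, 443), "ALLOW"),
    (200, (22, 22), "DENY"),
]

print(evaluate_acl(acl, 443))   # ALLOW  (matches rule 100)
print(evaluate_acl(acl, 22))    # DENY   (matches rule 200)
print(evaluate_acl(acl, 3306))  # DENY   (no rule matched)
```

A security group, by contrast, has no rule ordering and only allow rules; because it is stateful, responses to allowed traffic are permitted automatically.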
67.Which tasks are customer responsibilities according to the AWS shared responsibility model?
(Select TWO.)
A. Determine application dependencies with operating systems.
B. Provide user access with AWS Identity and Access Management (IAM).
C. Secure the data center in an Availability Zone.
D. Patch the hypervisor.
E. Provide network availability in Availability Zones.
Answer: A, B
Explanation:
The correct answers are A and B because determining application dependencies with operating
systems and providing user access with AWS Identity and Access Management (IAM) are customer
responsibilities according to the AWS shared responsibility model. The AWS shared responsibility
model is a framework that defines the division of responsibilities between AWS and the customer for
security and compliance. AWS is responsible for the security of the cloud, which includes the global
infrastructure, such as the Regions, Availability Zones, and edge locations; the hardware, software,
networking, and facilities that run the AWS services; and the virtualization layer that separates
customer instances and storage. Securing the data center in an Availability Zone, patching the
hypervisor, and providing network availability in Availability Zones therefore fall to AWS. The
customer is responsible for security in the cloud, which includes the customer data, the guest
operating systems and their application dependencies, the applications, identity and access
management, firewall configuration, and encryption. IAM is an AWS service that enables customers
to manage access and permissions to AWS resources and services. Customers are responsible for
creating and managing IAM users, groups, roles, and policies, and for ensuring that they follow the
principle of least privilege.
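As an illustration of least privilege, the identity-based policy below grants only read access to objects in a single bucket; the bucket name is a hypothetical placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-audit-bucket/*"
    }
  ]
}
```

Attaching a policy like this to an IAM user or role lets someone read those objects without gaining any other permission in the account.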
Reference: AWS Shared Responsibility Model
68.Which AWS service or tool helps companies measure the environmental impact of their AWS
usage?
A. AWS customer carbon footprint tool
B. AWS Compute Optimizer
C. Sustainability pillar
D. OS-Climate (Open Source Climate Data Commons)
Answer: A
Explanation:
AWS customer carbon footprint tool is an AWS service or tool that helps companies measure the
environmental impact of their AWS usage. It allows users to estimate the carbon emissions
associated with their AWS resources and services, such as EC2, S3, and Lambda. It also provides
recommendations and best practices to reduce the carbon footprint and improve the sustainability of
their AWS workloads. AWS Compute Optimizer is an AWS service that helps users optimize the
performance and cost of their EC2 instances and Auto Scaling groups. It provides recommendations
for optimal instance types, sizes, and configurations based on the workload characteristics and
utilization metrics. It does not help users measure the environmental impact of their AWS usage.
Sustainability pillar is a concept that refers to the ability of a system to operate in an environmentally
friendly and socially responsible manner. It is not an AWS service or tool that helps users measure
the environmental impact of their AWS usage. OS-Climate (Open Source Climate Data Commons) is
an initiative that aims to provide open source data, tools, and platforms to accelerate climate action
and innovation. It is not an AWS service or tool that helps users measure the environmental impact of
their AWS usage.
69.A company has created an AWS Cost and Usage Report and wants to visualize the report.
Which AWS service should the company use to ingest and display this information?
A. Amazon QuickSight
B. Amazon Pinpoint
C. Amazon Neptune
D. Amazon Kinesis
Answer: A
Explanation:
Amazon QuickSight is an AWS service that provides business intelligence and data visualization
capabilities. Amazon QuickSight enables you to ingest, analyze, and display data from various
sources, such as AWS Cost and Usage Reports, Amazon S3, Amazon Athena, Amazon Redshift,
and Amazon RDS. You can use Amazon QuickSight to create interactive dashboards and charts that
show insights and trends from your data. You can also share your dashboards and charts with other
users or embed them into your applications.
70.Which AWS service is a key-value database that provides sub-millisecond latency on a large
scale?
A. Amazon DynamoDB
B. Amazon Aurora
C. Amazon DocumentDB (with MongoDB compatibility)
D. Amazon Neptune
Answer: A
Explanation:
The correct answer is A because Amazon DynamoDB is a key-value database that provides sub-
millisecond latency on a large scale. Amazon DynamoDB is a fully managed, serverless, and scalable
NoSQL database service that supports both key-value and document data models. The other options
are incorrect because they are not key-value databases. Amazon Aurora is a relational database that
is compatible with MySQL and PostgreSQL. Amazon DocumentDB (with MongoDB compatibility) is a
document database that is compatible with MongoDB. Amazon Neptune is a graph database that
supports property graph and RDF models.
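A DynamoDB key-value table is defined by its key schema rather than a full relational schema. The sketch below shows the JSON shape accepted by the DynamoDB CreateTable API with on-demand capacity; the table and attribute names are hypothetical:

```json
{
  "TableName": "Orders",
  "AttributeDefinitions": [
    {"AttributeName": "OrderId", "AttributeType": "S"}
  ],
  "KeySchema": [
    {"AttributeName": "OrderId", "KeyType": "HASH"}
  ],
  "BillingMode": "PAY_PER_REQUEST"
}
```

Items are then written and read by that single partition key, which is what enables consistent low-latency lookups at scale.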
Reference: Amazon DynamoDB FAQs
72.A company runs a MySQL database in its on-premises data center. The company wants to run a
copy of this database in the AWS Cloud.
Which AWS service would support this workload?
A. Amazon RDS
B. Amazon Neptune
C. Amazon ElastiCache for Redis
D. Amazon Quantum Ledger Database (Amazon QLDB)
Answer: A
Explanation:
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up,
operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity,
while automating time-consuming administration tasks such as hardware provisioning, database
setup, patching, and backups. Amazon RDS supports six popular database engines: Amazon Aurora,
PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. Amazon RDS can support
running a copy of a MySQL database in the AWS Cloud, as it offers compatibility, scalability, and
availability features.
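As a sketch of what launching that copy looks like, the parameters below are in the shape accepted by the RDS CreateDBInstance API; the identifier, instance class, and credentials are hypothetical placeholders:

```json
{
  "DBInstanceIdentifier": "example-mysql-copy",
  "Engine": "mysql",
  "DBInstanceClass": "db.t3.micro",
  "AllocatedStorage": 20,
  "MasterUsername": "admin",
  "MasterUserPassword": "REPLACE_WITH_A_SECRET",
  "MultiAZ": true
}
```

The on-premises data itself could then be loaded with standard MySQL tooling such as mysqldump, or replicated with AWS Database Migration Service.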