TEST4

The document contains a quiz for the Solutions Architect - Associate SAA-C03 certification, featuring multiple-choice questions focused on AWS services and best practices for application deployment and data management. Each question includes a scenario and options for solutions, with feedback indicating the correct answers. The quiz assesses knowledge on various AWS services such as DynamoDB, EC2, Lambda, S3, and API Gateway, among others.

Solutions Architect - Associate SAA-C03 - Quiz Form (Part 4)


Total points 43/65


Name *

Gokul Upadhyay Guragain

Email *

gokulupadhayaya19@gmail.com
196. (1/1) A company runs an application on a large fleet of Amazon EC2 instances. The application reads and writes entries into an Amazon DynamoDB table. The size of the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a solution that minimizes cost and development effort. Which solution meets these requirements?

Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the original stack.

Use an EC2 instance that runs a monitoring application from AWS Marketplace.
Configure the monitoring application to use Amazon DynamoDB Streams to store
the timestamp when a new item is created in the table. Use a script that runs on the
EC2 instance to delete items that have a timestamp that is older than 30 days.

Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda function to delete items in the table that are older than 30 days.

Extend the application to add an attribute that has a value of the current
timestamp plus 30 days to each new item that is created in the table. Configure
DynamoDB to use the attribute as the TTL attribute.
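For reference, a minimal boto3 sketch of the TTL approach; the table name, key, and attribute name are placeholders:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# One-time setup: tell DynamoDB which attribute holds the expiry epoch time.
dynamodb.update_time_to_live(
    TableName="app-entries",  # placeholder table name
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# In the application: stamp each new item with now + 30 days, in epoch seconds.
dynamodb.put_item(
    TableName="app-entries",
    Item={
        "pk": {"S": "user#123"},
        "payload": {"S": "example entry"},
        "expires_at": {"N": str(int(time.time()) + 30 * 24 * 60 * 60)},
    },
)
```

DynamoDB then deletes expired items in the background at no extra cost, which is what keeps both cost and development effort low.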
197. (1/1) A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the application. The AWS application environment should be highly available. Which combination of actions should the company take to meet these requirements? (Choose two.)

Refactor the application as serverless with AWS Lambda functions running .NET
Core.

Rehost the application in AWS Elastic Beanstalk with the .NET platform in a
Multi-AZ deployment.

Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon
Machine Image (AMI).

Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle
database to Amazon DynamoDB in a Multi-AZ deployment.

Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle
database to Oracle on Amazon RDS in a Multi-AZ deployment.

Feedback

Correct! The correct answers are: Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment, and use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.
198. (1/1) A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are possible at this time. The company needs a solution that minimizes operational overhead. Which solution meets these requirements?

Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker
nodes for compute and MongoDB on EC2 for data storage.

Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage.

Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker
nodes for compute and Amazon DynamoDB for data storage.

Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for
compute and Amazon DocumentDB (with MongoDB compatibility) for data
storage.

Feedback

Correct! The correct answer is: Use Amazon Elastic Kubernetes Service (Amazon EKS)
with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility)
for data storage.
199. (1/1) A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files must be stored for 7 years for auditing purposes. Which solution will meet these requirements?

Use Amazon Rekognition for multiple speaker recognition. Store the transcript files
in Amazon S3. Use machine learning models for transcript file analysis.

Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena
for transcript file analysis.

Use Amazon Translate for multiple speaker recognition. Store the transcript files in
Amazon Redshift. Use SQL queries for transcript file analysis.

Use Amazon Rekognition for multiple speaker recognition. Store the transcript files
in Amazon S3. Use Amazon Textract for transcript file analysis.

Feedback

Correct! The correct answer is: Use Amazon Transcribe for multiple speaker recognition.
Use Amazon Athena for transcript file analysis.
200. (1/1) A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company wants an AWS managed solution that will control access to the REST API to reduce development efforts. Which solution will meet these requirements with the LEAST operational overhead?

Configure an AWS Lambda function to be an authorizer in API Gateway to validate which user made the request.

For each user, create and assign an API key that must be sent with each request.
Validate the key by using an AWS Lambda function.

Send the user’s email address in the header with every request. Invoke an AWS
Lambda function to validate that the user with that email address has proper
access.

Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.

Feedback

Correct! The correct answer is: Configure an Amazon Cognito user pool authorizer in API
Gateway to allow Amazon Cognito to validate each request.
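A rough boto3 sketch of wiring the user pool authorizer to a REST API; the API ID, resource ID, and user pool ARN are placeholders:

```python
import boto3

apigateway = boto3.client("apigateway")

# Create a Cognito user pool authorizer on an existing REST API.
authorizer = apigateway.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="cognito-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[
        "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"
    ],
    identitySource="method.request.header.Authorization",
)

# Each protected method then references the authorizer, so Cognito
# validates the caller's token before the backend is invoked.
apigateway.put_method(
    restApiId="a1b2c3d4e5",
    resourceId="res123",
    httpMethod="GET",
    authorizationType="COGNITO_USER_POOLS",
    authorizerId=authorizer["id"],
)
```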
201. (1/1) A company is developing a marketing communications service that targets mobile app users. The company needs to send confirmation messages with Short Message Service (SMS) to its users. The users must be able to reply to the SMS messages. The company must store the responses for a year for analysis. What should a solutions architect do to meet these requirements?

Create an Amazon Connect contact flow to send the SMS messages. Use AWS
Lambda to process the responses.

Build an Amazon Pinpoint journey. Configure Amazon Pinpoint to send events to an Amazon Kinesis data stream for analysis and archiving.

Use Amazon Simple Queue Service (Amazon SQS) to distribute the SMS messages.
Use AWS Lambda to process the responses.

Create an Amazon Simple Notification Service (Amazon SNS) FIFO topic. Subscribe
an Amazon Kinesis data stream to the SNS topic for analysis and archiving.

Feedback

Correct! The correct answer is: Build an Amazon Pinpoint journey. Configure Amazon
Pinpoint to send events to an Amazon Kinesis data stream for analysis and archiving.
202. (0/1) A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is stored in the S3 bucket. Additionally, the encryption key must be automatically rotated every year. Which solution will meet these requirements with the LEAST operational overhead?

Move the data to the S3 bucket. Use server-side encryption with Amazon S3
managed encryption keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3
encryption keys.

Create an AWS Key Management Service (AWS KMS) customer managed key.
Enable automatic key rotation. Set the S3 bucket’s default encryption behavior
to use the customer managed KMS key. Move the data to the S3 bucket.

Create an AWS Key Management Service (AWS KMS) customer managed key. Set
the S3 bucket’s default encryption behavior to use the customer managed KMS key.
Move the data to the S3 bucket. Manually rotate the KMS key every year.

Encrypt the data with customer key material before moving the data to the S3
bucket. Create an AWS Key Management Service (AWS KMS) key without key
material. Import the customer key material into the KMS key. Enable automatic key
rotation.

Correct answer

Move the data to the S3 bucket. Use server-side encryption with Amazon S3
managed encryption keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3
encryption keys.

Feedback

Incorrect! The correct answer is: Move the data to the S3 bucket. Use server-side
encryption with Amazon S3 managed encryption keys (SSE-S3). Use the built-in key
rotation behavior of SSE-S3 encryption keys.
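A minimal boto3 sketch of the SSE-S3 setup, assuming a placeholder bucket name; S3 rotates these keys automatically, so no further configuration is needed:

```python
import boto3

s3 = boto3.client("s3")

# Default encryption with S3 managed keys (SSE-S3). Key rotation is
# handled entirely by AWS, which is why this is the lowest-overhead option.
s3.put_bucket_encryption(
    Bucket="my-data-bucket",  # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```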
203. (1/1) The customers of a finance company request appointments with financial advisors by sending text messages. A web application that runs on Amazon EC2 instances accepts the appointment requests. The text messages are published to an Amazon Simple Queue Service (Amazon SQS) queue through the web application. Another application that runs on EC2 instances then sends meeting invitations and meeting confirmation email messages to the customers. After successful scheduling, this application stores the meeting information in an Amazon DynamoDB database. As the company expands, customers report that their meeting invitations are taking longer to arrive. What should a solutions architect recommend to resolve this issue?

Add a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database.

Add an Amazon API Gateway API in front of the web application that accepts the
appointment requests.

Add an Amazon CloudFront distribution. Set the origin as the web application that
accepts the appointment requests.

Add an Auto Scaling group for the application that sends meeting invitations.
Configure the Auto Scaling group to scale based on the depth of the SQS queue.

Feedback

Correct! The correct answer is: Add an Auto Scaling group for the application that sends
meeting invitations. Configure the Auto Scaling group to scale based on the depth of the
SQS queue.
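One possible boto3 sketch of scaling on queue depth: a step scaling policy on the sender group, triggered by a CloudWatch alarm on the queue's visible-message count. The group name, queue name, and threshold are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Step scaling policy that adds instances to the invitation-sender group.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="invitation-sender-asg",  # placeholder
    PolicyName="scale-out-on-queue-depth",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2}],
)

# Alarm on the SQS queue depth that triggers the policy.
cloudwatch.put_metric_alarm(
    AlarmName="meeting-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "appointment-requests"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```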
204. (0/1) An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS. The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead. Which solution will meet these requirements?

Migrate the purchase data to write directly to Amazon RDS. Use RDS access
controls to limit access.

Schedule an AWS Lambda function to periodically copy data from Amazon RDS
to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the
data. Use S3 policies to limit access.

Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC
connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake
Formation access controls to limit access.

Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access.

Correct answer

Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC
connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake
Formation access controls to limit access.

Feedback

Incorrect! The correct answer is: Create a data lake by using AWS Lake Formation. Create
an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation.
Use Lake Formation access controls to limit access.
205. (0/1) A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An administrator updates the website content infrequently and uses an SFTP client to upload new documents. The company decides to host its website on AWS and to use Amazon CloudFront. The company’s solutions architect creates a CloudFront distribution. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin. Which solution will meet these requirements?

Create a virtual server by using Amazon Lightsail. Configure the web server in the
Lightsail instance. Upload website content by using an SFTP client.

Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application
Load Balancer. Upload website content by using an SFTP client.

Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a
CloudFront origin access identity (OAI). Upload website content by using the AWS
CLI.

Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure
the S3 bucket for website hosting. Upload website content by using the SFTP
client.

Correct answer

Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a
CloudFront origin access identity (OAI). Upload website content by using the AWS
CLI.

Feedback

Incorrect! The correct answer is: Create a private Amazon S3 bucket. Use an S3 bucket
policy to allow access from a CloudFront origin access identity (OAI). Upload website
content by using the AWS CLI.
206. (0/1) A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were created. The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API operation is called within the company’s account. Which solution will meet these requirements with the LEAST operational overhead?

Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert
when a CreateImage API call is detected.

Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.

Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.

Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.

Correct answer

Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.

Feedback

Incorrect! The correct answer is: Create an Amazon EventBridge (Amazon CloudWatch
Events) rule for the CreateImage API call. Configure the target as an Amazon Simple
Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is
detected.
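A minimal boto3 sketch of the EventBridge rule; it assumes a CloudTrail trail is already recording management events in the Region, and the topic ARN is a placeholder:

```python
import json
import boto3

events = boto3.client("events")

# Rule that matches CreateImage calls recorded by CloudTrail.
events.put_rule(
    Name="alert-on-create-image",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["CreateImage"],
        },
    }),
)

# Send matching events to an SNS topic that the team subscribes to.
events.put_targets(
    Rule="alert-on-create-image",
    Targets=[{
        "Id": "sns-alert",
        "Arn": "arn:aws:sns:us-east-1:123456789012:ami-alerts",
    }],
)
```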
207. (1/1) A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate microservice for processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes Amazon DynamoDB to store user requests before dispatching them to the processing microservices. The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is losing user requests. What should a solutions architect do to address this issue without impacting existing users?

Add throttling on the API Gateway with server-side throttling limits.

Use DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB.

Create a secondary index in DynamoDB for the table with the user requests.

Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to
buffer writes to DynamoDB.

Feedback

Correct! The correct answer is: Use the Amazon Simple Queue Service (Amazon SQS)
queue and Lambda to buffer writes to DynamoDB.
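A sketch of the Lambda consumer, assuming each SQS message body is a JSON-serialized table item (table name is a placeholder); the SQS event source mapping controls batch size and concurrency, which is what smooths the write rate into DynamoDB:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user-requests")  # placeholder table name

def handler(event, context):
    # Invoked by the SQS event source mapping with a batch of messages;
    # the queue absorbs bursts so DynamoDB sees a steadier write rate.
    with table.batch_writer() as batch:
        for record in event["Records"]:
            batch.put_item(Item=json.loads(record["body"]))
```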
208. (0/1) A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data are routed through public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket. Which solution will meet these requirements?

Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2
instance is located. Attach a resource policy to the S3 bucket to only allow the EC2
instance’s IAM role for access.

Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the
EC2 instance is located. Attach appropriate security groups to the endpoint. Attach
a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for
access.

Run the nslookup tool from inside the EC2 instance to obtain the private IP address
of the S3 bucket’s service API endpoint. Create a route in the VPC route table to
provide the EC2 instance with access to the S3 bucket. Attach a resource policy to
the S3 bucket to only allow the EC2 instance’s IAM role for access.

Use the AWS provided, publicly available ip-ranges.json file to obtain the private
IP address of the S3 bucket’s service API endpoint. Create a route in the VPC
route table to provide the EC2 instance with access to the S3 bucket. Attach a
resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for
access.

Correct answer

Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2
instance is located. Attach a resource policy to the S3 bucket to only allow the EC2
instance’s IAM role for access.

Feedback

Incorrect! The correct answer is: Create an interface VPC endpoint for Amazon S3 in the
subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket to
only allow the EC2 instance’s IAM role for access.
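A hedged boto3 sketch of the two pieces: an interface endpoint for S3 and a bucket policy that denies uploads from any principal other than the instance's role. All IDs and ARNs are placeholders:

```python
import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Interface endpoint so S3 API calls stay on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

# Bucket policy: deny uploads from any principal except the instance role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::upload-bucket/*",
        "Condition": {
            "StringNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::123456789012:role/ec2-upload-role"
            }
        },
    }],
}
s3.put_bucket_policy(Bucket="upload-bucket", Policy=json.dumps(policy))
```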
209. (0/1) A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2 On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session data management. The company is willing to make changes to code if needed. What should the solutions architect do to ensure that the architecture supports distributed session data management?

Use Amazon ElastiCache to manage and store session data.

Use session affinity (sticky sessions) of the ALB to manage session data.

Use Session Manager from AWS Systems Manager to manage the session.

Use the GetSessionToken API operation in AWS Security Token Service (AWS STS)
to manage the session.

Correct answer

Use Amazon ElastiCache to manage and store session data.

Feedback

Incorrect! The correct answer is: Use Amazon ElastiCache to manage and store session
data.
210. (1/1) A company offers a food delivery service that is growing rapidly. Because of the growth, the company’s order processing system is experiencing scaling problems during peak traffic hours. The current architecture includes a group of Amazon EC2 instances that run in an Amazon EC2 Auto Scaling group to collect orders from the application, and another group of EC2 instances that run in an Amazon EC2 Auto Scaling group to fulfill orders. The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event. A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic hours. The solution must optimize utilization of the company’s AWS resources. Which solution meets these requirements?

Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto
Scaling groups. Configure each Auto Scaling group’s minimum capacity according
to peak workload values.

Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto
Scaling groups. Configure a CloudWatch alarm to invoke an Amazon Simple
Notification Service (Amazon SNS) topic that creates additional Auto Scaling
groups on demand.

Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order
collection and another for order fulfillment. Configure the EC2 instances to poll their
respective queue. Scale the Auto Scaling groups based on notifications that the
queues send.

Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for
order collection and another for order fulfillment. Configure the EC2 instances
to poll their respective queue. Create a metric based on a backlog per instance
calculation. Scale the Auto Scaling groups based on this metric.

Feedback

Correct! The correct answer is: Provision two Amazon Simple Queue Service (Amazon
SQS) queues: one for order collection and another for order fulfillment. Configure the EC2
instances to poll their respective queue. Create a metric based on a backlog per instance
calculation. Scale the Auto Scaling groups based on this metric.
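A sketch of publishing the backlog-per-instance metric (queue URL, group name, and namespace are placeholders); a target tracking policy on this custom metric can then hold the backlog near a chosen value:

```python
import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

def publish_backlog_per_instance(queue_url, asg_name):
    # Backlog per instance = visible messages / running instances.
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["ApproximateNumberOfMessages"]
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

    asg = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name]
    )
    instances = max(len(asg["AutoScalingGroups"][0]["Instances"]), 1)

    cloudwatch.put_metric_data(
        Namespace="OrderProcessing",  # placeholder namespace
        MetricData=[{
            "MetricName": "BacklogPerInstance",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
            "Value": backlog / instances,
        }],
    )
```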
211. (0/1) A company hosts multiple production applications. One of the applications consists of resources from Amazon EC2, AWS Lambda, Amazon RDS, Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS) across multiple AWS Regions. All company resources are tagged with a tag name of “application” and a value that corresponds to each application. A solutions architect must provide the quickest solution for identifying all of the tagged components. Which solution meets these requirements?

Use AWS CloudTrail to generate a list of resources with the application tag.

Use the AWS CLI to query each service across all Regions to report the tagged
components.

Run a query in Amazon CloudWatch Logs Insights to report on the components with the application tag.

Run a query with the AWS Resource Groups Tag Editor to report on the resources
globally with the application tag.

Correct answer

Run a query with the AWS Resource Groups Tag Editor to report on the resources
globally with the application tag.

Feedback

Incorrect! The correct answer is: Run a query with the AWS Resource Groups Tag Editor to
report on the resources globally with the application tag.
212. (0/1) A company needs to export its database once a day to Amazon S3 for other teams to access. The exported object size varies between 2 GB and 5 GB. The S3 access pattern for the data is variable and changes rapidly. The data must be immediately available and must remain accessible for up to 3 months. The company needs the most cost-effective solution that will not increase retrieval time. Which S3 storage class should the company use to meet these requirements?

S3 Intelligent-Tiering

S3 Glacier Instant Retrieval

S3 Standard

S3 Standard-Infrequent Access (S3 Standard-IA)

Correct answer

S3 Intelligent-Tiering

Feedback

Incorrect! The correct answer is: S3 Intelligent-Tiering
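For reference, uploading the daily export straight into S3 Intelligent-Tiering is a one-line change; the local path, bucket, and key below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Objects in Intelligent-Tiering move between access tiers automatically
# based on access patterns, with no retrieval delay or retrieval fee.
s3.upload_file(
    Filename="/tmp/export.dump",           # placeholder local path
    Bucket="db-exports",                   # placeholder bucket
    Key="exports/2024-01-01/export.dump",  # placeholder key
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)
```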


213. (1/1) A company is developing a new mobile app. The company must implement proper traffic filtering to protect its Application Load Balancer (ALB) against common application-level attacks, such as cross-site scripting or SQL injection. The company has minimal infrastructure and operational staff. The company needs to reduce its share of the responsibility in managing, updating, and securing servers for its AWS environment. What should a solutions architect recommend to meet these requirements?

Configure AWS WAF rules and associate them with the ALB.

Deploy the application using Amazon S3 with public hosting enabled.

Deploy AWS Shield Advanced and add the ALB as a protected resource.

Create a new ALB that directs traffic to an Amazon EC2 instance running a third-
party firewall, which then passes the traffic to the current ALB.

Feedback

Correct! The correct answer is: Configure AWS WAF rules and associate them with the
ALB.
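A hedged boto3 sketch: a regional web ACL that applies an AWS managed rule group and is associated with the ALB. The ACL name, metric names, and ALB ARN are placeholders, and the managed rule group shown is one common choice for this kind of coverage:

```python
import boto3

wafv2 = boto3.client("wafv2")

# Web ACL with an AWS managed rule group; AWS maintains the rules,
# which removes the patching burden from the company.
acl = wafv2.create_web_acl(
    Name="alb-protection",
    Scope="REGIONAL",  # ALBs use the REGIONAL scope
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "alb-protection",
    },
    Rules=[{
        "Name": "common-rules",
        "Priority": 0,
        "OverrideAction": {"None": {}},
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesCommonRuleSet",
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "common-rules",
        },
    }],
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-alb/abc123"  # placeholder ALB ARN
    ),
)
```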
214. (1/1) A company’s reporting system delivers hundreds of .csv files to an Amazon S3 bucket each day. The company must convert these files to Apache Parquet format and must store the files in a transformed data bucket. Which solution will meet these requirements with the LEAST development effort?

Create an Amazon EMR cluster with Apache Spark installed. Write a Spark
application to transform the data. Use EMR File System (EMRFS) to write files to
the transformed data bucket.

Create an AWS Glue crawler to discover the data. Create an AWS Glue extract,
transform, and load (ETL) job to transform the data. Specify the transformed
data bucket in the output step.

Use AWS Batch to create a job definition with Bash syntax to transform the data
and output the data to the transformed data bucket. Use the job definition to submit
a job. Specify an array job as the job type.

Create an AWS Lambda function to transform the data and output the data to the
transformed data bucket. Configure an event notification for the S3 bucket. Specify
the Lambda function as the destination for the event notification.

Feedback

Correct! The correct answer is: Create an AWS Glue crawler to discover the data. Create an
AWS Glue extract, transform, and load (ETL) job to transform the data. Specify the
transformed data bucket in the output step.
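A minimal boto3 sketch of the crawler side (role ARN, database, and paths are placeholders); the Glue ETL job itself can be authored visually in Glue Studio with Parquet selected as the output format and the transformed bucket as the target:

```python
import boto3

glue = boto3.client("glue")

# Crawler that catalogs the incoming .csv files so the ETL job can read
# them through the Data Catalog.
glue.create_crawler(
    Name="csv-reports-crawler",
    Role="arn:aws:iam::123456789012:role/glue-crawler-role",
    DatabaseName="reports",
    Targets={"S3Targets": [{"Path": "s3://raw-reports-bucket/daily/"}]},
)
glue.start_crawler(Name="csv-reports-crawler")
```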
215. (0/1) A company has 700 TB of backup data stored in network attached storage (NAS) in its data center. This backup data needs to be accessible for infrequent regulatory requests and must be retained for 7 years. The company has decided to migrate this backup data from its data center to AWS. The migration must be complete within 1 month. The company has 500 Mbps of dedicated bandwidth on its public internet connection available for data transfer. What should a solutions architect do to migrate and store the data at the LOWEST cost?

Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition
the files to Amazon S3 Glacier Deep Archive.

Deploy a VPN connection between the data center and Amazon VPC. Use the AWS
CLI to copy the data from on premises to Amazon S3 Glacier.

Provision a 500 Mbps AWS Direct Connect connection and transfer the data to
Amazon S3. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep
Archive.

Use AWS DataSync to transfer the data and deploy a DataSync agent on
premises. Use the DataSync task to copy files from the on-premises NAS
storage to Amazon S3 Glacier.

Correct answer

Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition
the files to Amazon S3 Glacier Deep Archive.

Feedback

Incorrect! The correct answer is: Order AWS Snowball devices to transfer the data. Use a
lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
216. (0/1) A company has a serverless website with millions of objects in an Amazon S3 bucket. The company uses the S3 bucket as the origin for an Amazon CloudFront distribution. The company did not set encryption on the S3 bucket before the objects were loaded. A solutions architect needs to enable encryption for all existing objects and for all objects that are added to the S3 bucket in the future. Which solution will meet these requirements with the LEAST amount of effort?

Create a new S3 bucket. Turn on the default encryption settings for the new S3
bucket. Download all existing objects to temporary local storage. Upload the
objects to the new S3 bucket.

Turn on the default encryption settings for the S3 bucket. Use the S3 Inventory
feature to create a .csv file that lists the unencrypted objects. Run an S3 Batch
Operations job that uses the copy command to encrypt those objects.

Create a new encryption key by using AWS Key Management Service (AWS
KMS). Change the settings on the S3 bucket to use server-side encryption with
AWS KMS managed encryption keys (SSE-KMS). Turn on versioning for the S3
bucket.

Navigate to Amazon S3 in the AWS Management Console. Browse the S3 bucket’s objects. Sort by the encryption field. Select each unencrypted object. Use the Modify button to apply default encryption settings to every unencrypted object in the S3 bucket.

Correct answer

Turn on the default encryption settings for the S3 bucket. Use the S3 Inventory
feature to create a .csv file that lists the unencrypted objects. Run an S3 Batch
Operations job that uses the copy command to encrypt those objects.

Feedback

Incorrect! The correct answer is: Turn on the default encryption settings for the S3 bucket.
Use the S3 Inventory feature to create a .csv file that lists the unencrypted objects. Run an
S3 Batch Operations job that uses the copy command to encrypt those objects.
217. (0/1) A company runs a global web application on Amazon EC2 instances behind an Application Load Balancer. The application stores data in Amazon Aurora. The company needs to create a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss. The solution does not need to handle the load when the primary infrastructure is healthy. What should a solutions architect do to meet these requirements?

Deploy the application with the required infrastructure elements in place. Use
Amazon Route 53 to configure active-passive failover. Create an Aurora Replica in a
second AWS Region.

Host a scaled-down deployment of the application in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora Replica in the second Region.

Replicate the primary infrastructure in a second AWS Region. Use Amazon Route
53 to configure active-active failover. Create an Aurora database that is restored
from the latest snapshot.

Back up data with AWS Backup. Use the backup to create the required
infrastructure in a second AWS Region. Use Amazon Route 53 to configure
active-passive failover. Create an Aurora second primary instance in the second
Region.

Correct answer

Deploy the application with the required infrastructure elements in place. Use
Amazon Route 53 to configure active-passive failover. Create an Aurora Replica in a
second AWS Region.

Feedback

Incorrect! The correct answer is: Deploy the application with the required infrastructure
elements in place. Use Amazon Route 53 to configure active-passive failover. Create an
Aurora Replica in a second AWS Region.
218. (0/1) A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is assigned to the EC2 instance. The default network ACL has been modified to block all traffic. A solutions architect needs to make the web server accessible from everywhere on port 443. Which combination of steps will accomplish this task? (Choose two.)

Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.

Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.

Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.

Update the network ACL to allow inbound/outbound TCP port 443 from source
0.0.0.0/0 and to destination 0.0.0.0/0.

Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and
outbound TCP port 32768-65535 to destination 0.0.0.0/0.

Correct answer

Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.

Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and
outbound TCP port 32768-65535 to destination 0.0.0.0/0.

Feedback

Incorrect! The correct answers are: Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0, and update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.
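A boto3 sketch of the security group rule (the group ID is a placeholder). Security groups are stateful, so this one inbound rule suffices for return traffic; the network ACL is stateless, which is why it additionally needs the inbound 443 rule and the outbound ephemeral-port rule from the second correct choice:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow HTTPS from anywhere on the instance's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```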
219. (0/1) A company’s application is having performance issues. The application is stateful and needs to complete in-memory tasks on Amazon EC2 instances. The company used AWS CloudFormation to deploy infrastructure and used the M5 EC2 instance family. As traffic increased, the application performance degraded. Users are reporting delays when the users attempt to access the application. Which solution will resolve these issues in the MOST operationally efficient way?

Replace the EC2 instances with T3 EC2 instances that run in an Auto Scaling group.
Make the changes by using the AWS Management Console.

Modify the CloudFormation templates to run the EC2 instances in an Auto Scaling group. Increase the desired capacity and the maximum capacity of the Auto Scaling group manually when an increase is necessary.

Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2
instances. Use Amazon CloudWatch built-in EC2 memory metrics to track the
application performance for future capacity planning.

Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2
instances. Deploy the Amazon CloudWatch agent on the EC2 instances to generate
custom application latency metrics for future capacity planning.

Correct answer

Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2
instances. Deploy the Amazon CloudWatch agent on the EC2 instances to generate
custom application latency metrics for future capacity planning.

Feedback

Incorrect! The correct answer is: Modify the CloudFormation templates. Replace the EC2
instances with R5 EC2 instances. Deploy the Amazon CloudWatch agent on the EC2
instances to generate custom application latency metrics for future capacity planning.
220. (1/1) A solutions architect is designing a new API using Amazon API Gateway that will receive requests from users. The volume of requests is highly variable; several hours can pass without receiving a single request. The data processing will take place asynchronously, but should be completed within a few seconds after a request is made. Which compute service should the solutions architect have the API invoke to deliver the requirements at the lowest cost?

An AWS Glue job

An AWS Lambda function

A containerized service hosted in Amazon Elastic Kubernetes Service (Amazon EKS)

A containerized service hosted in Amazon ECS with Amazon EC2

Feedback

Correct! The correct answer is: An AWS Lambda function


221. (1/1) A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log files for 7 years. The log files will be analyzed by a reporting tool that must be able to access all the files concurrently. Which storage solution meets these requirements MOST cost-effectively?

Amazon Elastic Block Store (Amazon EBS)

Amazon Elastic File System (Amazon EFS)

Amazon EC2 instance store

Amazon S3

Feedback

Correct! The correct answer is: Amazon S3


222. (0/1) A company has hired an external vendor to perform work in the company’s AWS account. The vendor uses an automated tool that is hosted in an AWS account that the vendor owns. The vendor does not have IAM access to the company’s AWS account. How should a solutions architect grant this access to the vendor?

Create an IAM role in the company’s account to delegate access to the vendor’s
IAM role. Attach the appropriate IAM policies to the role for the permissions that
the vendor requires.

Create an IAM user in the company’s account with a password that meets the
password complexity requirements. Attach the appropriate IAM policies to the user
for the permissions that the vendor requires.

Create an IAM group in the company’s account. Add the tool’s IAM user from the
vendor account to the group. Attach the appropriate IAM policies to the group for
the permissions that the vendor requires.

Create a new identity provider by choosing “AWS account” as the provider type
in the IAM console. Supply the vendor’s AWS account ID and user name. Attach
the appropriate IAM policies to the new provider for the permissions that the
vendor requires.

Correct answer

Create an IAM role in the company’s account to delegate access to the vendor’s IAM
role. Attach the appropriate IAM policies to the role for the permissions that the
vendor requires.

Feedback

Incorrect! The correct answer is: Create an IAM role in the company’s account to delegate
access to the vendor’s IAM role. Attach the appropriate IAM policies to the role for the
permissions that the vendor requires.
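A minimal boto3 sketch of the delegation role; the vendor account ID, external ID, and attached policy are placeholders to be scoped to what the vendor actually needs:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets principals in the vendor's account assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # vendor account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "vendor-tool-42"}},
    }],
}

iam.create_role(
    RoleName="vendor-access",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="vendor-access",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # scope as required
)
```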
223. (1/1) A company has deployed a Java Spring Boot application as a pod that runs on Amazon Elastic Kubernetes Service (Amazon EKS) in private subnets. The application needs to write data to an Amazon DynamoDB table. A solutions architect must ensure that the application can interact with the DynamoDB table without exposing traffic to the internet. Which combination of steps should the solutions architect take to accomplish this goal? (Choose two.)

Attach an IAM role that has sufficient privileges to the EKS pod.

Attach an IAM user that has sufficient privileges to the EKS pod.

Allow outbound connectivity to the DynamoDB table through the private subnets’
network ACLs.

Create a VPC endpoint for DynamoDB.

Embed the access keys in the Java Spring Boot code.

Feedback

Correct! The correct answers are: Attach an IAM role that has sufficient privileges to the EKS pod, and create a VPC endpoint for DynamoDB.
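For the endpoint half of the answer, a boto3 sketch of a DynamoDB gateway endpoint (VPC and route table IDs are placeholders); the pod's credentials would come from an IAM role associated with its Kubernetes service account (IRSA):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint keeps DynamoDB traffic on the AWS network, so the
# private subnets never need internet egress for the table writes.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```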
224. (0/1) A company recently migrated its web application to AWS by rehosting the application on Amazon EC2 instances in a single AWS Region. The company wants to redesign its application architecture to be highly available and fault tolerant. Traffic must reach all running EC2 instances randomly. Which combination of steps should the company take to meet these requirements? (Choose two.)

Create an Amazon Route 53 failover routing policy.

Create an Amazon Route 53 weighted routing policy.

Create an Amazon Route 53 multivalue answer routing policy.

Launch three EC2 instances: two instances in one Availability Zone and one
instance in another Availability Zone.

Launch four EC2 instances: two instances in one Availability Zone and two
instances in another Availability Zone.

Correct answer

Create an Amazon Route 53 multivalue answer routing policy.

Launch four EC2 instances: two instances in one Availability Zone and two
instances in another Availability Zone.

Feedback

Incorrect! The correct answers are: Create an Amazon Route 53 multivalue answer routing policy, and launch four EC2 instances: two instances in one Availability Zone and two instances in another Availability Zone.
225. (1/1) A media company collects and analyzes user activity data on premises. The company wants to migrate this capability to AWS. The user activity data store will continue to grow and will be petabytes in size. The company needs to build a highly available data ingestion solution that facilitates on-demand analytics of existing data and new data with SQL. Which solution will meet these requirements with the LEAST operational overhead?

Send activity data to an Amazon Kinesis data stream. Configure the stream to
deliver the data to an Amazon S3 bucket.

Send activity data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Redshift cluster.

Place activity data in an Amazon S3 bucket. Configure Amazon S3 to run an AWS Lambda function on the data as the data arrives in the S3 bucket.

Create an ingestion service on Amazon EC2 instances that are spread across
multiple Availability Zones. Configure the service to forward data to an Amazon
RDS Multi-AZ database.

Feedback

Correct! The correct answer is: Send activity data to an Amazon Kinesis Data Firehose
delivery stream. Configure the stream to deliver the data to an Amazon Redshift cluster.
226. (1/1) A company collects data from thousands of remote devices by using a RESTful web services application that runs on an Amazon EC2 instance. The EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket. The number of remote devices will increase into the millions soon. The company needs a highly scalable solution that minimizes operational overhead. Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

Use AWS Glue to process the raw data in Amazon S3.

Use Amazon Route 53 to route traffic to different EC2 instances.

Add more EC2 instances to accommodate the increasing amount of incoming data.

Send the raw data to Amazon Simple Queue Service (Amazon SQS). Use EC2
instances to process the data.

Use Amazon API Gateway to send the raw data to an Amazon Kinesis data
stream. Configure Amazon Kinesis Data Firehose to use the data stream as a
source to deliver the data to Amazon S3.

Feedback

Correct! The correct answers are: Use AWS Glue to process the raw data in Amazon S3, and use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream with Amazon Kinesis Data Firehose delivering the data to Amazon S3.
227. (1/1) A company needs to retain its AWS CloudTrail logs for 3 years. The company is enforcing CloudTrail across a set of AWS accounts by using AWS Organizations from the parent account. The CloudTrail target S3 bucket is configured with S3 Versioning enabled. An S3 Lifecycle policy is in place to delete current objects after 3 years. After the fourth year of use of the S3 bucket, the S3 bucket metrics show that the number of objects has continued to rise. However, the number of new CloudTrail logs that are delivered to the S3 bucket has remained consistent. Which solution will delete objects that are older than 3 years in the MOST cost-effective manner?

Configure the organization’s centralized CloudTrail trail to expire objects after 3 years.

Configure the S3 Lifecycle policy to delete previous versions as well as current versions.

Create an AWS Lambda function to enumerate and delete objects from Amazon S3
that are older than 3 years.

Configure the parent account as the owner of all objects that are delivered to the S3
bucket.

Feedback

Correct! The correct answer is: Configure the S3 Lifecycle policy to delete previous
versions as well as current versions.
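A boto3 sketch of the expanded lifecycle rule (bucket name is a placeholder). On a versioned bucket, expiring a current object only adds a delete marker, so noncurrent versions must be expired explicitly:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="cloudtrail-logs-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-3-years",
            "Filter": {"Prefix": ""},
            "Status": "Enabled",
            # Expire current versions (creates delete markers)...
            "Expiration": {"Days": 1095},
            # ...and permanently remove the noncurrent versions too.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1095},
        }]
    },
)
```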
228. (1/1) A company has an API that receives real-time data from a fleet of monitoring devices. The API stores this data in an Amazon RDS DB instance for later analysis. The amount of data that the monitoring devices send to the API fluctuates. During periods of heavy traffic, the API often returns timeout errors. After an inspection of the logs, the company determines that the database is not capable of processing the volume of write traffic that comes from the API. A solutions architect must minimize the number of connections to the database and must ensure that data is not lost during periods of heavy traffic. Which solution will meet these requirements?

Increase the size of the DB instance to an instance type that has more available
memory.

Modify the DB instance to be a Multi-AZ DB instance. Configure the application to write to all active RDS DB instances.

Modify the API to write incoming data to an Amazon Simple Queue Service
(Amazon SQS) queue. Use an AWS Lambda function that Amazon SQS invokes
to write data from the queue to the database.

Modify the API to write incoming data to an Amazon Simple Notification Service
(Amazon SNS) topic. Use an AWS Lambda function that Amazon SNS invokes to
write data from the topic to the database.

Feedback

Correct! The correct answer is: Modify the API to write incoming data to an Amazon
Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that Amazon
SQS invokes to write data from the queue to the database.
229. (1/1) A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations. Which solution meets these requirements?

Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.

Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.

Combine the databases into one larger MySQL database. Run the larger database
on larger EC2 instances.

Create an EC2 Auto Scaling group for the database tier. Migrate the existing
databases to the new environment.

Feedback

Correct! The correct answer is: Migrate the databases to Amazon Aurora Serverless for
Aurora MySQL.
230. (0/1) A company is concerned that two NAT instances in use will no longer be able to support the traffic needed for the company’s application. A solutions architect wants to implement a solution that is highly available, fault tolerant, and automatically scalable. What should the solutions architect recommend?

Remove the two NAT instances and replace them with two NAT gateways in the
same Availability Zone.

Use Auto Scaling groups with Network Load Balancers for the NAT instances in
different Availability Zones.

Remove the two NAT instances and replace them with two NAT gateways in
different Availability Zones.

Replace the two NAT instances with Spot Instances in different Availability Zones
and deploy a Network Load Balancer.

Correct answer

Remove the two NAT instances and replace them with two NAT gateways in
different Availability Zones.

Feedback

Incorrect! The correct answer is: Remove the two NAT instances and replace them with
two NAT gateways in different Availability Zones.
231. (1/1) An application runs on an Amazon EC2 instance that has an Elastic IP address in VPC A. The application requires access to a database in VPC B. Both VPCs are in the same AWS account. Which solution will provide the required access MOST securely?

Create a DB instance security group that allows all traffic from the public IP address
of the application server in VPC A.

Configure a VPC peering connection between VPC A and VPC B.

Make the DB instance publicly accessible. Assign a public IP address to the DB instance.

Launch an EC2 instance with an Elastic IP address into VPC B. Proxy all requests
through the new EC2 instance.

Feedback

Correct! The correct answer is: Configure a VPC peering connection between VPC A and
VPC B.
232. (1/1) A company runs demonstration environments for its customers on Amazon EC2 instances. Each environment is isolated in its own VPC. The company’s operations team needs to be notified when RDP or SSH access to an environment has been established. Which solution will meet this requirement?

Configure Amazon CloudWatch Application Insights to create AWS Systems Manager OpsItems when RDP or SSH access is detected.

Configure the EC2 instances with an IAM instance profile that has an IAM role with
the AmazonSSMManagedInstanceCore policy attached.

Publish VPC flow logs to Amazon CloudWatch Logs. Create required metric
filters. Create an Amazon CloudWatch metric alarm with a notification action
for when the alarm is in the ALARM state.

Configure an Amazon EventBridge rule to listen for events of type EC2 Instance
State-change Notification. Configure an Amazon Simple Notification Service
(Amazon SNS) topic as a target. Subscribe the operations team to the topic.

Feedback

Correct! The correct answer is: Publish VPC flow logs to Amazon CloudWatch Logs.
Create required metric filters. Create an Amazon CloudWatch metric alarm with a
notification action for when the alarm is in the ALARM state.
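A hedged boto3 sketch, assuming flow logs in the default format are already delivered to a CloudWatch Logs group; the log group name, metric namespace, and topic ARN are placeholders:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Metric filter over the default flow log format: dstport is the seventh
# field and action the thirteenth, so match accepted SSH/RDP connections.
logs.put_metric_filter(
    logGroupName="/vpc/flow-logs",  # placeholder
    filterName="rdp-ssh-accepted",
    filterPattern='[version, account, eni, source, destination, srcport, '
                  'dstport=22 || dstport=3389, protocol, packets, bytes, '
                  'start, end, action=ACCEPT, status]',
    metricTransformations=[{
        "metricName": "RemoteAccessConnections",
        "metricNamespace": "DemoEnvironments",  # placeholder namespace
        "metricValue": "1",
    }],
)

# Alarm that notifies the operations team via SNS on any match.
cloudwatch.put_metric_alarm(
    AlarmName="rdp-ssh-access-detected",
    Namespace="DemoEnvironments",
    MetricName="RemoteAccessConnections",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```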
233. (1/1) A solutions architect has created a new AWS account and must secure AWS account root user access. Which combination of actions will accomplish this? (Choose two.)

Ensure the root user uses a strong password.

Enable multi-factor authentication to the root user.

Store root user access keys in an encrypted Amazon S3 bucket.

Add the root user to a group containing administrative permissions.

Apply the required permissions to the root user with an inline policy document.

Feedback

Correct! The correct answers are: Ensure the root user uses a strong password, and enable multi-factor authentication for the root user.
234. (1/1) A company is building a new web-based customer relationship management application. The application will use several Amazon EC2 instances that are backed by Amazon Elastic Block Store (Amazon EBS) volumes behind an Application Load Balancer (ALB). The application will also use an Amazon Aurora database. All data for the application must be encrypted at rest and in transit. Which solution will meet these requirements?

Use AWS Key Management Service (AWS KMS) certificates on the ALB to encrypt
data in transit. Use AWS Certificate Manager (ACM) to encrypt the EBS volumes
and Aurora database storage at rest.

Use the AWS root account to log in to the AWS Management Console. Upload the
company’s encryption certificates. While in the root account, select the option to
turn on encryption for all data at rest and in transit for the account.

Use AWS Key Management Service (AWS KMS) to encrypt the EBS volumes and
Aurora database storage at rest. Attach an AWS Certificate Manager (ACM)
certificate to the ALB to encrypt data in transit.

Use BitLocker to encrypt all data at rest. Import the company’s TLS certificate keys to AWS Key Management Service (AWS KMS). Attach the KMS keys to the ALB to encrypt data in transit.

Feedback

Correct! The correct answer is: Use AWS Key Management Service (AWS KMS) to encrypt
the EBS volumes and Aurora database storage at rest. Attach an AWS Certificate Manager
(ACM) certificate to the ALB to encrypt data in transit.
235. (1/1) A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several applications that write to the same tables. The applications need to be migrated one by one with a month in between each migration. Management has expressed concerns that the database has a high number of reads and writes. The data must be kept in sync across both databases throughout the migration. What should a solutions architect recommend?

Use AWS DataSync for the initial migration. Use AWS Database Migration Service
(AWS DMS) to create a change data capture (CDC) replication task and a table
mapping to select all tables.

Use AWS DataSync for the initial migration. Use AWS Database Migration Service
(AWS DMS) to create a full load plus change data capture (CDC) replication task
and a table mapping to select all tables.

Use the AWS Schema Conversion Tool with AWS Database Migration Service
(AWS DMS) using a memory optimized replication instance. Create a full load
plus change data capture (CDC) replication task and a table mapping to select
all tables.

Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS
DMS) using a compute optimized replication instance. Create a full load plus
change data capture (CDC) replication task and a table mapping to select the
largest tables.

Feedback

Correct! The correct answer is: Use the AWS Schema Conversion Tool with AWS Database
Migration Service (AWS DMS) using a memory optimized replication instance. Create a full
load plus change data capture (CDC) replication task and a table mapping to select all
tables.
236. (1/1) A company has a three-tier application for image sharing. The application uses an Amazon EC2 instance for the front-end layer, another EC2 instance for the application layer, and a third EC2 instance for a MySQL database. A solutions architect must design a scalable and highly available solution that requires the least amount of change to the application. Which solution meets these requirements?

Use Amazon S3 to host the front-end layer. Use AWS Lambda functions for the
application layer. Move the database to an Amazon DynamoDB table. Use Amazon
S3 to store and serve users’ images.

Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end
layer and the application layer. Move the database to an Amazon RDS DB instance
with multiple read replicas to serve users’ images.

Use Amazon S3 to host the front-end layer. Use a fleet of EC2 instances in an Auto
Scaling group for the application layer. Move the database to a memory optimized
instance type to store and serve users’ images.

Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-
end layer and the application layer. Move the database to an Amazon RDS Multi-
AZ DB instance. Use Amazon S3 to store and serve users’ images.

Feedback

Correct! The correct answer is: Use load-balanced Multi-AZ AWS Elastic Beanstalk
environments for the front-end layer and the application layer. Move the database to an
Amazon RDS Multi-AZ DB instance. Use Amazon S3 to store and serve users’ images.
237. (0/1) An application running on an Amazon EC2 instance in VPC-A needs to access files in another EC2 instance in VPC-B. Both VPCs are in separate AWS accounts. The network administrator needs to design a solution to configure secure access to the EC2 instance in VPC-B from VPC-A. The connectivity should not have a single point of failure or bandwidth concerns. Which solution will meet these requirements?

Set up a VPC peering connection between VPC-A and VPC-B.

Set up VPC gateway endpoints for the EC2 instance running in VPC-B.

Attach a virtual private gateway to VPC-B and set up routing from VPC-A.

Create a private virtual interface (VIF) for the EC2 instance running in VPC-B
and add appropriate routes from VPC-A.

Correct answer

Set up a VPC peering connection between VPC-A and VPC-B.

Feedback

Incorrect! The correct answer is: Set up a VPC peering connection between VPC-A and
VPC-B.
238. (1/1) A company wants to experiment with individual AWS accounts for its engineering team. The company wants to be notified as soon as the Amazon EC2 instance usage for a given month exceeds a specific threshold for each account. What should a solutions architect do to meet this requirement MOST cost-effectively?

Use Cost Explorer to create a daily report of costs by service. Filter the report by
EC2 instances. Configure Cost Explorer to send an Amazon Simple Email Service
(Amazon SES) notification when a threshold is exceeded.

Use Cost Explorer to create a monthly report of costs by service. Filter the report by
EC2 instances. Configure Cost Explorer to send an Amazon Simple Email Service
(Amazon SES) notification when a threshold is exceeded.

Use AWS Budgets to create a cost budget for each account. Set the period to
monthly. Set the scope to EC2 instances. Set an alert threshold for the budget.
Configure an Amazon Simple Notification Service (Amazon SNS) topic to
receive a notification when a threshold is exceeded.

Use AWS Cost and Usage Reports to create a report with hourly granularity.
Integrate the report data with Amazon Athena. Use Amazon EventBridge to
schedule an Athena query. Configure an Amazon Simple Notification Service
(Amazon SNS) topic to receive a notification when a threshold is exceeded.

Feedback

Correct! The correct answer is: Use AWS Budgets to create a cost budget for each
account. Set the period to monthly. Set the scope to EC2 instances. Set an alert threshold
for the budget. Configure an Amazon Simple Notification Service (Amazon SNS) topic to
receive a notification when a threshold is exceeded.
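A boto3 sketch of one such budget; the account ID, amount, cost filter value, and topic ARN are placeholders:

```python
import boto3

budgets = boto3.client("budgets")

# One cost budget per account, scoped to EC2, alerting at 80% of the
# monthly limit based on actual spend.
budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "ec2-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},
        "CostFilters": {"Service": ["Amazon Elastic Compute Cloud - Compute"]},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:123456789012:budget-alerts",
        }],
    }],
)
```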
239. (1/1) A solutions architect needs to design a new microservice for a company’s application. Clients must be able to call an HTTPS endpoint to reach the microservice. The microservice also must use AWS Identity and Access Management (IAM) to authenticate calls. The solutions architect will write the logic for this microservice by using a single AWS Lambda function that is written in Go 1.x. Which solution will deploy the function in the MOST operationally efficient way?

Create an Amazon API Gateway REST API. Configure the method to use the
Lambda function. Enable IAM authentication on the API.

Create a Lambda function URL for the function. Specify AWS_IAM as the
authentication type.

Create an Amazon CloudFront distribution. Deploy the function to Lambda@Edge. Integrate IAM authentication logic into the Lambda@Edge function.

Create an Amazon CloudFront distribution. Deploy the function to CloudFront


Functions. Specify AWS_IAM as the authentication type.

Feedback

Correct! The correct answer is: Create an Amazon API Gateway REST API. Configure the
method to use the Lambda function. Enable IAM authentication on the API.
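By way of illustration, the IAM-auth piece in boto3; the REST API ID, resource ID, and Lambda ARN are hypothetical, and method responses plus the deployment step are omitted:

    import boto3

    apigw = boto3.client("apigateway")

    # Require SigV4-signed requests on the method by setting AWS_IAM auth.
    apigw.put_method(
        restApiId="abc123",
        resourceId="res456",
        httpMethod="POST",
        authorizationType="AWS_IAM",
    )

    # Proxy the method to the Lambda function.
    apigw.put_integration(
        restApiId="abc123", resourceId="res456", httpMethod="POST",
        type="AWS_PROXY", integrationHttpMethod="POST",
        uri="arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
            "arn:aws:lambda:us-east-1:111111111111:function:my-func/invocations",
    )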
240. A company previously migrated its data warehouse solution to AWS. 1/1
The company also has an AWS Direct Connect connection. Corporate
office users query the data warehouse using a visualization tool. The
average size of a query returned by the data warehouse is 50 MB and each
webpage sent by the visualization tool is approximately 500 KB. Result
sets returned by the data warehouse are not cached. Which solution
provides the LOWEST data transfer egress cost for the company?

Host the visualization tool on premises and query the data warehouse directly over
the internet.

Host the visualization tool in the same AWS Region as the data warehouse. Access
it over the internet.

Host the visualization tool on premises and query the data warehouse directly over
a Direct Connect connection at a location in the same AWS Region.

Host the visualization tool in the same AWS Region as the data warehouse and
access it over a Direct Connect connection at a location in the same Region.

Feedback

Correct! The correct answer is: Host the visualization tool in the same AWS Region as the
data warehouse and access it over a Direct Connect connection at a location in the same
Region.
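For intuition, using only the figures given in the question: with the visualization tool hosted in the same Region as the data warehouse, each user interaction sends roughly a 500 KB webpage across the Direct Connect link instead of the full 50 MB result set, about a 100x reduction in transferred data per request, and Direct Connect data transfer out is billed at a lower per-GB rate than internet egress.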
241. An online learning company is migrating to the AWS Cloud. The 1/1
company maintains its student records in a PostgreSQL database. The
company needs a solution in which its data is available and online across
multiple AWS Regions at all times. Which solution will meet these
requirements with the LEAST amount of operational overhead?

Migrate the PostgreSQL database to a PostgreSQL cluster on Amazon EC2 instances.

Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance with the Multi-AZ feature turned on.

Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Create a read replica in another Region.

Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Set up DB snapshots to be copied to another Region.

Feedback

Correct! The correct answer is: Migrate the PostgreSQL database to an Amazon RDS for
PostgreSQL DB instance with the Multi-AZ feature turned on.
242. A company hosts its web application on AWS using seven Amazon 0/1
EC2 instances. The company requires that the IP addresses of all healthy
EC2 instances be returned in response to DNS queries. Which policy should
be used to meet this requirement?

Simple routing policy

Latency routing policy

Multivalue routing policy

Geolocation routing policy

Correct answer

Multivalue routing policy

Feedback

Incorrect! The correct answer is: Multivalue routing policy
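As a sketch, one multivalue record created with boto3 (repeated once per instance IP); the hosted zone ID, record name, health check ID, and IP are placeholders. With multivalue answer routing, Route 53 returns up to eight randomly selected healthy records per query:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "TTL": 60,
                "SetIdentifier": "instance-1",      # unique per record
                "MultiValueAnswer": True,
                # Associate a health check so only healthy IPs are returned.
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]},
    )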


243. A medical research lab produces data that is related to a new study. 1/1
The lab wants to make the data available with minimum latency to clinics
across the country for their on-premises, file-based applications. The data
files are stored in an Amazon S3 bucket that has read-only permissions for
each clinic. What should a solutions architect recommend to meet these
requirements?

Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic.

Migrate the files to each clinic’s on-premises applications by using AWS DataSync for processing.

Deploy an AWS Storage Gateway volume gateway as a virtual machine (VM) on premises at each clinic.

Attach an Amazon Elastic File System (Amazon EFS) file system to each clinic’s on-
premises servers.

Feedback

Correct! The correct answer is: Deploy an AWS Storage Gateway file gateway as a virtual
machine (VM) on premises at each clinic.
244. A company is using a content management system that runs on a 1/1
single Amazon EC2 instance. The EC2 instance contains both the web
server and the database software. The company must make its website
platform highly available and must enable the website to scale to meet
user demand. What should a solutions architect recommend to meet these
requirements?

Move the database to Amazon RDS, and enable automatic backups. Manually
launch another EC2 instance in the same Availability Zone. Configure an
Application Load Balancer in the Availability Zone, and set the two instances as
targets.

Migrate the database to an Amazon Aurora instance with a read replica in the same
Availability Zone as the existing EC2 instance. Manually launch another EC2
instance in the same Availability Zone. Configure an Application Load Balancer, and
set the two EC2 instances as targets.

Move the database to Amazon Aurora with a read replica in another Availability
Zone. Create an Amazon Machine Image (AMI) from the EC2 instance.
Configure an Application Load Balancer in two Availability Zones. Attach an
Auto Scaling group that uses the AMI across two Availability Zones.

Move the database to a separate EC2 instance, and schedule backups to Amazon
S3. Create an Amazon Machine Image (AMI) from the original EC2 instance.
Configure an Application Load Balancer in two Availability Zones. Attach an Auto
Scaling group that uses the AMI across two Availability Zones.

Feedback

Correct! The correct answer is: Move the database to Amazon Aurora with a read replica in
another Availability Zone. Create an Amazon Machine Image (AMI) from the EC2 instance.
Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling
group that uses the AMI across two Availability Zones.
245. A company is launching an application on AWS. The application uses 1/1
an Application Load Balancer (ALB) to direct traffic to at least two Amazon
EC2 instances in a single target group. The instances are in an Auto
Scaling group for each environment. The company requires a development
environment and a production environment. The production environment
will have periods of high traffic. Which solution will configure the
development environment MOST cost-effectively?

Reconfigure the target group in the development environment to have only one
EC2 instance as a target.

Change the ALB balancing algorithm to least outstanding requests.

Reduce the size of the EC2 instances in both environments.

Reduce the maximum number of EC2 instances in the development environment’s Auto Scaling group.

Feedback

Correct! The correct answer is: Reconfigure the target group in the development
environment to have only one EC2 instance as a target.
246. A company runs a web application on Amazon EC2 instances in 1/1
multiple Availability Zones. The EC2 instances are in private subnets. A
solutions architect implements an internet-facing Application Load
Balancer (ALB) and specifies the EC2 instances as the target group.
However, the internet traffic is not reaching the EC2 instances. How should
the solutions architect reconfigure the architecture to resolve this issue?

Replace the ALB with a Network Load Balancer. Configure a NAT gateway in a
public subnet to allow internet traffic.

Move the EC2 instances to public subnets. Add a rule to the EC2 instances’ security
groups to allow outbound traffic to 0.0.0.0/0.

Update the route tables for the EC2 instances’ subnets to send 0.0.0.0/0 traffic
through the internet gateway route. Add a rule to the EC2 instances’ security groups
to allow outbound traffic to 0.0.0.0/0.

Create public subnets in each Availability Zone. Associate the public subnets
with the ALB. Update the route tables for the public subnets with a route to the
private subnets.

Feedback

Correct! The correct answer is: Create public subnets in each Availability Zone. Associate
the public subnets with the ALB. Update the route tables for the public subnets with a
route to the private subnets.
247. A company has deployed a database in Amazon RDS for MySQL. Due 0/1
to increased transactions, the database support team is reporting slow
reads against the DB instance and recommends adding a read
replica. Which combination of actions should a solutions architect take
before implementing this change? (Choose two.)

Enable binlog replication on the RDS primary node.

Choose a failover priority for the source DB instance.

Allow long-running transactions to complete on the source DB instance.

Create a global table and specify the AWS Regions where the table will be
available.

Enable automatic backups on the source instance by setting the backup retention
period to a value other than 0.

Correct answer

Allow long-running transactions to complete on the source DB instance.

Enable automatic backups on the source instance by setting the backup retention
period to a value other than 0.

Feedback

Incorrect! The correct answers are: Allow long-running transactions to complete on the
source DB instance, and Enable automatic backups on the source instance by setting the
backup retention period to a value other than 0.
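For reference, the backup prerequisite in boto3 (instance identifiers are placeholders); RDS for MySQL read replicas use binlog replication, which requires automated backups on the source:

    import boto3

    rds = boto3.client("rds")

    # Enable automated backups (BackupRetentionPeriod > 0) on the source.
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-mysql",
        BackupRetentionPeriod=7,
        ApplyImmediately=True,
    )

    # Once backups are on and long-running transactions have drained:
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="prod-mysql-replica",
        SourceDBInstanceIdentifier="prod-mysql",
    )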
248. A company runs analytics software on Amazon EC2 instances. The 1/1
software accepts job requests from users to process data that has been
uploaded to Amazon S3. Users report that some submitted data is not
being processed. Amazon CloudWatch reveals that the EC2 instances have
a consistent CPU utilization at or near 100%. The company wants to
improve system performance and scale the system based on user
load. What should a solutions architect do to meet these requirements?

Create a copy of the instance. Place all instances behind an Application Load
Balancer.

Create an S3 VPC endpoint for Amazon S3. Update the software to reference the
endpoint.

Stop the EC2 instances. Modify the instance type to one with a more powerful CPU
and more memory. Restart the instances.

Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size. Update the software to read from the queue.

Feedback

Correct! The correct answer is: Route incoming requests to Amazon Simple Queue Service
(Amazon SQS). Configure an EC2 Auto Scaling group based on queue size. Update the
software to read from the queue.
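One possible scaling policy in boto3; the group, policy, and queue names, and the target of 10 visible messages, are placeholders. AWS's recommended pattern uses a backlog-per-instance custom metric, so tracking raw queue depth as below is a simplified sketch:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="analytics-workers",
        PolicyName="scale-on-queue-depth",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "Namespace": "AWS/SQS",
                "MetricName": "ApproximateNumberOfMessagesVisible",
                "Dimensions": [{"Name": "QueueName", "Value": "analytics-jobs"}],
                "Statistic": "Average",
            },
            "TargetValue": 10.0,  # add instances when the backlog grows past this
        },
    )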
249. A company is implementing a shared storage solution for a media 1/1
application that is hosted in the AWS Cloud. The company needs the ability
to use SMB clients to access data. The solution must be fully
managed. Which AWS solution meets these requirements?

Create an AWS Storage Gateway volume gateway. Create a file share that uses the
required client protocol. Connect the application server to the file share.

Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3.
Connect the application server to the tape gateway.

Create an Amazon EC2 Windows instance. Install and configure a Windows file
share role on the instance. Connect the application server to the file share.

Create an Amazon FSx for Windows File Server file system. Attach the file
system to the origin server. Connect the application server to the file system.

Feedback

Correct! The correct answer is: Create an Amazon FSx for Windows File Server file system.
Attach the file system to the origin server. Connect the application server to the file
system.
250. A company’s security team requests that network traffic be 1/1
captured in VPC Flow Logs. The logs will be frequently accessed for 90
days and then accessed intermittently. What should a solutions architect do
to meet these requirements when configuring the logs?

Use Amazon CloudWatch as the target. Set the CloudWatch log group with an
expiration of 90 days

Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain
the logs for 90 days.

Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering.

Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.

Feedback

Correct! The correct answer is: Use Amazon S3 as the target. Enable an S3 Lifecycle policy
to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
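The lifecycle rule as a boto3 sketch; the bucket name is a placeholder, and the empty prefix applies the rule to every object:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="vpc-flow-logs-bucket",
        LifecycleConfiguration={"Rules": [{
            "ID": "ia-after-90-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            # Move logs to Standard-IA once frequent access ends at 90 days.
            "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
        }]},
    )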
251. An Amazon EC2 instance is located in a private subnet in a new VPC. 1/1
This subnet does not have outbound internet access, but the EC2 instance
needs the ability to download monthly security updates from an outside
vendor. What should a solutions architect do to meet these requirements?

Create an internet gateway, and attach it to the VPC. Configure the private subnet
route table to use the internet gateway as the default route.

Create a NAT gateway, and place it in a public subnet. Configure the private
subnet route table to use the NAT gateway as the default route.

Create a NAT instance, and place it in the same subnet where the EC2 instance is
located. Configure the private subnet route table to use the NAT instance as the
default route.

Create an internet gateway, and attach it to the VPC. Create a NAT instance, and
place it in the same subnet where the EC2 instance is located. Configure the private
subnet route table to use the internet gateway as the default route.

Feedback

Correct! The correct answer is: Create a NAT gateway, and place it in a public subnet.
Configure the private subnet route table to use the NAT gateway as the default route.
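A minimal boto3 sketch of the NAT gateway setup; the subnet and route table IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate an Elastic IP and create the NAT gateway in a public subnet.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(SubnetId="subnet-public",
                                 AllocationId=eip["AllocationId"])
    nat_id = nat["NatGateway"]["NatGatewayId"]

    # Wait until the gateway is available before routing through it.
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Point the private subnet's default route at the NAT gateway.
    ec2.create_route(RouteTableId="rtb-private",
                     DestinationCidrBlock="0.0.0.0/0",
                     NatGatewayId=nat_id)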
252. A solutions architect needs to design a system to store client case 1/1
files. The files are core company assets and are important. The number of
files will grow over time. The files must be simultaneously accessible from
multiple application servers that run on Amazon EC2 instances. The
solution must have built-in redundancy. Which solution meets these
requirements?

Amazon Elastic File System (Amazon EFS)

Amazon Elastic Block Store (Amazon EBS)

Amazon S3 Glacier Deep Archive

AWS Backup

Feedback

Correct! The correct answer is: Amazon Elastic File System (Amazon EFS)
253. A solutions architect has created two IAM policies: Policy1 and 1/1
Policy2. Both policies are attached to an IAM group. A cloud engineer is
added as an IAM user to the IAM group. Which action will the cloud
engineer be able to perform?

Deleting IAM users

Deleting directories

Deleting Amazon EC2 instances

Deleting logs from Amazon CloudWatch Logs

Feedback

Correct! The correct answer is: Deleting Amazon EC2 instances


254. A company is reviewing a recent migration of a three-tier application 0/1
to a VPC. The security team discovers that the principle of least privilege is
not being applied to Amazon EC2 security group ingress and egress rules
between the application tiers. What should a solutions architect do to
correct this issue?

Create security group rules using the instance ID as the source or destination.

Create security group rules using the security group ID as the source or destination.

Create security group rules using the VPC CIDR blocks as the source or destination.

Create security group rules using the subnet CIDR blocks as the source or
destination.

Correct answer

Create security group rules using the security group ID as the source or destination.

Feedback

Incorrect! The correct answer is: Create security group rules using the security group ID as
the source or destination.
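For example, allowing only the app tier to reach the database tier on the MySQL port by referencing the app tier's security group ID (both group IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-database-tier",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            # Reference the source security group rather than a CIDR block,
            # so only members of the app tier's group can connect.
            "UserIdGroupPairs": [{"GroupId": "sg-app-tier"}],
        }],
    )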
255. A company has an ecommerce checkout workflow that writes an 1/1
order to a database and calls a service to process the payment. Users are
experiencing timeouts during the checkout process. When users resubmit
the checkout form, multiple unique orders are created for the same desired
transaction. How should a solutions architect refactor this workflow to
prevent the creation of multiple orders?

Configure the web application to send an order message to Amazon Kinesis Data
Firehose. Set the payment service to retrieve the message from Kinesis Data
Firehose and process the order.

Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the
logged application path request. Use Lambda to query the database, call the
payment service, and pass in the order information.

Store the order in the database. Send a message that includes the order number to
Amazon Simple Notification Service (Amazon SNS). Set the payment service to poll
Amazon SNS, retrieve the message, and process the order.

Store the order in the database. Send a message that includes the order
number to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set
the payment service to retrieve the message and process the order. Delete the
message from the queue.

Feedback

Correct! The correct answer is: Store the order in the database. Send a message that
includes the order number to an Amazon Simple Queue Service (Amazon SQS) FIFO
queue. Set the payment service to retrieve the message and process the order. Delete the
message from the queue.
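A sketch of the producer side with boto3; the queue URL and the submit_order helper are hypothetical. Using the order number as the deduplication ID means a resubmitted checkout form enqueues the same message only once within the FIFO queue's 5-minute deduplication window:

    import boto3, json

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/orders.fifo"

    def submit_order(order):
        sqs.send_message(
            QueueUrl=queue_url,
            MessageBody=json.dumps(order),
            MessageGroupId="checkout",                      # ordering per group
            MessageDeduplicationId=str(order["order_id"]),  # idempotency key
        )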
256. A solutions architect is implementing a document review application 1/1
using an Amazon S3 bucket for storage. The solution must prevent
accidental deletion of the documents and ensure that all versions of the
documents are available. Users must be able to download, modify, and
upload documents. Which combination of actions should be taken to meet
these requirements? (Choose two.)

Enable a read-only bucket ACL.

Enable versioning on the bucket.

Attach an IAM policy to the bucket.

Enable MFA Delete on the bucket.

Encrypt the bucket using AWS KMS.

Feedback

Correct! The correct answers are: Enable versioning on the bucket, and Enable MFA Delete on the bucket.
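As a sketch of both settings in one boto3 call (bucket name and MFA device values are placeholders); note that MFA Delete can only be enabled by the bucket owner's root credentials, passing the MFA device serial and a current code:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_versioning(
        Bucket="document-review-bucket",
        # Format: "<mfa-device-serial> <current-code>"
        MFA="arn:aws:iam::111111111111:mfa/root-account-mfa-device 123456",
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    )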


257. A company is building a solution that will report Amazon EC2 Auto 1/1
Scaling events across all the applications in an AWS account. The
company needs to use a serverless solution to store the EC2 Auto Scaling
status data in Amazon S3. The company then will use the data in Amazon
S3 to provide near-real-time updates in a dashboard. The solution must not
affect the speed of EC2 instance launches. How should the company move
the data to Amazon S3 to meet these requirements?

Use an Amazon CloudWatch metric stream to send the EC2 Auto Scaling status
data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.

Launch an Amazon EMR cluster to collect the EC2 Auto Scaling status data and
send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.

Create an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2 Auto Scaling status data directly to Amazon S3.

Use a bootstrap script during the launch of an EC2 instance to install Amazon
Kinesis Agent. Configure Kinesis Agent to collect the EC2 Auto Scaling status data
and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.

Feedback

Correct! The correct answer is: Use an Amazon CloudWatch metric stream to send the
EC2 Auto Scaling status data to Amazon Kinesis Data Firehose. Store the data in Amazon
S3.
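One way this might be wired up in boto3; the stream name, Firehose ARN, and IAM role ARN are placeholders, and the Firehose delivery stream writing to S3 (plus a role CloudWatch can assume to put records) is assumed to already exist:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_stream(
        Name="asg-status-stream",
        FirehoseArn="arn:aws:firehose:us-east-1:111111111111:deliverystream/asg-to-s3",
        RoleArn="arn:aws:iam::111111111111:role/metric-stream-role",
        OutputFormat="json",
        # Stream only EC2 Auto Scaling metrics, keeping the pipeline narrow.
        IncludeFilters=[{"Namespace": "AWS/AutoScaling"}],
    )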
258. A company has an application that places hundreds of .csv files into 1/1
an Amazon S3 bucket every hour. The files are 1 GB in size. Each time a file
is uploaded, the company needs to convert the file to Apache Parquet
format and place the output file into an S3 bucket. Which solution will meet
these requirements with the LEAST operational overhead?

Create an AWS Lambda function to download the .csv files, convert the files to
Parquet format, and place the output files in an S3 bucket. Invoke the Lambda
function for each S3 PUT event.

Create an Apache Spark job to read the .csv files, convert the files to Parquet
format, and place the output files in an S3 bucket. Create an AWS Lambda function
for each S3 PUT event to invoke the Spark job.

Create an AWS Glue table and an AWS Glue crawler for the S3 bucket where the
application places the .csv files. Schedule an AWS Lambda function to periodically
use Amazon Athena to query the AWS Glue table, convert the query results into
Parquet format, and place the output files into an S3 bucket.

Create an AWS Glue extract, transform, and load (ETL) job to convert the .csv
files to Parquet format and place the output files into an S3 bucket. Create an
AWS Lambda function for each S3 PUT event to invoke the ETL job.

Feedback

Correct! The correct answer is: Create an AWS Glue extract, transform, and load (ETL) job
to convert the .csv files to Parquet format and place the output files into an S3 bucket.
Create an AWS Lambda function for each S3 PUT event to invoke the ETL job.
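A minimal Lambda handler for the trigger side, assuming the bucket's S3 PUT event notification targets this function; the Glue job name and argument key are hypothetical:

    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        # One S3 PUT event can carry multiple records.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # Kick off the CSV-to-Parquet ETL job for the new object.
            glue.start_job_run(
                JobName="csv-to-parquet",
                Arguments={"--input_path": f"s3://{bucket}/{key}"},
            )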
259. A company is implementing new data retention policies for all 0/1
databases that run on Amazon RDS DB instances. The company must
retain daily backups for a minimum period of 2 years. The backups must
be consistent and restorable. Which solution should a solutions architect
recommend to meet these requirements?

Create a backup vault in AWS Backup to retain RDS backups. Create a new backup
plan with a daily schedule and an expiration period of 2 years after creation. Assign
the RDS DB instances to the backup plan.

Configure a backup window for the RDS DB instances for daily snapshots. Assign a
snapshot retention policy of 2 years to each RDS DB instance. Use Amazon Data
Lifecycle Manager (Amazon DLM) to schedule snapshot deletions.

Configure database transaction logs to be automatically backed up to Amazon CloudWatch Logs with an expiration period of 2 years.

Configure an AWS Database Migration Service (AWS DMS) replication task. Deploy a replication instance, and configure a change data capture (CDC) task to stream database changes to Amazon S3 as the target. Configure S3 Lifecycle policies to delete the snapshots after 2 years.

Correct answer

Create a backup vault in AWS Backup to retain RDS backups. Create a new backup
plan with a daily schedule and an expiration period of 2 years after creation. Assign
the RDS DB instances to the backup plan.

Feedback

Incorrect! The correct answer is: Create a backup vault in AWS Backup to retain RDS
backups. Create a new backup plan with a daily schedule and an expiration period of 2
years after creation. Assign the RDS DB instances to the backup plan.
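A boto3 sketch of the plan; the vault name, schedule, role ARN, and DB instance ARN are placeholders, and 730 days approximates the 2-year retention requirement:

    import boto3

    backup = boto3.client("backup")

    backup.create_backup_vault(BackupVaultName="rds-vault")

    plan = backup.create_backup_plan(BackupPlan={
        "BackupPlanName": "rds-daily-2yr",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "rds-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",   # daily at 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": 730},       # retain for 2 years
        }],
    })

    # Assign the RDS DB instances to the plan.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "rds-instances",
            "IamRoleArn": "arn:aws:iam::111111111111:role/aws-backup-role",
            "Resources": ["arn:aws:rds:us-east-1:111111111111:db:prod-mysql"],
        },
    )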
260. A company’s compliance team needs to move its file shares to 0/1
AWS. The shares run on a Windows Server SMB file share. A self-managed
on-premises Active Directory controls access to the files and folders. The
company wants to use Amazon FSx for Windows File Server as part of the
solution. The company must ensure that the on-premises Active Directory
groups restrict access to the FSx for Windows File Server SMB compliance
shares, folders, and files after the move to AWS. The company has created
an FSx for Windows File Server file system. Which solution will meet
requirements?

Create an Active Directory Connector to connect to the Active Directory. Map the Active Directory groups to IAM groups to restrict access.

Assign a tag with a Restrict tag key and a Compliance tag value. Map the Active
Directory groups to IAM groups to restrict access.

Create an IAM service-linked role that is linked directly to FSx for Windows File
Server to restrict access.

Join the file system to the Active Directory to restrict access.

Correct answer

Join the file system to the Active Directory to restrict access.

Feedback

Incorrect! The correct answer is: Join the file system to the Active Directory to restrict
access.
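For illustration, the AD join as it might look in boto3; all values are placeholders. Note that FSx for Windows File Server joins a self-managed Active Directory at file system creation time, so in practice the file system would be created (or recreated) with this configuration, after which existing AD groups govern SMB share, folder, and file ACLs:

    import boto3

    fsx = boto3.client("fsx")

    fsx.create_file_system(
        FileSystemType="WINDOWS",
        StorageCapacity=300,                 # GiB
        SubnetIds=["subnet-0abc"],
        WindowsConfiguration={
            "ThroughputCapacity": 32,        # MB/s
            "SelfManagedActiveDirectoryConfiguration": {
                "DomainName": "corp.example.com",
                "UserName": "FsxServiceAccount",   # AD account with join rights
                "Password": "REDACTED",            # placeholder
                "DnsIps": ["10.0.0.10", "10.0.0.11"],
            },
        },
    )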
