AWS Mock Test - 3
A FinTech startup deployed an application on an Amazon EC2 instance with attached Instance Store volumes
and an Elastic IP address. The server is only accessed from 8 AM to 6 PM and can be stopped from 6 PM to 8
AM for cost efficiency, using a Lambda function with a script that automates this based on tags.
Which of the following will occur when the EC2 instance is stopped and started? (Select TWO.)
The Elastic IP address is disassociated from the instance.
(Incorrect)
The ENI (Elastic Network Interface) is detached.
(Incorrect)
There will be no changes.
The underlying host for the instance is possibly changed.
(Correct)
All data on the attached instance-store devices will be lost.
(Correct)
Explanation
This question does not mention the specific type of EC2 instance; however, it says that the instance will be
stopped and started. Since only EBS-backed instances can be stopped and started, it is implied that the instance
is EBS-backed. Remember that an instance store-backed instance can only be rebooted or terminated, and its
data will be erased if the EC2 instance is either stopped or terminated.
If you stop an EBS-backed EC2 instance, the EBS volume is preserved but the data in any attached instance
store volume will be erased. Keep in mind that an EC2 instance runs on an underlying physical host computer. If
the instance is stopped, AWS usually moves the instance to a new host computer. Your instance may stay on the
same host computer if there are no problems with the host computer. In addition, its Elastic IP address is
disassociated from the instance if it is an EC2-Classic instance. Otherwise, if it is an EC2-VPC instance, the
Elastic IP address remains associated.
Take note that an EBS-backed EC2 instance can have attached Instance Store volumes. This is the reason why
there is an option that mentions the Instance Store volume, which is placed to test your understanding of this
specific storage type. You can launch an EBS-backed EC2 instance and attach several Instance Store volumes,
but remember that some EC2 instance types don't support this kind of setup.
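As a side note, the tag-based stop/start automation described in the scenario could be implemented with a small Lambda function. The sketch below is purely illustrative; the tag key (Schedule) and value (office-hours) are assumptions, not part of the original scenario.

import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Find running instances tagged for the office-hours schedule (tag values are assumed).
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        # Stop them at 6 PM; a similar function (or branch) would call start_instances at 8 AM.
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}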
Question 2: Correct
A large insurance company has an AWS account that contains three VPCs (DEV, UAT and PROD) in the
same region. UAT is peered to both PROD and DEV using a VPC peering connection. All VPCs have non-
overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up
time to market.
Question 3: Incorrect
A company is deploying a Microsoft SharePoint Server environment on AWS using CloudFormation. The
Solutions Architect needs to install and configure the architecture that is composed of Microsoft Active Directory
(AD) domain controllers, Microsoft SQL Server 2012, multiple Amazon EC2 instances to host the Microsoft
SharePoint Server and many other dependencies. The Architect needs to ensure that the required components
are properly running before the stack creation proceeds.
Which of the following should the Architect do to meet this requirement?
Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send a success
signal after the applications are installed and configured using the cfn-signal helper script.
(Correct)
Configure an UpdatePolicy attribute to the instance in the CloudFormation template. Send a success
signal after the applications are installed and configured using the cfn-signal helper script.
Configure the DependsOn attribute in the CloudFormation template. Send a success signal after the
applications are installed and configured using the cfn-init helper script.
(Incorrect)
Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a success signal
after the applications are installed and configured using the cfn-signal helper script.
Explanation
You can associate the CreationPolicy attribute with a resource to prevent its status from reaching create
complete until AWS CloudFormation receives a specified number of success signals or the timeout period is
exceeded. To signal a resource, you can use the cfn-signal helper script or the SignalResource API. AWS
CloudFormation publishes valid signals to the stack events so that you can track the number of signals sent.
The creation policy is invoked only when AWS CloudFormation creates the associated resource. Currently, the
only AWS CloudFormation resources that support creation policies
are AWS::AutoScaling::AutoScalingGroup, AWS::EC2::Instance, and AWS::CloudFormation::WaitCondition.
Use the CreationPolicy attribute when you want to wait on resource configuration actions before stack creation
proceeds. For example, if you install and configure software applications on an EC2 instance, you might want
those applications to be running before proceeding. In such cases, you can add a CreationPolicy attribute to the
instance, and then send a success signal to the instance after the applications are installed and configured.
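To make this concrete, here is a rough sketch (not the exam's own template) of how a CreationPolicy and cfn-signal could be wired together. The stack name, resource name, AMI ID, and 30-minute timeout are all placeholder assumptions; the template body is embedded in a Python boto3 call purely for illustration.

import boto3

# Placeholder template: an EC2 instance that must send one success signal
# (via cfn-signal) within 30 minutes before the stack creation proceeds.
TEMPLATE_BODY = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SharePointInstance:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT30M
    Properties:
      ImageId: ami-0123456789abcdef0      # placeholder AMI
      InstanceType: m5.large
      UserData:
        Fn::Base64: !Sub |
          <powershell>
          # ... install and configure SharePoint and its dependencies here ...
          cfn-signal.exe -e 0 --stack ${AWS::StackName} --resource SharePointInstance --region ${AWS::Region}
          </powershell>
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="sharepoint-stack", TemplateBody=TEMPLATE_BODY)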
Question 4: Incorrect
A media company recently launched its newly created web application. Many users tried to visit the website,
but they received a 503 Service Unavailable error. The system administrator tracked the EC2 instance status
and saw that its capacity was reaching the maximum limit and it was unable to process all the requests. To gain
insights from the application's data, they need to launch a real-time analytics service.
Which of the following allows you to read records in batches?
Create an Amazon S3 bucket to store the captured data and use Amazon Redshift Spectrum to analyze
the data.
(Incorrect)
Create a Kinesis Data Firehose and use AWS Lambda to read records from the data stream.
Create a Kinesis Data Stream and use AWS Lambda to read records from the data stream.
(Correct)
Create an Amazon S3 bucket to store the captured data and use Amazon Athena to analyze the data.
Explanation
Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service.
KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources. You can
use an AWS Lambda function to process records in Amazon KDS. By default, Lambda invokes your function as
soon as records are available in the stream. Lambda can process up to 10 batches in each shard simultaneously.
If you increase the number of concurrent batches per shard, Lambda still ensures in-order processing at the
partition-key level.
The first time you invoke your function, AWS Lambda creates an instance of the function and runs its handler
method to process the event. When the function returns a response, it stays active and waits to process
additional events. If you invoke the function again while the first event is being processed, Lambda initializes
another instance, and the function processes the two events concurrently. As more events come in, Lambda
routes them to available instances and creates new instances as needed.
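A minimal sketch of such a consumer is shown below. The record fields follow the standard Kinesis event structure that Lambda passes to the handler; the analytics step itself is left as a placeholder.

import base64
import json

def lambda_handler(event, context):
    # Lambda delivers records from a shard in batches; iterate over the batch.
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        data = json.loads(payload)
        # ... run the real-time analytics logic on the decoded record here ...
        print(record["kinesis"]["partitionKey"], data)
    return {"batchSize": len(event["Records"])}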
Question 5: Incorrect
A company is storing its financial reports and regulatory documents in an Amazon S3 bucket. To comply with the
IT audit, they tasked their Solutions Architect to track all new objects added to the bucket as well as the removed
ones. It should also track whether a versioned object is permanently deleted. The Architect must configure
Amazon S3 to publish notifications for these events to a queue for post-processing and to an Amazon SNS topic
that will notify the Operations team.
Which of the following is the MOST suitable solution that the Architect should implement?
Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification configuration on the
bucket to publish s3:ObjectAdded:* and s3:ObjectRemoved:* event types to SQS and SNS.
Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification configuration on the
bucket to publish s3:ObjectCreated:* and ObjectRemoved:DeleteMarkerCreated event types to SQS
and SNS.
Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on
the bucket to publish s3:ObjectCreated:* and ObjectRemoved:DeleteMarkerCreated event types to
SQS and SNS.
(Incorrect)
Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on
the bucket to publish s3:ObjectCreated:* and s3:ObjectRemoved:Delete event types to SQS and SNS.
(Correct)
Explanation
The Amazon S3 notification feature enables you to receive notifications when certain events happen in your
bucket. To enable notifications, you must first add a notification configuration that identifies the events you want
Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You store this
configuration in the notification subresource that is associated with a bucket. Amazon S3 provides an API for you
to manage this subresource.
Amazon S3 event notifications typically deliver events in seconds but can sometimes take a minute or longer. If
two writes are made to a single non-versioned object at the same time, it is possible that only a single event
notification will be sent. If you want to ensure that an event notification is sent for every successful write, you can
enable versioning on your bucket. With versioning, every successful write will create a new version of your object
and will also send an event notification.
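As an illustration of the correct answer, the notification configuration could be added with a boto3 call along these lines; the bucket name, queue ARN, and topic ARN are placeholders.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="financial-reports-bucket",  # placeholder bucket name
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:post-processing-queue",
            "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"],
        }],
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:ops-team-topic",
            "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"],
        }],
    },
)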
Question 6: Correct
A Solutions Architect is working for a large insurance firm. To maintain compliance with HIPAA laws, all data that
is backed up or stored on Amazon S3 needs to be encrypted at rest.
In this scenario, what is the best method of encryption for the data, assuming S3 is being used for storing
financial-related data? (Select TWO.)
Encrypt the data using your own encryption keys then copy the data to Amazon S3 over HTTPS
endpoints.
(Correct)
Store the data on EBS volumes with encryption enabled instead of using Amazon S3
Use AWS Shield to protect your data at rest
Enable SSE on an S3 bucket to make use of AES-256 encryption
(Correct)
Store the data in encrypted EBS snapshots
Explanation
Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it
is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-
side encryption. You have the following options for protecting data at rest in Amazon S3.
Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on disks in its
data centers and decrypt it when you download the objects.
Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted data to Amazon S3. In
this case, you manage the encryption process, the encryption keys, and related tools.
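For reference, default SSE-S3 (AES-256) encryption can be turned on for a bucket with a single API call, roughly as sketched below (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="hipaa-financial-data",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)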
Question 8: Incorrect
A company needs to use Amazon Aurora as the Amazon RDS database engine of their web application. The
Solutions Architect has been instructed to implement a 90-day backup retention policy.
Which of the following options can satisfy the given requirement?
Create an AWS Backup plan to take daily snapshots with a retention period of 90 days.
(Correct)
Configure an automated backup and set the backup retention period to 90 days.
Configure RDS to export the automated snapshot automatically to Amazon S3 and create a lifecycle
policy to delete the object after 90 days.
(Incorrect)
Create a daily scheduled event using CloudWatch Events and AWS Lambda to directly download the
RDS automated snapshot to an S3 bucket. Archive snapshots older than 90 days to Glacier.
Explanation
AWS Backup is a centralized backup service that makes it easy and cost-effective for you to back up your
application data across AWS services in the AWS Cloud, helping you meet your business and regulatory backup
compliance requirements. AWS Backup makes protecting your AWS storage volumes, databases, and file
systems simple by providing a central place where you can configure and audit the AWS resources you want to
back up, automate backup scheduling, set retention policies, and monitor all recent backup and restore activity.
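A rough sketch of a daily backup plan with a 90-day retention rule is shown below; the plan name, vault name, and schedule are assumptions, and a backup selection assigning the Aurora cluster to the plan would still be needed.

import boto3

backup = boto3.client("backup")
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "aurora-90-day-retention",    # placeholder plan name
        "Rules": [{
            "RuleName": "daily-snapshots",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 3 * * ? *)",  # daily at 03:00 UTC
            "Lifecycle": {"DeleteAfterDays": 90},       # 90-day retention policy
        }],
    }
)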
Question 9: Correct
An on-premises server is using an SMB network file share to store application data. The application produces
around 50 MB of data per day but it only needs to access some of it for daily processes. To save on storage
costs, the company plans to copy all the application data to AWS, however, they want to retain the ability to
retrieve data with the same low-latency access as the local file share. The company does not have the capacity
to develop the needed tool for this operation.
Which AWS service should the company use?
AWS Storage Gateway
(Correct)
AWS Virtual Private Network (VPN)
Amazon FSx for Windows File Server
AWS Snowball Edge
Explanation
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited
cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid
cloud storage use cases. These include moving backups to the cloud, using on-premises file shares backed by
cloud storage, and providing low latency access to data in AWS for on-premises applications.
You can specify your launch configuration with multiple Auto Scaling groups. However, you can only specify one
launch configuration for an Auto Scaling group at a time, and you can't modify a launch configuration after you've
created it.
The Route table attached to the VPC is shown below. You can establish an SSH connection into the EC2
instance from the Internet. However, you are not able to connect to the web server using your Chrome browser.
FTP is incorrect because the File Transfer Protocol does not guarantee fast throughput and consistent, fast data
transfer.
AWS Direct Connect is incorrect because you have users all around the world and not just on your on-premises
data center. Direct Connect would be too costly and is definitely not suitable for this purpose.
Using CloudFront Origin Access Identity is incorrect because this is a feature which ensures that only
CloudFront can serve S3 content. It does not increase throughput and ensure fast delivery of content to your
customers.
For applications that have steady state or predictable usage, Reserved Instances can provide significant savings
compared to using On-Demand instances.
Reserved Instances are recommended for:
- Applications with steady state usage
- Applications that may require reserved capacity
- Customers that can commit to using EC2 over a 1 or 3 year term to reduce their total computing costs
You can generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET. If
you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a pre-signed object URL
without writing any code. Anyone who receives a valid pre-signed URL can then programmatically upload an
object.
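The explanation mentions the Java and .NET SDKs, but the same idea can be sketched with the Python SDK; the bucket name, key, and one-hour expiry below are placeholders.

import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-upload-bucket", "Key": "reports/q1.csv"},  # placeholders
    ExpiresIn=3600,  # URL is valid for one hour
)
# Anyone holding this URL can upload the object until the URL expires.
print(url)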
Hence, the correct answers are:
- Restrict access to files in the origin by creating an origin access identity (OAI) and give it permission to
read the files in the bucket.
- Require the users to access the private content by using special CloudFront signed URLs or signed
cookies.
This EBS attribute can be changed through the AWS Management Console when launching the instance or
through a CLI/API command.
Hence, the correct answer is the option that says: Set the value of the DeleteOnTermination attribute of the
EBS volumes to False.
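For example, the attribute can be set at launch time via the API, roughly as follows (the AMI ID and device name are placeholders):

import boto3

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",     # root device name depends on the AMI
        "Ebs": {"DeleteOnTermination": False},
    }],
)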
The option that says: Use AWS DataSync to replicate root volume data to Amazon S3 is incorrect because
AWS DataSync does not work with Amazon EBS volumes. DataSync can copy data between Network File
System (NFS) shares, Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone,
Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems,
and Amazon FSx for Windows File Server file systems.
The option that says: Configure ASG to suspend the health check process for each EC2 instance is
incorrect because suspending the health check process will prevent the ASG from replacing unhealthy EC2
instances. This can cause availability issues to the application.
The option that says: Enable the Termination Protection option for all EC2 instances is incorrect.
Termination Protection will just prevent your instance from being accidentally terminated using the Amazon EC2
console.
Active-Active Failover
Use this failover configuration when you want all of your resources to be available the majority of the time. When
a resource becomes unavailable, Route 53 can detect that it's unhealthy and stop including it when responding to
queries.
In active-active failover, all the records that have the same name, the same type (such as A or AAAA), and the
same routing policy (such as weighted or latency) are active unless Route 53 considers them unhealthy. Route
53 can respond to a DNS query using any healthy record.
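As a sketch of active-active failover (all values below are placeholders), two records with the same name and a weighted routing policy can each be tied to a health check so that Route 53 stops returning an unhealthy endpoint:

import boto3

r53 = boto3.client("route53")
endpoints = [
    ("endpoint-1", "203.0.113.10", "11111111-1111-1111-1111-111111111111"),
    ("endpoint-2", "203.0.113.20", "22222222-2222-2222-2222-222222222222"),
]
r53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder hosted zone
    ChangeBatch={"Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": set_id,
                "Weight": 50,
                "TTL": 60,
                "HealthCheckId": health_check_id,  # placeholder health check IDs
                "ResourceRecords": [{"Value": ip}],
            },
        }
        for set_id, ip, health_check_id in endpoints
    ]},
)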
Active-Passive Failover
Use an active-passive failover configuration when you want a primary resource or group of resources to be
available the majority of the time and you want a secondary resource or group of resources to be on standby in
case all the primary resources become unavailable. When responding to queries, Route 53 includes only the
healthy primary resources. If all the primary resources are unhealthy, Route 53 begins to include only the healthy
secondary resources in response to DNS queries.
Configuring an Active-Passive Failover with Weighted Records and configuring an Active-Passive
Failover with Multiple Primary and Secondary Resources are incorrect because an Active-Passive Failover is
mainly used when you want a primary resource or group of resources to be available most of the time and you
want a secondary resource or group of resources to be on standby in case all the primary resources become
unavailable. In this scenario, all of your resources should be available all the time as much as possible which is
why you have to use an Active-Active Failover instead.
Configuring an Active-Active Failover with One Primary and One Secondary Resource is incorrect because
you cannot set up an Active-Active Failover with One Primary and One Secondary Resource. Remember that an
Active-Active Failover uses all available resources all the time without a primary nor a secondary resource.
Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process
vast amounts of data across dynamically scalable Amazon EC2 instances. It securely and reliably handles a
broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine
learning, financial analysis, scientific simulation, and bioinformatics. You can also run other popular distributed
frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other
AWS data stores such as Amazon S3 and Amazon DynamoDB.
The option that says: Amazon DynamoDB for storing and EC2 for analyzing the logs is incorrect because
DynamoDB is a NoSQL database service from AWS. It would be inefficient to store logs in DynamoDB while
using EC2 to analyze them.
The option that says: Amazon EC2 with EBS volumes for storing and analyzing the log files is incorrect
because using EC2 with EBS would be costly, and EBS might not provide the most durable storage for your logs,
unlike S3.
The option that says: Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log
files using a custom-built application is incorrect because using EC2 to analyze logs would be inefficient and
expensive since you will have to program the analyzer yourself.
In Amazon S3, data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and
at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by
using client-side encryption. You have the following options to protect data at rest in Amazon S3.
Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on disks in its
data centers and decrypt it when you download the objects.
Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted data to Amazon S3. In
this case, you manage the encryption process, the encryption keys, and related tools.
Creating an EBS Snapshot is incorrect because this is a backup solution of EBS. It does not provide security of
data inside EBS volumes when executed.
Migrating the EC2 instances from the public to private subnet is incorrect because the data you want to
secure are those in EBS volumes and S3 buckets. Moving your EC2 instance to a private subnet involves a
different matter of security practice, which does not achieve what you want in this scenario.
Using AWS Shield and WAF is incorrect because these protect you from common security threats for your web
applications. However, what you are trying to achieve is securing and encrypting your data inside EBS and S3.
When setting up a bastion host in AWS, you should only allow the individual IP address of the client and not the
entire network. Therefore, the proper CIDR notation should be used in the Source: a /32 denotes a single IP
address, while /0 refers to the entire network.
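A sketch of the matching security group rule (the security group ID is a placeholder) would be:

import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder bastion host security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "175.45.116.100/32", "Description": "single admin IP"}],
    }],
)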
The option that says: Security Group Inbound Rule: Protocol – UDP, Port Range – 22, Source
175.45.116.100/32 is incorrect since the SSH protocol uses TCP and port 22, and not UDP.
The option that says: Network ACL Inbound Rule: Protocol – UDP, Port Range – 22, Source
175.45.116.100/32 is incorrect since the SSH protocol uses TCP and port 22, and not UDP. Aside from that,
network ACLs act as a firewall for your whole VPC subnet while security groups operate on an instance level.
Since you are securing an EC2 instance, you should be using security groups.
The option that says: Network ACL Inbound Rule: Protocol – TCP, Port Range-22, Source
175.45.116.100/0 is incorrect as it allowed the entire network instead of a single IP to gain access to the host.
Amazon API Gateway lets you create an API that acts as a "front door" for applications to access data, business
logic, or functionality from your back-end services, such as code running on AWS Lambda. Amazon API Gateway
handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API
calls, including traffic management, authorization and access control, monitoring, and API version management.
Amazon API Gateway has no minimum fees or startup costs.
AWS Lambda scales your functions automatically on your behalf. Every time an event notification is received for
your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code. Since your
code is stateless, AWS Lambda can start as many copies of your function as needed without lengthy deployment
and configuration delays.
The option that says: Configure CloudFront with DynamoDB as the origin; cache frequently accessed data
on the client device using ElastiCache is incorrect. Although CloudFront delivers content faster to your users
using edge locations, you still cannot integrate DynamoDB table with CloudFront as these two are incompatible.
The option that says: Use AWS SSO and Cognito to authenticate users and have them directly access
DynamoDB using single-sign on. Manually set the provisioned read and write capacity to a higher RCU
and WCU is incorrect because AWS Single Sign-On (SSO) is a cloud SSO service that just makes it easy to
centrally manage SSO access to multiple AWS accounts and business applications. This will not be of much help
on the scalability and performance of the application. It is costly to manually set the provisioned read and write
capacity to a higher RCU and WCU because this capacity will run round the clock and will still be the same even
if the incoming traffic is stable and there is no need to scale.
The option that says: Since Auto Scaling is enabled by default, the provisioned read and write capacity will
adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from
milliseconds to microseconds is incorrect because, by default, Auto Scaling is not enabled in a DynamoDB
table which is created using the AWS CLI.
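For context, enabling auto scaling on a CLI-created table is an explicit step through Application Auto Scaling; a minimal sketch (the table name and capacity bounds are assumptions) looks like this, with an equivalent pair of calls needed for write capacity:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/MyAppTable",                        # placeholder table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Attach a target-tracking policy that keeps read utilization around 70%.
autoscaling.put_scaling_policy(
    PolicyName="read-capacity-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/MyAppTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)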
The option that says: Your VPC does not have a NAT gateway is incorrect because an issue in the NAT
Gateway is unlikely to cause a request throttling issue or produce an EC2ThrottledException error in Lambda.
As per the scenario, the issue is happening only at certain times of the day, which means that the issue is only
intermittent and the function works at other times. We can also conclude that an availability issue is not an issue
since the application is already using a highly available NAT Gateway and not just a NAT instance.
The option that says: The associated security group of your function does not allow outbound
connections is incorrect because if the associated security group does not allow outbound connections then the
Lambda function will not work at all in the first place. Remember that as per the scenario, the issue only happens
intermittently. In addition, Internet traffic restrictions do not usually produce EC2ThrottledException errors.
The option that says: The attached IAM execution role of your function does not have the necessary
permissions to access the resources of your VPC is incorrect because just as what is explained above, the
issue is intermittent and thus, the IAM execution role of the function does have the necessary permissions to
access the resources of the VPC since it works at those specific times. If the issue were indeed caused by a
permission problem, then an EC2AccessDeniedException error would most likely be returned and not
an EC2ThrottledException error.
You can use Amazon Kinesis Data Firehose in conjunction with Amazon Kinesis Data Streams if you need to
implement real-time processing of streaming big data. Kinesis Data Streams provides an ordering of records, as
well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The
Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor,
making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example,
to perform counting, aggregation, and filtering).
Amazon Simple Queue Service (Amazon SQS) is different from Amazon Kinesis Data Firehose. SQS offers a
reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets
you easily move data between distributed application components and helps you build applications in which
messages are processed independently (with message-level ack/fail semantics), such as automated workflows.
Amazon Kinesis Data Firehose is primarily used to load streaming data into data stores and analytics tools.
Hence, the correct answer is: Amazon Kinesis Data Firehose.
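For reference, pushing a record into a Firehose delivery stream is a single API call (the stream name and payload below are placeholders); Firehose then handles buffering and delivery to the configured destination such as S3, Amazon Elasticsearch Service, or Splunk.

import json
import boto3

firehose = boto3.client("firehose")
firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",   # placeholder delivery stream
    Record={"Data": (json.dumps({"user_id": 42, "action": "page_view"}) + "\n").encode("utf-8")},
)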
Amazon Kinesis is incorrect because this is the streaming data platform of AWS and has four distinct services
under it: Kinesis Data Firehose, Kinesis Data Streams, Kinesis Video Streams, and Amazon Kinesis Data
Analytics. For the specific use case just as asked in the scenario, use Kinesis Data Firehose.
Amazon Redshift is incorrect because this is mainly used for data warehousing making it simple and cost-
effective to analyze your data across your data warehouse and data lake. It does not meet the requirement of
being able to load and stream data into data stores for analytics. You have to use Kinesis Data Firehose instead.
Amazon SQS is incorrect because you can't capture, transform, and load streaming data into Amazon S3,
Amazon Elasticsearch Service, and Splunk using this service. You have to use Kinesis Data Firehose instead.
What could be a reason for this issue and how would you resolve it?
By default, AWS allows you to provision a maximum of 20 instances per Availability Zone. Select
a different Availability Zone and retry the failed request.
By default, AWS allows you to provision a maximum of 20 instances per region. Select a different
region and retry the failed request.
There is a vCPU-based On-Demand Instance limit per region which is why subsequent requests failed.
Just submit the limit increase form to AWS and retry the failed requests once approved.
(Correct)
There was an issue with the Amazon EC2 API. Just resend the requests and these will be
provisioned successfully.
Explanation
You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit, purchasing
20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region. New AWS
accounts may start with limits that are lower than the limits described here.
Using Dedicated EC2 instances to ensure that each instance has the maximum performance possible is
not a viable mitigation technique because Dedicated EC2 instances are just an instance billing option. Although it
may ensure that each instance gives the maximum performance, that by itself is not enough to mitigate a DDoS
attack.
Adding multiple Elastic Fabric Adapters (EFA) to each EC2 instance to increase the network bandwidth is
also not a viable option as this is mainly done for performance improvement, and not for DDoS attack mitigation.
Moreover, you can attach only one EFA per EC2 instance. An Elastic Fabric Adapter (EFA) is a network device
that you can attach to your Amazon EC2 instance to accelerate High-Performance Computing (HPC) and
machine learning applications.
The following options are valid mitigation techniques that can be used to prevent DDoS:
- Use an Amazon CloudFront service for distributing both static and dynamic content.
- Use an Application Load Balancer with Auto Scaling groups for your EC2 instances. Prevent direct
Internet traffic to your Amazon RDS database by deploying it to a new private subnet.
- Use AWS Shield and AWS WAF.
A reader endpoint for an Aurora DB cluster provides load-balancing support for read-only connections to the DB
cluster. Use the reader endpoint for read operations, such as queries. By processing those statements on the
read-only Aurora Replicas, this endpoint reduces the overhead on the primary instance. It also helps the cluster
to scale the capacity to handle simultaneous SELECT queries, proportional to the number of Aurora Replicas in
the cluster. Each Aurora DB cluster has one reader endpoint.
If the cluster contains one or more Aurora Replicas, the reader endpoint load-balances each connection request
among the Aurora Replicas. In that case, you can only perform read-only statements such as SELECT in that
session. If the cluster only contains a primary instance and no Aurora Replicas, the reader endpoint connects to
the primary instance. In that case, you can perform write operations through the endpoint.
Hence, the correct answer is to use the built-in Reader endpoint of the Amazon Aurora database.
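The reader endpoint can be looked up with a describe call, roughly as sketched below (the cluster identifier is a placeholder), and the application's read-only connections would then point at that DNS name:

import boto3

rds = boto3.client("rds")
cluster = rds.describe_db_clusters(DBClusterIdentifier="my-aurora-cluster")["DBClusters"][0]
print("Writer (cluster) endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])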
The option that says: Use the built-in Cluster endpoint of the Amazon Aurora database is incorrect because
a cluster endpoint (also known as a writer endpoint) simply connects to the current primary DB instance for that
DB cluster, and it is primarily used for write operations.
DynamoDB is a durable, scalable, and highly available data store which can be used for real-time tabulation. You
can also use AppSync with DynamoDB to make it easy for you to build collaborative apps that keep shared data
updated in real time. You just specify the data for your app with simple code statements and AWS AppSync
manages everything needed to keep the app data updated in real time. This will allow your app to access data in
Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries and combine data
from these services to provide the exact data you need for your app.
Amazon Redshift and AWS Mobile Hub are incorrect as Amazon Redshift is mainly used as a data warehouse
and for online analytic processing (OLAP). Although this service can be used for this scenario, DynamoDB is still
the top choice given its better durability and scalability.
Amazon Relational Database Service (RDS) and Amazon MQ and Amazon Aurora and Amazon
Cognito are possible answers in this scenario, however, DynamoDB is much more suitable for simple mobile
apps that do not have complicated data relationships compared with enterprise web applications. It is stated in
the scenario that the mobile app will be used from around the world, which is why you need a data storage
service which can be supported globally. It would be a management overhead to implement multi-region
deployment for your RDS and Aurora database instances compared to using the Global table feature of
DynamoDB.
There is no additional charge for using gateway endpoints. However, standard charges for data transfer and
resource usage still apply.
Hence, the correct answer is: Create an Amazon S3 gateway endpoint to enable a connection between the
instances and Amazon S3.
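A gateway endpoint for S3 can be created with a call along these lines (the VPC ID and route table ID are placeholders); the endpoint adds a route to S3 via the specified route tables at no additional charge:

import boto3

ec2 = boto3.client("ec2")
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",    # region-specific S3 service name
    RouteTableIds=["rtb-0123456789abcdef0"],     # route table of the private subnets
)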
The option that says: Set up a NAT Gateway in the public subnet to connect to Amazon S3 is incorrect. This
will enable a connection between the private EC2 instances and Amazon S3 but it is not the most cost-efficient
solution. NAT Gateways are charged on an hourly basis even for idle time.
The option that says: Create an Amazon S3 interface endpoint to enable a connection between the
instances and Amazon S3 is incorrect. This is also a possible solution but it's not the most cost-effective
solution. You pay an hourly rate for every provisioned Interface endpoint.
The option that says: Set up an AWS Transit Gateway to access Amazon S3 is incorrect because this service
is mainly used for connecting VPCs and on-premises networks through a central hub.
To store the backup data from on-premises to a durable cloud storage service, you can use File Gateway to store
and retrieve objects through standard file storage protocols (SMB or NFS). File Gateway enables your existing
file-based applications, devices, and workflows to use Amazon S3, without modification. File Gateway securely
and durably stores both file contents and metadata as objects while providing your on-premises applications low-
latency access to cached data.
Hence, the correct answer is: Use the AWS Storage Gateway file gateway to store all the backup data in
Amazon S3.
The option that says: Use the AWS Storage Gateway volume gateway to store the backup data and directly
access it using Amazon S3 API actions is incorrect. Although this is a possible solution, you cannot directly
access the volume gateway using Amazon S3 APIs. You should use File Gateway to access your data in
Amazon S3.
The option that says: Use Amazon EBS volumes to store all the backup data and attach it to an Amazon
EC2 instance is incorrect. Take note that in the scenario, you are required to store the backup data in a durable
storage service. An Amazon EBS volume is not highly durable like Amazon S3. Also, file storage protocols such
as NFS or SMB are not directly supported by EBS.
The option that says: Use AWS Snowball Edge to directly backup the data in Amazon S3 Glacier is incorrect
because AWS Snowball Edge cannot store and retrieve objects through standard file storage protocols. Also,
Snowball Edge can’t directly integrate backups to S3 Glacier.
Amazon S3 is composed of buckets, object keys, object metadata, object tags, and many other components as
shown below:
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts.
An Amazon S3 object key refers to the key name, which uniquely identifies the object in the bucket.
An Amazon S3 object metadata is a name-value pair that provides information about the object.
An Amazon S3 object tag is a key-value pair used for object tagging to categorize storage.
You can perform S3 Select to query only the necessary data inside the CSV files based on the bucket's name
and the object's key.
The following snippet shows how it is done using boto3 (the AWS SDK for Python); note that the
InputSerialization and OutputSerialization parameters are required by select_object_content:

import boto3

client = boto3.client('s3')
resp = client.select_object_content(
    Bucket='tdojo-bucket',                    # Bucket name.
    Key='s3-select/tutorialsdojofile.csv',    # Object key.
    ExpressionType='SQL',
    Expression="select \"Sample\" from s3object s where s.\"tutorialsdojofile\" in ('A', 'B')",
    InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}},   # treat the first row as headers
    OutputSerialization={'CSV': {}},
)
Hence, the correct answer is the option that says: Perform an S3 Select operation based on the bucket's
name and object's key.
The option that says: Perform an S3 Select operation based on the bucket's name and object's metadata is
incorrect because metadata is not needed when querying subsets of data in an object using S3 Select.
The option that says: Perform an S3 Select operation based on the bucket's name and object tags is
incorrect because object tags just provide additional information to your object. This is not needed when querying
with S3 Select although this can be useful for S3 Batch Operations. You can categorize objects based on tag
values to provide S3 Batch Operations with a list of objects to operate on.
The option that says: Perform an S3 Select operation based on the bucket's name is incorrect because you
need both the bucket’s name and the object key to successfully perform an S3 Select operation.
When you create a read replica for Amazon RDS for MySQL, MariaDB, PostgreSQL, and Oracle, Amazon RDS
sets up a secure communications channel using public-key encryption between the source DB instance and the
read replica, even when replicating across regions. Amazon RDS establishes any AWS security configurations
such as adding security group entries needed to enable the secure channel.
You can also create read replicas within a Region or between Regions for your Amazon RDS for MySQL,
MariaDB, PostgreSQL, and Oracle database instances encrypted at rest with AWS Key Management Service
(KMS).
Hence, the correct answers are:
- It elastically scales out beyond the capacity constraints of a single DB instance for read-heavy database
workloads.
- Provides asynchronous replication and improves the performance of the primary database by taking
read-heavy database workloads from it.
The option that says: Allows both read and write operations on the read replica to complement the primary
database is incorrect as Read Replicas are primarily used to offload read-only operations from the primary
database instance. By default, you can't do a write operation to your Read Replica.
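Creating a read replica itself is a single API call, sketched below with placeholder identifiers; cross-Region and KMS-encrypted variants take additional parameters such as SourceRegion and KmsKeyId.

import boto3

rds = boto3.client("rds")
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-read-replica",    # placeholder replica name
    SourceDBInstanceIdentifier="mydb-primary",   # placeholder source instance
    DBInstanceClass="db.r5.large",
)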
stopped - The instance is shut down and cannot be used. The instance can be restarted at any time.
shutting-down - The instance is preparing to be terminated.
terminated - The instance has been permanently deleted and cannot be restarted. Take note that Reserved
Instances that applied to terminated instances are still billed until the end of their term according to their
payment option.
The option that says: You will be billed when your On-Demand instance is preparing to hibernate with
a stopping state is correct because when the instance state is stopping, you will not be billed if it is preparing to
stop; however, you will still be billed if it is preparing to hibernate.
When the first AZ goes down, the second AZ will only have an initial 4 EC2 instances. This will eventually be
scaled up to 8 instances since the solution is using Auto Scaling.
The 110% compute capacity for the 4 servers might cause some degradation of the service, but not a total
outage since there are still some instances that handle the requests. Depending on your scale-up configuration in
your Auto Scaling group, the additional 4 EC2 instances can be launched in a matter of minutes.
T3 instances also have a Burstable Performance capability to burst or go beyond the current compute capacity of
the instance to higher performance as required by your workload. So your 4 servers will be able to manage 110%
compute capacity for a short period of time. This is the power of cloud computing versus our on-premises network
architecture. It provides elasticity and unparalleled scalability.
Take note that Auto Scaling will launch additional EC2 instances to the remaining Availability Zone/s in the event
of an Availability Zone outage in the region. Hence, the correct answer is the option that says: Deploy four EC2
instances with Auto Scaling in one Availability Zone and four in another availability zone in the same
region behind an Amazon Elastic Load Balancer.
The option that says: Deploy eight EC2 instances with Auto Scaling in one Availability Zone behind an
Amazon Elastic Load Balancer is incorrect because this architecture is not highly available. If that Availability
Zone goes down then your web application will be unreachable.
The options that say: Deploy four EC2 instances with Auto Scaling in one region and four in another region
behind an Amazon Elastic Load Balancer and Deploy two EC2 instances with Auto Scaling in four regions
behind an Amazon Elastic Load Balancer are incorrect because the ELB is designed to only run in one region
and not across multiple regions.
Using Amazon Data Lifecycle Manager is incorrect because this is primarily used to manage the lifecycle of
your AWS resources and not to allow certain traffic to go through.
Configuring the Network Access Control List of your VPC to permit ingress traffic over port 22 from your
IP is incorrect because this is not necessary in this scenario, as it was specified that you were able to connect to
other EC2 instances. In addition, a network ACL is more suitable for controlling the traffic that goes in and out of
your entire VPC subnet and not just one EC2 instance.
Configure the Security Group of the EC2 instance to permit ingress traffic over port 3389 from your IP is
incorrect because this is relevant to RDP and not SSH.
Amazon EBS volume is incorrect because EBS is not as durable as S3. In addition, it is best to store the static
contents in S3 rather than EBS.
Amazon EC2 instance store is incorrect because it is definitely not suitable - the data it holds will be wiped out
once the EC2 instance is stopped or terminated.
Amazon RDS instance is incorrect because an RDS instance is just a database and not suitable for storing
static content. By default, RDS is not durable unless you launch it in a Multi-AZ deployment configuration.
To be sure that a destination account owns an S3 object copied from another account, grant the destination
account the permissions to perform the cross-account copy. Follow these steps to configure cross-account
permissions to copy objects from a source bucket in Account A to a destination bucket in Account B:
- Attach a bucket policy to the source bucket in Account A.
- Attach an AWS Identity and Access Management (IAM) policy to a user or role in Account B.
- Use the IAM user or role in Account B to perform the cross-account copy.
Hence, the correct answer is: Configure cross-account permissions in S3 by creating an IAM customer-
managed policy that allows an IAM user or role to copy objects from the source bucket in one account to
the destination bucket in the other account. Then attach the policy to the IAM user or role that you want
to use to copy objects between accounts.
The option that says: Enable the Requester Pays feature in the source S3 bucket. The fees would be
waived through Consolidated Billing since both AWS accounts are part of AWS Organizations is incorrect
because the Requester Pays feature is primarily used if you want the requester, instead of the bucket owner, to
pay the cost of the data transfer request and download from the S3 bucket. This solution lacks the necessary IAM
Permissions to satisfy the requirement. The most suitable solution here is to configure cross-account permissions
in S3.
The option that says: Set up cross-origin resource sharing (CORS) in S3 by creating a bucket policy that
allows an IAM user or role to copy objects from the source bucket in one account to the destination
bucket in the other account is incorrect because CORS simply defines a way for client web applications that
are loaded in one domain to interact with resources in a different domain, and not on a different AWS account.
The option that says: Connect the two S3 buckets from two different AWS accounts to Amazon WorkDocs.
Set up cross-account access to integrate the two S3 buckets. Use the Amazon WorkDocs console to
copy the objects from one account to the other with modified object ownership assigned to the
destination account is incorrect because Amazon WorkDocs is commonly used to easily collaborate, share
content, provide rich feedback, and collaboratively edit documents with other users. There is no direct way for
you to integrate WorkDocs and an Amazon S3 bucket owned by a different AWS account. A better solution here
is to use cross-account permissions in S3 to meet the requirement.
Spread placement groups are recommended for applications that have a small number of critical instances that
should be kept separate from each other. Launching instances in a spread placement group reduces the risk of
simultaneous failures that might occur when instances share the same racks. Spread placement groups provide
access to distinct racks, and are therefore suitable for mixing instance types or launching instances over time. A
spread placement group can span multiple Availability Zones in the same Region. You can have a maximum of
seven running instances per Availability Zone per group.
Hence, the correct answer is: Set up a cluster placement group within a single Availability Zone in the same
AWS Region.
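A cluster placement group can be created and then referenced at launch, roughly as sketched below (the group name, AMI, and instance type are placeholders):

import boto3

ec2 = boto3.client("ec2")
ec2.create_placement_group(GroupName="low-latency-cluster", Strategy="cluster")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.18xlarge",       # placeholder instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-cluster"},
)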
The option that says: Set up a spread placement group across multiple Availability Zones in multiple AWS
Regions is incorrect because although using a placement group is valid for this particular scenario, you can only
set up a placement group within a single AWS Region. A spread placement group can span multiple Availability
Zones in the same Region.
The option that says: Set up AWS Direct Connect connections across multiple Availability Zones for
increased bandwidth throughput and more consistent network experience is incorrect because this is
primarily used for hybrid architectures. It bypasses the public Internet and establishes a secure, dedicated
connection from your on-premises data center into AWS, and not used for having low latency within your AWS
network.
The option that says: Use EC2 Dedicated Instances is incorrect because these are EC2 instances that run in a
VPC on hardware that is dedicated to a single customer and are physically isolated at the host hardware level
from instances that belong to other AWS accounts. It is not used for reducing latency.
Amazon EC2 Spot Instances are spare compute capacity in the AWS Cloud available to you at steep discounts
compared to On-Demand prices. EC2 Spot enables you to optimize your costs on the AWS Cloud and scale your
application's throughput up to 10X for the same budget. By simply selecting Spot when launching EC2 instances,
you can save up to 90% compared to On-Demand prices. The only difference between On-Demand Instances
and Spot Instances is that Spot Instances can be interrupted by EC2 with two minutes of notification when EC2
needs the capacity back.
You can specify whether Amazon EC2 should hibernate, stop, or terminate Spot Instances when they are
interrupted. You can choose the interruption behavior that meets your needs.
Take note that there is no "bid price" anymore for Spot EC2 instances since March 2018. You simply have to set
your maximum price instead.
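Specifying a maximum price at launch can be sketched as follows; the instance type, AMI, and price value are placeholders, and omitting MaxPrice simply caps the price at the On-Demand rate.

import boto3

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.03",                       # maximum price per hour, in USD
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)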
Reserved instances and Dedicated instances are incorrect as both do not act as spare compute capacity.
On-demand instances is a valid option but a Spot instance is much cheaper than On-Demand.
By attaching a transit gateway to a Direct Connect gateway using a transit virtual interface, you can manage a
single connection for multiple VPCs or VPNs that are in the same AWS Region. You can also advertise prefixes
from on-premises to AWS and from AWS to on-premises.
The AWS Transit Gateway and AWS Direct Connect solution simplify the management of connections between
an Amazon VPC and your networks over a private connection. It can also minimize network costs, improve
bandwidth throughput, and provide a more reliable network experience than Internet-based connections.
Hence, the correct answer is: Create a new Direct Connect gateway and integrate it with the existing Direct
Connect connection. Set up a Transit Gateway between AWS accounts and associate it with the Direct
Connect gateway.
The option that says: Set up another Direct Connect connection for each and every new AWS account that
will be added is incorrect because this solution entails a significant amount of additional cost. Setting up a single
DX connection requires a substantial budget and takes a lot of time to establish. It also has high management
overhead since you will need to manage all of the Direct Connect connections for all AWS accounts.
The option that says: Create a new AWS VPN CloudHub. Set up a Virtual Private Network (VPN) connection
for additional AWS accounts is incorrect because a VPN connection is not capable of providing consistent and
dedicated access to the on-premises network services. Take note that a VPN connection traverses the public
Internet and doesn't use a dedicated connection.
The option that says: Set up a new Direct Connect gateway and integrate it with the existing Direct
Connect connection. Configure a VPC peering connection between AWS accounts and associate it with
Direct Connect gateway is incorrect because VPC peering is not supported in a Direct Connect connection.
VPC peering does not support transitive peering relationships.
The option that says: It securely delivers data to customers globally with low latency and high transfer
speeds is incorrect because this option describes what CloudFront does and not ElastiCache.
The option that says: It provides an in-memory cache that delivers up to 10x performance improvement
from milliseconds to microseconds or even at millions of requests per second is incorrect because this
option describes what Amazon DynamoDB Accelerator (DAX) does and not ElastiCache. Amazon DynamoDB
Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB. Amazon ElastiCache
cannot provide a performance improvement from milliseconds to microseconds, let alone millions of requests per
second like DAX can.
The option that says: It reduces the load on your database by routing read queries from your applications
to the Read Replica is incorrect because this option describes what an RDS Read Replica does and not
ElastiCache. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database
instance within the same AWS Region or in a different AWS Region.