Solve Question
Q1. The EC2 instances need to communicate with each other frequently and require network performance with low
latency and high throughput. Which EC2 configuration meets these requirements?
A. Launch the EC2 instances in a cluster placement group in one Availability Zone.
B. Launch the EC2 instances in a spread placement group in one Availability Zone.
C. Launch the EC2 instances in an Auto Scaling group in two Regions and peer the VPCs.
D. Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones.
Analysis :
Cluster placement groups are designed for low-latency, high-throughput communication between
instances.
Instances are placed close together in a single Availability Zone to minimize latency.
Suitable for HPC because it directly addresses the need for low latency and high throughput.
A spread placement group is used when the requirement is high availability, not low-latency networking.
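As a rough illustration (not part of the original question), a cluster placement group can be created and used at launch time with boto3; the AMI ID, instance type, and group name below are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a cluster placement group (instances are packed close together in one AZ)
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

# Launch the instances into the placement group so they share low-latency networking
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.18xlarge",       # instance type chosen for illustration
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster-pg"},
)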
Q2. A company wants to host a scalable web application on AWS. The application will be accessed by
users from different geographic regions of the world. Application users will be able to download and
upload unique data up to gigabytes in size. The development team wants a cost-effective solution to
minimize upload and download latency and maximize performance. What should a solutions architect
do to accomplish this?
A. Use Amazon S3 with Transfer Acceleration to host the application.
B. Use Amazon S3 with Cache-Control headers to host the application.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.
Analysis :
Amazon S3 with Transfer Acceleration is correct because it minimizes upload and download latency by
using AWS’s global edge network, making it cost-effective and ideal for large, geographically distributed
file transfers. It meets the application’s needs for performance and scalability.
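For illustration only, Transfer Acceleration is enabled per bucket and then used through the accelerated endpoint; the bucket and file names are placeholders:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket (one-time configuration)
s3.put_bucket_accelerate_configuration(
    Bucket="example-user-content",                    # placeholder bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload and download through the accelerated edge endpoint
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("large-file.bin", "example-user-content", "uploads/large-file.bin")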
Q3. A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the
company's applications stores files on a Windows file server farm that uses Distributed File System
Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?
A. Amazon EFS
B. Amazon FSx
C. Amazon S3
D. AWS Storage Gateway
Analysis:
Amazon EFS (Elastic File System)
Fully managed, scalable file storage service for Linux-based applications.
Uses the Network File System (NFS) protocol for file sharing.
Ideal for Linux workloads, container storage, and applications needing shared storage.
Not suitable for Windows-based file servers or DFSR.
Amazon FSx
Amazon FSx for Windows File Server provides fully managed Windows-native file storage with SMB access, NTFS, and Active Directory integration, and it supports DFS Namespaces.
It is the appropriate replacement for a Windows file server farm that uses DFSR, making it the correct choice here.
Amazon S3
Scalable object storage service for storing and retrieving data (e.g., backups, images, large files).
Does not support traditional file-sharing protocols like SMB or NFS.
Suitable for static content, data archives, and backup storage.
Not a replacement for file servers or DFSR.
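A minimal sketch of provisioning FSx for Windows File Server with boto3; the subnet, security group, Active Directory ID, and capacity values are illustrative assumptions:

import boto3

fsx = boto3.client("fsx")

# Create a Windows-native file system that SMB clients (and DFS Namespaces) can use
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                       # GiB, sized for illustration
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    WindowsConfiguration={
        "ThroughputCapacity": 32,               # MB/s
        "ActiveDirectoryId": "d-0123456789",    # placeholder managed AD directory
    },
)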
Q4. A company has a legacy application that processes data in two parts. The second part of the process
takes longer than the first, so the company has decided to rewrite the application as two microservices
running on Amazon ECS that can scale independently.
How should a solutions architect integrate the microservices?
A. Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to
invoke microservice 2.
B. Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in
microservice 2 to subscribe to this topic.
C. Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in
microservice 2 to read from Kinesis Data Firehose.
D. Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in
microservice 2 to process messages from the queue.
Q5. A company captures clickstream data from multiple websites and analyzes it using batch processing.
The data is loaded nightly into Amazon Redshift and is consumed by business analysts. The company wants
to move towards near-real-time data processing for timely insights. The solution should process the
streaming data with minimal effort and operational overhead.
Which combination of AWS services is MOST cost-effective for this solution? (Choose two.)
A. Amazon EC2
B. AWS Lambda
C. Amazon Kinesis Data Streams
D. Amazon Kinesis Data Firehose
E. Amazon Kinesis Data Analytics
Q6. A company's application runs on Amazon EC2 instances behind an Application Load Balancer
(ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. On
the first day of every month at midnight, the application becomes much slower when the month-end
financial calculation batch executes. This causes the CPU utilization of the EC2 instances to immediately
peak to 100%, which disrupts the application.
What should a solutions architect recommend to ensure the application is able to handle the workload and
avoid downtime?
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.
Analysis:
Option B (Simple Scaling): Use this when the workload is unpredictable and varies based on real-time
traffic or resource usage, such as spikes in user demand. Simple scaling reacts to metrics like CPU
utilization, automatically adding or removing EC2 instances as needed. While effective for dynamic
scenarios, it may not scale quickly enough for workloads with predictable spikes, as it only acts after
thresholds are breached.
Option C (Scheduled Scaling): Use this when the workload is predictable and follows a known pattern,
such as daily or monthly processing tasks. Scheduled scaling allows you to proactively scale the number of
EC2 instances up or down at specific times, ensuring the system is prepared to handle resource-intensive
tasks before they occur. This approach prevents downtime and ensures cost efficiency by scaling only when
required.
In summary, use Option B for unpredictable workloads and Option C for predictable workloads.
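A minimal sketch of a scheduled scaling action with boto3; the Auto Scaling group name, sizes, and UTC cron expressions are illustrative assumptions:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out for the month-end batch (cron is evaluated in UTC; in practice you would
# schedule this a few minutes before the batch actually starts)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="finance-web-asg",      # placeholder ASG name
    ScheduledActionName="month-end-scale-out",
    Recurrence="0 0 1 * *",                      # first of every month at midnight UTC
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=12,
)

# Scale back in once the batch window has passed
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="finance-web-asg",
    ScheduledActionName="month-end-scale-in",
    Recurrence="0 8 1 * *",
    DesiredCapacity=4,
)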
Q7. A company runs a multi-tier web application that hosts news content. The application runs on Amazon
EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across
multiple Availability Zones and use an Amazon Aurora database. A solutions architect needs to make the
application more resilient to periodic increases in request rates.
Q8. An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When
evaluating performance metrics, a solutions architect discovered that the database reads are causing high
I/O and adding latency to the write requests against the database.
What should the solutions architect do to separate the read requests from the write requests?
A. Enable read-through caching on the Amazon Aurora database.
B. Update the application to read from the Multi-AZ standby instance.
C. Create a read replica and modify the application to use the appropriate endpoint.
D. Create a second Amazon Aurora database and link it to the primary database as a read replica.
Analysis:
Amazon Aurora read replicas are designed to handle read-only traffic, which reduces the load on the
primary database and improves write performance. By creating a read replica, the application can direct read
requests to the replica using the Aurora reader endpoint, which automatically balances the traffic across
replicas. This ensures efficient traffic separation and eliminates the impact of high read operations on write
latency. Aurora’s native support for read replicas makes this a simple, cost-effective, and scalable solution to
handle increased read traffic without additional complexity.
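A small sketch showing how the two Aurora endpoints are retrieved so the application can send reads to the reader endpoint; the cluster identifier is a placeholder:

import boto3

rds = boto3.client("rds")

cluster = rds.describe_db_clusters(DBClusterIdentifier="app-aurora-cluster")["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]        # use for INSERT/UPDATE/DELETE traffic
reader_endpoint = cluster["ReaderEndpoint"]  # load-balances SELECT traffic across read replicas

# The application's read-only connection string points at reader_endpoint,
# while writes continue to use writer_endpoint.
print(writer_endpoint, reader_endpoint)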
Q9. A recently acquired company is required to build its own infrastructure on AWS and migrate multiple
applications to the cloud within a month. Each application has approximately 50 TB of data to be
transferred. After the migration is complete, this company and its parent company will both require secure
network connectivity with consistent throughput from their data centers to the applications. A
solutions architect must ensure one-time data migration and ongoing network connectivity.
Which solution will meet these requirements?
A. AWS Direct Connect for both the initial transfer and ongoing connectivity.
B. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity.
C. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.
D. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.
Analysis:
AWS Snowball is ideal for the one-time transfer of large datasets (~50 TB per application), as it bypasses
network bandwidth limitations and ensures quick and secure migration. For ongoing connectivity, AWS
Direct Connect provides a dedicated, reliable, and high-throughput connection between the data center and
AWS, meeting the need for secure and consistent performance after migration. This combination effectively
addresses both the initial migration and the long-term connectivity requirements.
Q10. A company serves content to its subscribers across the world using an application running on AWS.
The application has several Amazon EC2 instances in a private subnet behind an Application Load Balancer
(ALB). Due to a recent change in copyright restrictions, the chief information officer (CIO) wants to block
access for certain countries.
Which action will meet these requirements?
A. Modify the ALB security group to deny incoming traffic from blocked countries.
B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries.
C. Use Amazon CloudFront to serve the application and deny access to blocked countries.
D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.
Analysis:
Option C (Amazon CloudFront) is correct because it natively supports geo-restrictions to block traffic
from specific countries efficiently. It is scalable, easy to implement, and enhances performance for allowed
users by caching content at edge locations.
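A hedged sketch of applying a geo restriction to an existing CloudFront distribution with boto3; the distribution ID and country codes are placeholders:

import boto3

cloudfront = boto3.client("cloudfront")

dist_id = "E1234567890ABC"  # placeholder distribution ID
resp = cloudfront.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Deny requests originating from the blocked countries at the edge
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",
        "Quantity": 2,
        "Items": ["AA", "BB"],  # placeholder ISO country codes
    }
}

cloudfront.update_distribution(Id=dist_id, IfMatch=etag, DistributionConfig=config)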
Q11. A company is creating a new application that will store a large amount of data. The data will be
analyzed hourly and modified by several Amazon EC2 Linux instances that are deployed across multiple
Availability Zones. The application team believes the amount of space needed will continue to grow for
the next 6 months.
Which set of actions should a solutions architect take to support these needs?
A. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the
application instances.
B. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system
on the application instances.
C. Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the
application instances.
D. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared
between the application instances.
Analysis:
Amazon EFS is correct because it provides a scalable, shared file system that can be accessed concurrently
by multiple EC2 Linux instances across Availability Zones. It supports frequent data modifications and
hourly analysis, while automatically scaling to accommodate the growing data needs.
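A minimal sketch of creating the shared file system and one mount target per Availability Zone; the subnet and security group IDs are placeholders, and instances then mount it over NFS:

import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(CreationToken="shared-analytics-data", Encrypted=True)
fs_id = fs["FileSystemId"]

# One mount target per AZ so instances in every zone reach the file system locally
for subnet_id in ["subnet-aaa1111", "subnet-bbb2222"]:    # placeholder subnets in different AZs
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],           # placeholder security group
    )

# Each instance then mounts it, for example:
#   sudo mount -t efs <fs_id>:/ /mnt/data   (using amazon-efs-utils)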
Q12. A company is migrating a three-tier application to AWS. The application requires a MySQL
database. In the past, the application users reported poor application performance when creating new
entries. These performance issues were caused by users generating different real-time reports from the
application during working hours.
Which solution will improve the performance of the application when it is moved to AWS?
A. Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application to
use DynamoDB for reports.
B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed the
on-premises database.
C. Create an Amazon Aurora MySQL Multi-AZ DB cluster With multiple read replicas. Configure
the application to use the reader endpoint for reports.
D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup
instance of the cluster as an endpoint for the reports.
Analysis:
Amazon Aurora MySQL Multi-AZ with read replicas allows you to separate read and write workloads
effectively. The Multi-AZ setup ensures high availability for the primary database, while the read replicas
handle reporting workloads without impacting the performance of write operations, such as creating new
entries. By using the Aurora reader endpoint, the application can automatically distribute read requests
across the replicas, ensuring efficient load balancing and scalability. This approach resolves the performance
issue by offloading read-heavy tasks, making the application faster and more reliable.
Q13. A solutions architect is deploying a distributed database on multiple Amazon EC2 instances. The
database stores all data on multiple instances so it can withstand the loss of an instance. The database
requires block storage with latency and throughput to support several million transactions per second per
server.
Which storage solution should the solutions architect recommend?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon EC2 instance store
C. Amazon Elastic File System (Amazon EFS)
D. Amazon S3
Analysis:
Option B (Amazon EC2 Instance Store) is correct because it provides super-fast, temporary storage
directly attached to the server, making it perfect for databases that need to handle millions of transactions
per second. Since the database is distributed across multiple instances, losing data when an instance stops
isn’t a problem. This makes instance store the best choice for high-speed, high-performance storage.
Q14. Organizers for a global event want to put daily reports online as static HTML pages. The pages are
expected to generate millions of views from users around the world. The files are stored in an Amazon S3
bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
A. Generate presigned URLs for the files.
B. Use cross-Region replication to all Regions.
C. Use the geoproximity feature of Amazon Route 53.
D. Use Amazon CloudFront with the S3 bucket as its origin.
Analysis:
Use Amazon CloudFront with the S3 bucket as its origin because CloudFront is like a network of fast
delivery points located all around the world. It takes the files from the S3 bucket and stores copies of them
closer to users, so people can access the pages faster no matter where they are. This also reduces the
workload on the S3 bucket and helps handle millions of views without slowing down. It's the best way to
ensure the reports are available quickly and efficiently for a global audience.
Q15. A solutions architect is designing a new service behind Amazon API Gateway. The request patterns for
the service will be unpredictable and can change suddenly from 0 requests to over 500 per second. The
total size of the data that needs to be persisted in a backend database is currently less than 1 GB with
unpredictable future growth. Data can be queried using simple key-value requests.
Which combination of AWS services would meet these requirements? (Choose two.)
A. AWS Fargate
B. AWS Lambda
C. Amazon DynamoDB
D. Amazon EC2 Auto Scaling
E. MySQL-compatible Amazon Aurora
Analysis:
AWS Lambda is a serverless compute service that automatically scales based on the number of incoming
requests. It is cost-efficient for unpredictable workloads because it charges only for the time the code runs.
Perfect for handling sudden traffic spikes without manual intervention. It is serverless, it's also cost-
effective. You only pay for compute time when your code runs.
Amazon DynamoDB is a NoSQL database service that provides extremely fast and predictable
performance with seamless scalability. It's ideal for key-value lookups and can handle massive amounts of
data with automatic scaling of storage and throughput. This addresses the small initial data size,
unpredictable future growth, and key-value access requirements.
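A sketch of the Lambda side of this design, assuming an API Gateway proxy integration and a hypothetical DynamoDB table named "items" keyed by "id":

import json
import boto3

# Table name and key schema are assumptions for illustration
table = boto3.resource("dynamodb").Table("items")

def lambda_handler(event, context):
    # With a proxy integration, path parameters arrive in the event
    item_id = event["pathParameters"]["id"]
    result = table.get_item(Key={"id": item_id})

    if "Item" not in result:
        return {"statusCode": 404, "body": json.dumps({"message": "not found"})}

    return {"statusCode": 200, "body": json.dumps(result["Item"], default=str)}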
Q16. A start-up company has a web application based in the us-east-1 Region with multiple Amazon EC2
instances running behind an Application Load Balancer across multiple Availability Zones. As the
company's user base grows in the us-west-1 Region, it needs a solution with low latency and high
availability.
What should a solutions architect do to accomplish this?
A. Provision EC2 instances in us-west-1. Switch the Application Load Balancer to a Network Load Balancer
to achieve cross-Region load balancing.
B. Provision EC2 instances and an Application Load Balancer in us-west-1. Make the load balancer
distribute the traffic based on the location of the request.
C. Provision EC2 instances and configure an Application Load Balancer in us-west-1. Create an
accelerator in AWS Global Accelerator that uses an endpoint group that includes the load balancer
endpoints in both Regions.
D. Provision EC2 instances and configure an Application Load Balancer in us-west-1. Configure Amazon
Route 53 with a weighted routing policy. Create alias records in Route 53 that point to the Application Load
Balancer.
Analysis:
AWS Global Accelerator is designed to improve the performance of global applications by routing traffic
through AWS's global network backbone. By creating an accelerator with endpoint groups in both us-east-1
and us-west-1, traffic will be directed to the closest healthy endpoint (the ALBs in each region), minimizing
latency for users in both regions. This solution also provides high availability because if one region becomes
unavailable, traffic will be automatically routed to the other.
NLBs do not support cross-region load balancing.
ALBs are regional resources. A single ALB cannot distribute traffic across multiple regions.
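A rough sketch of wiring both Regions' ALBs into one accelerator; the ARNs are placeholders, and the Global Accelerator control-plane API is called in us-west-2:

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")  # GA control plane lives in us-west-2

accel = ga.create_accelerator(Name="global-web-app", Enabled=True)["Accelerator"]
listener = ga.create_listener(
    AcceleratorArn=accel["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# One endpoint group per Region, each pointing at that Region's ALB
for region, alb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/east/abc"),
    ("us-west-1", "arn:aws:elasticloadbalancing:us-west-1:111122223333:loadbalancer/app/west/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 100}],
    )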
Q17. A solutions architect is designing a solution to access a catalog of images (the original images must be
retained) and to provide users with the ability to submit requests to customize images. Image customization
parameters will be included in any request sent to an AWS API Gateway API. The customized image will be
generated on demand, and users will receive a link they can click to view or download their customized
image. The solution must be highly available for viewing and customizing images.
What is the MOST cost-effective solution to meet these requirements?
A. Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the
original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2
instances.
B. Use AWS Lambda to manipulate the original image to the requested customizations. Store the
original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with
the S3 bucket as the origin.
C. Use AWS Lambda to manipulate the original image to the requested customizations. Store the original
images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load
Balancer in front of the Amazon EC2 instances.
D. Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the
original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon
CloudFront distribution with the S3 bucket as the origin.
Analysis:
B is the most cost-effective and highly available solution. Lambda is a serverless compute service that only
runs when invoked, making it ideal for on-demand image processing. Storing images in S3 provides durable
and scalable storage at a low cost. CloudFront, a CDN, caches the images globally, providing low latency
access for viewing and downloads and further reducing S3 costs by serving content from edge locations.
This combination minimizes costs by only paying for compute when needed (Lambda) and leveraging the
cost-effective storage and delivery of S3 and CloudFront.
DynamoDB is designed for key-value or document data, not for storing binary image data.
Q18. A company is planning to migrate a business-critical dataset to Amazon S3. The current solution
design uses a single S3 bucket in the us-east-1 Region with versioning enabled to store the dataset. The
company's disaster recovery policy states that all data must reside in multiple AWS Regions.
How should a solutions architect design the S3 solution?
A. Create an additional S3 bucket in another Region and configure cross-Region replication.
B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS).
C. Create an additional S3 bucket with versioning in another Region and configure cross-Region
replication.
D. Create an additional S3 bucket with versioning in another Region and configure cross-origin resource
(CORS).
Analysis:
Cross-Region Replication feature is specifically designed to replicate data across AWS Regions. It ensures
that your data is redundant and available in multiple locations.
Versioning: Enabling versioning in both the source and destination buckets is crucial for disaster recovery.
This allows you to restore to any previous version of an object in case of data corruption or accidental
deletion.
Why not CORS? CORS (Cross-Origin Resource Sharing) is used to allow web applications running on one
domain to access resources from a different domain. It's not relevant for disaster recovery or data replication
across Regions.
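A hedged sketch of the replication setup described above; the bucket names and IAM role ARN are placeholders, and versioning must be enabled on both buckets before replication is configured:

import boto3

s3 = boto3.client("s3")

# Versioning is required on both source and destination buckets
for bucket in ["source-dataset-use1", "dr-dataset-usw2"]:
    s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})

# Replicate every new object version to the bucket in the second Region
s3.put_bucket_replication(
    Bucket="source-dataset-use1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",   # placeholder replication role
        "Rules": [{
            "ID": "replicate-all",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::dr-dataset-usw2"},
        }],
    },
)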
Q19. A company has applications running on Amazon EC2 instances in a VPC. One of the applications
needs to call an Amazon S3 API to store and read objects. The company's security policies restrict any
internet-bound traffic from the applications.
Which action will fulfill these requirements and maintain security?
A. Configure an S3 interface endpoint.
B. Configure an S3 gateway endpoint.
C. Create an S3 bucket in a private subnet.
D. Create an S3 bucket in the same Region as the EC2 instance.
Analysis:
Option A is not the best fit. Interface endpoints (AWS PrivateLink) place ENIs in your subnets and incur
hourly and data-processing charges; they are typically used for services such as API Gateway. Although S3
also supports interface endpoints, the gateway endpoint is the standard, no-cost way to reach S3 privately
from within a VPC, so an interface endpoint is unnecessary here.
Option C is incorrect because Amazon S3 is a managed service that exists outside your VPC and is not tied
to specific subnets. You cannot place an S3 bucket in a private subnet; it doesn't operate that way. This
makes the option invalid for solving the problem.
Option B is correct because an S3 gateway endpoint allows secure access to S3 directly through the AWS
private network. It ensures that all traffic between the EC2 instances and S3 stays within the VPC and never
goes to the internet, fully complying with the company’s security policies.
Q20. A company's web application uses an Amazon RDS PostgreSQL DB instance to store its application
data. During the financial closing period at the start of every month, Accountants run large queries that
impact the database's performance due to high usage. The company wants to minimize the impact that the
reporting activity has on the web application.
What should a solutions architect do to reduce the impact on the database with the LEAST amount of effort?
A. Create a read replica and direct reporting traffic to the replica.
B. Create a Multi-AZ database and direct reporting traffic to the standby.
C. Create a cross-Region read replica and direct reporting traffic to the replica.
D. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift database.
Analysis:
Create a read replica is correct because it allows reporting traffic to be offloaded to a separate, read-only
replica, reducing the load on the primary database with minimal effort. It is easy to set up and requires small
changes to redirect reporting queries, ensuring the web application's performance is not impacted by
reporting activities.
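A minimal sketch of creating the reporting replica with boto3; the identifiers and instance class are placeholders, and the reporting queries are then pointed at the replica's endpoint instead of the primary:

import boto3

rds = boto3.client("rds")

replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-postgres-reporting",   # placeholder replica name
    SourceDBInstanceIdentifier="app-postgres",       # placeholder primary instance
    DBInstanceClass="db.r6g.large",                  # sized for illustration
)

# Once the replica is available, its endpoint is used for the monthly reporting queries
print(replica["DBInstance"]["DBInstanceIdentifier"])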
A Multi-AZ database provides a standby instance for high availability and failover. The standby instance is
not accessible for read or write operations unless a failover occurs.
Amazon Redshift is designed for lightning-fast queries, enabling businesses to answer complex questions
about massive amounts of data in seconds, which is crucial for making quick, data-driven decisions. It is
built to handle enormous datasets, from terabytes to petabytes, acting as a specialized warehouse for storing
and analyzing vast amounts of information. Additionally, Redshift is optimized for performance and
efficiency, making it a cost-effective solution for businesses looking to gain insights without incurring high
costs.
Q21. A company wants to migrate a high performance computing (HPC) application and data from on-
premises to the AWS Cloud. The company uses tiered storage on premises with hot high-performance
parallel storage to support the application during periodic runs of the application, and more economical
cold storage to hold the data when the application is not actively running.
Which combination of solutions should a solutions architect recommend to support the storage needs of the
application? (Choose two.)
A. Amazon S3 for cold data storage
B. Amazon Elastic File System (Amazon EFS) for cold data storage
C. Amazon S3 for high-performance parallel storage
D. Amazon FSx for Lustre for high-performance parallel storage
E. Amazon FSx for Windows for high-performance parallel storage
Analysis:
FSx for Lustre is specifically designed for high-performance computing workloads. It provides a scalable
and durable file system that can handle the demanding performance requirements of HPC applications.
Lustre is a well-known and widely used parallel file system in the HPC community.
S3 is an ideal choice for cold storage due to its low cost and scalability. It's well-suited for archiving data
that is not frequently accessed, such as data between HPC runs. S3 offers various storage classes (like S3
Standard-Infrequent Access) optimized for cost-effectiveness for infrequently accessed data.
Q22. A company's application is running on Amazon EC2 instances in a single Region. In the event of a
disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
A. Detach a volume on an EC2 instance and copy it to Amazon S3.
B. Launch a new EC2 instance from an Amazon Machine Image (AMI) in a new Region.
C. Launch a new EC2 instance in a new Region and copy a volume from Amazon S3 to the new instance.
D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the
destination.
E. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2
instance in the destination Region using that EBS volume.
Analysis:
Option D: Copy an Amazon Machine Image (AMI) to a Different Region
Copying an AMI to a different Region ensures that you have a backup of your EC2 instance in a new
Region, ready for disaster recovery. AMIs are the most efficient way to replicate an instance because they
capture the configuration, operating system, and data necessary to recreate the instance. By copying the AMI
to the target Region, you enable quick and seamless deployment of EC2 instances without the need to
reconfigure or manually transfer data, making it a critical step in disaster recovery planning.
Option B: Launch a New EC2 Instance from an AMI in a New Region
Once the AMI is copied to the new Region, you can use it to launch a new EC2 instance, ensuring that the
application can run in the second Region during a disaster. This step restores the application environment
quickly and efficiently, leveraging the preconfigured AMI to recreate the instance exactly as it was in the
original Region. By launching a new EC2 instance from the AMI, you minimize downtime and ensure
business continuity with minimal effort.
Q23. A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2
instances in a VPC do not traverse the internet.
What should the solutions architect do to accomplish this? (Choose two.)
A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.
C. Create a new DynamoDB table that uses the endpoint.
D. Create an ENI for the endpoint in each of the subnets of the VPC.
E. Create a security group entry in the default security group to provide access.
Analysis:
Option A: Create a route table entry for the endpoint
This is correct because a route table entry is required to direct traffic to the gateway endpoint for
DynamoDB. Gateway endpoints are used for AWS services like Amazon S3 and DynamoDB. Without the
route table entry, traffic wouldn’t know to use the private endpoint, and it might traverse the internet instead.
Option B: Create a gateway endpoint for DynamoDB
This is correct because a gateway endpoint provides private connectivity to AWS services like Amazon S3
and DynamoDB, keeping traffic within the AWS network. It is the primary method to ensure secure
communication with these services without requiring internet access.
Option D: Create an ENI for the endpoint in each of the subnets of the VPC
This is incorrect because ENIs (Elastic Network Interfaces) are used for interface endpoints, not for
gateway endpoints. Interface endpoints are used for services like Amazon EC2, Amazon SNS, AWS
Systems Manager, and others. Since DynamoDB uses a gateway endpoint, ENIs are not needed.
Option E: Create a security group entry in the default security group to provide access
This is incorrect because gateway endpoints do not rely on security groups for traffic control. Instead, they
work through route tables to manage traffic. Security group entries are relevant for services that use
interface endpoints with ENIs, but they are not applicable to gateway endpoints for DynamoDB.
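A sketch of creating the gateway endpoint; passing the route table IDs is what adds the required route entries automatically (the VPC, Region, and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],       # routes to DynamoDB are added to these tables
)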
Q24. A company's legacy application is currently relying on a single-instance Amazon RDS MySQL
database without encryption. Due to new compliance requirements, all existing and new data in this
database must be encrypted.
How should this be accomplished?
A. Create an Amazon S3 bucket with server-side encryption enabled. Move all the data to Amazon S3.
Delete the RDS instance.
B. Enable RDS Multi-AZ mode with encryption at rest enabled. Perform a failover to the standby instance to
delete the original instance.
C. Take a Snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS
instance from the encrypted snapshot.
D. Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch
the application over to the new master. Delete the old RDS instance.
Analysis:
Option C is correct because it follows the AWS-recommended approach to encrypt an unencrypted RDS
instance. By creating an encrypted copy of the database snapshot and restoring a new RDS instance, both
existing and new data are encrypted. This ensures compliance with minimal complexity.
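A hedged sketch of the snapshot-copy-restore sequence with boto3; the instance and snapshot identifiers and the KMS key alias are placeholders:

import boto3

rds = boto3.client("rds")

# 1. Snapshot the existing unencrypted instance
rds.create_db_snapshot(DBInstanceIdentifier="legacy-mysql",
                       DBSnapshotIdentifier="legacy-mysql-snap")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-mysql-snap")

# 2. Copy the snapshot with a KMS key; the copy is encrypted
rds.copy_db_snapshot(SourceDBSnapshotIdentifier="legacy-mysql-snap",
                     TargetDBSnapshotIdentifier="legacy-mysql-snap-encrypted",
                     KmsKeyId="alias/aws/rds")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-mysql-snap-encrypted")

# 3. Restore a new, encrypted instance and cut the application over to it
rds.restore_db_instance_from_db_snapshot(DBInstanceIdentifier="legacy-mysql-encrypted",
                                          DBSnapshotIdentifier="legacy-mysql-snap-encrypted")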
Q25. A manufacturing company wants to implement predictive maintenance on its machinery equipment.
The company will install thousands of IoT sensors that will send data to AWS in real time. A solutions
architect is tasked with implementing a solution that will receive events in an ordered manner for each
machinery asset and ensure that data is saved for further processing at a later time.
Which solution would be MOST efficient?
A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset.
Use Amazon Kinesis Data Firehose to save data to Amazon S3.
B. Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use
Amazon Kinesis Data Firehose to save data to Amazon Elastic Block Store (Amazon EBS).
C. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger
an AWS Lambda function for the SQS queue to save data to Amazon Elastic File System (Amazon EFS).
D. Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset.
Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3.
Analysis:
Option A is correct because Amazon Kinesis Data Streams ensures ordered event processing with
partitions for each asset, and Kinesis Data Firehose efficiently saves the data to Amazon S3 for future
analysis. This combination provides a scalable and efficient solution tailored to the company’s needs.
Kinesis Data Firehose delivers to destinations such as Amazon S3; it cannot deliver to Amazon EBS, which rules out Option B.
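A sketch of the producer side, assuming a hypothetical stream named "machine-telemetry"; using the asset ID as the partition key is what keeps each asset's events in order:

import json
import boto3

kinesis = boto3.client("kinesis")

def publish_reading(asset_id: str, reading: dict) -> None:
    # All records for one asset share a partition key, so they land on the same
    # shard and are read back in the order they were produced.
    kinesis.put_record(
        StreamName="machine-telemetry",            # placeholder stream name
        Data=json.dumps(reading).encode("utf-8"),
        PartitionKey=asset_id,
    )

publish_reading("press-042", {"temperature_c": 81.5, "vibration_mm_s": 3.2})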
Q26. A company's website runs on Amazon EC2 instances behind an Application Load Balancer (ALB).
The website has a mix of dynamic and static content. Users around the globe are reporting that the website is
slow.
Which set of actions will improve website performance for users worldwide?
A. Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the
Amazon Route 53 record to point to the CloudFront distribution.
B. Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with
larger instance sizes and register the instances with the ALB.
C. Launch new EC2 instances hosting the same web application in different Regions closer to the users.
Then register instances with the same ALB using cross- Region VPC peering.
D. Host the website in an Amazon S3 bucket in the Regions closest to the users and delete the ALB and EC2
instances. Then update an Amazon Route 53 record to point to the S3 buckets.
Analysis:
A Content Delivery Network (CDN) like CloudFront caches static and dynamic content at edge locations
around the globe. This brings content closer to users, significantly reducing latency and improving website
performance. Configuring ALB as the origin allows CloudFront to efficiently fetch dynamic content from
your web servers.
Option D is incorrect because S3 can only host static content; it is object storage and cannot serve the dynamic portion of the website.
Q27. A company has been storing analytics data in an Amazon RDS instance for the past few years. The
company asked a solutions architect to find a solution that allows users to access this data using an API.
The expectation is that the application will experience periods of inactivity but could receive bursts of
traffic within seconds.
Which solution should the solutions architect suggest?
A. Set up an Amazon API Gateway and use Amazon ECS.
B. Set up an Amazon API Gateway and use AWS Elastic Beanstalk.
C. Set up an Amazon API Gateway and use AWS Lambda functions.
D. Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling.
Analysis:
API Gateway with Lambda is the best fit because it offers a serverless architecture that scales
automatically based on traffic, ensuring cost-efficiency during inactivity and instant scalability for sudden
bursts of traffic. Lambda is designed for short-lived executions, making it ideal for handling API requests
while only charging for the compute time used. Its seamless integration with API Gateway simplifies
defining endpoints, managing traffic, and ensuring secure access, allowing the company to focus solely on
developing the API logic and accessing data from RDS without worrying about server management or
infrastructure maintenance.
When to Use AWS Elastic Beanstalk
AWS Elastic Beanstalk is ideal for deploying and managing traditional applications where you want to focus
on writing code without worrying about the underlying infrastructure. It automatically handles provisioning,
load balancing, scaling, and monitoring, making it a great choice for developers who need a quick and easy
setup. Beanstalk is best suited for standard architectures like web applications, REST APIs, or backend
services using languages such as Node.js, Python, Java, or .NET. For example, a small-to-medium-sized
web application that requires a fast deployment process and predictable scaling needs can efficiently run on
Elastic Beanstalk without requiring manual infrastructure management.
When to Use Amazon ECS
Amazon Elastic Container Service (ECS) is the best fit for containerized workloads that require high
scalability and fine-grained control over deployment. ECS allows you to run, scale, and manage
containerized applications, making it ideal for microservices-based architectures or large-scale, distributed
workloads. It integrates seamlessly with AWS Fargate for serverless containers or EC2 instances for more
control. For instance, if you are deploying a microservices architecture for an e-commerce platform where
each service (e.g., payments, inventory) runs in a container, ECS provides the orchestration needed to
manage these services effectively.
Q28. A company must generate sales reports at the beginning of every month. The reporting process
launches 20 Amazon EC2 instances on the first of the month. The process runs for 7 days and cannot be
interrupted. The company wants to minimize costs.
Which pricing model should the company choose?
A. Reserved Instances
B. Spot Block Instances
C. On-Demand Instances
D. Scheduled Reserved Instances
Q29. A gaming company has multiple Amazon EC2 instances in a single Availability Zone for its
multiplayer game that communicates with users on Layer 4. The chief technology officer (CTO) wants to
make the architecture highly available and cost-effective.
What should a solutions architect do to meet these requirements? (Choose two.)
A. Increase the number of EC2 instances.
B. Decrease the number of EC2 instances.
C. Configure a Network Load Balancer in front of the EC2 instances.
D. Configure an Application Load Balancer in front of the EC2 instances.
E. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones
automatically.
Analysis:
A Network Load Balancer (NLB) operates at Layer 4 (Transport Layer) and is specifically designed to
handle protocols like TCP and UDP, making it an ideal choice for gaming applications. It provides low-
latency, high-throughput traffic distribution, ensuring that incoming traffic is efficiently routed to available
EC2 instances. This improves reliability and performance for multiplayer gaming traffic. Since the
requirement involves Layer 4 communication, an NLB is a better fit than other load balancers designed for
higher-layer protocols.
An Application Load Balancer (ALB) operates at Layer 7 (Application Layer) and is designed to manage
HTTP/HTTPS traffic. While ALBs are excellent for web applications requiring advanced features like
routing based on URLs or headers, they are not suitable for Layer 4 gaming protocols such as TCP/UDP.
Using an ALB in this case would not meet the performance or protocol requirements of the gaming
application, making it an incorrect choice.
Option E: Configure an Auto Scaling group to add or remove instances in multiple Availability Zones
automatically
An Auto Scaling group ensures that the number of EC2 instances dynamically adjusts based on real-time
traffic demand, optimizing both cost and performance. It also distributes instances across multiple
Availability Zones, providing high availability and resilience to failure in a single zone. This makes it a
perfect solution for ensuring the architecture remains cost-effective while maintaining uptime and
scalability, aligning with the company’s requirements.
Q30. A company currently operates a web application backed by an Amazon RDS MySQL database. It has
automated backups that are run daily and are not encrypted. A security audit requires future backups to be
encrypted and the unencrypted backups to be destroyed. The company will make at least one encrypted
backup before destroying the old backups.
What should be done to enable encryption for future backups?
A. Enable default encryption for the Amazon S3 bucket where backups are stored.
B. Modify the backup section of the database configuration to toggle the Enable encryption check box.
C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the
encrypted snapshot.
D. Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary.
Remove the original database instance.
Analysis:
Option A suggests enabling default encryption for the Amazon S3 bucket where backups are stored. This
only encrypts objects stored in that S3 bucket; it does not encrypt the actual database backups. Moreover,
RDS backups are managed by AWS and are not stored in S3 buckets that users can control, so enabling
encryption on an S3 bucket has no effect on the encryption of RDS backups. This option does not address
the requirement to encrypt the database or its backups, making it irrelevant in this scenario.
Option B proposes modifying the database configuration to enable encryption. However, AWS does not
allow encryption to be turned on for an existing RDS instance. Encryption must be enabled during the
creation of the database. To encrypt an existing database, it must first be restored from an encrypted
snapshot. Therefore, this option is not technically feasible.
Option C is correct because it uses the AWS-recommended method to encrypt an unencrypted RDS
database. By creating a snapshot of the database, copying it to an encrypted snapshot, and restoring the
database from the encrypted snapshot, a new encrypted database instance is created. This ensures that all
future backups will also be encrypted, meeting the security audit requirements effectively and without
unnecessary complexity.
Q31. A company is hosting a website behind multiple Application Load Balancers. The company has
different distribution rights for its content around the world. A solutions architect needs to ensure that
users are served the correct content without violating distribution rights.
Which configuration should the solutions architect choose to meet these requirements?
A. Configure Amazon CloudFront with AWS WAF.
B. Configure Application Load Balancers with AWS WAF.
C. Configure Amazon Route 53 with a geolocation policy.
D. Configure Amazon Route 53 with a geoproximity routing policy.
Analysis:
Geolocation Policy: Route 53's geolocation routing policy allows you to direct traffic to different endpoints
based on the user's geographic location. This enables you to serve content specific to the user's region,
ensuring compliance with distribution rights.
Configure Amazon CloudFront with AWS WAF: While CloudFront can be used for content delivery, it
primarily focuses on caching and improving performance. It doesn't offer fine-grained control over content
distribution based on user location.
Configure Amazon Route 53 with a geoproximity routing policy: Geoproximity routing directs traffic
based on the geographic distance between users and your resources, with an optional bias to shift traffic
between endpoints. It is intended for balancing load across locations, so it doesn't guarantee that users
will be served content according to their specific location and distribution rights.
Configure Application Load Balancers with AWS WAF: Application Load Balancers are primarily for
load balancing traffic across multiple targets.
Q32. A solutions architect has created a new AWS account and must secure AWS account root user access.
Which combination of actions will accomplish this? (Choose two.)
A. Ensure the root user uses a strong password.
B. Enable multi-factor authentication to the root user.
C. Store root user access keys in an encrypted Amazon S3 bucket.
D. Add the root user to a group containing administrative permissions.
E. Apply the required permissions to the root user with an inline policy document.
Q33. A solutions architect at an ecommerce company wants to back up application log data to Amazon S3.
The solutions architect is unsure how frequently the logs will be accessed or which logs will be accessed
the most. The company wants to keep costs as low as possible by using the appropriate S3 storage class.
Which S3 storage class should be implemented to meet these requirements?
A. S3 Glacier
B. S3 Intelligent-Tiering
C. S3 Standard-infrequent Access (S3 Standard-IA)
D. S3 One Zone-infrequent Access (S3 One Zone-IA)
Analysis:
Option B: S3 Intelligent-Tiering is the most suitable choice for storing application log data with
unpredictable access patterns. This storage class automatically moves objects between access tiers
(Frequent Access, Infrequent Access, and optional Archive tiers) based on actual access patterns. This
eliminates the need for manual tiering decisions and ensures that data is stored in the most cost-effective tier.
For example, if log files are frequently accessed initially for debugging or analysis, they remain in the
Frequent Access tier. If access to those logs diminishes over time, Intelligent-Tiering automatically moves
them to the lower-cost Infrequent Access tier. Conversely, if infrequently accessed logs are suddenly needed
again, Intelligent-Tiering automatically returns them to the Frequent Access tier. This dynamic tiering
mechanism optimizes storage costs without sacrificing data availability.
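As a small illustration, new log objects can be written directly into Intelligent-Tiering; the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")

with open("app.log", "rb") as f:
    s3.put_object(
        Bucket="example-app-logs",              # placeholder bucket
        Key="logs/2024/01/01/app.log",          # placeholder key
        Body=f,
        StorageClass="INTELLIGENT_TIERING",     # S3 moves the object between tiers automatically
    )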
Option C: S3 Standard-Infrequent Access (S3 Standard-IA) is intended for data that is accessed
infrequently but still requires fast, immediate retrieval when needed. It offers lower storage costs compared
to S3 Standard but incurs retrieval fees, making it suitable for data with predictable access patterns. For
unpredictable workloads like application logs, Standard-IA can lead to higher overall costs due to retrieval
charges and the lack of automatic tiering.
Option D: S3 One Zone-IA is similar to S3 Standard-IA but offers lower storage costs in exchange for
reduced durability. It stores data in a single Availability Zone, which means there's a slightly higher risk of
data loss in case of an Availability Zone failure. For application log data, where data durability might be a
concern, S3 One Zone-IA might not be the most suitable choice. While cost-effective, it introduces a higher
risk of data loss compared to other options.
Q34. A company's website is used to sell products to the public. The site runs on Amazon EC2 instances in
an Auto Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront
distribution, and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin
for the CloudFront distribution. A recent review of security logs revealed an external malicious IP that needs
to be blocked from accessing the website.
Which action will meet this requirement?
A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.
B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP
address.
C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the
malicious IP address.
D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the
malicious IP address.
Analysis:
Option B: Modify the configuration of AWS WAF to add an IP match condition to block the
malicious IP address
This option is correct because AWS WAF is designed to provide web application security by allowing or
blocking requests based on rules such as IP match conditions. When the IP match rule is added to AWS
WAF, malicious traffic from the specific IP address is blocked at the edge (CloudFront), ensuring the
request does not reach the backend infrastructure, such as the ALB or EC2 instances. This approach reduces
unnecessary traffic to backend resources, improves performance, and aligns with the current architecture,
which already uses AWS WAF for security purposes.
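A partial sketch of the blocking step with boto3; the IP address shown is a documentation-range placeholder, and for a CloudFront-scoped web ACL the wafv2 calls must be made in us-east-1:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT scope requires us-east-1

ip_set = wafv2.create_ip_set(
    Name="blocked-ips",
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32"],        # placeholder malicious IP
)["Summary"]

# The IP set is then referenced from a rule in the existing web ACL
# (via update_web_acl) with an IPSetReferenceStatement and a Block action.
print(ip_set["ARN"])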
Option A: Modify the network ACL on the CloudFront distribution to add a deny rule for the
malicious IP address
This option is incorrect because network ACLs (NACLs) are associated with subnets in a VPC and are
used to control inbound and outbound traffic at the network layer. However, CloudFront does not use
NACLs, as it is a global service not tied to VPC subnets. Attempting to apply a deny rule in a NACL would
have no effect on CloudFront traffic. CloudFront traffic filtering must be managed at the application layer,
using tools like AWS WAF, not NACLs.
Option C: Modify the network ACL for the EC2 instances in the target groups behind the ALB to
deny the malicious IP address
This option is incorrect because NACLs operate at the subnet level and are not directly associated with
ALBs. Since the Application Load Balancer is the origin for traffic from CloudFront, its IP address is what
is visible to the EC2 instances, not the original client IP address. Blocking traffic at the NACL level would
require whitelisting CloudFront IP ranges, which is impractical to manage. Therefore, this method is
ineffective for blocking a specific malicious IP when using CloudFront and ALB.
Option D: Modify the security groups for the EC2 instances in the target groups behind the ALB to
deny the malicious IP address
This option is also incorrect because security groups control traffic to and from EC2 instances but cannot
block the malicious IP in this scenario. Traffic coming from the ALB to the EC2 instances would have the
ALB’s IP address, not the original client IP. Therefore, applying a deny rule in the security group would
block all traffic from the ALB, not just the malicious IP. This makes it an unsuitable solution for the given
architecture.
Q35. A solutions architect is designing an application for a two-step order process. The first step is
synchronous and must return to the user with little latency. The second step takes longer, so it will be
implemented in a separate component. Orders must be processed exactly once and in the order in which
they are received.
How should the solutions architect integrate these components?
A. Use Amazon SQS FIFO queues.
B. Use an AWS Lambda function along with Amazon SQS standard queues.
C. Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic.
D. Create an SNS topic and subscribe an Amazon SQS Standard queue to that topic.
Analysis:
Option A: Use Amazon SQS FIFO queues
This is the correct option because Amazon SQS FIFO queues ensure message ordering (First-In-First-Out)
and guarantee exactly-once processing. These features are essential for this use case, as orders must be
processed in the sequence they are received and cannot be processed multiple times. FIFO queues also
integrate well with asynchronous processing, making them ideal for decoupling the two steps in the order
process. The synchronous step can immediately push a message to the FIFO queue, while the second, longer
process can consume messages from the queue in the exact order. This ensures reliability, scalability, and
compliance with the application's requirements.
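A minimal sketch of the producer side for the FIFO queue; the queue and group names are illustrative. The message group ID preserves ordering and the deduplication ID enforces exactly-once enqueueing:

import json
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true"},
)["QueueUrl"]

def submit_order(order_id: str, payload: dict) -> None:
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps(payload),
        MessageGroupId="orders",             # messages in one group are delivered in order
        MessageDeduplicationId=order_id,     # duplicate submissions of the same order are dropped
    )

submit_order("order-1001", {"item": "book", "qty": 1})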
Option B: Use an AWS Lambda function along with Amazon SQS standard queues
This option is incorrect because SQS standard queues do not guarantee message order, which violates
the requirement for sequential order processing. Standard queues offer at-least-once delivery, meaning a
message might be delivered more than once, leading to potential duplicate processing. While Lambda can
process messages asynchronously, this combination does not meet the critical requirements of exactly-once
processing and maintaining order, making it unsuitable for this use case.
Option C: Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic
SNS is designed for fan-out delivery, meaning a single message is published to multiple subscribers. In this
case, we need to ensure that the order processing happens sequentially, not simultaneously. Using SNS
would introduce unnecessary complexity and could disrupt the order of processing.
Option D: Create an SNS topic and subscribe an Amazon SQS standard queue to that topic
This option is also incorrect because neither SNS nor SQS standard queues guarantee message order.
Additionally, SQS standard queues only provide at-least-once delivery, which does not ensure exactly-once
processing. While this combination can handle high throughput and broadcast messages to multiple
consumers, it fails to meet the application’s specific needs for order preservation and duplicate prevention.
Q36. A web application is deployed in the AWS Cloud. It consists of a two-tier architecture that includes a
web layer and a database layer. The web server is vulnerable to cross-site scripting (XSS) attacks.
What should a solutions architect do to remediate the vulnerability?
A. Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
B. Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
C. Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS
WAF.
D. Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield
Standard.
Analysis:
Option A: A Classic Load Balancer (CLB) is a legacy load balancing solution in AWS that operates at
Layer 4 (TCP) and Layer 7 (HTTP/HTTPS). While CLB can distribute traffic, it lacks advanced features
compared to the newer Application Load Balancer (ALB). For instance, CLB does not offer integrated
support for routing based on URL paths, host headers, or other HTTP-specific features. Moreover, AWS
WAF cannot be associated with a Classic Load Balancer, so this design offers no way to inspect and filter
malicious web requests. Due to these limitations in managing web
traffic and HTTP-based attacks, CLB is not the ideal choice for mitigating cross-site scripting (XSS)
vulnerabilities, making this option less effective than others.
Option B: A Network Load Balancer (NLB) operates at Layer 4 (TCP/UDP), which is focused on routing
network traffic based on IP addresses and ports, not the actual contents of the HTTP request. AWS WAF
cannot be associated with an NLB, and NLB itself lacks the Layer 7 (application layer) capabilities needed
to inspect web traffic. Since XSS
attacks target vulnerabilities in HTTP-based web traffic (such as malicious scripts in web forms or URLs),
NLB's inability to inspect and route HTTP traffic effectively makes it unsuitable for protecting against such
vulnerabilities. For mitigating XSS, a more application-aware solution like ALB is preferred.
Option C: This option is the best solution. An Application Load Balancer (ALB) operates at Layer 7
(HTTP/HTTPS), which means it can intelligently route web traffic based on HTTP-specific parameters like
URL paths, query strings, and headers. This makes it ideal for handling web application traffic and
protecting against attacks such as cross-site scripting (XSS). ALB integrates seamlessly with AWS WAF,
which is designed to filter out malicious web requests that might exploit vulnerabilities like XSS. AWS
WAF can inspect incoming HTTP requests, block known attack patterns, and provide customizable rules to
prevent such threats. This combination of ALB and AWS WAF is the most effective and secure solution for
protecting your web application from XSS attacks.
Option D: This option combines an Application Load Balancer (ALB) with AWS Shield Standard for
protection. While AWS Shield Standard provides protection against DDoS (Distributed Denial of Service)
attacks, it does not address application-layer vulnerabilities like XSS. AWS Shield is primarily focused on
defending against large-scale attacks that flood your network with excessive traffic, rather than blocking
malicious content at the application level. To protect against XSS attacks, you need a solution like AWS
WAF, which is specifically designed to filter malicious web traffic. While Shield provides an important
layer of DDoS protection, it does not address the root cause of XSS vulnerabilities, making this solution less
effective for remediating the issue compared to enabling AWS WAF.
Q37. A company's website is using an Amazon RDS MySQL Multi-AZ DB instance for its transactional
data storage. There are other internal systems that query this DB instance to fetch data for internal batch
processing. The RDS DB instance slows down significantly when the internal systems fetch data. This
impacts the website's read and write performance, and the users experience slow response times.
Which solution will improve the website's performance?
A. Use an RDS PostgreSQL DB instance instead of a MySQL database.
B. Use Amazon ElastiCache to cache the query responses for the website.
C. Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance.
D. Add a read replica to the RDS DB instance and configure the internal systems to query the read
replica.
Analysis:
Option D: Add a read replica to the RDS DB instance and configure the internal systems to query the
read replica. This is the most effective solution for improving the website's performance. A read replica is an
asynchronously replicated copy of the primary database that is dedicated to serving read traffic. By directing
the internal systems to query the read replica for their batch processing needs, you offload read traffic from the primary database.
This significantly reduces the load on the primary instance, allowing it to handle website requests more
efficiently. As a result, the website experiences faster read and write operations, leading to improved
response times and a better user experience.
Option A: Use an RDS PostgreSQL DB instance instead of a MySQL database. While PostgreSQL is a
powerful database, switching to PostgreSQL might not directly address the performance bottleneck caused
by heavy read traffic from internal systems. Both MySQL and PostgreSQL are relational databases.
Option B: Use Amazon ElastiCache to cache the query responses for the website. ElastiCache is well-
suited for caching frequently accessed data. However, it might not be as effective for complex queries or
large datasets that are frequently updated, as seen in this scenario where internal systems are fetching data
for batch processing.
Option C: Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance.
Adding an Availability Zone to a Multi-AZ setup primarily enhances high availability and fault tolerance. It
does not directly address the performance issue caused by the increased read load from internal systems.
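As an illustration, a read replica can be created from the existing instance with a short boto3 call (a sketch; the instance identifiers and instance class are hypothetical). The internal batch systems would then point their connection strings at the replica's endpoint instead of the primary.

import boto3

rds = boto3.client("rds")

# Hypothetical identifiers; replace with the real primary DB instance name.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="website-db-replica-1",
    SourceDBInstanceIdentifier="website-db-primary",
    DBInstanceClass="db.r6g.large",
)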
Q38. An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run
in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best
when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group.
B. Use a target tracking policy to dynamically scale the Auto Scaling group.
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.
Analysis:
When a specific target CPU utilization is called out, choose a target tracking policy.
B. Use a target tracking policy to dynamically scale the Auto Scaling group.
A target tracking scaling policy is the most efficient choice for this scenario. It automatically adjusts the
number of EC2 instances in the Auto Scaling group to keep a specific metric (like CPU utilization) at a
target value. By setting the target to 40% CPU utilization, the Auto Scaling group will scale the number of
instances up or down as needed to maintain this level of CPU usage. This approach is dynamic, automatic,
and designed to keep performance at an optimal level without manual intervention.
A. Use a simple scaling policy to dynamically scale the Auto Scaling group.
A simple scaling policy can be based on a specific CloudWatch alarm, such as when CPU utilization
exceeds a certain threshold. However, a simple scaling policy doesn't automatically adjust to maintain a
target value like 40% CPU utilization. It reacts to specific conditions but lacks the ability to continuously
optimize for the desired CPU utilization in a balanced way, making it less ideal for maintaining a specific
performance target.
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
While AWS Lambda could be used to programmatically adjust the desired capacity of an Auto Scaling
group, this approach requires custom logic and does not provide the same level of automation and
responsiveness as target tracking scaling policies. Lambda functions would need to continuously monitor
CPU utilization and adjust the group size, but this would involve more complexity and manual
configuration, making it less efficient than using a built-in scaling policy.
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.
Scheduled scaling actions allow you to scale the Auto Scaling group at fixed times, such as scaling up
during known peak hours and scaling down during off-peak times. While this could be useful in certain
scenarios where traffic patterns are predictable, it does not automatically adjust based on real-time metrics
like CPU utilization. It would not help maintain the desired CPU utilization of 40% in a dynamic and
responsive manner, so it's less suitable for maintaining performance as efficiently as target tracking.
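A target tracking policy for the 40% CPU target can be attached to the group with a single boto3 call (a sketch; the Auto Scaling group and policy names are hypothetical).

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",            # hypothetical group name
    PolicyName="keep-cpu-at-40-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,                   # the desired average CPU utilization
    },
)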
Q39. A company runs an internal browser-based application. The application runs on Amazon EC2
instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group
across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but
scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day
begins, although it runs well by mid-morning.
How should the scaling be changed to address the staff complaints and keep costs to a minimum?
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown
period.
D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the
office opens.
Analysis:
Scheduled action (Option A): The morning slowness is predictable, so the Auto Scaling group should be
scaled out proactively. A scheduled action that sets the desired capacity to 20 shortly before the office opens
ensures enough instances are already in service when staff arrive, and the group can still scale back down
afterward, keeping costs low. Reactive policies such as step scaling or target tracking (Options B and C) only
add instances after load has already risen, so users would still experience slowness at the start of the day.
Setting both the minimum and maximum capacity to 20 (Option D) would prevent the group from scaling in
and would increase costs unnecessarily. A scheduled action, as shown below, addresses the complaint at
minimal cost.
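A recurring scheduled action along these lines could be created with boto3 (a sketch; the group name and cron expression are hypothetical, and the time is evaluated in UTC).

import boto3

autoscaling = boto3.client("autoscaling")

# Raise the desired capacity to 20 shortly before the office opens on weekdays.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="internal-app-asg",    # hypothetical group name
    ScheduledActionName="pre-warm-before-office-opens",
    Recurrence="45 7 * * MON-FRI",              # cron expression, evaluated in UTC
    DesiredCapacity=20,
)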
Q40. A financial services company has a web application that serves users in the United States and Europe.
The application consists of a database tier and a web server tier. The database tier consists of a MySQL
database hosted in us-east-1. Amazon Route 53 geoproximity routing is used to direct traffic to instances in
the closest Region. A performance review of the system reveals that European users are not receiving the
same level of query performance as those in the United States.
Which changes should be made to the database tier to improve performance?
A. Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in one of the European Regions.
B. Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to
additional Regions.
C. Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to
reduce the load on the primary instance.
D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode.
Configure read replicas in one of the European Regions.
Analysis:
Aurora Global Database: This is designed for global applications with low latency requirements. It
replicates data across multiple Availability Zones and Regions with minimal latency. By configuring read
replicas in a European Region, you bring the database closer to European users.
Multi-AZ deployments provide high availability and automatic failover within a single Region, but they do
not improve read query performance for users in other Regions.
DynamoDB is a NoSQL database, which may not be compatible with the application's existing relational
database structure and query requirements.
Deploying MySQL instances in each Region would improve regional performance, but using an
Application Load Balancer (ALB) in front of a database is not a valid or supported design. ALBs are used
for distributing traffic among web or application servers, not for database tiers.
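Assuming the data has already been migrated into an Aurora MySQL cluster in us-east-1, promoting it to a global database and adding a secondary cluster in Europe might look like the following sketch (all identifiers are hypothetical).

import boto3

rds_us = boto3.client("rds", region_name="us-east-1")
rds_eu = boto3.client("rds", region_name="eu-west-1")

# Wrap the existing Aurora MySQL cluster in a global database.
rds_us.create_global_cluster(
    GlobalClusterIdentifier="webapp-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:webapp-aurora",
)

# Add a read-only secondary cluster in a European Region for local reads.
rds_eu.create_db_cluster(
    DBClusterIdentifier="webapp-aurora-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="webapp-global",
)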
Q41. A company hosts a static website on-premises and wants to migrate the website to AWS. The website
should load as quickly as possible for users around the world. The company also wants the most cost-
effective solution.
A. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content.
Replicate the S3 bucket to multiple AWS Regions.
B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage
content. Configure Amazon CloudFront with the S3 bucket as the origin.
C. Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP
Server. Configure Amazon Route 53 geolocation routing policies to select the closest origin.
D. Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache
HTTP Server in multiple AWS Regions. Configure Amazon CloudFront geolocation routing policies to
select the closest origin.
Analysis:
S3 is designed for storing and serving objects such as static website content, and CloudFront caches that
content at edge locations around the world, giving users everywhere low-latency access. This combination is
also the most cost-effective, since it avoids replicating buckets or running web servers in multiple Regions.
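A minimal sketch of this setup with boto3 is shown below; the bucket name is hypothetical, and the cache policy ID is assumed to be the AWS managed CachingOptimized policy.

import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Serve the bucket as a static website (bucket name is hypothetical).
s3.put_bucket_website(
    Bucket="my-static-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Put a CloudFront distribution in front of the bucket for global edge caching.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "static-site-launch-1",
        "Comment": "Static website served from S3",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-origin",
                "DomainName": "my-static-site.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # assumed CachingOptimized managed policy
        },
    },
)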
Q42. A solutions architect is designing storage for a high performance computing (HPC) environment based
on Amazon Linux. The workload stores and processes a large amount of engineering drawings that require
shared storage and heavy computing.
Which storage option would be the optimal solution?
A. Amazon Elastic File System (Amazon EFS)
B. Amazon FSx for Lustre
C. Amazon EC2 instance store
D. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1)
Analysis:
When high performance computing (HPC) on Amazon Linux is mentioned, choose Amazon FSx for Lustre.
FSx for Lustre is purpose-built for HPC workloads and provides high-throughput shared storage that many
instances can access concurrently. EBS volumes are not shared storage, instance store is ephemeral, and EFS
cannot match Lustre's throughput for this kind of workload.
Q43. A company is performing an AWS Well-Architected Framework review of an existing workload
deployed on AWS. The review identified a public-facing website running on the same Amazon EC2
instance as a Microsoft Active Directory domain controller that was installed recently to support other AWS
services. A solutions architect needs to recommend a new design that would improve the security of the
architecture and minimize the administrative demand on IT staff.
What should the solutions architect recommend?
A. Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on
the current EC2 instance.
B. Create another EC2 instance in the same subnet and reinstall Active Directory on it. Uninstall Active
Directory.
C. Use AWS Directory Service to create an Active Directory connector. Proxy Active Directory requests to
the Active Directory domain controller running on the current EC2 instance.
D. Enable AWS Single Sign-On (AWS SSO) with Security Assertion Markup Language (SAML) 2.0
federation with the current Active Directory controller. Modify the EC2 instance's security group to deny
public access to Active Directory.
Analysis:
Option A is the correct option because AWS Directory Service provides a fully managed, secure, and
scalable Active Directory solution. It eliminates the need to host Active Directory on the same EC2 instance
as the public-facing website, resolving the security risk. AWS also handles administrative tasks like backups
and updates, reducing IT workload. This solution aligns with best practices for security and operational
efficiency.
Q44. A company hosts a static website within an Amazon S3 bucket. A solutions architect needs to ensure
that data can be recovered in case of accidental deletion.
Which action will accomplish this?
A. Enable Amazon S3 versioning.
B. Enable Amazon S3 Intelligent-Tiering.
C. Enable an Amazon S3 lifecycle policy.
D. Enable Amazon S3 cross-Region replication.
Analysis:
S3 Versioning: This is the correct option because enabling S3 versioning keeps multiple versions of an
object within the bucket. If an object is accidentally deleted or overwritten, previous versions can be
retrieved, ensuring data recovery. This is the most effective way to protect against accidental deletions or
modifications.
Enable Amazon S3 Intelligent-Tiering: Intelligent-Tiering is designed for cost optimization by
automatically moving objects between different storage classes based on access patterns. It does not provide
any data recovery capabilities for accidental deletions.
Enable an Amazon S3 lifecycle policy: Lifecycle policies are primarily used for data management tasks
such as transitioning objects to different storage classes (e.g., S3 Standard-IA) to reduce costs, or for
deleting objects after a certain period. They do not provide a mechanism for recovering accidentally deleted
objects.
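Versioning is a single bucket-level setting; a sketch with a hypothetical bucket name:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="company-static-site",   # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)

# After a deletion, prior versions (and the delete marker) remain listable and restorable.
versions = s3.list_object_versions(Bucket="company-static-site", Prefix="index.html")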
Q45. A company's production application runs online transaction processing (OLTP) transactions on an
Amazon RDS MySQL DB instance. The company is launching a new reporting tool that will access the
same data. The reporting tool must be highly available and not impact the performance of the production
application.
How can this be achieved?
A. Create hourly snapshots of the production RDS DB instance.
B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance.
C. Create multiple RDS Read Replicas of the production RDS DB instance. Place the Read Replicas in an
Auto Scaling group.
D. Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ
RDS Read Replica from the replica.
Q46. A company runs an application in a branch office within a small data closet with no virtualized
compute resources. The application data is stored on an NFS volume. Compliance standards require a daily
offsite backup of the NFS volume.
Which solution meets these requirements?
A. Install an AWS Storage Gateway file gateway on premises to replicate the data to Amazon S3.
B. Install an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data
to Amazon S3.
C. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data
to Amazon S3.
D. Install an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data
to Amazon S3.
Analysis:
In this scenario, the company needs offsite backups of an NFS volume from a branch office that has no
virtualized compute resources, so the software file gateway (Option A), which must run as a virtual machine,
cannot be deployed. The AWS Storage Gateway file gateway hardware appliance is a dedicated physical
device that fills this gap: it is installed on premises and replicates the NFS data to Amazon S3, handling the
data transfer so that the required daily offsite backups are reliable and efficient. The volume gateway options
expose block (iSCSI) volumes rather than an NFS file interface, so they do not fit this workload.
Q47. A company's web application is using multiple Linux Amazon EC2 instances and storing data on
Amazon Elastic Block Store (Amazon EBS) volumes. The company is looking for a solution to increase the
resiliency of the application in case of a failure and to provide storage that complies with atomicity,
consistency, isolation, and durability (ACID).
What should a solutions architect do to meet these requirements?
A. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2
instance.
B. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones.
Mount an instance store on each EC2 instance.
C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones.
Store data on Amazon Elastic File System (Amazon EFS) and mount a target on each instance.
D. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store
data using Amazon S3 One Zone-infrequent Access (S3 One Zone-IA).
Analysis:
Option C: Amazon EFS is a managed, shared, elastic file system that works across multiple AZs and
provides high availability and durability. It ensures compliance with ACID principles by maintaining
consistent data across all EC2 instances accessing the file system. By using an Application Load Balancer
and Auto Scaling groups, the application is made highly available and resilient to AZ failures. The EFS
storage system allows seamless data sharing between multiple EC2 instances, ensuring consistent and
durable storage.
Option A: Amazon EBS volumes are tied to a specific Availability Zone. In case of an AZ failure, the data
will not be accessible from other zones unless snapshots are manually restored. This setup does not
inherently comply with the ACID principles or increase resiliency effectively across AZs.
Option B: Instance store volumes are ephemeral (temporary) and do not persist data when the instance is
stopped or terminated. This solution does not provide data durability or resiliency, and it cannot comply with
the ACID requirements.
Q48. A security team wants to limit access to specific services or actions in all of the team's AWS accounts. All
accounts belong to a large organization in AWS Organizations. The solution must be scalable and there must
be a single point where permissions can be maintained.
A. Create an ACL to provide access to the services or actions.
B. Create a security group to allow accounts and attach it to user groups.
C. Create cross-account roles in each account to deny access to the services or actions.
D. Create a service control policy in the root organizational unit to deny access to the services or
actions.
Analysis:
Option D: This is the most effective solution because it provides centralized control over permissions across
all accounts within the organization. SCPs act as guardrails, defining the maximum permissions allowed for
all IAM users and roles within the organization. By creating an SCP at the root level, you enforce a
consistent set of security rules across all accounts, ensuring that access to sensitive services or actions is
restricted as required. This centralized approach simplifies security management and reduces the risk of
misconfigurations that could lead to security vulnerabilities.
Option A: ACLs (Access Control Lists) are typically associated with specific resources, such as S3 buckets.
They are not designed for centralized permission management across multiple accounts and services within
an organization.
Option B: Security groups are used to control network traffic to and from EC2 instances. They are not
designed for managing permissions to AWS services or actions.
Option C: While cross-account roles can be used to manage access to resources in other accounts, they are
not suitable for enforcing consistent and centralized security restrictions across the entire organization.
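As a sketch of how such a guardrail could be created and attached at the organization root with boto3 (the policy name and the denied services are only examples):

import boto3

org = boto3.client("organizations")

# Example SCP that denies a couple of services organization-wide.
policy = org.create_policy(
    Name="deny-restricted-services",               # hypothetical name
    Description="Deny services the security team has not approved",
    Type="SERVICE_CONTROL_POLICY",
    Content="""{
      "Version": "2012-10-17",
      "Statement": [
        {"Effect": "Deny", "Action": ["redshift:*", "sagemaker:*"], "Resource": "*"}
      ]
    }""",
)

# Attaching at the root applies the guardrail to every account in the organization.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)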
Q49. A data science team requires storage for nightly log processing. The size and number of logs is
unknown and will persist for 24 hours only.
What is the MOST cost-effective solution?
A. Amazon S3 Glacier
B. Amazon S3 Standard
C. Amazon S3 Intelligent-Tiering
D. Amazon S3 One Zone-infrequent Access (S3 One Zone-IA)
Analysis:
Amazon S3 Standard: This is the default storage class for S3. It offers low-latency access with no retrieval
fees, which suits logs that must be processed within 24 hours. Although S3 Standard has a higher per-GB
price than the other classes, the 24-hour retention keeps the total cost small. S3 One Zone-IA carries a
30-day minimum storage charge plus retrieval fees, S3 Glacier has a 90-day minimum and slow retrievals, and
Intelligent-Tiering adds a per-object monitoring charge with no benefit for data that lives only a day, so
S3 Standard is the most cost-effective choice here.
Q50. A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-
uploaded documents in an Amazon Elastic Block Store (Amazon EBS) volume. For better scalability and
availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in
another Availability Zone, placing both behind an Application Load Balancer. After completing this change,
users reported that each time they refreshed the website, they could see one subset of their documents or the
other, but never all of the documents at the same time.
A. Copy the data so both EBS volumes contain all the documents.
B. Configure the Application Load Balancer to direct a user to the server with the documents.
C. Copy the data from both EBS volumes to Amazon Elastic File System (Amazon EFS). Modify the
application to save new documents to Amazon Elastic File System (Amazon EFS).
D. Configure the Application Load Balancer to send the request to both servers. Return each document from
the correct server.
Analysis:
Amazon EFS is a shared file system that can be mounted by multiple EC2 instances across different
Availability Zones. By migrating the document storage to EFS, both EC2 instances will have access to the
same data, ensuring that all documents are visible to users regardless of which EC2 instance they are
directed to by the ALB. EFS provides scalability, availability, and durability for shared storage, which is
perfect for this use case.
Q51. A company is planning to use Amazon S3 to store images uploaded by its users. The images must be
encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys,
but it does want to control who can access those keys.
What should a solutions architect use to accomplish this?
A. Server-Side Encryption With keys stored in an S3 bucket
B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
Analysis:
Option A: It is not valid because Amazon S3 does not offer an encryption mechanism that involves storing
encryption keys directly in the S3 bucket.
Option C: SSE-S3 encrypts data using keys that are fully managed by Amazon S3. While this simplifies
encryption and key management, it does not provide any customer control over the keys. Since the company
wants control over who can access the keys, SSE-S3 does not meet this requirement. However, SSE-S3 is
suitable for scenarios where no specific control over key access is needed.
Option D: SSE-KMS uses AWS Key Management Service (KMS) to manage encryption keys, which
provides both encryption at rest and control over key access. The company can use IAM and key policies to
define who can access the keys. Additionally, KMS handles key rotation automatically, removing the need
for manual management and meeting the company's requirement. This option is correct because it balances
security, control, and ease of key management.
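A sketch of enforcing SSE-KMS as the bucket default (the bucket name and KMS key ARN are hypothetical):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="user-images",   # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                # Hypothetical customer managed key; its key policy controls who can use it.
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            }
        }]
    },
)

# Individual uploads can also request SSE-KMS explicitly.
s3.put_object(Bucket="user-images", Key="uploads/photo.jpg", Body=b"...",
              ServerSideEncryption="aws:kms")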
Q52. A company is running an ecommerce application on Amazon EC2. The application consists of a
stateless web tier that requires a minimum of 10 instances, and a peak of 250 instances to support the
application's usage. The application requires 50 instances 80% of the time.
B. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances.
C. Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining
instances.
D. Purchase Reserved Instances to cover 50 instances. Use On-Demand and Spot Instances to cover
the remaining instances.
Analysis:
Reserved Instances are ideal for the base workload of 50 instances, which is required 80% of the time. For
the fluctuating and peak workloads, a combination of On-Demand Instances (for immediate scalability) and
Spot Instances (for cost efficiency during peak periods) balances cost and performance. This approach is the
most cost-effective and scalable solution.
Q53. A company has deployed an API in a VPC behind an internet-facing Application Load Balancer
(ALB). An application that consumes the API as a client is deployed in a second account in private subnets
behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher
than expected. A solutions architect has configured the ALB to be internal.
Which combination of architectural changes will reduce the NAT gateway costs? (Choose two.)
A. Configure a VPC peering connection between the two VPCs. Access the API using the private
address.
B. Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private
address.
C. Configure a ClassicLink connection for the API into the client VPC. Access the API using the
ClassicLink address.
D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the
PrivateLink address.
E. Configure an AWS Resource Access Manager connection between the two accounts. Access the API
using the private address.
Analysis:
A. VPC Peering establishes a direct, private connection between two Virtual Private Clouds (VPCs). This
allows resources in one VPC to communicate with resources in the other VPC as if they were on the same
network, using private IP addresses. By enabling the client application to access the API within the other
VPC without traversing the internet, VPC Peering eliminates the need for the client to use the NAT gateway.
This significantly reduces or eliminates the costs associated with NAT gateway usage.
B. AWS Direct Connect provides a dedicated network connection between your on-premises network and
AWS. While it can be used to establish a high-bandwidth connection between VPCs, it is primarily designed
for connecting on-premises networks to AWS. In this specific scenario, where the goal is to reduce NAT
gateway costs for an application within AWS, Direct Connect might be overkill and potentially more
expensive than VPC Peering.
C. ClassicLink allows an EC2 instance in one VPC to access resources in another VPC using its private IP
address. However, it is an older service and generally not recommended for new deployments. It might not
be as efficient or cost-effective as VPC Peering or PrivateLink for this use case.
D. PrivateLink enables you to access AWS services (like the API) or other AWS resources privately within
your own VPC. By creating a private endpoint within the client application's VPC, PrivateLink allows it to
access the API without ever leaving the AWS network. This eliminates the need for the client application to
use the internet and the associated NAT gateway, effectively reducing NAT gateway costs.
E. AWS Resource Access Manager (RAM) allows you to share resources across different AWS accounts.
While RAM can be useful for managing resource sharing, it does not directly address the issue of reducing
NAT gateway costs for the client application.
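For the VPC peering option, the request can be initiated from the client account with boto3 (the VPC IDs and account ID are hypothetical); the API-owning account then accepts it, and both sides add routes so the client reaches the now-internal ALB over private addresses.

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_peering_connection(
    VpcId="vpc-0client000000000000",         # hypothetical client VPC
    PeerVpcId="vpc-0api00000000000000",      # hypothetical API VPC in the other account
    PeerOwnerId="123456789012",              # hypothetical peer account ID
)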
Q54. A solutions architect is tasked with transferring 750 TB of data from an on-premises network-attached
file system located at a branch office to Amazon S3 Glacier. The migration must not saturate the on-premises 1
Mbps internet connection.
Which solution will meet these requirements?
A. Create an AWS site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly. Transfer
the files directly by using the AWS CLI.
B. Order 10 AWS Snowball Edge Storage Optimized devices, and select an S3 Glacier vault as the
destination.
C. Mount the network-attached file system to an S3 bucket, and copy the files directly. Create a lifecycle
policy to transition the S3 objects to Amazon S3 Glacier.
D. Order 10 AWS Snowball Edge Storage Optimized devices, and select an Amazon S3 bucket as the
destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
Analysis:
Option D: AWS Snowball Edge Storage Optimized devices are specifically designed for large-scale data
migrations. Each Snowball Edge device can store up to 80 TB of usable storage, requiring approximately 10
devices to transfer 750 TB. The data is first migrated to an Amazon S3 bucket. A lifecycle policy can then
transition the data to Amazon S3 Glacier storage for cost efficiency. This approach avoids saturating the 1
Mbps connection and provides a fast, reliable, and scalable migration solution.
Option B: S3 Glacier storage is accessed via lifecycle policies or archival processes, not as a direct
destination during the Snowball migration.
Option C: Mounting an on-premises file system directly to an S3 bucket is not a supported capability.
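The lifecycle rule from Option D can be expressed as a short boto3 call (the bucket name is hypothetical):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="branch-office-archive",   # hypothetical bucket receiving the Snowball import
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to every object
            "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
        }]
    },
)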
Q55. A company has a two-tier application architecture that runs in public and private subnets. Amazon
EC2 instances running the web application are in the public subnet and a database runs on the private
subnet. The web application instances and the database are running in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this
architecture? (Choose two.)
A. Create new public and private subnets in the same AZ for high availability.
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple
AZs.
C. Add the existing web application instances to an Auto Scaling group behind an Application Load
Balancer.
D. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ.
E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to
an Amazon RDS multi-AZ deployment.
Analysis:
Option B:
Auto Scaling group: By placing the web application instances in an Auto Scaling group across multiple
Availability Zones, you ensure that if one AZ experiences an outage, the Auto Scaling group will
automatically launch replacement instances in other healthy AZs.
Application Load Balancer: The Application Load Balancer distributes traffic across the web application
instances, ensuring that traffic is routed to healthy instances regardless of their location.
Option E:
Multi-AZ RDS: By using an RDS multi-AZ deployment, Amazon automatically maintains a synchronous
replica of your database in a separate Availability Zone. In case of an outage in the primary AZ, the database
automatically fails over to the standby replica in the other AZ, minimizing downtime.
New subnets: Creating new subnets in different AZs allows for proper isolation and redundancy for both the
web application and database tiers.
Q56. A solutions architect is implementing a document review application using an Amazon S3 bucket for
storage. The solution must prevent an accidental deletion of the documents and ensure that all versions of
the documents are available. Users must be able to download, modify, and upload documents.
Which combination of actions should be taken to meet these requirements? (Choose two.)
A. Enable a read-only bucket ACL.
B. Enable versioning on the bucket.
C. Attach an IAM policy to the bucket.
D. Enable MFA Delete on the bucket.
E. Encrypt the bucket using AWS KMS.
Q57. An application hosted on AWS is experiencing performance problems, and the application vendor
wants to perform an analysis of the log file to troubleshoot further. The log file is stored on Amazon S3 and
is 10 GB in size. The application owner will make the log file available to the vendor for a limited time.
What is the MOST secure way to do this?
A. Enable public read on the S3 object and provide the link to the vendor.
B. Upload the file to Amazon WorkDocs and share the public link with the vendor.
C. Generate a presigned URL and have the vendor download the log file before it expires.
D. Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-
factor authentication.
Analysis:
Generating a presigned URL provides the most secure method for sharing the log file with the vendor.
Presigned URLs offer temporary, time-limited access to an S3 object, allowing the vendor to download the
file without requiring long-term AWS credentials. This approach minimizes the risk of data exposure as the
URL grants access only for a specific period, and you can precisely control the allowed actions and
expiration time. By using presigned URLs, the application owner maintains control over access to the
sensitive log file while enabling the vendor to perform the necessary analysis.
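A sketch of generating such a URL (the bucket, key, and one-hour expiry are hypothetical):

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "app-logs", "Key": "2024/05/application.log"},  # hypothetical object
    ExpiresIn=3600,   # the link stops working after one hour
)
print(url)  # share only this time-limited link with the vendor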
Q58. A solutions architect is designing a two-tier web application. The application consists of a public-
facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL
Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.
How should security groups be configured in this situation? (Choose two.)
A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the
security group for the web tier.
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the
security group for the web tier.
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the
security group for the web tier.
Q59. A company allows its developers to attach existing IAM policies to existing IAM roles to enable faster
experimentation and agility. However, the security operations team is concerned that the developers could
attach the existing administrator policy, which would allow the developers to circumvent any other security
policies.
A. Create an Amazon SNS topic to send an alert every time a developer creates a new policy.
B. Use service control policies to disable IAM activity across all accounts in the organizational unit.
C. Prevent the developers from attaching any policies and assign all IAM duties to the security operations
team.
D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the
administrator policy.
Q60. A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto
Scaling group in a single Availability Zone behind an Application Load Balancer (ALB). A solutions
architect needs to modify the infrastructure to be highly available without modifying the application.
Which architecture should the solutions architect choose that provides high availability?
A. Create an Auto Scaling group that uses three instances across each of two Regions.
B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.
C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.
D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic
to the web tier.
Q61. A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons,
the company must retain all application log files for 7 years. The log files will be analyzed by a reporting
tool that must access all files concurrently.
Which storage solution meets these requirements MOST cost-effectively?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon Elastic File System (Amazon EFS)
C. Amazon EC2 instance store
D. Amazon S3
Q62. A media streaming company collects real-time data and stores it in a disk-optimized database
system. The company is not getting the expected throughput and wants an in-memory database storage
solution that performs faster and provides high availability using data replication.
A. Amazon RDS for MySQL
B. Amazon RDS for PostgreSQL.
C. Amazon ElastiCache for Redis
D. Amazon ElastiCache for Memcached
Analysis:
Amazon ElastiCache for Redis is the ideal solution for real-time, in-memory data storage needs that
demand high performance and low latency. Redis operates entirely in memory, providing sub-millisecond
response times, making it perfect for use cases such as media streaming, gaming leaderboards, and real-time
analytics. Amazon ElastiCache for Redis supports data replication across nodes, ensuring high availability
and fault tolerance by enabling automatic failover in case of a node failure. Additionally, it is fully managed
by AWS, which simplifies operational overhead by handling tasks like scaling, patching, and backups. This
combination of speed, reliability, and ease of management makes ElastiCache for Redis a superior choice
for applications requiring fast data processing and high availability.
A. Amazon RDS for MySQL & B. Amazon RDS for PostgreSQL: These are relational databases, not in-
memory data stores. While they offer good performance, they are generally not as fast as in-memory
solutions like Redis for real-time data processing and high-throughput applications.
D. Amazon ElastiCache for Memcached: Memcached is another in-memory data store. However, Redis
offers more features than Memcached, such as data persistence, data structures (lists, sets, hashes), and built-
in replication, making it a more versatile and often preferred choice.
Q63. A company hosts its product information webpages on AWS. The existing solution uses multiple
Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. The website also uses
a custom DNS name and communicates with HTTPS only using a dedicated SSL certificate. The company
is planning a new product launch and wants to be sure that users from around the world have the best
possible experience on the new website.
A. Redesign the application to use Amazon CloudFront.
B. Redesign the application to use AWS Elastic Beanstalk.
C. Redesign the application to use a Network Load Balancer.
D. Redesign the application to use Amazon S3 static website hosting.
Q64. A solutions architect is designing the cloud architecture for a new application being deployed on AWS.
The process should run in parallel while adding and removing application nodes as needed based on the
number of jobs to be processed. The processor application is stateless. The solutions architect must ensure
that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine
Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI.
Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling
group to add and remove nodes based on CPU usage.
B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine
Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI.
Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling
group to add and remove nodes based on network usage.
C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon
Machine Image (AMI) that consists of the processor application. Create a launch template that uses
the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto
Scaling group to add and remove nodes based on the number of items in the SQS queue.
D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine
Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create
an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add
and remove nodes based on the number of messages published to the SNS topic.
Analysis:
SQS provides a durable queue for holding job messages. It ensures reliable delivery, making it suitable for
storing jobs that need processing.
A launch template is preferred over a launch configuration, and scaling on the number of messages in the
SQS queue matches worker capacity to the actual backlog, as sketched below.
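A sketch of a queue-depth-based policy using a customized CloudWatch metric (the group, queue, and target values are hypothetical; in practice a "backlog per instance" metric is often preferred over raw queue depth):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-processor-asg",   # hypothetical group
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "job-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 10.0,   # aim for roughly 10 visible messages in the backlog
    },
)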
Q65. A marketing company is storing CSV files in an Amazon S3 bucket for statistical analysis. An
application on an Amazon EC2 instance needs permission to efficiently process the CSV data stored in the
S3 bucket.
Which action will MOST securely grant the EC2 instance access to the S3 bucket?
A. Attach a resource-based policy to the S3 bucket.
B. Create an IAM user for the application with specific permissions to the S3 bucket.
C. Associate an IAM role with least privilege permissions to the EC2 instance profile.
D. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.
Analysis:
IAM roles are designed for AWS services like EC2 instances to assume temporary security credentials. You
can attach an IAM role to an EC2 instance, and that instance will automatically assume the role when it
starts. By creating an IAM role with only the necessary permissions to access the S3 bucket (e.g., read-only
access), you adhere to the principle of least privilege. This minimizes the risk of unintended access or data
breaches.
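A sketch of wiring this up with least-privilege access (the role, profile, and bucket names are hypothetical):

import boto3

iam = boto3.client("iam")

# Role that EC2 can assume, limited to read-only access on the single CSV bucket.
iam.create_role(
    RoleName="csv-reader-role",
    AssumeRolePolicyDocument="""{
      "Version": "2012-10-17",
      "Statement": [{"Effect": "Allow",
                     "Principal": {"Service": "ec2.amazonaws.com"},
                     "Action": "sts:AssumeRole"}]
    }""",
)
iam.put_role_policy(
    RoleName="csv-reader-role",
    PolicyName="read-csv-bucket",
    PolicyDocument="""{
      "Version": "2012-10-17",
      "Statement": [{"Effect": "Allow",
                     "Action": ["s3:GetObject", "s3:ListBucket"],
                     "Resource": ["arn:aws:s3:::marketing-csv",
                                  "arn:aws:s3:::marketing-csv/*"]}]
    }""",
)

# The instance profile is what actually gets associated with the EC2 instance.
iam.create_instance_profile(InstanceProfileName="csv-reader-profile")
iam.add_role_to_instance_profile(InstanceProfileName="csv-reader-profile",
                                 RoleName="csv-reader-role")

On the instance itself, the application simply creates a boto3 S3 client with no stored credentials; the SDK automatically picks up temporary credentials from the instance metadata service.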
Q66. A company has on-premises servers that run a relational database. The database serves high-read
traffic for users in different locations. The company wants to migrate the database to AWS with the least
amount of effort. The database solution must support high availability and must not affect the company's
current traffic flow.
Which solution meets these requirements?
A. Use a database in Amazon RDS with Multi-AZ and at least one read replica.
B. Use a database in Amazon RDS with Multi-AZ and at least one standby replica.
C. Use databases that are hosted on multiple Amazon EC2 instances in different AWS Regions.
D. Use databases that are hosted on Amazon EC2 instances behind an Application Load Balancer in
different Availability Zones.
Analysis:
Option A: Use a database in Amazon RDS with Multi-AZ and at least one read replica is the best
solution for the company. Amazon RDS with Multi-AZ provides high availability by maintaining a fully
synchronized standby replica in a different Availability Zone, ensuring that the database remains operational
in the event of a failure. Additionally, read replicas are specifically designed to handle high-read traffic by
offloading read operations from the primary database. This setup allows the company to serve users in
different locations efficiently while distributing the workload. Since Amazon RDS is a fully managed
service, it reduces operational complexity, making it easy to migrate the database with minimal effort. This
solution ensures high availability, scalability, and a seamless experience for end users without affecting the
current traffic flow.
Standby replicas in Multi-AZ configurations are for failover purposes only and cannot serve read traffic.
This does not support the high-read traffic requirement.
Q67. A company's application is running on Amazon EC2 instances within an Auto Scaling group behind an
Elastic Load Balancer. Based on the application's history, the company anticipates a spike in traffic during a
holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group
proactively increases capacity to minimize any performance impact on application users.
Which solution will meet these requirements?
A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.
B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period
of peak demand.
C. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak
demand period.
D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there
are LAUNCH events.
Analysis:
Option B is the best solution because it allows the company to prepare for predictable traffic spikes, such as
during a holiday. By scheduling the Auto Scaling group to add more EC2 instances ahead of time, the
application will already have enough capacity to handle the increased demand when it happens. This
proactive approach prevents delays or performance issues that might occur if scaling only happens after
traffic increases. Once the peak period is over, the scaling automatically adjusts back to normal, ensuring
cost efficiency.
Q68. A company hosts an application on multiple Amazon EC2 instances. The application processes
messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the
queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any
duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
A. Use the CreateQueue API call to create a new queue.
B. Use the AddPermission API call to add appropriate permissions.
C. Use the ReceiveMessage API call to set an appropriate wait time.
D. Use the ChangeMessageVisibility API call to increase the visibility timeout.
Analysis:
Duplicate rows appear because the visibility timeout expires before an instance finishes writing to RDS and
deleting the message, so another instance receives the same message and processes it again. Increasing the
visibility timeout with the ChangeMessageVisibility API call gives the application enough time to complete its
work before the message becomes visible to other consumers, so each message is processed only once.
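A sketch of a consumer that extends the visibility timeout before doing the slow RDS write (the queue URL and timeout are hypothetical):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical queue

response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in response.get("Messages", []):
    # Keep the message hidden from other consumers while this worker finishes.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=message["ReceiptHandle"],
        VisibilityTimeout=300,   # seconds; longer than the worst-case processing time
    )
    # ... write the record to the RDS table here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])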
Q69. An Amazon EC2 administrator created the following policy associated with an IAM group containing
several users:
{ "Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:TerminateInstances",
"Resource": "*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "10.100.100.0/24"
"Effect": "Deny",
"Action": "ec2:*",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"ec2:Region": "us-east-1"
A. Users can terminate an EC2 instance in any AWS Region except us-east-1.
B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
C. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is
10.100.100.254.
D. Users cannot terminate an EC2 instance in the us-east- 1 Region when the user's source IP is
10.100.100.254.
Analysis:
Option B: This option misunderstands the policy's focus. The policy does not check the IP address of the
EC2 instance (like 10.100.100.1). Instead, it checks the source IP of the user making the request. If the
user's source IP falls within the allowed range (10.100.100.0/24) and the action happens in the us-east-1
region, termination is allowed. Since Option B incorrectly focuses on the EC2 instance's IP, it is wrong.
Option C: This option correctly describes the policy's behavior. The Allow statement lets users terminate
EC2 instances only when their source IP is in the range 10.100.100.0/24 (which includes 10.100.100.254),
and the region is us-east-1. Since this matches both conditions in the policy, Option C is correct.
Q70. A solutions architect is optimizing a website for an upcoming musical event. Videos of the
performances will be streamed in real time and then will be available on demand. The event is expected
to attract a global online audience.
A. Amazon CloudFront
B. AWS Global Accelerator
C. Amazon Route 53
D. Amazon S3 Transfer Acceleration
Analysis:
Amazon CloudFront is a content delivery network (CDN) service designed to deliver web content, including
video, quickly and securely to users worldwide, and it supports both live and on-demand streaming. AWS
Global Accelerator mainly provides static anycast IP addresses and optimized routing for TCP/UDP traffic;
it does not cache or stream content, so CloudFront is the better fit.
Q71. A company has a three-tier image-sharing application. It uses an Amazon EC2 instance for the front-
end layer, another for the backend tier, and a third for the MySQL database. A solutions architect has been
tasked with designing a solution that is highly available and requires the least amount of changes to the
application.
A. Use Amazon S3 to host the front-end layer and AWS Lambda functions for the backend layer. Move the
database to an Amazon DynamoDB table and use Amazon S3 to store and serve users' images.
B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers.
Move the database to an Amazon RDS instance with multiple read replicas to store and serve users' images.
C. Use Amazon S3 to host the front-end layer and a fleet of Amazon EC2 instances in an Auto Scaling
group for the backend layer. Move the database to a memory optimized instance type to store and serve
users' images.
D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend
layers. Move the database to an Amazon RDS instance with a Multi-AZ deployment. Use Amazon S3
to store & serve user`s images.
Analysis:
Option D is the best solution because it provides a highly available and reliable architecture while requiring
minimal changes to the existing application. By using load-balanced Multi-AZ AWS Elastic Beanstalk
environments for the front-end and backend tiers, the solution ensures scalability and fault tolerance.
Moving the database to an Amazon RDS instance with Multi-AZ deployment guarantees high availability
and automatic failover for the relational database. Additionally, using Amazon S3 to store and serve user-
uploaded images offloads static asset storage from the application and enhances performance. This
architecture aligns with the company's goals of high availability and minimal application changes while
leveraging managed AWS services to simplify operations and improve reliability.
Option B is close to the correct solution, but read replicas in RDS are meant to improve read performance,
not to provide high availability; a Multi-AZ deployment is the better choice for reliability. Storing and
serving user images from RDS is also a poor fit; Amazon S3 is the appropriate place for static image assets.
Q72. A solutions architect is designing a system to analyze the performance of financial markets while the
markets are closed. The system will run a series of compute-intensive jobs for 4 hours every night. The time
to complete the compute jobs is expected to remain constant, and jobs cannot be interrupted once started.
Once completed, the system is expected to run for a minimum of 1 year.
Which type of Amazon EC2 instances should be used to reduce the cost of the system?
A. Spot Instances
B. On-Demand Instances
C. Standard Reserved Instances
D. Scheduled Reserved Instances
Analysis:
Option A: Spot Instances
Spot Instances are cost-effective and ideal for workloads that are flexible, fault-tolerant, and can handle
interruptions. They are commonly used for batch processing, testing environments, or data analysis tasks
where interruptions do not cause significant issues. However, Spot Instances are not suitable for this
scenario because the compute jobs are time-sensitive and cannot be interrupted once started.
Ex: Like renting a car on a short-term, hourly basis. You get a great deal, but there's a risk the rental
company might take it back if someone else needs it more urgently.
Option B: On-Demand Instances
On-Demand Instances are designed for short-term, unpredictable workloads where flexibility is critical, and
cost is less of a concern. These instances are often used for applications with varying workloads or
development and testing environments. While they provide flexibility, On-Demand Instances are more
expensive over the long term compared to Reserved Instances, making them unsuitable for this predictable
and recurring workload.
Ex: Like renting a car whenever you need it. You have flexibility, but it can get expensive if you rent
frequently.
Option C: Standard Reserved Instances
Standard Reserved Instances are well-suited for long-term, continuous workloads that require always-on
compute capacity. They are cost-effective for running critical web applications, databases, or other
workloads that require 24/7 availability. However, they lack scheduling capabilities, which makes them a
poor fit for this scenario, as the compute jobs only run for 4 hours nightly.
Ex: Like leasing a car for a year. You get a good discount, but you're committed to using it even when you
don't need it.
Option D: Scheduled Reserved Instances
Scheduled Reserved Instances are specifically designed for workloads with predictable and recurring
schedules. They allow users to reserve capacity for a specific time period, ensuring availability while
reducing costs compared to On-Demand Instances. This option is the most suitable for the scenario because
the compute jobs run nightly for a fixed duration and require uninterrupted processing. By aligning with the
predictable schedule, Scheduled Reserved Instances provide the perfect balance of cost-efficiency and
performance.
Ex: Like reserving a car for a specific time slot every night. You get a significant discount and the guarantee
that the "car" will be available when you need it.
Q73. A company built a food ordering application that captures user data and stores it for future analysis.
The application's static front end is deployed on an Amazon EC2 instance. The front-end application sends
the requests to the backend application running on a separate EC2 instance. The backend application then
stores the data in Amazon RDS.
What should a solutions architect do to decouple the architecture and make it scalable?
A. Use Amazon S3 to serve the front-end application, which sends requests to Amazon EC2 to execute the
backend application. The backend application will process and store the data in Amazon RDS.
B. Use Amazon S3 to serve the front-end application and write requests to an Amazon Simple Notification
Service (Amazon SNS) topic. Subscribe Amazon EC2 instances to the HTTP/HTTPS endpoint of the topic,
and process and store the data in Amazon RDS.
C. Use an EC2 instance to serve the front end and write requests to an Amazon SQS queue. Place the
backend instance in an Auto Scaling group, and scale based on the queue depth to process and store the data
in Amazon RDS.
D. Use Amazon S3 to serve the static front-end application and send requests to Amazon API
Gateway, which writes the requests to an Amazon SQS queue. Place the backend instances in an Auto
Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.
Analysis:
Static Front End: The front-end application is hosted on Amazon S3, improving scalability and reducing
infrastructure management.
API Gateway: Acts as a highly scalable entry point to handle front-end requests.
Amazon SQS: Decouples the backend by queuing requests, ensuring scalability and smooth processing.
Auto Scaling Group: Dynamically scales backend EC2 instances based on the SQS queue depth, ensuring
efficient resource utilization.
Q74. A solutions architect needs to design a managed storage solution for a company's application that
includes high-performance machine learning functionality. This application runs on AWS Fargate and the
connected storage needs to have concurrent access to files and deliver high performance.
A. Create an Amazon S3 bucket for the application and establish an IAM role for Fargate to communicate
with Amazon S3.
B. Create an Amazon FSx for Lustre file share and establish an IAM role that allows Fargate to
communicate with FSx for Lustre.
C. Create an Amazon Elastic File System (Amazon EFS) file share and establish an IAM role that
allows Fargate to communicate with Amazon Elastic File System (Amazon EFS).
D. Create an Amazon Elastic Block Store (Amazon EBS) volume for the application and establish an IAM
role that allows Fargate to communicate with Amazon Elastic Block Store (Amazon EBS).
Analysis:
AWS Fargate does not natively support Amazon FSx for Lustre directly as a storage option. However,
Fargate works well with Amazon EFS, which is a shared storage solution for containers. If your application
needs high-performance storage like FSx for Lustre, you would need to use Amazon EC2. EFS is the best
option for containerized apps running on Fargate because it provides scalable and shared storage that works
directly with Fargate. So, for containerized applications on Fargate, use EFS for persistent storage.
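A sketch of a Fargate task definition that mounts an EFS file system as shared storage (the task family, image, role ARN, and file system ID are all hypothetical; EFS mounting requires Fargate platform version 1.4 or later):

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="ml-processing",                       # hypothetical task family
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # hypothetical role
    containerDefinitions=[{
        "name": "worker",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ml-worker:latest",
        "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/mnt/data"}],
    }],
    volumes=[{
        "name": "shared-data",
        "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},  # hypothetical EFS
    }],
)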
Q75. A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles
during peak operating hours. The company wants to use these data points in its existing analytics platform.
A solutions architect must determine the most viable multi-tier option to support this architecture. The data
points must be accessible from the REST API.
Analysis:
The best solution is to use Amazon API Gateway with AWS Lambda because it meets all the
requirements. The company already has an analytics platform, so they don’t need new analytics tools like
Amazon Athena or Amazon Kinesis Data Analytics. Instead, they need a way to handle real-time location
tracking during peak hours and make the data accessible through a REST API. API Gateway enables the
creation of REST APIs to collect location data, while AWS Lambda processes this data in real time and
sends it to the existing analytics platform. This setup is efficient, scalable, and tailored to the company's
needs, unlike Amazon QuickSight with Redshift, which is better suited for data visualization and does not
support REST APIs.
Q76. A solutions architect is designing a web application that will run on Amazon EC2 instances behind an
Application Load Balancer (ALB). The company strictly requires that the application be resilient against
malicious internet activity and attacks, and protect against new common vulnerabilities and exposures.
B. Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.
C. Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are blocked.
D. Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2 instances.
Analysis:
Option B: AWS WAF managed rule groups (for example, the AWS-maintained Core rule set and Known bad inputs rules) inspect requests at the application layer and are updated by AWS as new common vulnerabilities and exposures (CVEs) emerge, blocking exploits such as SQL injection and cross-site scripting. Associating a web ACL that uses these managed rules with the ALB directly protects the application from malicious internet activity and new CVEs, which is exactly what the requirements call for.
Option A: CloudFront is a CDN that can absorb some attacks at the edge, but by itself it does not block application-layer threats or protect against CVEs. It is only a partial solution.
Option C: AWS Shield Advanced provides enhanced DDoS protection, but it does not inspect request content or block CVE-based exploits on its own; that is the role of AWS WAF.
Option D: Restricting ports with network ACLs and security groups minimizes exposure but does not address application-layer threats, CVEs, or sophisticated attacks.
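A minimal sketch of Option B in Python (boto3) is shown below: it creates a regional web ACL that uses the AWS managed Common Rule Set and associates it with the ALB. The ACL name, metric names, and ALB ARN are placeholders.

import boto3

wafv2 = boto3.client("wafv2")

# Regional web ACL (the required scope for an ALB) using an AWS managed rule group.
acl = wafv2.create_web_acl(
    Name="app-web-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "aws-common-rule-set",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "common-rule-set"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "app-web-acl"},
)

# Attach the web ACL to the Application Load Balancer (ARN is a placeholder).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/50dc6c495c0c9188",
)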
Q77. A company has an application that calls AWS Lambda functions. A code review shows that database
credentials are stored in a Lambda function's source code, which violates the company's security policy. The
credentials must be securely stored and must be automatically rotated on an ongoing basis to meet security
policy requirements.
What should a solutions architect recommend to meet these requirements in the MOST secure manner?
A. Store the password in AWS CloudHSM. Associate the Lambda function with a role that can use the key
ID to retrieve the password from CloudHSM. Use CloudHSM to automatically rotate the password.
B. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can
use the secret ID to retrieve the password from Secrets Manager. Use Secrets Manager to
automatically rotate the password.
C. Store the password in AWS Key Management Service (AWS KMS). Associate the Lambda function with
a role that can use the key ID to retrieve the password from AWS KMS. Use AWS KMS to automatically
rotate the uploaded password.
D. Move the database password to an environment variable that is associated with the Lambda function.
Retrieve the password from the environment variable by invoking the function. Create a deployment script
to automatically rotate the password.
Analysis:
Option B: The best way to meet the company’s requirements is to use AWS Secrets Manager, which is
specifically designed for securely managing sensitive information like database credentials. It encrypts the
credentials, ensures secure access through IAM roles, and can automatically rotate the credentials as needed.
This eliminates the need to hardcode passwords and ensures compliance with security policies. Other
options like CloudHSM, KMS, or environment variables are either not purpose-built for secret management
or lack features like automatic rotation, making them less ideal for this scenario.
Option A: CloudHSM is a hardware security module designed for cryptographic operations, not specifically
for secret management or automatic rotation of database credentials. It would require additional effort to
manage password rotation.
Option C: AWS KMS is used for managing encryption keys, not for securely managing or automatically
rotating secrets. KMS does not offer built-in secret rotation.
Option D: Storing passwords in environment variables improves security compared to embedding them in
code, but it does not address the need for automatic rotation or centralized secret management. Additionally,
environment variables could be less secure than dedicated secret management services.
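A minimal sketch of Option B in Python (boto3): the Lambda function fetches the credentials at runtime instead of hardcoding them, and rotation is enabled once per secret. The secret name, rotation Lambda ARN, and rotation interval are placeholders.

import json
import boto3

secrets = boto3.client("secretsmanager")

def lambda_handler(event, context):
    # Retrieve the current credentials from Secrets Manager at invocation time.
    secret = secrets.get_secret_value(SecretId="prod/app/db-credentials")
    creds = json.loads(secret["SecretString"])
    # creds["username"] and creds["password"] are then used to open the database connection.
    return {"statusCode": 200}

def enable_rotation():
    # One-time setup, run separately from the application: turn on automatic rotation
    # through a rotation Lambda function (ARN and interval are placeholders).
    secrets.rotate_secret(
        SecretId="prod/app/db-credentials",
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-db-secret",
        RotationRules={"AutomaticallyAfterDays": 30},
    )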
Q78. A company is managing health records on-premises. The company must keep these records
indefinitely, disable any modifications to the records once they are stored, and granularly audit access at all
levels. The chief technology officer (CTO) is concerned because there are already millions of records not
being used by any application, and the current infrastructure is running out of space. The CTO has requested
a solutions architect design a solution to move existing data and support future records.
Which services can the solutions architect recommend to meet these requirements?
A. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data.
Enable Amazon S3 object lock and enable AWS CloudTrail with data events.
B. Use AWS Storage Gateway to move existing data to AWS. Use Amazon S3 to store existing and new
data. Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
C. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data.
Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
D. Use AWS Storage Gateway to move existing data to AWS. Use Amazon Elastic Block Store (Amazon
EBS) to store existing and new data. Enable Amazon S3 object lock and enable Amazon S3 server access
logging.
Analysis:
Option A: The best solution is to use AWS DataSync to quickly and efficiently migrate the existing health records to AWS and to store existing and new records in Amazon S3, which is cost-effective and scalable for indefinite retention. Enabling Amazon S3 Object Lock makes the stored records immutable, so they cannot be modified once written. Enabling AWS CloudTrail with data events records object-level activity in the S3 bucket, including reads, object creations, and deletion attempts, which provides the required granular auditing of all access to the records.
Options B and C: Both rely on AWS CloudTrail management events, which log only control-plane changes to AWS resources (for example, bucket creation or policy changes), not the object-level operations needed for granular auditing. Option B also uses AWS Storage Gateway, which is designed for ongoing hybrid file access rather than a one-time bulk migration.
Option D: Amazon EBS is block storage attached to EC2 instances; it does not support S3 Object Lock and is not suited to retaining millions of rarely used records indefinitely.
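A rough sketch of Option A's storage side in Python (boto3) follows. The bucket name, trail name, and the 10-year default retention are placeholder choices (the requirement is indefinite retention, which would be tuned accordingly), and the snippet assumes the CloudTrail trail already exists.

import boto3

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

# Object Lock must be turned on when the bucket is created.
s3.create_bucket(Bucket="health-records-archive", ObjectLockEnabledForBucket=True)

# Default retention in compliance mode so stored records cannot be modified or deleted.
s3.put_object_lock_configuration(
    Bucket="health-records-archive",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)

# Log object-level (data) events for the bucket to get granular access auditing.
cloudtrail.put_event_selectors(
    TrailName="records-audit-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{"Type": "AWS::S3::Object",
                           "Values": ["arn:aws:s3:::health-records-archive/"]}],
    }],
)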
Q79. A company wants to use Amazon S3 for the secondary copy of its on-premises dataset. The company
would rarely need to access this copy. The storage solution's cost should be minimal.
A. S3 Standard
B. S3 Intelligent-Tiering
Q80. A company's operations team has an existing Amazon S3 bucket configured to notify an Amazon SQS
queue when new objects are created within the bucket. The development team also wants to receive events
when new objects are created. The existing operations team workflow must remain intact.
A. Create another SQS queue. Update the S3 events in the bucket to also update the new queue when a new
object is created.
B. Create a new SQS queue that only allows Amazon S3 to access the queue. Update Amazon S3 to update
this queue when a new object is created.
C. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to
the new topic. Update both queues to poll Amazon SNS.
D. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send
events to the new topic. Add subscriptions for both queues in the topic.
Analysis:
Option D: Create an Amazon SNS topic as the central notification point. The existing SQS queue (used by
the operations team) and a new SQS queue (for the development team) can subscribe to the SNS topic.
This ensures that the S3 bucket sends events to the SNS topic, and the topic delivers the notifications to both
SQS queues. The existing workflow remains intact while adding support for the development team.
Option C: SQS queues don’t poll SNS topics. SNS pushes notifications to SQS queues. This option is
technically incorrect.
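As a sketch of Option D's fan-out pattern (Python / boto3), the snippet below creates the topic, subscribes both queues, and repoints the bucket notification at the topic. The queue ARNs, bucket name, and account ID are placeholders, and the topic and queue access policies that allow S3 to publish and SNS to deliver are omitted.

import boto3

sns = boto3.client("sns")
s3 = boto3.client("s3")

# Central topic that fans out object-created events to both teams' queues.
topic_arn = sns.create_topic(Name="bucket-object-created")["TopicArn"]

# Subscribe the existing operations queue and the new development queue.
for queue_arn in ("arn:aws:sqs:us-east-1:111122223333:ops-queue",
                  "arn:aws:sqs:us-east-1:111122223333:dev-queue"):
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Point the bucket's notification at the topic instead of a single queue.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": topic_arn,
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)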
Q81. An application runs on Amazon EC2 instances in private subnets. The application needs to access an
Amazon DynamoDB table. What is the MOST secure way to access the table while ensuring that the traffic
does not leave the AWS network?
Analysis:
Option A: This is the most secure and efficient method for accessing DynamoDB from Amazon EC2
instances in private subnets. A VPC endpoint allows traffic between the EC2 instances and DynamoDB to
stay within the AWS network, without traversing the public internet. This ensures data privacy and reduces
the attack surface since no internet gateway or NAT device is required.
Routing through a NAT gateway sends the requests to DynamoDB's public endpoint, so the traffic takes an internet-facing path out of the VPC; an internet gateway exists precisely to allow communication between the VPC and the public internet. Only the VPC endpoint keeps the traffic entirely on the AWS network with no internet-facing components.
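For reference, a gateway VPC endpoint for DynamoDB (Option A) can be created as sketched below in Python (boto3); the VPC ID, route table ID, and Region are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for DynamoDB: adds routes so private-subnet traffic to the
# service stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)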