DVA-C02 Updated Dumps - AWS Certified Developer - Associate
3. A developer maintains a critical business application that uses Amazon DynamoDB as the
primary data store. The DynamoDB table contains millions of documents and receives 30-60
requests each minute. The developer needs to perform near-real-time processing on the
documents when they are added or updated in the DynamoDB table.
How can the developer implement this feature with the LEAST amount of change to the existing
application code?
A. Set up a cron job on an Amazon EC2 instance. Run a script every hour to query the table for
changes and process the documents.
B. Enable a DynamoDB stream on the table. Invoke an AWS Lambda function to process the
documents.
C. Update the application to send a PutEvents request to Amazon EventBridge. Create an
EventBridge rule to invoke an AWS Lambda function to process the documents.
D. Update the application to synchronously process the documents directly after the DynamoDB
write.
Answer: B
Explanation:
DynamoDB Streams: Capture near real-time changes to DynamoDB tables, triggering
downstream actions.
Lambda for Processing: Lambda functions provide a serverless way to execute code in
response to events like DynamoDB Stream updates.
Minimal Code Changes: This solution requires the least modifications to the existing application.
Reference: DynamoDB
Streams: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
AWS Lambda: https://aws.amazon.com/lambda/
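As an illustration, a minimal Python Lambda handler for a DynamoDB Streams event source might look like the following; the stream view type and the processing step are assumptions, not part of the question:
import json

def lambda_handler(event, context):
    # Each stream record describes an INSERT, MODIFY, or REMOVE on the table.
    records = event.get("Records", [])
    for record in records:
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage is present when the stream view type includes new images.
            new_image = record["dynamodb"].get("NewImage", {})
            process_document(new_image)
    return {"processed": len(records)}

def process_document(item):
    # Placeholder for the near-real-time processing step.
    print(json.dumps(item))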
4. A developer is creating a serverless application that uses an AWS Lambda function. The
developer will use AWS CloudFormation to deploy the application. The application will write logs
to Amazon CloudWatch Logs. The developer has created a log group in a CloudFormation
template for the application to use. The developer needs to modify the CloudFormation template
to make the name of the log group available to the application at runtime.
Which solution will meet this requirement?
A. Use the AWS::Include transform in CloudFormation to provide the log group's name to the
application.
B. Pass the log group's name to the application in the user data section of the CloudFormation
template.
C. Use the CloudFormation template's Mappings section to specify the log group's name for the
application.
D. Pass the log group's Amazon Resource Name (ARN) as an environment variable to the
Lambda function
Answer: D
Explanation:
CloudFormation and Lambda Environment Variables:
CloudFormation is an excellent tool to manage infrastructure as code, including the log group
resource.
Lambda functions can access environment variables at runtime, making them a suitable way to
pass
configuration information like the log group ARN.
CloudFormation Template Modification:
In your CloudFormation template, define the log group resource.
In the Lambda function resource, add an Environment section (YAML):
Environment:
  Variables:
    LOG_GROUP_ARN: !GetAtt LogGroupResourceName.Arn
The Fn::GetAtt intrinsic function retrieves the log group's ARN, which CloudFormation generates
during stack creation. (Note that !Ref on an AWS::Logs::LogGroup resource returns the log
group's name, not its ARN.)
Using the ARN in Your Lambda Function:
Within your Lambda code, access the LOG_GROUP_ARN environment variable.
Configure your logging library (e.g., Python's logging module) to send logs to the specified log
group.
Reference: AWS Lambda Environment
Variables: https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html
CloudFormation !Ref Intrinsic
Function: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-
reference-ref.html
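For example, inside a Python Lambda handler the value set by CloudFormation can be read from the environment; the variable name LOG_GROUP_ARN is the one assumed in the snippet above:
import os

def lambda_handler(event, context):
    # Injected by CloudFormation through the function's Environment section.
    log_group_arn = os.environ["LOG_GROUP_ARN"]
    print(f"Application logging is configured for {log_group_arn}")
    return {"logGroupArn": log_group_arn}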
5. An application interacts with Amazon Aurora to store and track customer information. The
primary database is set up with multiple read replicas for improving the performance of the read
queries. However, one of the Aurora replicas is receiving most or all of the traffic, while the other
Aurora replica remains idle.
How can this issue be resolved?
A. Disable application-level DNS caching.
B. Enable application-level DNS caching.
C. Enable application pooling.
D. Disable application pooling.
Answer: A
7. A company has built an AWS Lambda function to convert large image files into output files
that can be used in a third-party viewer application. The company recently added a new module
to the function to improve the output of the generated files. However, the new module has
increased the bundle size and has increased the time that is needed to deploy changes to the
function code.
How can a developer increase the speed of the Lambda function deployment?
A. Use AWS CodeDeploy to deploy the function code
B. Use Lambda layers to package and load dependencies.
C. Increase the memory size of the function.
D. Use Amazon S3 to host the function dependencies
Answer: B
Explanation:
Problem: Large bundle size increases Lambda deployment time.
Lambda Layers: Layers let you package dependencies separately from your function code. This
optimizes the deployment package, making updates faster.
Modularization: Breaking down dependencies into layers improves code organization and
reusability.
Reference: AWS Lambda Layers: https://docs.aws.amazon.com/lambda/latest/dg/configuration-
layers.html
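A rough sketch of publishing a dependency layer with the AWS SDK for Python (boto3); the layer name, bucket, key, and runtime are placeholders, not values from the question:
import boto3

lambda_client = boto3.client("lambda")

# Package the heavy dependencies once as a layer instead of bundling them
# into every function deployment.
response = lambda_client.publish_layer_version(
    LayerName="image-processing-deps",  # hypothetical layer name
    Description="Shared libraries for image conversion",
    Content={"S3Bucket": "my-artifacts-bucket", "S3Key": "layers/deps.zip"},
    CompatibleRuntimes=["python3.12"],
)
print(response["LayerVersionArn"])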
8. An application is using Amazon Cognito user pools and identity pools for secure access. A
developer wants to integrate the user-specific file upload and download features in the
application with Amazon S3. The developer must ensure that the files are saved and retrieved
in a secure manner and that users can access only their own files. The file sizes range from 3
KB to 300 MB.
Which option will meet these requirements with the HIGHEST level of security?
A. Use S3 Event Notifications to validate the file upload and download requests and update the
user interface (UI).
B. Save the details of the uploaded files in a separate Amazon DynamoDB table. Filter the list of
files in the user interface (UI) by comparing the current user ID with the user ID associated with
the file in the table.
C. Use Amazon API Gateway and an AWS Lambda function to upload and download files.
Validate each request in the Lambda function before performing the requested operation.
D. Use an IAM policy within the Amazon Cognito identity prefix to restrict users to use their own
folders in Amazon S3.
Answer: D
Explanation:
https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-integrating-user-
pools-with-identity-pools.html
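The usual pattern behind option D is an IAM policy on the identity pool's authenticated role that scopes each user to a folder named after their Cognito identity ID. A sketch of such a policy document follows, written as a Python dictionary; the bucket name is a placeholder:
user_folder_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow listing only the caller's own prefix.
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-user-files"],
            "Condition": {
                "StringLike": {"s3:prefix": ["${cognito-identity.amazonaws.com:sub}/*"]}
            },
        },
        {
            # Allow object reads and writes only under the caller's folder.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::example-user-files/${cognito-identity.amazonaws.com:sub}/*"
            ],
        },
    ],
}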
9. A developer needs to retrieve all data from an Amazon DynamoDB table that matches a
particular partition key.
Which solutions will meet this requirement in the MOST operationally efficient way? (Select
TWO.)
A. Use the Scan API and a filter expression to match on the key.
B. Use the GetItem API with a request parameter for key that contains the partition key name
and specific key value.
C. Use the ExecuteStatement API and a filter expression to match on the key.
D. Use the GetItem API and a PartiQL statement to match on the key.
E. Use the ExecuteStatement API and a PartiQL statement to match on the key.
Answer: B, E
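To make the two correct options concrete, here is a hedged boto3 sketch; the table name, key attribute, and values are hypothetical, and the table is assumed to have only a partition key named pk:
import boto3

dynamodb = boto3.client("dynamodb")

# Option B: GetItem with the primary key in the request parameters.
item = dynamodb.get_item(
    TableName="documents",
    Key={"pk": {"S": "customer#123"}},
)

# Option E: ExecuteStatement with a PartiQL statement that matches on the key.
result = dynamodb.execute_statement(
    Statement='SELECT * FROM "documents" WHERE pk = ?',
    Parameters=[{"S": "customer#123"}],
)
print(item.get("Item"), len(result["Items"]))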
10. A company had an Amazon RDS for MySQL DB instance that was named mysql-db. The
DB instance was deleted within the past 90 days. A developer needs to find which IAM user or
role deleted the DB instance in the AWS environment.
Which solution will provide this information?
A. Retrieve the AWS CloudTrail events for the resource mysql-db where the event name is
DeleteDBInstance. Inspect each event.
B. Retrieve the Amazon CloudWatch log events from the most recent log stream within the
rds/mysql-db log group. Inspect the log events.
C. Retrieve the AWS X-Ray trace summaries. Filter by services with the name mysql-db.
Inspect the ErrorRootCauses values within each summary.
D. Retrieve the AWS Systems Manager deletions inventory. Filter the inventory by deletions that
have a TypeName value of RDS. Inspect the deletion details.
Answer: A
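A minimal boto3 sketch of option A (the 90-day window matches CloudTrail's event history retention); the region and time range are assumptions:
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(days=90)

# LookupEvents accepts a single lookup attribute, so filter by event name
# and then check the resource name in code.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "DeleteDBInstance"}],
    StartTime=start,
    EndTime=end,
)
for event in events["Events"]:
    if any(r.get("ResourceName") == "mysql-db" for r in event.get("Resources", [])):
        print(event["Username"], event["EventTime"])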
11. A developer must use multi-factor authentication (MFA) to access data in an Amazon S3
bucket that is in another AWS account.
Which AWS Security Token Service (AWS STS) API operation should the developer use with
the MFA information to meet this requirement?
A. AssumeRoleWithWebIdentity
B. GetFederationToken
C. AssumeRoleWithSAML
D. AssumeRole
Answer: D
Explanation:
AWS STS AssumeRole: The central operation for assuming temporary security credentials,
commonly used for cross-account access.
MFA Integration: The AssumeRole call can include MFA information to enforce multi-factor
authentication.
Credentials for S3 Access: The returned temporary credentials would provide the necessary
permissions to access the S3 bucket in the other account.
Reference: AWS STS AssumeRole
Documentation: https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
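A sketch of the AssumeRole call with MFA in boto3; the role ARN, MFA device ARN, and token code are placeholders:
import boto3

sts = boto3.client("sts")

# SerialNumber identifies the MFA device; TokenCode is the current OTP value.
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/CrossAccountS3Access",
    RoleSessionName="mfa-s3-session",
    SerialNumber="arn:aws:iam::444455556666:mfa/developer",
    TokenCode="123456",
)["Credentials"]

# Use the temporary credentials to access the bucket in the other account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)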
12. A data visualization company wants to strengthen the security of its core applications. The
applications are deployed on AWS across its development, staging, pre-production, and
production environments. The company needs to encrypt all of its stored sensitive credentials.
The sensitive credentials need to be rotated automatically. A version of the sensitive credentials
needs to be stored for each environment.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Configure AWS Secrets Manager versions to store different copies of the same credentials
across multiple environments.
B. Create a new parameter version in AWS Systems Manager Parameter Store for each
environment. Store the environment-specific credentials in the parameter version.
C. Configure the environment variables in the application code. Use different names for each
environment type.
D. Configure AWS Secrets Manager to create a new secret for each environment type. Store
the environment-specific credentials in the secret.
Answer: D
Explanation:
Secrets Management: AWS Secrets Manager is designed specifically for storing and managing
sensitive credentials.
Environment Isolation: Creating separate secrets for each environment (development, staging,
etc.) ensures clear separation and prevents accidental leaks.
Automatic Rotation: Secrets Manager provides built-in rotation capabilities, enhancing security
posture.
Versioning: Tracking changes to secrets is essential for auditing and compliance.
Reference: AWS Secrets Manager: https://aws.amazon.com/secrets-manager/ Secrets
Manager
Rotation: https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
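One way to express option D with boto3, assuming one secret per environment and an existing rotation Lambda function; the secret names, credentials, rotation interval, and function ARN are placeholders:
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

for environment in ["development", "staging", "pre-production", "production"]:
    secret = secretsmanager.create_secret(
        Name=f"core-app/{environment}/db-credentials",  # hypothetical naming scheme
        SecretString=json.dumps({"username": "app", "password": "change-me"}),
    )
    # Attach automatic rotation using a rotation Lambda function.
    secretsmanager.rotate_secret(
        SecretId=secret["ARN"],
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-creds",
        RotationRules={"AutomaticallyAfterDays": 30},
    )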
13. An Amazon Simple Queue Service (Amazon SQS) queue serves as an event source for an
AWS Lambda function. In the SQS queue, each item corresponds to a video file that the
Lambda function must convert to a smaller resolution. The Lambda function is timing out on
longer video files, but the Lambda function's timeout is already configured to its maximum value.
What should a developer do to avoid the timeouts without additional code changes?
A. Increase the memory configuration of the Lambda function
B. Increase the visibility timeout on the SQS queue
C. Increase the instance size of the host that runs the Lambda function.
D. Use multi-threading for the conversion.
Answer: B
Explanation:
Visibility Timeout: When an SQS message is processed by a consumer (here, the Lambda
function), it's temporarily hidden from other consumers. Visibility timeout controls this duration.
How It Helps:
Increase the visibility timeout beyond the maximum processing time your Lambda might
typically take for long videos.
This prevents the message from reappearing in the queue while Lambda is still working,
avoiding
premature timeouts.
Reference: SQS Visibility
Timeout: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/s
qs-visibility-timeout.html
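Option B can be applied with a single API call; the queue URL and timeout value below are assumptions (the timeout only needs to exceed the longest expected processing time):
import boto3

sqs = boto3.client("sqs")

# Keep in-flight messages hidden for 15 minutes while conversion runs.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/video-jobs",
    Attributes={"VisibilityTimeout": "900"},
)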
14. An organization is using Amazon CloudFront to ensure that its users experience low-latency
access to its web application. The organization has identified a need to encrypt all traffic
between users and CloudFront, and all traffic between CloudFront and the web application.
How can these requirements be met? (Select TWO)
A. Use AWS KMS to encrypt traffic between CloudFront and the web application.
B. Set the Origin Protocol Policy to "HTTPS Only".
C. Set the Origin's HTTP Port to 443.
D. Set the Viewer Protocol Policy to "HTTPS Only" or "Redirect HTTP to HTTPS".
E. Enable the CloudFront option Restrict Viewer Access.
Answer: B, D
Explanation:
This solution will meet the requirements by ensuring that all traffic between users and
CloudFront, and all traffic between CloudFront and the web application, are encrypted using
HTTPS protocol. The Origin Protocol Policy determines how CloudFront communicates with the
origin server (the web application), and setting it to “HTTPS Only” will force CloudFront to use
HTTPS for every request to the origin server. The Viewer Protocol Policy determines how
CloudFront responds to HTTP or HTTPS requests from users, and setting it to “HTTPS Only”
or “Redirect HTTP to HTTPS” will force CloudFront to use HTTPS for every response to users.
Option A is not optimal because it will use AWS KMS to encrypt traffic between CloudFront and
the web application, which is not necessary or supported by CloudFront.
Option C is not optimal because it will set the origin’s HTTP port to 443, which is incorrect as
port 443 is used for HTTPS protocol, not HTTP protocol.
Option E is not optimal because it will enable the CloudFront option Restrict Viewer Access,
which is used for controlling access to private content using signed URLs or signed cookies, not
for encrypting traffic.
Reference: [Using HTTPS with CloudFront], [Restricting Access to Amazon S3 Content by
Using an Origin Access Identity]
15. A developer at a company recently created a serverless application to process and show
data from business reports. The application's user interface (UI) allows users to select and start
processing the files. The UI displays a message when the result is available to view. The
application uses AWS Step Functions with AWS Lambda functions to process the files. The
developer used Amazon API Gateway and Lambda functions to create an API to support the UI.
The company's UI team reports that the request to process a file is often returning timeout
errors because of the size or complexity of the files. The UI team wants the API to provide an
immediate response so that the UI can display a message while the files are being processed.
The backend process that is invoked by the API needs to send an email message when the
report processing is complete.
What should the developer do to configure the API to meet these requirements?
A. Change the API Gateway route to add an X-Amz-Invocation-Type header with a static value
of 'Event' in the integration request. Deploy the API Gateway stage to apply the changes.
B. Change the configuration of the Lambda function that implements the request to process a
file. Configure the maximum age of the event so that the Lambda function will run
asynchronously.
C. Change the API Gateway timeout value to match the Lambda function timeout value.
Deploy the API Gateway stage to apply the changes.
D. Change the API Gateway route to add an X-Amz-Target header with a static value of 'Async'
in the integration request. Deploy the API Gateway stage to apply the changes.
Answer: A
Explanation:
This solution allows the API to invoke the Lambda function asynchronously, which means that
the API will return an immediate response without waiting for the function to complete. The X-
Amz-Invocation-Type header specifies the invocation type of the Lambda function, and setting it
to ‘Event’ means that the function will be invoked asynchronously. The function can then use
Amazon Simple Email Service (SES) to send an email message when the report processing is
complete.
Reference: [Asynchronous invocation], [Set up Lambda proxy integrations in API Gateway]
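The effect of the X-Amz-Invocation-Type: Event header is the same as invoking the function asynchronously through the SDK; a hedged boto3 illustration with placeholder names follows:
import json
import boto3

lambda_client = boto3.client("lambda")

# InvocationType="Event" queues the invocation and returns immediately
# (HTTP 202), so the caller does not wait for the report to finish.
lambda_client.invoke(
    FunctionName="process-report-file",
    InvocationType="Event",
    Payload=json.dumps({"fileKey": "reports/2024-q1.csv"}),
)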
16. A developer creates a static website for their department. The developer deploys the static
assets for the website to an Amazon S3 bucket and serves the assets with Amazon CloudFront.
The developer uses origin access control (OAC) on the CloudFront distribution to access the S3
bucket.
The developer notices users can access the root URL and specific pages but cannot access
directories without specifying a file name. For example, /products/index.html works, but
/products returns an error. The developer needs to enable accessing directories without
specifying a file name without exposing the S3 bucket publicly.
Which solution will meet these requirements?
A. Update the CloudFront distribution's settings to set index.html as the default root object.
B. Update the Amazon S3 bucket settings and enable static website hosting. Specify index.html
as the index document. Update the S3 bucket policy to enable access. Update the CloudFront
distribution's origin to use the S3 website endpoint.
C. Create a CloudFront function that examines the request URL and appends index.html when
directories are being accessed. Add the function as a viewer request CloudFront function to the
CloudFront distribution's behavior.
D. Create a custom error response on the CloudFront distribution with the HTTP error code set
to the HTTP 404 Not Found response code and the response page path to /index.html. Set the
HTTP response code to the HTTP 200 OK response code.
Answer: B
Explanation:
Problem: Directory access without file names fails.
S3 Static Website Hosting:
Configuring S3 as a static website enables automatic serving of index.html for directory
requests.
Bucket policies ensure correct access permissions.
Updating the CloudFront origin simplifies routing.
Avoiding Public Exposure: The S3 website endpoint allows CloudFront to access content
without
making the bucket public.
Reference: S3 Static Website
Hosting: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
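Enabling the index document behavior from option B can be done with one call; the bucket name and error document are placeholders:
import boto3

s3 = boto3.client("s3")

# With static website hosting enabled, requests for /products/ resolve to
# /products/index.html on the S3 website endpoint.
s3.put_bucket_website(
    Bucket="department-static-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)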
17. A company notices that credentials that the company uses to connect to an external
software as a service (SaaS) vendor are stored in a configuration file as plaintext.
The developer needs to secure the API credentials and enforce automatic credentials rotation
on a quarterly basis.
Which solution will meet these requirements MOST securely?
A. Use AWS Key Management Service (AWS KMS) to encrypt the configuration file. Decrypt the
configuration file when users make API calls to the SaaS vendor. Enable rotation.
B. Retrieve temporary credentials from AWS Security Token Service (AWS STS) every 15
minutes. Use the temporary credentials when users make API calls to the SaaS vendor.
C. Store the credentials in AWS Secrets Manager and enable rotation. Configure the API to
have Secrets Manager access.
D. Store the credentials in AWS Systems Manager Parameter Store and enable rotation.
Retrieve the credentials when users make API calls to the SaaS vendor.
Answer: C
Explanation:
Store the credentials in AWS Secrets Manager and enable rotation. Configure the API to have
Secrets Manager access. This is correct. This solution will meet the requirements most
securely, because it uses a service that is designed to store and manage secrets such as API
credentials. AWS Secrets Manager helps you protect access to your applications, services, and
IT resources by enabling you to rotate, manage, and retrieve secrets throughout their lifecycle.
You can store secrets such as passwords, database strings, API keys, and license codes as
encrypted values. You can also configure automatic rotation of your secrets on a schedule that
you specify. You can use the AWS SDK or CLI to retrieve secrets from Secrets Manager when
you need them. This way, you can avoid storing credentials in plaintext files or hardcoding
them in your code.
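Retrieving the rotated credentials at call time could look like this sketch; the secret name and its JSON layout are assumptions:
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

# Fetch the current version of the secret before calling the SaaS API.
secret = secretsmanager.get_secret_value(SecretId="saas/api-credentials")
api_key = json.loads(secret["SecretString"])["apiKey"]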
18. A developer maintains an Amazon API Gateway REST API. Customers use the API through
a frontend UI and Amazon Cognito authentication.
The developer has a new version of the API that contains new endpoints and backward-
incompatible interface changes. The developer needs to provide beta access to other
developers on the team without affecting customers.
Which solution will meet these requirements with the LEAST operational overhead?
A. Define a development stage on the API Gateway API. Instruct the other developers to point
the endpoints to the development stage.
B. Define a new API Gateway API that points to the new API application code. Instruct the other
developers to point the endpoints to the new API.
C. Implement a query parameter in the API application code that determines which code version
to call.
D. Specify new API Gateway endpoints for the API endpoints that the developer wants to add.
Answer: A
Explanation:
Amazon API Gateway is a service that enables developers to create, publish, maintain, monitor,
and secure APIs at any scale. The developer can define a development stage on the API
Gateway API and instruct the other developers to point the endpoints to the development stage.
This way, the developer can provide beta access to the new version of the API without affecting
customers who use the production stage. This solution will meet the requirements with the least
operational overhead.
Reference: [What Is Amazon API Gateway? - Amazon API Gateway]
[Set up a Stage in API Gateway - Amazon API Gateway]
19. A developer is building an application that uses Amazon API Gateway APIs, AWS Lambda
functions, and Amazon DynamoDB tables. The developer uses the AWS Serverless Application
Model (AWS SAM) to build and run serverless applications on AWS. Each time the developer
pushes changes to only the Lambda functions, all the artifacts in the application are rebuilt.
The developer wants to implement AWS SAM Accelerate by running a command to only
redeploy the Lambda functions that have changed.
Which command will meet these requirements?
A. sam deploy --force-upload
B. sam deploy --no-execute-changeset
C. sam package
D. sam sync --watch
Answer: D
Explanation:
The command that will meet the requirements is sam sync --watch. This command enables AWS
SAM Accelerate mode, which allows the developer to redeploy only the Lambda functions that
have changed. The --watch flag enables file watching, which automatically detects changes in
the source code and triggers a redeployment. The other commands either do not enable AWS
SAM Accelerate mode or do not redeploy the Lambda functions automatically.
Reference: AWS SAM Accelerate
21. What is the maximum execution duration per request for AWS Lambda functions?
A. 5 minutes
B. 15 minutes
C. 30 minutes
D. 60 minutes
Answer: B
22. In AWS, what is the best practice for deploying an application across multiple Availability
Zones?
A. Use Elastic Load Balancing
B. Deploy in a single zone and replicate to others manually
C. Use Amazon RDS multi-AZ deployments
D. All of the above
Answer: A
23. A company is creating a new application that gives users the ability to upload and share
short video files. The average size of the video files is 10 MB. After a user uploads a file, a
message needs to be placed into an Amazon Simple Queue Service (Amazon SQS) queue so
the file can be processed. The files need to be accessible for processing within 5 minutes.
Which solution will meet these requirements MOST cost-effectively?
A. Write the files to Amazon S3 Glacier Deep Archive. Add the S3 location of the files to the
SQS queue.
B. Write the files to Amazon S3 Standard. Add the S3 location of the files to the SQS queue.
C. Write the files to an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD
volume. Add the EBS location of the files to the SQS queue.
D. Write messages that contain the contents of the uploaded files to the SQS queue.
Answer: B
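A minimal sketch of option B, storing the file in S3 Standard and queuing only its location; the bucket, key, and queue URL are placeholders:
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

bucket, key = "video-uploads", "uploads/clip-0001.mp4"

# Store the 10 MB file in S3 Standard so it is immediately retrievable.
s3.upload_file("clip-0001.mp4", bucket, key)

# Queue a small pointer message instead of the file contents
# (SQS messages are limited to 256 KB).
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/video-processing",
    MessageBody=json.dumps({"bucket": bucket, "key": key}),
)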
24. A developer maintains applications that store several secrets in AWS Secrets Manager. The
applications use secrets that have changed over time. The developer needs to identify required
secrets that are still in use. The developer does not want to cause any application downtime.
What should the developer do to meet these requirements?
A. Configure AWS CloudTrail log file delivery to an Amazon S3 bucket. Create an Amazon
CloudWatch alarm for GetSecretValue Secrets Manager API operation requests.
B. Create a secretsmanager-secret-unused AWS Config managed rule. Create an Amazon
EventBridge rule to initiate a notification when the AWS Config managed rule is met.
C. Deactivate the application's secrets and monitor the application's error logs temporarily.
D. Configure AWS X-Ray for the applications. Create a sampling rule to match the
GetSecretValue Secrets Manager API operation requests.
Answer: B
Explanation:
This solution will meet the requirements by using AWS Config to monitor and evaluate whether
Secrets Manager secrets are unused or have been deleted, based on specified time periods.
The secretsmanager-secret-unused managed rule is a predefined rule that checks whether
Secrets Manager secrets have been accessed within a specified number of days. The Amazon EventBridge
rule will trigger a notification when the AWS Config managed rule is met, alerting the developer
about unused secrets that can be removed without causing application downtime.
Option A is not optimal because it will use AWS CloudTrail log file delivery to an Amazon S3
bucket, which will incur additional costs and complexity for storing and analyzing log files that
may not contain relevant information about secret usage.
Option C is not optimal because it will deactivate the application secrets and monitor the
application error logs temporarily, which will cause application downtime and potential data loss.
Option D is not optimal because it will use AWS X-Ray to trace secret usage, which will
introduce additional overhead and latency for instrumenting and sampling requests that may not
be related to secret usage.
Reference: [AWS Config Managed Rules], [Amazon EventBridge]
25. A company needs to set up secure database credentials for all its AWS Cloud resources.
The company's resources include Amazon RDS DB instances, Amazon DocumentDB clusters,
and Amazon Aurora DB instances. The company's security policy mandates that database
credentials be encrypted at rest and rotated at a regular interval.
Which solution will meet these requirements MOST securely?
A. Set up IAM database authentication for token-based access. Generate user tokens to
provide centralized access to RDS DB instances, Amazon DocumentDB clusters, and Aurora DB
instances.
B. Create parameters for the database credentials in AWS Systems Manager Parameter Store.
Set the parameter type to SecureString. Set up automatic rotation on the parameters.
C. Store the database access credentials as an encrypted Amazon S3 object in an S3 bucket.
Block all public access on the S3 bucket. Use S3 server-side encryption to set up automatic
rotation on the encryption key.
D. Create an AWS Lambda function by using the SecretsManagerRotationTemplate template in
the AWS Secrets Manager console. Create secrets for the database credentials in Secrets
Manager. Set up secrets rotation on a schedule.
Answer: D
Explanation:
This solution will meet the requirements by using AWS Secrets Manager, which is a service that
helps protect secrets such as database credentials by encrypting them with AWS Key
Management Service (AWS KMS) and enabling automatic rotation of secrets. The developer
can create an AWS Lambda function by using the SecretsManagerRotationTemplate template
in the AWS Secrets Manager console, which provides a sample code for rotating secrets for
RDS DB instances, Amazon DocumentDB clusters, and Amazon Aurora DB instances. The
developer can also create secrets for the database credentials in Secrets Manager, which
encrypts them at rest and provides secure access to them. The developer can set up secrets
rotation on a schedule, which changes the database credentials periodically according to a
specified interval or event.
Option A is not optimal because it will set up IAM database authentication for token-based
access, which may not be compatible with all database engines and may require additional
configuration and management of IAM roles or users.
Option B is not optimal because it will create parameters for the database credentials in AWS
Systems Manager Parameter Store, which does not support automatic rotation of secrets.
Option C is not optimal because it will store the database access credentials as an encrypted
Amazon S3 object in an S3 bucket, which may introduce additional costs and complexity for
accessing and securing the data.
Reference: [AWS Secrets Manager], [Rotating Your AWS Secrets Manager Secrets]
26. A developer is building an application that uses Amazon DynamoDB. The developer wants
to retrieve multiple specific items from the database with a single API call.
Which DynamoDB API call will meet these requirements with the MINIMUM impact on the
database?
A. BatchGetItem
B. GetItem
C. Scan
D. Query
Answer: A
27. A large company has its application components distributed across multiple AWS accounts.
The company needs to collect and visualize trace data across these accounts.
What should be used to meet these requirements?
A. AWS X-Ray
B. Amazon CloudWatch
C. Amazon VPC flow logs
D. Amazon OpenSearch Service
Answer: A
28. A company wants to share information with a third party. The third party has an HTTP API
endpoint that the company can use to share the information. The company has the required API
key to access the HTTP API.
The company needs a way to manage the API key by using code. The integration of the API
key with the application code cannot affect application performance.
Which solution will meet these requirements MOST securely?
A. Store the API credentials in AWS Secrets Manager. Retrieve the API credentials at runtime
by using the AWS SDK. Use the credentials to make the API call.
B. Store the API credentials in a local code variable. Push the code to a secure Git repository.
Use the local code variable at runtime to make the API call.
C. Store the API credentials as an object in a private Amazon S3 bucket. Restrict access to the
S3 object by using IAM policies. Retrieve the API credentials at runtime by using the AWS SDK.
Use the credentials to make the API call.
D. Store the API credentials in an Amazon DynamoDB table. Restrict access to the table by
using resource-based policies. Retrieve the API credentials at runtime by using the AWS SDK.
Use the credentials to make the API call.
Answer: A
Explanation:
AWS Secrets Manager is a service that helps securely store, rotate, and manage secrets such
as API keys, passwords, and tokens. The developer can store the API credentials in AWS
Secrets Manager and retrieve them at runtime by using the AWS SDK. This solution will meet
the requirements of security, code management, and performance. Storing the API credentials
in a local code variable or an S3 object is not secure, as it exposes the credentials to
unauthorized access or leakage. Storing the API credentials in a DynamoDB table is also not
secure, as it requires additional encryption and access control measures. Moreover, retrieving
the credentials from S3 or DynamoDB may affect application performance due to network
latency.
Reference: [What Is AWS Secrets Manager? - AWS Secrets Manager] [Retrieving a Secret -
AWS Secrets Manager]
29. A developer is troubleshooting an Amazon API Gateway API. Clients are receiving HTTP
400 response errors when the clients try to access an endpoint of the API.
How can the developer determine the cause of these errors?
A. Create an Amazon Kinesis Data Firehose delivery stream to receive API call logs from API
Gateway.
Configure Amazon CloudWatch Logs as the delivery stream's destination.
B. Turn on AWS CloudTrail Insights and create a trail. Specify the Amazon Resource Name
(ARN) of the trail for the stage of the API.
C. Turn on AWS X-Ray for the API stage. Create an Amazon CloudWatch Logs log group.
Specify the Amazon Resource Name (ARN) of the log group for the API stage.
D. Turn on execution logging and access logging in Amazon CloudWatch Logs for the API
stage. Create a CloudWatch Logs log group. Specify the Amazon Resource Name (ARN) of the
log group for the API stage.
Answer: D
Explanation:
This solution will meet the requirements by using Amazon CloudWatch Logs to capture and
analyze the logs from API Gateway. Amazon CloudWatch Logs is a service that monitors,
stores, and accesses log files from AWS resources. The developer can turn on execution
logging and access logging in Amazon CloudWatch Logs for the API stage, which enables
logging information about API execution and client access to the API. The developer can create
a CloudWatch Logs log group, which is a collection of log streams that share the same
retention, monitoring, and access control settings. The developer can specify the Amazon
Resource Name (ARN) of the log group for the API stage, which instructs API Gateway to send
the logs to the specified log group. The developer can then examine the logs to determine the
cause of the HTTP 400 response errors.
Option A is not optimal because it will create an Amazon Kinesis Data Firehose delivery stream
to receive API call logs from API Gateway, which may introduce additional costs and complexity
for delivering and processing streaming data.
Option B is not optimal because it will turn on AWS CloudTrail Insights and create a trail, which
is a feature that helps identify and troubleshoot unusual API activity or operational issues, not
HTTP response errors.
Option C is not optimal because it will turn on AWS X-Ray for the API stage, which is a service
that helps analyze and debug distributed applications, not HTTP response errors.
Reference: [Setting Up CloudWatch Logging for a REST API], [CloudWatch Logs Concepts]
30. A developer is creating a template that uses AWS CloudFormation to deploy an application.
The application is serverless and uses Amazon API Gateway, Amazon DynamoDB, and AWS
Lambda.
Which AWS service or tool should the developer use to define serverless resources in YAML?
A. CloudFormation serverless intrinsic functions
B. AWS Elastic Beanstalk
C. AWS Serverless Application Model (AWS SAM)
D. AWS Cloud Development Kit (AWS CDK)
Answer: C
Explanation:
AWS Serverless Application Model (AWS SAM) is an open-source framework that enables
developers to build and deploy serverless applications on AWS. AWS SAM uses a template
specification that extends AWS CloudFormation to simplify the definition of serverless resources
such as API Gateway, DynamoDB, and Lambda. The developer can use AWS SAM to define
serverless resources in YAML and deploy them using the AWS SAM CLI.
Reference: [What Is the AWS Serverless Application Model (AWS SAM)? - AWS Serverless
Application Model] [AWS SAM Template Specification - AWS Serverless Application Model]
31. A company is planning to use AWS CodeDeploy to deploy an application to Amazon Elastic
Container Service (Amazon ECS). During the deployment of a new version of the application,
the company initially must expose only 10% of live traffic to the new version of the deployed
application. Then, after 15 minutes elapse, the company must route all the remaining live traffic
to the new version of the deployed application.
Which CodeDeploy predefined configuration will meet these requirements?
A. CodeDeployDefault.ECSCanary10Percent15Minutes
B. CodeDeployDefault.LambdaCanary10Percent5Minutes
C. CodeDeployDefault.LambdaCanary10Percent15Minutes
D. CodeDeployDefault.ECSLinear10PercentEvery1Minutes
Answer: A
Explanation:
CodeDeploy Predefined Configurations: CodeDeploy offers built-in deployment configurations
for common scenarios.
Canary Deployment: Canary deployments gradually shift traffic to a new version, ideal for
controlled rollouts like this requirement.
CodeDeployDefault.ECSCanary10Percent15Minutes: This configuration matches the
company's requirements, shifting 10% of traffic initially and then completing the rollout after 15
minutes.
Reference: AWS CodeDeploy Deployment
Configurations: https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-
configurations-create.html
32. A company is migrating its PostgreSQL database into the AWS Cloud. The company wants
to use a database that will secure and regularly rotate database credentials. The company
wants a solution that does not require additional programming overhead.
Which solution will meet these requirements?
A. Use Amazon Aurora PostgreSQL for the database. Store the database credentials in AWS
Systems Manager Parameter Store. Turn on rotation.
B. Use Amazon Aurora PostgreSQL for the database. Store the database credentials in AWS
Secrets Manager. Turn on rotation.
C. Use Amazon DynamoDB for the database. Store the database credentials in AWS Systems
Manager Parameter Store. Turn on rotation.
D. Use Amazon DynamoDB for the database. Store the database credentials in AWS Secrets
Manager. Turn on rotation.
Answer: B
Explanation:
This solution meets the requirements because it uses a PostgreSQL-compatible database that
can secure and regularly rotate database credentials without requiring additional programming
overhead. Amazon Aurora PostgreSQL is a relational database service that is compatible with
PostgreSQL and offers high performance, availability, and scalability. AWS Secrets Manager is
a service that helps you protect secrets needed to access your applications, services, and IT
resources. You can store database credentials in AWS Secrets Manager and use them to
access your Aurora PostgreSQL database. You can also enable automatic rotation of your
secrets according to a schedule or an event. AWS Secrets Manager handles the complexity of
rotating secrets for you, such as generating new passwords and updating your database with
the new credentials. Using Amazon DynamoDB for the database will not meet the requirements
because it is a NoSQL database that is not compatible with PostgreSQL. Using AWS Systems
Manager Parameter Store for storing and rotating database credentials will require additional
programming overhead to integrate with your database.
Reference: [What Is Amazon Aurora?], [What Is AWS Secrets Manager?]
33. When using Amazon DynamoDB, what feature automatically adjusts throughput capacity in
response to dynamically changing workloads?
A. DynamoDB Accelerator (DAX)
B. Auto Scaling
C. Global Tables
D. On-Demand Capacity
Answer: B
34. An IAM role is attached to an Amazon EC2 instance that explicitly denies access to all
Amazon S3 API actions. The EC2 instance credentials file specifies the IAM access key and
secret access key, which allow full administrative access.
Given that multiple modes of IAM access are present for this EC2 instance, which of the
following is correct?
A. The EC2 instance will only be able to list the S3 buckets.
B. The EC2 instance will only be able to list the contents of one S3 bucket at a time.
C. The EC2 instance will be able to perform all actions on any S3 bucket.
D. The EC2 instance will not be able to perform any S3 action on any S3 bucket.
Answer: D
35. A developer is creating an AWS Lambda function that needs network access to private
resources in a VPC. Which solution will meet this requirement?
A. Attach the Lambda function to the VPC through private subnets. Create a security group that
allows network access to the private resources. Associate the security group with the Lambda
function.
B. Configure the Lambda function to route traffic through a VPN connection. Create a security
group that allows network access to the private resources. Associate the security group with the
Lambda function.
C. Configure a VPC endpoint connection for the Lambda function. Set up the VPC endpoint to
route traffic through a NAT gateway.
D. Configure an AWS PrivateLink endpoint for the private resources. Configure the Lambda
function to reference the PrivateLink endpoint.
Answer: A
Explanation:
When you need to provide an AWS Lambda function access to private resources in
a VPC, the most common and straightforward approach is to attach the Lambda function to a
VPC via private subnets. Once the Lambda function is associated with the VPC, you need to
configure appropriate security groups to control the access to the private resources.
Lambda with VPC Access: Lambda functions can be attached to private subnets in a VPC,
allowing them to access resources like RDS, EC2, or internal services within that VPC.
Security Groups: A security group acts as a virtual firewall for the Lambda function, ensuring
that it can access only the necessary resources and ports in the VPC. Alternatives:
Option B involves routing traffic through a VPN, which adds unnecessary complexity and
operational overhead compared to simply attaching the Lambda to the VPC.
Option C requires configuring a VPC endpoint and a NAT gateway, which can be complex and
costly.
Option D refers to AWS PrivateLink, which is used to access services over private connections,
but it's unnecessary in this scenario unless you need a cross-VPC connection.
Reference: Lambda functions in a VPC
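Attaching an existing function to private subnets and a security group, as described in option A, could be expressed like this sketch; the function name and IDs are placeholders:
import boto3

lambda_client = boto3.client("lambda")

# Associate the function with private subnets and a security group that
# allows outbound access to the private resources.
lambda_client.update_function_configuration(
    FunctionName="internal-data-reader",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)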
36. A developer is working on a web application that uses Amazon DynamoDB as its data store.
The application has two DynamoDB tables: one table that is named artists and one table that is
named songs. The artists table has artistName as the partition key. The songs table has
songName as the partition key and artistName as the sort key.
The table usage patterns include the retrieval of multiple songs and artists in a single database
operation from the webpage. The developer needs a way to retrieve this information with
minimal network traffic and optimal application performance.
Which solution will meet these requirements?
A. Perform a BatchGetItem operation that returns items from the two tables. Use the list of
songName/artistName keys for the songs table and the list of artistName keys for the artists
table.
B. Create a local secondary index (LSI) on the songs table that uses artistName as the partition
key. Perform a query operation for each artistName on the songs table that filters by the list of
songName. Perform a query operation for each artistName on the artists table.
C. Perform a BatchGetItem operation on the songs table that uses the songName/artistName
keys.
Perform a BatchGetItem operation on the artists table that uses artistName as the key.
D. Perform a Scan operation on each table that filters by the list of songName/artistName for the
songs table and the list of artistName in the artists table.
Answer: A
Explanation:
Scenario: Application needs to fetch songs and artists efficiently in a single operation.
BatchGetItem: This DynamoDB operation retrieves multiple items across different tables based
on their primary keys in a single request.
Optimized for Request Batching: This approach reduces network traffic compared to performing
multiple queries individually.
Data Modeling: The songs table is designed appropriately for this access pattern using
artistName as
the sort key.
Reference: Amazon DynamoDB BatchGetItem:
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
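A hedged sketch of option A, requesting items from both tables in a single BatchGetItem call; the specific song and artist values are placeholders:
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.batch_get_item(
    RequestItems={
        "songs": {
            # Composite key: songName (partition key) + artistName (sort key).
            "Keys": [
                {"songName": {"S": "Song One"}, "artistName": {"S": "Artist A"}},
                {"songName": {"S": "Song Two"}, "artistName": {"S": "Artist B"}},
            ]
        },
        "artists": {
            "Keys": [
                {"artistName": {"S": "Artist A"}},
                {"artistName": {"S": "Artist B"}},
            ]
        },
    }
)
print(response["Responses"]["songs"], response["Responses"]["artists"])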
37. A developer is creating a simple proof-of-concept demo by using AWS CloudFormation and
AWS Lambda functions. The demo will use a CloudFormation template to deploy an existing
Lambda function. The Lambda function uses deployment packages and dependencies stored in
Amazon S3. The developer defined an AWS::Lambda::Function resource in a CloudFormation
template. The developer needs to add the S3 bucket to the CloudFormation template.
What should the developer do to meet these requirements with the LEAST development effort?
A. Add the function code in the CloudFormation template inline as the code property
B. Add the function code in the CloudFormation template as the ZipFile property.
C. Find the S3 key for the Lambda function. Add the S3 key as the ZipFile property in the
CloudFormation template.
D. Add the relevant key and bucket to the S3Bucket and S3Key properties in the
CloudFormation template.
Answer: D
Explanation:
S3Bucket and S3Key: These properties in a CloudFormation AWS::Lambda::Function resource
specify the location of the function's code in S3.
Least Development Effort: This solution minimizes code changes, relying on CloudFormation to
reference the existing S3 deployment package.
Reference: AWS::Lambda::Function
Resource https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-
lambda-function.html
38. A company has a web application that is hosted on Amazon EC2 instances. The EC2
instances are configured to stream logs to Amazon CloudWatch Logs. The company needs to
receive an Amazon Simple Notification Service (Amazon SNS) notification when the number of
application error messages exceeds a defined threshold within a 5-minute period.
Which solution will meet these requirements?
A. Rewrite the application code to stream application logs to Amazon SNS. Configure an SNS
topic to send a notification when the number of errors exceeds the defined threshold within a
5-minute period.
B. Configure a subscription filter on the CloudWatch Logs log group. Configure the filter to send
an SNS notification when the number of errors exceeds the defined threshold within a 5-minute
period.
C. Install and configure the Amazon Inspector agent on the EC2 instances to monitor for errors.
Configure Amazon Inspector to send an SNS notification when the number of errors exceeds
the defined threshold within a 5-minute period.
D. Create a CloudWatch metric filter to match the application error pattern in the log data. Set
up a CloudWatch alarm based on the new custom metric. Configure the alarm to send an SNS
notification when the number of errors exceeds the defined threshold within a 5-minute period.
Answer: D
Explanation:
CloudWatch for Log Analysis: CloudWatch is the best fit here because logs are already
centralized.
Here's the process:
Metric Filter: Create a metric filter on the CloudWatch Logs log group. Design a pattern to
specifically identify application error messages.
Custom Metric: This filter generates a new custom CloudWatch metric (e.g., ApplicationErrors).
This metric tracks the error count.
CloudWatch Alarm: Create an alarm on the ApplicationErrors metric. Configure the alarm with
your
desired threshold and a 5-minute evaluation period.
SNS Action: Set the alarm to trigger an SNS notification when it enters the alarm state.
Reference: CloudWatch Metric
Filters: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html
CloudWatch
Alarms:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmai
l.html
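The metric filter and alarm from option D could be created as follows; the log group name, error pattern, threshold, and SNS topic ARN are assumptions:
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Turn matching log events into a custom metric.
logs.put_metric_filter(
    logGroupName="/webapp/application",
    filterName="application-errors",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "ApplicationErrors",
        "metricNamespace": "WebApp",
        "metricValue": "1",
    }],
)

# Alarm when the error count exceeds the threshold over a 5-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="webapp-error-rate",
    Namespace="WebApp",
    MetricName="ApplicationErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)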
39. A company has an Amazon S3 bucket containing premier content that it intends to make
available to only paid subscribers of its website. The S3 bucket currently has default
permissions of all objects being private to prevent inadvertent exposure of the premier content
to non-paying website visitors.
How can the company limit the ability to download a premier content file in the S3 bucket to
paid subscribers only?
A. Apply a bucket policy that allows anonymous users to download the content from the S3
bucket.
B. Generate a pre-signed object URL for the premier content file when a paid subscriber
requests a download.
C. Add a bucket policy that requires multi-factor authentication for requests to access the S3
bucket objects.
D. Enable server-side encryption on the S3 bucket for data protection against the non-paying
website visitors.
Answer: B
Explanation:
This solution will limit the ability to download a premier content file in the S3 bucket to paid
subscribers only because it uses a pre-signed object URL that grants temporary access to an
S3 object for a specified duration. The pre-signed object URL can be generated by the
company’s website when a paid subscriber requests a download, and can be verified by
Amazon S3 using the signature in the URL.
Option A is not optimal because it will allow anyone to download the content from the S3 bucket
without verifying their subscription status.
Option C is not optimal because it will require additional steps and costs to configure multi-factor
authentication for accessing the S3 bucket
objects, which may not be feasible or user-friendly for paid subscribers.
Option D is not optimal because it will not prevent non-paying website visitors from accessing
the S3 bucket objects, but only encrypt them at rest.
Reference: Share an Object with Others, [Using Amazon S3 Pre-Signed URLs]
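Generating the pre-signed URL from option B after the subscription check could look like this sketch; the bucket, key, and expiry are placeholders:
import boto3

s3 = boto3.client("s3")

# Only call this after verifying the requester is a paid subscriber.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "premier-content", "Key": "videos/episode-01.mp4"},
    ExpiresIn=300,  # link is valid for 5 minutes
)
print(url)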
40. A developer wants the ability to roll back to a previous version of an AWS Lambda function
in the event of errors caused by a new deployment.
How can the developer achieve this with MINIMAL impact on users?
A. Change the application to use an alias that points to the current version. Deploy the new
version of the code. Update the alias to use the newly deployed version. If too many errors are
encountered, point the alias back to the previous version.
B. Change the application to use an alias that points to the current version. Deploy the new
version of the code. Update the alias to direct 10% of users to the newly deployed version. If too
many errors are encountered, send 100% of traffic to the previous version
C. Do not make any changes to the application. Deploy the new version of the code. If too many
errors are encountered, point the application back to the previous version using the version
number in the Amazon Resource Name (ARN).
D. Create three aliases: new, existing, and router. Point the existing alias to the current version.
Have the router alias direct 100% of users to the existing alias. Update the application to use
the router alias. Deploy the new version of the code. Point the new alias to this version. Update
the router alias to direct 10% of users to the new alias. If too many errors are encountered, send
100% of traffic to the existing alias.
Answer: A
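A hedged sketch of option A using boto3; the function name, alias name, and version numbers are placeholders:
import boto3

lambda_client = boto3.client("lambda")

# Publish the new code as an immutable version and move the alias to it.
new_version = lambda_client.publish_version(FunctionName="image-converter")["Version"]
lambda_client.update_alias(
    FunctionName="image-converter", Name="live", FunctionVersion=new_version
)

# Rollback: point the alias back at the previous version if errors spike.
lambda_client.update_alias(
    FunctionName="image-converter", Name="live", FunctionVersion="7"
)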
41. An application uses Lambda functions to extract metadata from files uploaded to an S3
bucket; the metadata is stored in Amazon DynamoDB. The application starts behaving
unexpectedly, and the developer wants to examine the logs of the Lambda function code for
errors. Based on this system configuration, where would the developer find the logs?
A. Amazon S3
B. AWS CloudTrail
C. Amazon CloudWatch
D. Amazon DynamoDB
Answer: C
Explanation:
Amazon CloudWatch is the service that collects and stores logs from AWS Lambda functions.
The developer can use CloudWatch Logs Insights to query and analyze the logs for errors and
metrics.
Option A is not correct because Amazon S3 is a storage service that does not store Lambda
function logs.
Option B is not correct because AWS CloudTrail is a service that records API calls and events
for AWS services, not Lambda function logs.
Option D is not correct because Amazon DynamoDB is a database service that does not store
Lambda function logs.
Reference: AWS Lambda Monitoring, [CloudWatch Logs Insights]
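To pull recent error lines from the function's log group, a quick boto3 sketch; Lambda writes to a log group named /aws/lambda/<function-name>, and the function name and filter pattern here are assumptions:
import boto3

logs = boto3.client("logs")

events = logs.filter_log_events(
    logGroupName="/aws/lambda/metadata-extractor",
    filterPattern="ERROR",
)
for event in events["events"]:
    print(event["timestamp"], event["message"])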
42. A development team maintains a web application by using a single AWS CloudFormation
template. The template defines web servers and an Amazon RDS database. The team uses the
CloudFormation template to deploy the CloudFormation stack to different environments.
During a recent application deployment, a developer caused the primary development database
to be dropped and recreated. The result of this incident was a loss of data. The team needs to
avoid accidental database deletion in the future.
Which solutions will meet these requirements? (Choose two.)
A. Add a CloudFormation DeletionPolicy attribute with the Retain value to the database
resource.
B. Update the CloudFormation stack policy to prevent updates to the database.
C. Modify the database to use a Multi-AZ deployment.
D. Create a CloudFormation stack set for the web application and database deployments.
E. Add a CloudFormation DeletionPolicy attribute with the Retain value to the stack.
Answer: A, B
Explanation:
AWS CloudFormation is a service that enables developers to model and provision AWS
resources using templates. The developer can add a CloudFormation DeletionPolicy attribute
with the Retain value to the database resource. This will prevent the database from being
deleted when the stack is deleted or updated. The developer can also update the
CloudFormation stack policy to prevent updates to the database. This will prevent accidental
changes to the database configuration or properties.
Reference: [What Is AWS CloudFormation? - AWS CloudFormation]
[DeletionPolicy Attribute - AWS CloudFormation]
[Protecting Resources During Stack Updates - AWS CloudFormation]
43. A developer uses AWS CloudFormation to deploy an Amazon API Gateway API and an
AWS Step Functions state machine. The state machine must reference the API Gateway API
after the CloudFormation template is deployed. The developer needs a solution that uses the
state machine to reference the API Gateway endpoint.
Which solution will meet these requirements MOST cost-effectively?
A. Configure the CloudFormation template to reference the API endpoint in the
DefinitionSubstitutions property for the AWS::StepFunctions::StateMachine resource.
B. Configure the CloudFormation template to store the API endpoint in an environment variable
for the AWS::StepFunctions::StateMachine resource. Configure the state machine to reference
the environment variable.
C. Configure the CloudFormation template to store the API endpoint in a standard
AWS::SecretsManager::Secret resource. Configure the state machine to reference the resource.
D. Configure the CloudFormation template to store the API endpoint in a standard
AWS::AppConfig::ConfigurationProfile resource. Configure the state machine to reference the
resource.
Answer: A
Explanation:
CloudFormation DefinitionSubstitutions: The DefinitionSubstitutions property in CloudFormation
allows you to pass values into a Step Functions state machine definition when the stack is
created or updated.
Cost-Effectiveness: This solution is cost-effective as it leverages CloudFormation's built-in
capabilities, avoiding the need for additional services like Secrets Manager or AppConfig.
Reference: AWS Step Functions State
Machine: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-
stepfunctions-statemachine.html
CloudFormation DefinitionSubstitutions: https://github.com/aws-cloudformation/aws-
cloudformation-resource-providers-stepfunctions/issues/14
44. An application that runs on AWS receives messages from an Amazon Simple Queue
Service (Amazon SQS) queue and processes the messages in batches. The application sends
the data to another SQS queue to be consumed by another legacy application. The legacy
system can take up to 5 minutes to process some transaction data.
A developer wants to ensure that there are no out-of-order updates in the legacy system. The
developer cannot alter the behavior of the legacy system.
Which solution will meet these requirements?
A. Use an SQS FIFO queue. Configure the visibility timeout value.
B. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure
the DelaySeconds values.
C. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure
the visibility timeout value.
D. Use an SQS FIFO queue. Configure the DelaySeconds value.
Answer: A
Explanation:
An SQS FIFO queue is a type of queue that preserves the order of messages and ensures that
each message is delivered and processed only once. This is suitable for the scenario where
the developer wants to ensure that there are no out-of-order updates in the legacy system.
The visibility timeout value is the amount of time that a message is invisible in the queue after a
consumer receives it. This prevents other consumers from processing the same message
simultaneously. If the consumer does not delete the message before the visibility timeout
expires, the message becomes visible again and another consumer can receive it.
In this scenario, the developer needs to configure the visibility timeout value to be longer than
the maximum processing time of the legacy system, which is 5 minutes. This will ensure that the
message remains invisible in the queue until the legacy system finishes processing it and
deletes it. This will prevent duplicate or out-of-order processing of messages by the legacy
system.
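A sketch of option A: a FIFO queue with a visibility timeout longer than the legacy system's 5-minute processing time, and a message group to preserve ordering; the queue name, timeout, and message values are assumptions:
import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="legacy-updates.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "VisibilityTimeout": "360",  # longer than the 5-minute processing time
    },
)

# Messages with the same MessageGroupId are delivered strictly in order.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"transactionId": "tx-1001", "amount": 25}',
    MessageGroupId="transactions",
)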
45. A developer created an AWS Lambda function that accesses resources in a VPC. The
Lambda function polls an Amazon Simple Queue Service (Amazon SQS) queue for new
messages through a VPC endpoint. Then the function calculates a rolling average of the
numeric values that are contained in the messages. After initial tests of the Lambda function,
the developer found that the value of the rolling average that the function returned was not
accurate.
How can the developer ensure that the function calculates an accurate rolling average?
A. Set the function's reserved concurrency to 1. Calculate the rolling average in the function.
Store the calculated rolling average in Amazon ElastiCache.
B. Modify the function to store the values in Amazon ElastiCache. When the function initializes,
use the previous values from the cache to calculate the rolling average.
C. Set the function's provisioned concurrency to 1. Calculate the rolling average in the function.
Store the calculated rolling average in Amazon ElastiCache.
D. Modify the function to store the values in the function's layers. When the function initializes,
use the previously stored values to calculate the rolling average.
Answer: B