Solution Architect Notes


S3 IA

For data that is accessed infrequently but requires rapid access when needed.

gp2
baseline performance scales linearly at 3 IOPS per GiB of volume size
(minimum 100 IOPS, maximum 16,000 IOPS).
A 2 TB gp2 volume has about 6,000 IOPS; each additional 1 TB adds about
3,000 IOPS. Increase from 2 TB (6,000 IOPS) to 3 TB (9,000 IOPS): the
difference is 1 TB (3,000 IOPS).

gp3
baseline performance is 3,000 IOPS regardless of volume size.
You can provision up to 16,000 IOPS, with a maximum ratio of 500 provisioned
IOPS per GiB of volume size.
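The scaling rules above can be sketched as simple arithmetic (sizes in GiB, using the documented 16,000 IOPS ceiling):

```python
# gp2/gp3 IOPS arithmetic, per the EBS volume-type documentation.

def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 scales at 3 IOPS per GiB, floor 100 IOPS, ceiling 16,000 IOPS."""
    return min(max(3 * size_gib, 100), 16_000)

def gp3_max_provisioned_iops(size_gib: int) -> int:
    """gp3 includes a 3,000 IOPS baseline at any size; extra IOPS can be
    provisioned up to 16,000, capped at 500 IOPS per GiB."""
    return min(max(3_000, 500 * size_gib), 16_000)

print(gp2_baseline_iops(2048))        # 6144 (the notes round this to 6,000)
print(gp2_baseline_iops(3072))        # 9216 (~9,000)
print(gp3_max_provisioned_iops(8))    # 4000 (500 IOPS/GiB cap on a small volume)
```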

Public virtual interface on a Direct Connect connection


Use a public virtual interface on a Direct Connect connection to reach public
AWS endpoints, for example to copy data to S3.

ACM is a regional service, so a certificate must be provisioned in every
Region where it is used.

What is DLQ in AWS Lambda?


DLQ stands for Dead Letter Queue in AWS Lambda.
It is a feature that lets you designate an Amazon SQS queue or an Amazon
SNS topic to capture events from asynchronous invocations that your Lambda
function could not process.
This is useful for debugging and error handling.
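A minimal sketch of the parameters for boto3's Lambda `update_function_configuration` call that attaches a DLQ; the function name and queue ARN below are hypothetical placeholders:

```python
# Build the kwargs for: lambda_client.update_function_configuration(**params)

def dlq_config(function_name: str, target_arn: str) -> dict:
    return {
        "FunctionName": function_name,
        # TargetArn may be an SQS queue or an SNS topic; the DLQ receives
        # events from failed *asynchronous* invocations.
        "DeadLetterConfig": {"TargetArn": target_arn},
    }

params = dlq_config("my-function", "arn:aws:sqs:us-east-1:123456789012:my-dlq")
print(params["DeadLetterConfig"])
```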

Kinesis Data Streams for DynamoDB


Amazon Kinesis Data Streams for DynamoDB captures item-level modifications
in any DynamoDB table and replicates them to a Kinesis data stream. Your
applications can access this stream and view item-level changes in near-real
time.

Amazon Inspector
Amazon Inspector is an automated security assessment service that helps you
test the network accessibility of your Amazon EC2 instances and the security state
of your applications running on the instances.

Lambda@Edge
There are several benefits to using Lambda@Edge for authorization operations.
First, performance is improved by running the authorization function using
Lambda@Edge closest to the viewer, reducing latency and response time to the viewer
request. The load on your origin servers is also reduced by offloading CPU-
intensive operations such as verification of JSON Web Token (JWT) signatures.
Finally, there are security benefits such as filtering out unauthorized requests
before they reach your origin infrastructure.

https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-how-to-use-lambdaedge-and-json-web-tokens-to-enhance-web-application-security/

AWS Application Discovery Service


AWS Application Discovery Service collects and presents data to enable
enterprise customers to understand the configuration, usage, and behavior of
servers in their IT environments. Server data is retained in the Application
Discovery Service where it can be tagged and grouped into applications to help
organize AWS migration planning. Collected data can be exported for analysis in
Excel or other cloud migration analysis tools.
AWS Application Discovery Service supports agent-based and agentless modes of
operation. With the agentless discovery, VMware customers collect VM configuration
and performance profiles without deploying the AWS Application Discovery Agent on
each host, which accelerates data collection. Customers in a non-VMware environment
or that need additional information, like network dependencies and information
about running processes, may install the Application Discovery Agent on servers and
virtual machines (VMs) to collect data.

SQS short polling


With short polling, the ReceiveMessage request queries only a subset of the
servers (based on a weighted random distribution) to find messages that are
available to include in the response. Amazon SQS sends the response right away,
even if the query found no messages.

SQS long polling


With long polling, the ReceiveMessage request queries all of the servers for
messages. Amazon SQS sends a response after it collects at least one available
message, up to the maximum number of messages specified in the request. Amazon SQS
sends an empty response only if the polling wait time expires.
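The two polling modes differ only in the ReceiveMessage parameters; a sketch of the kwargs you would pass to boto3's `sqs.receive_message` (the queue URL is a placeholder):

```python
# WaitTimeSeconds > 0 turns on long polling; 0 (the default) is short polling.

def receive_kwargs(queue_url: str, long_poll: bool = True) -> dict:
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,                   # up to 10 per response
        "WaitTimeSeconds": 20 if long_poll else 0,   # 20 s is the maximum wait
    }

kwargs = receive_kwargs("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue")
print(kwargs["WaitTimeSeconds"])
```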

Service Catalog
AWS Service Catalog allows organizations to create and manage catalogs of
IT services that are approved for use on AWS. These IT services can include
everything from virtual machine images, servers, software, and databases to
complete multi-tier application architectures. AWS Service Catalog allows you
to centrally manage deployed IT services and your applications, resources, and
metadata. This helps you achieve consistent governance and meet your compliance
requirements, while enabling users to quickly deploy only the approved IT
services they need.
An administrator can create templates within a Service Catalog portfolio
that can be selected by an end user for deployment. The template includes the
resources and dependencies required by the application, so the user can deploy
the application in a self-service fashion without necessarily knowing what
resources need to be provisioned to support it. The template also contains the
security policies required to ensure the correct permissions are granted for
the end user when the application is launched.

Q
Needs to improve the scalability, performance, and availability of the
database. Which solution meets these requirements?

A. Create an Amazon CloudWatch alarm action that triggers a Lambda function to add
an Amazon RDS for MySQL read replica when resource utilization hits a threshold
B. Migrate the database to Amazon Aurora, and add a read replica Add a database
connection pool outside of the Lambda handler function
C. Migrate the database to Amazon Aurora, and add a read replica Use Amazon Route
53 weighted records
D. Migrate the database to Amazon Aurora, and add an Aurora Replica Configure
Amazon RDS Proxy to manage database connection pools
D.
Lambda functions are stateless and cannot maintain a connection pool across
invocations. To get around this, AWS provides RDS Proxy for connection pool
management.
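The reasoning behind D can be sketched: anything created outside the handler survives warm invocations, so the connection is opened once per container. `connect()` below is a stub standing in for a real driver call (e.g. `pymysql.connect(host=<RDS Proxy endpoint>, ...)`):

```python
# Objects at module scope are created once per Lambda container and reused
# across warm invocations, so the (proxied) DB connection is not reopened
# on every invoke.

import itertools

_ids = itertools.count(1)

def connect():
    return {"conn_id": next(_ids)}  # stub for a real DB connection

conn = connect()  # module scope: runs once per container, not per invoke

def handler(event, context):
    # reuse the existing connection instead of reconnecting each time
    return conn["conn_id"]

print(handler({}, None), handler({}, None))  # prints "1 1" on a warm container
```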

What is the difference between EC2 VM Import and AWS Server Migration Service?
AWS Server Migration Service (SMS) is a significant enhancement of EC2 VM
Import. SMS provides automated, live incremental server replication and AWS
Management Console support. For customers using EC2 VM Import for migration,
AWS recommends using Server Migration Service.

FSx for lustre


FSx for Lustre makes it easy and cost-effective to launch and run the
popular, high-performance Lustre file system. You use Lustre for workloads where
speed matters, such as machine learning, high performance computing (HPC), video
processing, and financial modeling.
Amazon FSx for Lustre is designed for high-performance computing (HPC)
workloads, while Amazon FSx for Windows is optimized for Windows-based file
servers.

AWS OpsWorks
AWS OpsWorks is a configuration management service that provides managed
instances of Chef and Puppet.

AWS DataSync
AWS DataSync is a secure, online service that automates and accelerates
moving data between on premises and AWS Storage services.

Organization-level CloudTrail
Using AWS CloudTrail, a user in a management account can create an
organization trail that logs all events for all AWS accounts in that organization.
Organization trails are automatically applied to all member accounts in the
organization. Member accounts can see the organization trail, but can't modify or
delete it.

VPC endpoint service


Create a VPC endpoint service using the centralized application NLB and enable the
option to require endpoint acceptance. Create a VPC endpoint in each of the
business unit VPCs using the service name of the endpoint service. Accept
authorized endpoint requests from the endpoint service console.

kinesis data firehose vs data stream


https://jayendrapatil.com/aws-kinesis-data-streams-vs-kinesis-firehose/?utm_content=cmp-true
Kinesis data streams is highly customizable and best suited for developers
building custom applications or streaming data for specialized needs.
Going to write custom code
Real time (200ms latency for classic, 70ms latency for enhanced fan-
out)
You must manage scaling (shard splitting/merging)
Data storage for 1 to 7 days, replay capability, multi consumers
Use with Lambda to insert data in real-time to ElasticSearch
Kinesis Data Firehose handles loading data streams directly into AWS products
for processing. Firehose also allows for streaming to S3, OpenSearch Service, or
Redshift, where data can be copied for processing through additional services.
Fully managed, send to S3, Splunk, Redshift, ElasticSearch
Serverless data transformations with Lambda
Near real time (lowest buffer time is 1 minute)
Automated Scaling
No data storage

AWS Database Migration Service


AWS Database Migration Service (AWS DMS) is a managed migration and
replication service that helps you move your databases and analytics workloads to
AWS quickly and securely.

AWS DataSync
DataSync provides built-in security capabilities such as encryption of data
in-transit, and data integrity verification in-transit and at-rest. It optimizes
use of network bandwidth, and automatically recovers from network connectivity
failures. In addition, DataSync provides control and monitoring capabilities such
as data transfer scheduling and granular visibility into the transfer process
through Amazon CloudWatch metrics, logs, and events.

DMS vs DataSync
DataSync is for files; DMS is for databases.
DataSync is designed for continuous syncing, while DMS replication runs only
until cutover happens.
DMS does not use DataSync.

The AWS Storage Gateway


is a service connecting an on-premises software appliance with cloud-based
storage. Once the AWS Storage Gateway’s software appliance is installed on a local
host, you can mount Storage Gateway volumes to your on-premises application servers
as iSCSI devices, enabling a wide variety of systems and applications to make use
of them. Data written to these volumes is maintained on your on-premises storage
hardware while being asynchronously backed up to AWS, where it is stored in Amazon
Glacier or in Amazon S3 in the form of Amazon EBS snapshots. Snapshots are
encrypted to make sure that customers do not have to worry about encrypting
sensitive data themselves. When customers need to retrieve data, they can restore
snapshots locally, or create Amazon EBS volumes from snapshots for use with
applications running in Amazon EC2. It provides low-latency performance by
maintaining frequently accessed data on-premises while securely storing all of
your data, encrypted, in AWS.

AWS Data Pipeline vs AWS DataSync vs AWS Storage Gateway


AWS Storage Gateway
If you need to extend your on-premises storage, backup your data, or
access your data from the cloud, then AWS Storage Gateway is a good option.
AWS DataSync
If you need to copy large amounts of data to and from AWS, then AWS
DataSync is a better option.
AWS Data Pipeline
is a web service that provides a simple management system for data-
driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the
“data sources” that contain your data, the “activities” or business logic such as
EMR jobs or SQL queries, and the “schedule” on which your business logic executes.
For example, you could define a job that, every hour, runs an Amazon Elastic
MapReduce (Amazon EMR)–based analysis on that hour’s Amazon Simple Storage Service
(Amazon S3) log data, loads the results into a relational database for future
lookup, and then automatically sends you a daily summary email;
AWS Data Pipeline belongs to "Data Transfer" category of the tech stack,
while AWS Storage Gateway can be primarily classified under "Data Backup".
Features offered by AWS Data Pipeline are:
You can find (and use) a variety of popular AWS Data Pipeline tasks in
the AWS Management Console’s template section.
Hourly analysis of Amazon S3‐based log data
Daily replication of Amazon DynamoDB data to Amazon S3
Features offered by AWS Storage Gateway are:
Gateway-Cached Volumes – Gateway-Cached volumes allow you to utilize
Amazon S3 for your primary data, while retaining some portion of it locally in a
cache for frequently accessed data.
Gateway-Stored Volumes – Gateway-Stored volumes store your primary data
locally, while asynchronously backing up that data to AWS.
Data Snapshots – Gateway-Cached volumes and Gateway-Stored volumes
provide the ability to create and store point-in-time snapshots of your storage
volumes in Amazon S3.

Amazon S3 Transfer Acceleration


Amazon S3 Transfer Acceleration can speed up content transfers to and from
Amazon S3 by as much as 50-500% for long-distance transfer of larger objects.
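Clients opt in by addressing the bucket's accelerate endpoint instead of the regional one; a sketch of the naming pattern (the bucket name is a placeholder):

```python
# Transfer Acceleration routes transfers through the nearest CloudFront edge
# location; the opt-in is simply a different endpoint hostname.

def accelerate_endpoint(bucket: str) -> str:
    """Virtual-hosted-style accelerate endpoint for a bucket."""
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

print(accelerate_endpoint("my-bucket"))
# With boto3, the equivalent opt-in is:
#   Config(s3={"use_accelerate_endpoint": True})
```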

AWS Server Migration Service vs CloudEndure Migration


CloudEndure Migration
is a block-level replication tool that simplifies the process of
migrating applications from physical, virtual, and cloud-based servers to AWS.
AWS Server Migration Service
is an agentless migration service to migrate on-premises virtual
machines to AWS using virtual appliance.

Application Migration Service


is the next generation of CloudEndure Migration, and offers key features and
operational benefits that are not available with CloudEndure Migration.
AWS Application Migration Service (AWS MGN) is the recommended service to
migrate your applications to AWS.
With Application Migration Service, you can migrate your applications from
physical infrastructure, VMware vSphere, Microsoft Hyper-V, Amazon Elastic Compute
Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and other clouds to
AWS.
Application Migration Service is not yet supported in China Regions. If you
are migrating to AWS Outposts or an AWS China Region, consider using CloudEndure
Migration.
With AWS Application Migration Service you can:
Operate the service from the AWS Management Console.
Control permissions and access using AWS Identity and Access Management
(IAM).
Operate the service without a connection to the public internet.
Store your migration metadata in the same AWS Region as your migrated
instances.
Utilize an agentless replication option (for vCenter), if needed.
Use APIs that are better suited for migration-specific workflows, as
well as a CLI and SDKs.
Use Amazon CloudWatch and AWS CloudTrail to monitor AWS Application
Migration Service.
Better control how your test and cutover instances are launched using
Amazon EC2 launch templates (rather than Blueprints).
Use tags to organize your source servers and control access
permissions.
Automate modernizations to your migrated applications.
Plan and manage your application migrations.
Plan and manage your migration waves.

Migrate physical server to aws


You can use CloudEndure Migration to quickly lift-and-shift physical,
virtual, or cloud servers without compatibility issues, performance impact, or long
cutover windows. CloudEndure Migration continuously replicates your source servers
to your AWS account. Then, when you’re ready to migrate, it automatically converts
and launches your servers on AWS so you can quickly benefit from the cost savings,
productivity, resilience, and agility of the AWS Cloud.
Once your applications are running on AWS, you can leverage AWS services and
capabilities to quickly and easily replatform or refactor these applications –
which makes lift-and-shift a fast route to modernization.
With CloudEndure Migration, an agent-based solution, you can migrate legacy
applications as well as all applications and databases that run on supported
versions of Windows and Linux operating systems (OS). This includes Windows Server
versions 2003/2008/2012/2016/2019 and Linux distributions, such as CentOS, RHEL,
OEL, SUSE, Ubuntu, and Debian. CloudEndure Migration supports common databases,
including Oracle and SQL Server, and mission-critical applications such as SAP.

AWS certificate manager certificate in EC2


Public ACM certificates can be installed on Amazon EC2 instances that are
connected to a Nitro Enclave, but not to other Amazon EC2 instances.
You can use private certificates issued with Private CA with EC2 instances,
containers, and on your own servers. At this time, public ACM certificates can be
used only with specific AWS services, including AWS Nitro Enclaves.

API Gateway Regional endpoint vs Edge-optimized endpoint


Edge optimized just means that routing to your endpoint occurs from
Cloudfront edge locations and will traverse the AWS backbone from those locations.
Regional means the call will directly hit the API gateway endpoint in the deployed
region and would traverse the internet to get there.

S3 access point
S3 Access Points simplify how you manage data access for your application set
to your shared datasets on S3. You no longer have to manage a single, complex
bucket policy with hundreds of different permission rules that need to be written,
read, tracked, and audited. With S3 Access Points, you can now create application-
specific access points permitting access to shared datasets with policies tailored
to the specific application.

S3 Intelligent-Tiering
S3 Intelligent-Tiering delivers automatic storage cost savings in three low-
latency and high-throughput access tiers. For data that can be accessed
asynchronously, you can choose to activate automatic archiving capabilities within
the S3 Intelligent-Tiering storage class.

SQS visibility timeout


To prevent other consumers from processing the message again, Amazon SQS sets
a visibility timeout, a period of time during which Amazon SQS prevents all
consumers from receiving and processing the message. The default visibility timeout
for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
When using SQS as a queuing service, reading a message off the queue does
not automatically delete it. While you are processing the message, SQS waits
for the period defined as the visibility timeout before other consumers can
receive the same message again.
The best value for the visibility timeout is at least the timeout value of
the consumer process. If the consumer completes processing successfully, it
deletes the message from the queue; if it times out, the message reappears in
the queue for another consumer to pick up.
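The visibility-timeout behavior above can be modeled with a toy in-memory queue (logical clock, not real SQS):

```python
# Toy model: after a receive, the message is hidden from other consumers
# until the visibility timeout expires or the message is deleted.

class Queue:
    def __init__(self, visibility_timeout: int):
        self.timeout = visibility_timeout
        self.messages = {}        # body -> time it becomes visible again

    def send(self, body):
        self.messages[body] = 0   # visible immediately

    def receive(self, now: int):
        for body, visible_at in self.messages.items():
            if now >= visible_at:
                self.messages[body] = now + self.timeout  # hide it
                return body
        return None               # nothing visible right now

    def delete(self, body):
        self.messages.pop(body, None)  # consumer finished processing

q = Queue(visibility_timeout=30)
q.send("job-1")
print(q.receive(now=0))    # job-1 -- first consumer gets it
print(q.receive(now=10))   # None -- still invisible to other consumers
print(q.receive(now=30))   # job-1 -- timed out, so it is redelivered
```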

Pilot light environment


Pilot light is an example of active/passive failover configuration.
Pilot light – involves running core services in standby mode, and triggering
additional services as needed in case of disaster.

Warm standby
Involves running a full backup system in standby mode, with live data
replicated from the production environment.

Snowball
can't be shipped cross-region

AWS Backup
is a cost-effective, fully managed, policy-based service that simplifies data
protection at scale.
AWS Backup is an ideal solution for implementing standard backup plans for
your AWS resources across your AWS accounts and Regions. Because AWS Backup
supports multiple AWS resource types, it makes it easier to maintain and implement
a backup strategy for workloads using multiple AWS resources that need to be backed
up collectively. AWS Backup also enables you to collectively monitor a backup and
restore operation that involves multiple AWS resources.

EFS Cross-Region Replication


Replication can be configured in minutes for new or existing EFS file
systems, within a single AWS Region or across two AWS Regions in the same AWS
partition.
EFS cross-Region replication is designed to provide an RPO and RTO on the
order of 15 minutes.

AWS Application Migration Service (MGN) vs VM Import/Export vs Server
Migration Service (SMS)


The SMS service was likely deprecated by MGN, which is why you're struggling
to find information about it.
MGN is the AWS recommended service for lift-and-shift migrations unless the
target region is in China where it isn’t available. Only CloudEndure and
Import/Export services are available.
Worth noting that with MGN you can achieve a live-migration while
import/export (which pre-dates MGN) is a cold-migration.

API Gateway resource policies


You can use API Gateway resource policies to allow users from specified AWS
accounts, from specified IP ranges or CIDR blocks, or from specified VPCs or
VPC endpoints. Request limits are not part of resource policies.
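A sketch of such a resource policy, allowing invocation only from one CIDR block (the resource ARN pattern and CIDR are placeholders):

```python
import json

# Resource policy attached to the API itself; here access is limited to a
# source-IP range via the aws:SourceIp condition key.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}
print(json.dumps(policy, indent=2))
```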

API Gateway usage plans


API Gateway usage plans can limit API access and ensure that usage does not
exceed thresholds we define.

Athena
Athena supports creating tables and querying data from CSV, TSV,
custom-delimited, and JSON formats; data in Hadoop-related formats: ORC,
Apache Avro, and Parquet; and logs from Logstash, AWS CloudTrail, and Apache
web servers.

RDS Proxy
RDS Proxy is a fully-managed, highly available, and easy-to-use database
proxy feature of Amazon RDS that enables your applications to:
1) improve scalability by pooling and sharing database connections;
2) improve availability by reducing database failover times by up to
66% and preserving application connections during failovers; and
3) improve security by optionally enforcing AWS IAM authentication to
databases and securely storing credentials in AWS Secrets Manager.

AWS Network Firewall
AWS Network Firewall is a stateful, managed, network firewall and intrusion
detection and prevention service for your virtual private cloud (VPC) that you
create in Amazon Virtual Private Cloud (Amazon VPC).
With Network Firewall, you can filter traffic at the perimeter of your VPC.
This includes filtering traffic going to and coming from an internet gateway, NAT
gateway, or over VPN or AWS Direct Connect. Network Firewall uses the open source
intrusion prevention system (IPS), Suricata, for stateful inspection. Network
Firewall supports Suricata compatible rules.

AWS Key Management Service (AWS KMS) CMK (Customer Master Key)
AWS KMS is replacing the term customer master key (CMK) with AWS KMS key and
KMS key.
You cannot manage AWS managed CMKs, rotate them yourself, or change their
key policies. AWS managed CMK key policies can't be modified because they are
read-only.

Customer managed CMK


Use a customer managed CMK if you want to grant cross-account access to your
S3 objects. You can configure the policy of a customer managed CMK to allow access
from another account.
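A sketch of the key-policy statement that grants another (hypothetical) account use of the key; note the other account's own IAM policies must also allow these KMS actions:

```python
import json

# Statement to add to the customer managed key's key policy. The account ID
# 111122223333 is a placeholder for the account being granted access.
statement = {
    "Sid": "AllowUseFromOtherAccount",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": [
        "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
        "kms:GenerateDataKey*", "kms:DescribeKey",
    ],
    "Resource": "*",
}
print(json.dumps(statement, indent=2))
```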

VPC Traffic Mirroring


VPC Traffic Mirroring is an AWS feature used to copy network traffic from the
elastic network interface of an EC2 instance to a target for analysis.

Global Accelerator
Global Accelerator does not support client IP address preservation for
Network Load Balancer and Elastic IP address endpoints.

BIND
BIND is a nameserver service responsible for performing domain-name-to-IP
conversion on Linux-based DNS servers.

AWS Organizations has two available feature sets:


- All features
- Consolidated Billing features

AWS Resource Access Manager


Share the transit gateway with the entire organization by using AWS Resource
Access Manager.

Transit Gateway
Transit GW + Direct Connect GW + Transit VIF + enable SiteLink if two
different DX locations.

What is a state machine in AWS Step Functions


In Step Functions, a workflow is called a state machine, which is a series of
event-driven steps. Each step in a workflow is called a state. A Task state
represents a unit of work that another AWS service, such as AWS Lambda, performs. A
Task state can call any AWS service or API.
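A minimal state machine definition in Amazon States Language, written here as a Python dict (the Lambda ARN is a placeholder):

```python
import json

# One-step workflow: a single Task state invokes a Lambda function, then ends.
definition = {
    "Comment": "One-step workflow",
    "StartAt": "DoWork",
    "States": {
        "DoWork": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:do-work",
            "End": True,
        }
    },
}
print(json.dumps(definition))
```

The JSON string produced is what you would pass as the `definition` when creating the state machine.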

Gateway Load Balancer


Gateway Load Balancer helps you easily deploy, scale, and manage your third-
party virtual appliances. It gives you one gateway for distributing traffic across
multiple virtual appliances while scaling them up or down, based on demand.

Gateway Load Balancer endpoint


A Gateway Load Balancer endpoint is a VPC endpoint that provides private
connectivity between virtual appliances in the service provider VPC, and
application servers in the service consumer VPC. The Gateway Load Balancer is
deployed in the same VPC as that of the virtual appliances.

AWS IoT Core


AWS IoT Core is a managed cloud platform that lets connected devices easily
and securely interact with cloud applications and other devices.

Amazon S3 Glacier vs Amazon S3 Glacier Deep Archive


Amazon S3 Glacier (Flexible Retrieval) offers a low-cost storage solution
for data that is not frequently used but might require retrieval after long
periods.
Retrieval time:
Expedited: 1-5 minutes
Standard: 3-5 hours
Bulk: 5-12 hours
Storage cost:
1 GB at $0.004 per month
1 TB at $4.10 per month
S3 Glacier Deep Archive offers the same solution with a slight difference:
it stores data that is hardly ever needed again soon but must be retained in
an archive.
Retrieval time:
Standard: up to 12 hours
Bulk: up to 48 hours.
Storage cost:
1 GB at $0.00099 per month
1 TB at $1.01 per month
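The per-TB figures follow from the per-GB prices (taking 1 TB as 1,024 GB):

```python
# Sanity-check the per-TB storage costs from the quoted per-GB prices.

glacier_gb = 0.004        # USD per GB-month, S3 Glacier Flexible Retrieval
deep_archive_gb = 0.00099  # USD per GB-month, S3 Glacier Deep Archive

print(round(1024 * glacier_gb, 2))       # 4.1  (~$4.10 per TB-month)
print(round(1024 * deep_archive_gb, 2))  # 1.01 (~$1.01 per TB-month)
```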

Redshift Concurrency Scaling


https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling.html
Support virtually unlimited concurrent users and concurrent queries, with
consistently fast query performance.
When you turn on concurrency scaling, Amazon Redshift automatically adds
additional cluster capacity to process an increase in both read and write queries.
Users see the most current data, whether the queries run on the main cluster
or a concurrency-scaling cluster. You're charged for concurrency-scaling clusters
only for the time they're actively running queries.

AWS Service Catalog concepts


product
A product is a blueprint for building your AWS resources that you want
to make available for deployment on AWS along with the configuration information.
You create a product by importing a CloudFormation template, or, in case of AWS
Marketplace-based products, by copying the product to the AWS Service Catalog. A
product can belong to multiple portfolios.
portfolio
A portfolio is a collection of products, together with the
configuration information. You can use portfolios to manage the user access to
specific products. You can grant portfolio access at an IAM user, IAM group, and
IAM role level.
provisioned product
A provisioned product is a CloudFormation stack (that is, the AWS
resources that are created). When an end user launches a product, AWS Service
Catalog provisions the product in the form of a CloudFormation stack.
Constraints
Constraints control the way users can deploy a product. With launch
constraints, you can specify a role that the AWS Service Catalog can assume to
launch a product from the portfolio.

AWS Transfer Family


AWS Transfer Family is a secure transfer service that enables you to
transfer files into and out of AWS storage services.
AWS Transfer Family supports transferring data from or to the following
AWS storage services.
Amazon Simple Storage Service (Amazon S3) storage.
Amazon Elastic File System (Amazon EFS) Network File System (NFS)
file systems.
AWS Transfer Family supports transferring data over the following
protocols:
Secure Shell (SSH) File Transfer Protocol (SFTP): version 3
File Transfer Protocol Secure (FTPS)
File Transfer Protocol (FTP)
Applicability Statement 2 (AS2)

AWS Snowcone
AWS Snowcone is a portable, rugged, and secure device for edge computing and
data transfer. You can use a Snowcone device to collect, process, and move data to
the AWS Cloud, either offline by shipping the device to AWS, or online by using AWS
DataSync.
Snowcone is available in two flavors:
Snowcone – Snowcone has two vCPUs, 4 GB of memory, and 8 TB of hard
disk drive (HDD) based storage.
Snowcone SSD – Snowcone SSD has two vCPUs, 4 GB of memory, and 14 TB of
solid state drive (SSD) based storage.
Use Cases
For edge computing applications, to collect data, process the data to
gain immediate insight, and then transfer the data online to AWS.
To transfer data that is continuously generated by sensors or
machines online to AWS in a factory or at other edge locations.
To distribute media, scientific, or other content from AWS
storage services to your partners and customers.
To aggregate content by transferring media, scientific, or other
content from your edge locations to AWS.
For one-time data migration scenarios where your data is ready to
be transferred, Snowcone offers a quick and low-cost way to transfer up to 8 TB or
14 TB of data to the AWS Cloud by shipping the device back to AWS.

AWS Snowball
With AWS Snowball (Snowball), you can transfer hundreds of terabytes or
petabytes of data between your on-premises data centers and Amazon Simple Storage
Service (Amazon S3). It mainly uses a secure storage device for physical
transportation.
AWS Snowball devices
Snowcone
It is a small device used for edge computing, storage, and data
transfer.
You can transfer up to 8 TB with a single AWS Snowcone device and
can transfer larger data sets with multiple devices, either in parallel or
sequentially.
Snowball
AWS Snowball is a data migration and edge computing device that
comes in two device options:
Compute Optimized and Storage Optimized.
Snowball Edge Storage Optimized
devices provide 40 vCPUs of compute capacity coupled with
80 terabytes of usable block or Amazon S3-compatible object storage.
It is well-suited for local storage and large-scale data
transfer.
Snowmobile
It is the largest option. AWS Snowmobile moves up to 100 PB of
data in a 45-foot long ruggedized shipping container and is ideal for
multi-petabyte or exabyte-scale digital media migrations and data center
shutdowns.
A Snowmobile arrives at the customer site and appears as a
network-attached data store for more secure, high-speed data transfer.

AWS AppSync
AWS AppSync allows your applications to access exactly the data they need.
Create a flexible API to securely access, manipulate, and combine data from
multiple sources.
Pay only for requests to your API and for real-time messages delivered to
connected clients.

External launch
External launch type (doc from AWS): The External launch type is used to run
your containerized applications on an on-premises server or virtual machine
(VM) that you register to your Amazon ECS cluster and manage remotely.

AWS Savings plan


Savings Plans are a flexible pricing model that offer low prices on Amazon
EC2, AWS Lambda, and AWS Fargate usage, in exchange for a commitment to a
consistent amount of usage (measured in $/hour) for a 1 or 3 year term. When you
sign up for a Savings Plan, you will be charged the discounted Savings Plans price
for your usage up to your commitment.
AWS offers two types of Savings Plans:
Compute Savings Plans
Compute Savings Plans provide the most flexibility and help to
reduce your costs by up to 66%. These plans automatically apply to EC2 instance
usage regardless of instance family, size, AZ, Region, OS or tenancy, and also
apply to Fargate or Lambda usage. For example, with Compute Savings Plans, you can
change from C4 to M5 instances, shift a workload from EU (Ireland) to EU (London),
or move a workload from EC2 to Fargate or Lambda at any time and automatically
continue to pay the Savings Plans price.
EC2 Instance Savings Plans
EC2 Instance Savings Plans provide the lowest prices, offering
savings up to 72% in exchange for commitment to usage of individual instance
families in a Region (e.g. M5 usage in N. Virginia). This automatically reduces
your cost on the selected instance family in that region regardless of AZ, size, OS
or tenancy. EC2 Instance Savings Plans give you the flexibility to change your
usage between instances within a family in that region. For example, you can move
from c5.xlarge running Windows to c5.2xlarge running Linux and automatically
benefit from the Savings Plan prices.

Apache Parquet
Apache Parquet is an incredibly versatile open-source columnar storage format.
It is 2x faster to unload and takes up 6x less storage in Amazon S3 compared to
text formats. It also allows you to save the Parquet files in Amazon S3 as an open
format with all data transformation and enrichment carried out in Amazon Redshift.
Parquet is a self-describing format; the schema or structure is embedded
in the data itself, so it is not possible to track data changes in the
file. To track the changes, you can use Amazon Athena to track object metadata
across Parquet files, as it provides an API for metadata.

Trusted Advisor
Trusted Advisor can only perform assessments and make recommendations.

GuardDuty
GuardDuty does require a delegated administrator account to be set up in the
organization in AWS Organizations before it can be enabled.
Creating a delegated administrator account for GuardDuty is a necessary step
in order to enable GuardDuty, but it alone is not sufficient to
maximize scalability for the security team.
The security team will also need to be notified of any security issues that
GuardDuty detects, and that is done by subscribing the security team to an SNS
topic.

Q: How is AWS Global Accelerator different from Amazon CloudFront?


AWS Global Accelerator and Amazon CloudFront are separate services that use
the AWS global network and its edge locations around the world. CloudFront improves
performance for both cacheable content (such as images and videos) and dynamic
content (such as API acceleration and dynamic site delivery).
Global Accelerator improves performance for a wide range of applications over
TCP or UDP by proxying packets at the edge to applications running in one or more
AWS Regions.
Global Accelerator is a good fit for non-HTTP use cases, such as gaming
(UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that
specifically require static IP addresses or deterministic, fast regional failover.
Both services integrate with AWS Shield for DDoS protection.

VPC prefix list


https://www.chiwaichan.co.nz/2022/05/28/leveraging-aws-prefix-lists/
A prefix list is a collection of one or more IP CIDR blocks used to simplify
the configuration and management of security groups and routing tables. There are
customer-managed prefix lists and AWS-managed prefix lists.
AWS-managed Prefix Lists:
as the name indicates these lists are managed by AWS, and they are used
to maintain a set of IP address ranges for AWS services, e.g. S3, DynamoDB and
CloudFront.
Customer-managed Prefix Lists:
these are created and maintained by anyone who has access to the AWS
Console, AWS APIs or AWS SDKs.
A prefix list can be created through the AWS Console, AWS APIs/SDKs, or Terraform.
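As a sketch of the SDK route, the request for a customer-managed prefix list can be built like this, assuming boto3's EC2 `create_managed_prefix_list` API; the name and CIDR ranges are illustrative, not values from these notes:

```python
# Sketch: building the request for a customer-managed prefix list.
# The prefix list name and CIDRs below are placeholder assumptions.
def build_prefix_list_params(name, cidrs, max_entries=None):
    """Return the kwargs you would pass to ec2.create_managed_prefix_list."""
    return {
        "PrefixListName": name,
        "AddressFamily": "IPv4",
        # Leave head-room so entries can be added later without resizing.
        "MaxEntries": max_entries or len(cidrs) + 5,
        "Entries": [
            {"Cidr": cidr, "Description": f"range {i}"}
            for i, cidr in enumerate(cidrs, start=1)
        ],
    }

params = build_prefix_list_params("office-ranges", ["10.0.0.0/24", "10.0.1.0/24"])
# With credentials configured, you would then call:
#   import boto3
#   boto3.client("ec2").create_managed_prefix_list(**params)
```

The resulting prefix list ID (pl-...) can then be referenced in security group rules and route tables instead of repeating the same CIDR blocks everywhere.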

IP access control group


An IP access control group acts as a virtual firewall that controls the IP
addresses from which users are allowed to access their WorkSpaces.
To specify the CIDR address ranges, add rules to your IP access control
group, and then associate the group with your directory.
You can associate each IP access control group with one or more directories.
You can create up to 100 IP access control groups per Region per AWS account.

However, you can only associate up to 25 IP access control groups with a
single directory.

Q: How do I control which Amazon Virtual Private Clouds (VPCs) can communicate with
each other?
You can segment your network by creating multiple route tables in an AWS
Transit Gateway and associate Amazon VPCs and VPNs to them. This will allow you to
create isolated networks inside an AWS Transit Gateway similar to virtual routing
and forwarding (VRFs) in traditional networks. The AWS Transit Gateway will have a
default route table. The use of multiple route tables is optional.
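The segmentation idea above can be sketched as one route table per isolation group, assuming boto3's EC2 `create_transit_gateway_route_table` and `associate_transit_gateway_route_table` APIs; all IDs are placeholders and the requests are built rather than executed:

```python
# Sketch: isolating VPC attachments with separate TGW route tables,
# similar to VRFs. IDs and group names are placeholder assumptions.
def segmentation_plan(tgw_id, groups):
    """Return, per group, a create-route-table request plus the
    association requests for that group's attachments."""
    plan = []
    for group, attachment_ids in groups.items():
        plan.append({
            "create": {
                "TransitGatewayId": tgw_id,
                "TagSpecifications": [{
                    "ResourceType": "transit-gateway-route-table",
                    "Tags": [{"Key": "segment", "Value": group}],
                }],
            },
            # Each attachment gets associated with its group's table only.
            "associate": [{"TransitGatewayAttachmentId": a}
                          for a in attachment_ids],
        })
    return plan

plan = segmentation_plan("tgw-123", {"prod": ["tgw-attach-a"],
                                     "dev": ["tgw-attach-b"]})
```

Attachments associated with different route tables cannot reach each other unless you explicitly propagate or add routes between the tables.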

Spread placement group


A spread placement group is a group of instances that are each placed on
distinct hardware. Spread placement groups are recommended for applications that
have a small number of critical instances that should be kept separate from each
other.
Amazon Aurora Serverless v1
Amazon Aurora Serverless v1 (Amazon Aurora Serverless version 1) is an on-
demand autoscaling configuration for Amazon Aurora. An Aurora Serverless v1 DB
cluster is a DB cluster that scales compute capacity up and down based on your
application's needs. This contrasts with Aurora provisioned DB clusters, for which
you manually manage capacity. Aurora Serverless v1 provides a relatively simple,
cost-effective option for infrequent, intermittent, or unpredictable workloads. It
is cost-effective because it automatically starts up, scales compute capacity to
match your application's usage, and shuts down when it's not in use.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-
serverless.html

AWS Identity and Access Management (IAM) Access Analyzer


AWS Identity and Access Management (IAM) Access Analyzer generates
comprehensive findings to help you identify resources that grant public and cross-
account access.

Amazon EKS Anywhere


Amazon EKS Anywhere helps to simplify the creation and operation of on-
premises Kubernetes clusters while automating cluster management, so that you can
reduce your support costs and avoid the maintenance of redundant open-source and
third-party tools.

Amazon ECS Anywhere


Amazon ECS Anywhere provides support for registering an external instance
such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster.
External instances are optimized for running applications that generate
outbound traffic or process data.
If your application requires inbound traffic, the lack of Elastic Load
Balancing support makes running these workloads less efficient.
Amazon ECS added a new EXTERNAL launch type that you can use to create
services or run tasks on your external instances.

S3 Replication Time Control


S3 Replication Time Control (S3 RTC) helps you meet compliance or business
requirements for data replication and provides visibility into Amazon S3
replication times. S3 RTC replicates most objects that you upload to Amazon S3 in
seconds, and 99.99 percent of those objects within 15 minutes.
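S3 RTC is enabled per replication rule. A minimal sketch of the configuration you would pass to `put_bucket_replication`, with a placeholder role ARN and bucket ARN; the field layout follows the boto3 S3 API but is worth verifying against the docs:

```python
# Sketch: an S3 replication rule with Replication Time Control enabled.
# Role and bucket ARNs are placeholder assumptions.
def rtc_replication_config(role_arn, dest_bucket_arn):
    return {
        "Role": role_arn,
        "Rules": [{
            "ID": "rtc-rule",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": dest_bucket_arn,
                # RTC SLA: 99.99% of objects replicated within 15 minutes.
                "ReplicationTime": {"Status": "Enabled",
                                    "Time": {"Minutes": 15}},
                # Replication metrics are required alongside RTC.
                "Metrics": {"Status": "Enabled",
                            "EventThreshold": {"Minutes": 15}},
            },
        }],
    }

cfg = rtc_replication_config("arn:aws:iam::111122223333:role/replication",
                             "arn:aws:s3:::destination-bucket")
```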

Q 44 Explanation
Amazon RDS Multi-AZ Deployments -
Amazon RDS Multi-AZ deployments provide enhanced availability and durability
for Database (DB) Instances, making them a natural fit for production database
workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically
creates a primary DB Instance and synchronously replicates the data to a standby
instance in a different Availability Zone (AZ). Each AZ runs on its own physically
distinct, independent infrastructure, and is engineered to be highly reliable. In
case of an infrastructure failure (for example, instance hardware failure, storage
failure, or network disruption), Amazon RDS performs an automatic failover to the
standby, so that you can resume database operations as soon as the failover is
complete. Since the endpoint for your DB Instance remains the same after a
failover, your application can resume database operation without the need for
manual administrative intervention.

Enhanced Durability

Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize
synchronous physical replication to keep data on the standby up-to-date with the
primary. Multi-AZ deployments for the SQL Server engine use synchronous logical
replication to achieve the same result, employing SQL Server-native Mirroring
technology. Both approaches safeguard your data in the event of a DB Instance
failure or loss of an Availability Zone.
If a storage volume on your primary fails in a Multi-AZ deployment, Amazon
RDS automatically initiates a failover to the up-to-date standby. Compare this to a
Single-AZ deployment: in case of a Single-AZ database failure, a user-
initiated point-in-time-restore operation will be required. This operation can take
several hours to complete, and any data updates that occurred after the latest
restorable time (typically within the last five minutes) will not be available.

Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built
for database workloads. Amazon Aurora automatically replicates your volume six
ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant,
transparently handling the loss of up to two copies of data without affecting
database write availability and up to three copies without affecting read
availability. Amazon Aurora storage is also self-healing. Data blocks and disks are
continuously scanned for errors and replaced automatically.

Increased Availability
You also benefit from enhanced database availability when running Multi-AZ
deployments. If an Availability Zone failure or DB Instance failure occurs, your
availability impact is limited to the time automatic failover takes to complete:
typically under one minute for Amazon Aurora and one to two minutes for other
database engines (see the RDS FAQ for details).
The availability benefits of Multi-AZ deployments also extend to planned
maintenance and backups. In the case of system upgrades like OS patching or DB
Instance scaling, these operations are applied first on the standby, prior to
the automatic failover. As a result, your availability impact is, again, only the
time required for automatic failover to complete.
Unlike Single-AZ deployments, I/O activity is not suspended on your primary
during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL
engines, because the backup is taken from the standby. However, note that you may
still experience elevated latencies for a few minutes during backups for Multi-
AZ deployments.
On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-
AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you
have created in any of three Availability Zones. If no Amazon Aurora Replicas have
been provisioned, in the case of a failure, Amazon RDS will attempt to create a new
Amazon Aurora DB instance for you automatically.

Q 101
The EBS volumes attached to an EC2 instance always have to remain in the same
Availability Zone as the instance. A likely reason is that EBS volumes live
outside the host machine and have to be connected over the network; if the
volumes were outside the Availability Zone, there could be latency issues and
subsequent performance degradation.
What you can do in such a scenario is take a snapshot of the EBS volume (a
snapshot incrementally captures the state of your EBS volume and stores it in an
S3 bucket, which incurs storage charges). After that you have two options: create
an EBS volume from the snapshot in your desired Availability Zone, or create an
AMI from the snapshot and launch a new EC2 instance from it in the desired
Availability Zone.
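The snapshot-then-recreate flow can be sketched as the two boto3 EC2 calls involved; all IDs are placeholders, and the requests are built rather than executed:

```python
# Sketch of moving an EBS volume to another AZ via a snapshot.
# Each dict holds the kwargs for the corresponding boto3 EC2 call;
# volume/snapshot IDs and the AZ are placeholder assumptions.
def move_volume_requests(volume_id, target_az, snapshot_id="snap-PLACEHOLDER"):
    return {
        # Step 1: snapshot the source volume (stored in S3, billed per GB).
        "create_snapshot": {
            "VolumeId": volume_id,
            "Description": f"move {volume_id} to {target_az}",
        },
        # Step 2: create a new volume from that snapshot in the target AZ.
        "create_volume": {
            "SnapshotId": snapshot_id,
            "AvailabilityZone": target_az,
        },
    }

reqs = move_volume_requests("vol-0abc", "us-east-1b")
```

In a real run you would wait for the snapshot to reach the `completed` state before issuing the `create_volume` call with its actual snapshot ID.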

Q 113
Q: What types of licensing options are available with Amazon RDS for Oracle?
There are two types of licensing options available for using Amazon RDS for Oracle:
Bring Your Own License (BYOL): In this licensing model, you can use your existing
Oracle Database licenses to run Oracle deployments on Amazon RDS. To run a DB
instance under the BYOL model, you must have the appropriate Oracle Database
license (with Software Update License & Support) for the DB instance class and
Oracle Database edition you wish to run. You must also follow Oracle's policies for
licensing Oracle Database software in the cloud computing environment. DB instances
reside in the Amazon EC2 environment, and Oracle's licensing policy for Amazon EC2
is located here.
License Included: In the "License Included" service model, you do not need
separately purchased Oracle licenses; the Oracle Database software has been
licensed by AWS. "License Included" pricing is inclusive of software, underlying
hardware resources, and Amazon RDS management capabilities.

Q 720
AWS API Gateway Endpoint types:
• An API endpoint type refers to the hostname of the API. The API
endpoint type can be edge-optimized, regional, or private, depending on where the
majority of your API traffic originates from. An edge-optimized API endpoint is
best for geographically distributed clients. API requests are routed to the nearest
CloudFront Point of Presence (POP). This is the default endpoint type for API
Gateway REST APIs. A regional API endpoint is intended for clients in the same
region. When a client running on an EC2 instance calls an API in the same region,
or when an API is intended to serve a small number of clients with high demands, a
regional API reduces connection overhead. A private API endpoint is an API endpoint
that can only be accessed from your Amazon Virtual Private Cloud (VPC) using an
interface VPC endpoint, which is an endpoint network interface (ENI) that you
create in your VPC.
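A small sketch of selecting the endpoint type when creating a REST API, assuming boto3's API Gateway `create_rest_api` parameter shape; the API name is made up:

```python
# Sketch: choosing an API Gateway endpoint type at creation time.
# The API name is a placeholder assumption.
def rest_api_params(name, endpoint_type):
    """Return the kwargs for apigateway.create_rest_api."""
    if endpoint_type not in ("EDGE", "REGIONAL", "PRIVATE"):
        raise ValueError(f"unknown endpoint type: {endpoint_type}")
    return {
        "name": name,
        # EDGE is the default for REST APIs; REGIONAL suits same-region
        # clients; PRIVATE requires an interface VPC endpoint.
        "endpointConfiguration": {"types": [endpoint_type]},
    }

params = rest_api_params("orders-api", "REGIONAL")
```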

Q 895
Q: Can I change my file system’s storage capacity and throughput capacity?
A: Yes, you can increase the storage capacity, and increase or decrease the
throughput capacity of your file system – while continuing to use it – at any time
by clicking “Update storage" or "Update throughput” in the Amazon FSx Console, or
by calling “update-file-system” in the AWS CLI/API and specifying the desired
level.

Q 551
D. Create a master account for billing using Organizations, and create each team's
account from that master account. Create a security
account for logs and cross-account access. Apply service control policies on each
account, and grant the Security team cross-account access
to all accounts. Security will create IAM policies for each account to maintain
least privilege access.

CloudWatch Alarms are based on metrics, not events/actions (that's CloudWatch
Events).

Q 630
you'd have a limit of 5,000 TGW attachments (can be increased), or 10k static
routes per TGW (one for each VPC CIDR), or 50Gbps throughput, or the VPN throughput
of your firewalls.

AWS CodeBuild is a fully managed continuous integration service that compiles
source code, runs tests, and produces software packages that are ready to deploy.

Q 640
Create a portfolio for each business unit and add products to the portfolios using
AWS CloudFormation in AWS Service Catalog.

Q 672
You cannot use an SSM document to scan.

Administrators use deny list SCPs in the root of the organization to manage access
to restricted services.

"Data access patterns are unpredictable" best fits S3 Intelligent-Tiering.

Q 688
Multi-master is a regional feature, so there is no cross-Region multi-master for Aurora yet.

Q 692
We cannot modify the IPv4 CIDR of a subnet, so we need to delete and recreate it.

Why use Transfer Acceleration?


You might want to use Transfer Acceleration on a bucket for various
reasons:
1. Your customers upload to a centralized bucket from all over the
world.
2. You transfer gigabytes to terabytes of data on a regular basis
across continents.
3. You can't use all of your available bandwidth over the internet when
uploading to Amazon S3.

When you create an account, AWS Organizations initially assigns a long (64
characters), complex, randomly generated password to the root user. You can't
retrieve this initial password. To access the account as the root user for the
first time, you must go through the process for password recovery.

AWS Site-to-Site VPN


It is either On prem - TGW - VPC attachment OR On prem - VPN - VPC
attachment.

Web ACL
To avoid affecting legitimate traffic, set the rule action to Count first and review the matched requests before switching to Block.

Amazon DLM features:


Automated snapshot and AMI creation:
Create a policy that automates the creation, retention, and deletion of EBS
snapshots and EBS-backed AMIs.

Fast snapshot restore integration:


Automate the creation of snapshots that are enabled for fast snapshot
restore. Fast snapshot restore enables you to restore volumes that are fully
initialized at creation and instantly deliver all of their provisioned performance.

Built-in cross-Region copy:


Automatically copy snapshots that are created by a lifecycle policy to up to
three AWS Regions.

Automated cross-account snapshot copy:


Use cross-account sharing in conjunction with a cross-account copy event
policy to automatically share and copy snapshots created by a policy across
accounts.
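A hedged sketch of a DLM policy body covering the automated-snapshot and cross-Region-copy features above; the role ARN, tags, schedule, and target Region are assumptions, and the field names should be checked against the `create_lifecycle_policy` documentation:

```python
# Sketch: a minimal DLM lifecycle policy for daily EBS snapshots of
# tagged volumes, with a cross-Region copy. All values are placeholders.
def snapshot_policy(role_arn, target_region):
    """Return the kwargs you would pass to dlm.create_lifecycle_policy."""
    return {
        "ExecutionRoleArn": role_arn,
        "Description": "daily snapshots of tagged volumes",
        "State": "ENABLED",
        "PolicyDetails": {
            "ResourceTypes": ["VOLUME"],
            # Only volumes carrying this tag are covered by the policy.
            "TargetTags": [{"Key": "backup", "Value": "true"}],
            "Schedules": [{
                "Name": "daily",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                               "Times": ["03:00"]},
                # Keep the seven most recent snapshots.
                "RetainRule": {"Count": 7},
                "CrossRegionCopyRules": [{"TargetRegion": target_region,
                                          "Encrypted": False}],
            }],
        },
    }

policy = snapshot_policy("arn:aws:iam::111122223333:role/dlm-role",
                         "eu-west-1")
```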



CloudEndure Disaster Recovery enables organizations to quickly and easily shift
their disaster recovery strategy to AWS from existing physical or virtual data
centers, private clouds, or other public clouds, in addition to supporting cross-
region / cross-AZ disaster recovery in AWS.

Q 407
The maximum quota is 125 peering connections per VPC.
Virtual private gateways per Region is 5

Q 454
Elasticsearch (OpenSearch) cannot be used as a source for DMS; it can only be used as a target.

Amazon Mechanical Turk (MTurk)


Amazon Mechanical Turk (MTurk) is a crowdsourcing marketplace that makes it
easier for individuals and businesses to outsource their processes and jobs to a
distributed workforce who can perform these tasks virtually. This could include
anything from conducting simple data validation and research to more subjective
tasks like survey participation, content moderation, and more. MTurk enables
companies to harness the collective intelligence, skills, and insights from a
global workforce to streamline business processes, augment data collection and
analysis, and accelerate machine learning development.

Amazon Simple Workflow (Amazon SWF)


Amazon Simple Workflow (Amazon SWF) is a task coordination and state
management service for cloud applications.

Placement group
Within the same Region, instances can be moved into a placement group without
terminating them (stop, modify, and restart).
If the target is a different Region, the instances need to be terminated and relaunched.

Q 503
The default AWS Organizations policy for a new organization is "FullAWSAccess", set on each OU. It
gives full access to every operation. Users from the master account can assume a role (set
during the invitation process) in each member account to get full admin access.

Lambda@Edge
Lambda@Edge does not cache data. Lambda@Edge is a feature of Amazon
CloudFront that lets you run code closer to users of your application, which
improves performance and reduces latency. With Lambda@Edge, you don't have to
provision or manage infrastructure in multiple locations around the world.

S3 event notifications

Q 526
An ASG doesn't support WAF directly; it needs an ALB or CloudFront in front.

Q 563
Between executionRoleArn (option C) and taskRoleArn (D), only the latter is used to
interact with DynamoDB. The former is used to pull container images or write logs to
CloudWatch.

Q 567
Optimal performance with Athena is achieved with columnar storage and partitioning
the data.

Q 569
DynamoDB does not support strongly consistent reads ACROSS REGIONS.

Q 589
Elastic Beanstalk is a regional service. It CANNOT provide an "automatically scaling web server
environment that spans two separate Regions".

Q 605
Lambda aliases
Lambda aliases are a way to point to multiple versions of a Lambda function.
This can be useful for testing new versions of a function before rolling them out
to production, or for running multiple versions of a function in parallel to test
different approaches.
When you create a Lambda alias, you can specify a routing configuration. This
configuration determines how traffic is routed between the alias and the function
versions it points to.
There are two types of routing configurations:
Static routing: With static routing, you specify a fixed percentage of
traffic that is routed to each function version.
Canary routing: With canary routing, you specify a starting percentage
of traffic that is routed to a new function version. Over time, the percentage of
traffic routed to the new function version increases, while the percentage of
traffic routed to the old function version decreases.
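The weighted routing above can be sketched as the alias parameters, assuming boto3's Lambda `create_alias`/`update_alias` shape; the function name and version numbers are placeholders:

```python
# Sketch: a weighted Lambda alias. With this config, ~90% of invocations
# hit the stable version and ~10% hit the canary version.
# Function name and versions are placeholder assumptions.
def canary_alias_params(function_name, stable_version, canary_version,
                        canary_weight):
    return {
        "FunctionName": function_name,
        "Name": "live",
        # The primary version receives the remaining traffic share.
        "FunctionVersion": stable_version,
        # AdditionalVersionWeights maps the canary version to its share.
        "RoutingConfig": {
            "AdditionalVersionWeights": {canary_version: canary_weight},
        },
    }

params = canary_alias_params("checkout", "1", "2", 0.1)
```

In practice the gradual traffic shift described above is usually driven by AWS CodeDeploy, which updates these weights over time until the new version carries all traffic.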

Q 617
AWS SSO does not support mobile apps.

Q 648
Aurora Global database supports one master only, so other regions do not support
write.

Q 696
We cannot use a "public" certificate from AWS Certificate Manager on EC2; ACM public certificates cannot be exported and can only be deployed to integrated services such as ELB, CloudFront, and API Gateway.

Q 697
Transit Gateway doesn't support routing between VPC with identical CIDRs

Q 704
To allow an IAM user or role to connect to your DB cluster, you must create an IAM
policy. After that, you attach the policy to an IAM user or role.

Q 711
The CloudWatch embedded metric format is a JSON specification used to instruct
CloudWatch Logs to automatically extract metric values embedded in structured log
events. You can use CloudWatch to graph and create alarms on the extracted metric
values.
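A minimal sketch of an embedded-metric-format log event; the namespace, dimension, and values are illustrative, but the `_aws` structure follows the EMF specification:

```python
import json
import time

# Sketch: one structured log line in the CloudWatch embedded metric
# format. CloudWatch Logs extracts "Latency" as a metric in the
# (illustrative) "MyApp" namespace, keyed by the "Operation" dimension.
def emf_event(latency_ms, operation):
    return {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "MyApp",
                "Dimensions": [["Operation"]],
                "Metrics": [{"Name": "Latency", "Unit": "Milliseconds"}],
            }],
        },
        # Dimension and metric values live at the top level of the event.
        "Operation": operation,
        "Latency": latency_ms,
    }

# Writing this JSON line to a CloudWatch Logs stream (e.g. from Lambda
# stdout) is enough for the metric to be extracted automatically.
line = json.dumps(emf_event(42, "GetItem"))
```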

Q 740
EFS Cross-Region Replication

Q 752
API gateway usage plans

Q 773
S3 Replication Time Control is designed to replicate 99.99% of objects within 15
minutes after upload, with the majority of those new objects replicated in seconds.

Q 776
Global Accelerator does not support client IP address preservation for Network Load
Balancer and Elastic IP address endpoints.

Q 789
DynamoDB On-demand mode is a good option if any of the following are true:
You create new tables with unknown workloads.
You have unpredictable application traffic.
You prefer the ease of paying for only what you use

Q 791
AWS Config is for monitoring and alert, it doesn't prevent.

Q 796
1. As WAF sits at the front end, it is responsible for threats, not the ALB or EC2; by the
time traffic reaches them it is too late for threat analysis.
2. Inspector doesn't analyze ALBs, only EC2 and ECR.
3. WAF logging integrates with S3, Kinesis Data Firehose, and CloudWatch, whereas ALB access
logs only go to S3 (no KDF).
4. WAF Marketplace rules help with threat detection.

Q 803
Resources can only be shared if the accounts are within the same organization.
Q 813
You can't attach VPCs from multiple Regions to one Transit Gateway.

Q 824
You can't associate more than one SSL or Transport Layer Security (TLS) certificate
to an individual CloudFront distribution. However, certificates provided by AWS
Certificate Manager (ACM) support up to 10 subject alternative names, including
wildcards. To turn on SSL or HTTPS for multiple domains served through one
CloudFront distribution, assign a certificate from ACM that includes all the
required domains.

Q 832
Peering connections allow connectivity between VPCs but do not provide the ability
to route internet traffic through a central egress VPC.

Q 834
There is no SQS agent; you need to use the SDK to build producers and consumers.

Q 843
conditional access -> attribute-based access controls (ABACs).

Q 857
S3 Glacier Deep Archive bulk retrieval time is max 48 hours.

Q 919
A Client VPN endpoint does not require additional resources such as a transit
gateway or additional VPN connections.

Q 924
SNS fan-out pattern

Q 940
If an API request exceeds the API request rate for its category, the request
returns the RequestLimitExceeded error code. To prevent this error, ensure that
your application doesn't retry API requests at a high rate. You can do this by
using care when polling and by using exponential backoff retries
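The backoff advice can be sketched as a small, self-contained retry helper; `RuntimeError` stands in for a throttling error such as `RequestLimitExceeded`, and the flaky call is a made-up demo:

```python
import random
import time

# Sketch: exponential backoff with jitter for retrying throttled calls.
def call_with_backoff(fn, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:  # stand-in for a throttling error
            if attempt == max_attempts - 1:
                raise
            # Double the wait each attempt, plus jitter so many clients
            # retrying at once don't hammer the API in lockstep.
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            sleep(delay)

# Demo: a call that is throttled twice, then succeeds on the third try.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("RequestLimitExceeded")
    return "ok"

result = call_with_backoff(flaky, sleep=lambda _: None)  # no real sleeping here
```

In real code, the AWS SDKs already implement this pattern internally; tune their retry configuration before hand-rolling your own loop.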

Q 948
Increasing the health check grace period for the Auto Scaling group would give the
instances more time to run the user data scripts and download critical content from
the S3 bucket, which would prevent them from being terminated due to health check
failures

Q 953
If you receive a capacity error when launching an instance in a placement group
that already has running instances, stop and start all of the instances in the
placement group, and try the launch again.

Q 964
The Kinesis Producer Library (KPL) is aimed at helping developers achieve high write
throughput into a Kinesis data stream.

Q 1005
On demand capacity is much more expensive than provisioned/reserved capacity

Q 491
Cloudwatch does not provide cost-savings suggestions.

Q 511
Cache control should be done at CloudFront, not at the API stage.
Lambda@Edge does not cache data.


Bonus Q
A financial services company uses Amazon RDS for Oracle with Transparent Data
Encryption (TDE). The company is required to encrypt its data at rest at all times.
The key required to decrypt the data has to be highly available, and access to the
key must be limited. As a regulatory requirement, the company must have the ability
to rotate the encryption key on demand. The company must be able to make the key
unusable if any potential security breaches are spotted. The company also needs to
accomplish these tasks with minimum overhead.
What should the database administrator use to set up the encryption to meet these
requirements?

A. AWS CloudHSM
B. AWS Key Management Service (AWS KMS) with an AWS managed key
C. AWS Key Management Service (AWS KMS) with server-side encryption
D. AWS Key Management Service (AWS KMS) CMK with customer-provided material

Exam
-----
sharing ec2 with AWS RAM
storage version 4 (FSX?)
json supported db? / which db or service dont use json?
endpoint for code commit / code artifact
