Solution Architect Notes
gp2
baseline performance scales linearly at 3 IOPS per GiB of volume size.
A 2 TB gp2 volume therefore has a 6,000 IOPS baseline; adding another 1 TB increases
it by another 3,000 IOPS.
Increasing from 2 TB (6,000 IOPS) to 3 TB (9,000 IOPS) is a difference of 1 TB (3,000 IOPS).
gp3
baseline performance is a flat 3,000 IOPS and 125 MiB/s regardless of volume size.
You can provision up to 16,000 IOPS and 1,000 MiB/s, with a maximum of 500 IOPS per
GiB of volume size.
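A minimal boto3 sketch of creating a gp3 volume with provisioned IOPS and throughput,
independent of volume size (the Region, AZ, and sizes are hypothetical values, not from
any particular question):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical Region

# gp3 gives 3,000 IOPS / 125 MiB/s baseline regardless of size; here we provision more.
resp = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # hypothetical AZ
    Size=500,                        # GiB
    VolumeType="gp3",
    Iops=6000,                       # up to 16,000, max 500 IOPS per GiB
    Throughput=500,                  # MiB/s, up to 1,000
)
print(resp["VolumeId"])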
Amazon Inspector
Amazon Inspector is an automated security assessment service that helps you
test the network accessibility of your Amazon EC2 instances and the security state
of your applications running on the instances.
Lambda@Edge
There are several benefits to using Lambda@Edge for authorization operations.
First, performance is improved by running the authorization function using
Lambda@Edge closest to the viewer, reducing latency and response time to the viewer
request. The load on your origin servers is also reduced by offloading CPU-
intensive operations such as verification of JSON Web Token (JWT) signatures.
Finally, there are security benefits such as filtering out unauthorized requests
before they reach your origin infrastructure.
https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-how-
to-use-lambdaedge-and-json-web-tokens-to-enhance-web-application-security/
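A minimal sketch of a CloudFront viewer-request Lambda@Edge handler that rejects requests
without a valid JWT, as the blog post describes. It assumes the PyJWT library is bundled
with the function; the public key, header handling, and algorithm are simplified
placeholders, not the blog's exact implementation.

import jwt  # PyJWT, assumed to be packaged with the function

PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"  # placeholder

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})
    auth = headers.get("authorization", [{}])[0].get("value", "")
    token = auth.replace("Bearer ", "")
    try:
        jwt.decode(token, PUBLIC_KEY, algorithms=["RS256"])
    except jwt.PyJWTError:
        # Block unauthorized requests at the edge, before they reach the origin.
        return {"status": "401", "statusDescription": "Unauthorized"}
    return request  # valid token: forward the request to the origin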
Service Catalog
AWS Service Catalog allows organizations to create and manage catalogs of
IT services that are approved for use on AWS. These IT services can
include everything from virtual machine images, servers, software, and databases to
complete multi-tier application architectures. AWS Service Catalog allows you to
centrally manage deployed IT services and your applications, resources, and
metadata. This helps you achieve consistent governance and meet your compliance
requirements, while enabling users to quickly deploy only the approved IT services
they need.
An administrator can create templates within a service catalog portfolio that
can be selected by an end user for deployment. The template includes the resources
and dependencies required by the application, so the user can deploy the
application themselves without necessarily knowing what resources need to be provisioned to
support the application. The template will also contain the security policies
required to ensure the correct permissions are granted for the end user when the
application is launched.
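A minimal boto3 sketch of the end-user side of this self-service flow: launching an
approved product from a portfolio without knowing the underlying CloudFormation
resources. The product ID, artifact ID, and parameter names are hypothetical.

import boto3

sc = boto3.client("servicecatalog")

sc.provision_product(
    ProductId="prod-abc123",               # hypothetical product
    ProvisioningArtifactId="pa-def456",    # hypothetical version/artifact
    ProvisionedProductName="team-a-webapp",
    ProvisioningParameters=[{"Key": "InstanceType", "Value": "t3.micro"}],
)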
Q
needs to improve the scalable performance and availability of the database.
Which solution meets these requirements?
A. Create an Amazon CloudWatch alarm action that triggers a Lambda function to add
an Amazon RDS for MySQL read replica when resource utilization hits a threshold.
B. Migrate the database to Amazon Aurora, and add a read replica. Add a database
connection pool outside of the Lambda handler function.
C. Migrate the database to Amazon Aurora, and add a read replica. Use Amazon Route
53 weighted records.
D. Migrate the database to Amazon Aurora, and add an Aurora Replica. Configure
Amazon RDS Proxy to manage database connection pools.
D.
Lambda functions are stateless and can't maintain a connection pool across
invocations. To get around this, AWS provides RDS Proxy for connection pool management.
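A minimal sketch of a Lambda function connecting through an RDS Proxy endpoint with IAM
authentication: the proxy holds the real connection pool, the function only opens a
short-lived connection. The proxy endpoint, user, database, and CA bundle path are
hypothetical, and pymysql is assumed to be packaged with the deployment.

import boto3
import pymysql  # assumed to be bundled with the Lambda package

PROXY_ENDPOINT = "my-proxy.proxy-xxxx.us-east-1.rds.amazonaws.com"  # hypothetical
DB_USER = "app_user"  # hypothetical

def handler(event, context):
    # IAM auth token instead of a stored password (credentials can live in Secrets Manager).
    token = boto3.client("rds").generate_db_auth_token(
        DBHostname=PROXY_ENDPOINT, Port=3306, DBUsername=DB_USER, Region="us-east-1"
    )
    conn = pymysql.connect(
        host=PROXY_ENDPOINT, user=DB_USER, password=token,
        database="appdb", ssl={"ca": "/opt/rds-ca-bundle.pem"},  # hypothetical CA path
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        return cur.fetchone()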
What is the difference between EC2 VM Import and AWS Server Migration Service?
AWS Server Migration Service is a significant enhancement of EC2 VM
Import. AWS Server Migration Service provides automated, live incremental
server replication and AWS Management Console support. For customers using EC2
VM Import for migration, AWS recommends using AWS Server Migration Service.
AWS OpsWorks
AWS OpsWorks is a configuration management service that provides managed
instances of Chef and Puppet.
AWS DataSync
AWS DataSync is a secure, online service that automates and accelerates
moving data between on-premises storage and AWS storage services.
Organization-level CloudTrail
Using AWS CloudTrail, a user in a management account can create an
organization trail that logs all events for all AWS accounts in that organization.
Organization trails are automatically applied to all member accounts in the
organization. Member accounts can see the organization trail, but can't modify or
delete it.
AWS DataSync
DataSync provides built-in security capabilities such as encryption of data
in-transit, and data integrity verification in-transit and at-rest. It optimizes
use of network bandwidth, and automatically recovers from network connectivity
failures. In addition, DataSync provides control and monitoring capabilities such
as data transfer scheduling and granular visibility into the transfer process
through Amazon CloudWatch metrics, logs, and events.
DMS vs DataSync
DataSync is for files; DMS is for databases.
DataSync is designed for continuous syncing, while DMS syncing typically runs only
until cutover happens.
DMS doesn't use DataSync.
S3 access point
S3 Access Points simplify how you manage data access for your application set
to your shared datasets on S3. You no longer have to manage a single, complex
bucket policy with hundreds of different permission rules that need to be written,
read, tracked, and audited. With S3 Access Points, you can now create application-
specific access points permitting access to shared datasets with policies tailored
to the specific application.
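A minimal boto3 sketch of creating one application-specific access point with its own
scoped policy instead of growing a single bucket policy. The account ID, bucket, access
point name, and role ARN are hypothetical.

import boto3, json

s3control = boto3.client("s3control")
ACCOUNT_ID = "111122223333"  # hypothetical account

# One access point per application, each with its own narrow policy.
s3control.create_access_point(
    AccountId=ACCOUNT_ID, Name="analytics-ap", Bucket="shared-dataset-bucket"
)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/analytics-role"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:us-east-1:{ACCOUNT_ID}:accesspoint/analytics-ap/object/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=ACCOUNT_ID, Name="analytics-ap", Policy=json.dumps(policy)
)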
S3 Intelligent-Tiering
S3 Intelligent-Tiering delivers automatic storage cost savings in three low-
latency and high-throughput access tiers. For data that can be accessed
asynchronously, you can choose to activate automatic archiving capabilities within
the S3 Intelligent-Tiering storage class.
Warm standby
Involves running a scaled-down but fully functional copy of the production
environment in standby, with live data replicated from the production environment.
Snowball
can't be shipped cross-region
AWS Backup
is a cost-effective, fully managed, policy-based service that simplifies data
protection at scale.
AWS Backup is an ideal solution for implementing standard backup plans for
your AWS resources across your AWS accounts and Regions. Because AWS Backup
supports multiple AWS resource types, it makes it easier to maintain and implement
a backup strategy for workloads using multiple AWS resources that need to be backed
up collectively. AWS Backup also enables you to collectively monitor a backup and
restore operation that involves multiple AWS resources.
Athena
Athena supports creating tables and querying data from CSV, TSV, custom-
delimited, and JSON formats; data from Hadoop-related formats: ORC, Apache Avro and
Parquet; logs from Logstash, AWS CloudTrail logs, and Apache WebServer logs
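A minimal boto3 sketch of running an Athena query over data already catalogued in one of
these formats. The database, table, and results bucket names are hypothetical.

import boto3

athena = boto3.client("athena")

q = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
)
print(q["QueryExecutionId"])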
RDS Proxy
RDS Proxy is a fully-managed, highly available, and easy-to-use database
proxy feature of Amazon RDS that enables your applications to:
1) improve scalability by pooling and sharing database connections;
2) improve availability by reducing database failover times by up to
66% and preserving application connections during failovers; and
3) improve security by optionally enforcing AWS IAM authentication to
databases and securely storing credentials in AWS Secrets Manager.
AWS Network Firewall
AWS Network Firewall is a stateful, managed, network firewall and intrusion
detection and prevention service for your virtual private cloud (VPC) that you
create in Amazon Virtual Private Cloud (Amazon VPC).
With Network Firewall, you can filter traffic at the perimeter of your VPC.
This includes filtering traffic going to and coming from an internet gateway, NAT
gateway, or over VPN or AWS Direct Connect. Network Firewall uses the open source
intrusion prevention system (IPS), Suricata, for stateful inspection. Network
Firewall supports Suricata compatible rules.
AWS Key Management Service (AWS KMS) CMK (Customer Master Key)
AWS KMS is replacing the term customer master key (CMK) with AWS KMS key and
KMS key.
You cannot manage AWS managed keys, rotate them yourself, or change their key
policies. AWS managed key policies can't be modified because they're read-only.
Global Accelerator
Global Accelerator does not support client IP address preservation for
Network Load Balancer and Elastic IP address endpoints.
BIND
BIND is a nameserver service responsible for performing domain-name-to-IP
conversion on Linux-based DNS servers.
Transit Gateway
Transit GW + Direct Connect GW + Transit VIF + enabled SiteLink if two
different DX locations
AWS Snowcone
AWS Snowcone is a portable, rugged, and secure device for edge computing and
data transfer. You can use a Snowcone device to collect, process, and move data to
the AWS Cloud, either offline by shipping the device to AWS, or online by using AWS
DataSync.
Snowcone is available in two flavors:
Snowcone – Snowcone has two vCPUs, 4 GB of memory, and 8 TB of hard
disk drive (HDD) based storage.
Snowcone SSD – Snowcone SSD has two vCPUs, 4 GB of memory, and 14 TB of
solid state drive (SSD) based storage.
Use Cases
For edge computing applications, to collect data, process the data to
gain immediate insight, and then transfer the data online to AWS.
To transfer data that is continuously generated by sensors or
machines online to AWS in a factory or at other edge locations.
To distribute media, scientific, or other content from AWS
storage services to your partners and customers.
To aggregate content by transferring media, scientific, or other
content from your edge locations to AWS.
For one-time data migration scenarios where your data is ready to
be transferred, Snowcone offers a quick and low-cost way to transfer up to 8 TB or
14 TB of data to the AWS Cloud by shipping the device back to AWS.
AWS Snowball
With AWS Snowball (Snowball), you can transfer hundreds of terabytes or
petabytes of data between your on-premises data centers and Amazon Simple Storage
Service (Amazon S3). It mainly uses a secure storage device for physical
transportation.
AWS Snowball devices
Snowcone
It is a small device used for edge computing, storage, and data
transfer.
You can transfer up to 8 TB with a single AWS Snowcone device and
can transfer larger data sets with multiple devices, either in parallel or
sequentially.
Snowball
AWS Snowball is a data migration and edge computing device that
comes in two device options:
Compute Optimized and Storage Optimized.
Snowball Edge Storage Optimized
devices provide 40 vCPUs of compute capacity coupled with
80 terabytes of usable block or Amazon S3-compatible object storage.
It is well-suited for local storage and large-scale data
transfer.
Snowmobile
It is the biggest one. AWS Snowmobile moves up to 100 PB of
data in a 45-foot long ruggedized shipping container and is ideal for multi-
petabyte or exabyte-scale digital media migrations and data center shutdowns.
A Snowmobile arrives at the customer site and appears as a
network-attached data store for more secure, high-speed data transfer.
AWS AppSync
AWS AppSync allows your applications to access exactly the data they need.
Create a flexible API to securely access, manipulate, and combine data from
multiple sources.
Pay only for requests to your API and for real-time messages delivered to
connected clients.
External launch
External launch type (doc from AWS): The External launch type is used to run
your containerized applications on your on-premises server or virtual machine (VM)
that you register to your Amazon ECS cluster and manage remotely.
Apache Parquet
Apache Parquet is an incredibly versatile open source columnar storage format.
It is 2x faster to unload and takes up 6x less storage in Amazon S3 as compared to
text formats. It also allows you to save the Parquet files in Amazon S3 as an open
format with all data transformation and enrichment carried out in Amazon Redshift.
Parquet is a self-describing format: the schema or structure is embedded
in the data itself, so it is not possible to track data changes within the
file. To track changes, you can use Amazon Athena to track object metadata
across Parquet files, as it provides an API for metadata.
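A minimal pyarrow sketch of converting a CSV file to Parquet before uploading to S3; the
columnar layout plus compression is where the scan and storage savings come from. The
file names are hypothetical, and pyarrow is assumed to be installed.

import pyarrow.csv as pv
import pyarrow.parquet as pq

# Read a CSV and write it back out as Snappy-compressed Parquet.
table = pv.read_csv("events.csv")                         # hypothetical input file
pq.write_table(table, "events.parquet", compression="snappy")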
Trusted Advisor
Trusted Advisor can only do assessments and recommendations.
GuardDuty
GuardDuty does require a delegated administrator account to be set up in the
organization in AWS Organizations before it can be enabled.
Creating a delegated administrator account for GuardDuty is a necessary step
in order to enable GuardDuty, but it alone is not sufficient to
maximize scalability for the security team.
The security team will also need to be notified of any security issues that
GuardDuty detects, and that is done by subscribing the security team to an SNS
topic.
Q: How do I control which Amazon Virtual Private Clouds (VPCs) can communicate with
each other?
You can segment your network by creating multiple route tables in an AWS
Transit Gateway and associate Amazon VPCs and VPNs to them. This will allow you to
create isolated networks inside an AWS Transit Gateway similar to virtual routing
and forwarding (VRFs) in traditional networks. The AWS Transit Gateway will have a
default route table. The use of multiple route tables is optional.
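A minimal boto3 sketch of the segmentation described above: creating an extra Transit
Gateway route table and associating only selected attachments with it, so those VPCs form
an isolated routing domain. The Transit Gateway and attachment IDs are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Create an isolated route table on the Transit Gateway.
rt = ec2.create_transit_gateway_route_table(TransitGatewayId="tgw-0123456789abcdef0")
rt_id = rt["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# Associate a specific VPC attachment with that route table only.
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",  # hypothetical attachment
)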
Q 44 Explanation
Amazon RDS Multi-AZ Deployments -
Amazon RDS Multi-AZ deployments provide enhanced availability and durability
for Database (DB) Instances, making them a natural fit for production database
workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically
creates a primary DB Instance and synchronously replicates the data to a standby
instance in a different Availability Zone (AZ). Each AZ runs on its own physically
distinct, independent infrastructure, and is engineered to be highly reliable. In
case of an infrastructure failure (for example, instance hardware failure, storage
failure, or network disruption), Amazon RDS performs an automatic failover to the
standby, so that you can resume database operations as soon as the failover is
complete. Since the endpoint for your DB Instance remains the same after a
failover, your application can resume database operation without the need for
manual administrative intervention.
Enhanced Durability -
The MySQL, MariaDB, Oracle, and PostgreSQL engines utilize synchronous physical
replication to keep data on the standby up-to-date with the primary. Multi-AZ
deployments for the SQL Server engine use synchronous logical replication to
achieve the same result, employing SQL Server-native Mirroring technology. Both
approaches safeguard your data in the event of a DB Instance failure or loss of an
Availability Zone.
If a storage volume on your primary fails in a Multi-AZ deployment, Amazon
RDS automatically initiates a failover to the up-to-date standby. Compare this to a
Single-AZ deployment: in case of a Single-AZ database failure, a user-
initiated point-in-time-restore operation will be required. This operation can take
several hours to complete, and any data updates that occurred after the latest
restorable time (typically within the last five minutes) will not be available.
Amazon Aurora -
employs a highly durable, SSD-backed virtualized storage layer purpose-built
for database workloads. Amazon Aurora automatically replicates your volume six
ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant,
transparently handling the loss of up to two copies of data without affecting
database write availability and up to three copies without affecting read
availability. Amazon Aurora storage is also self-healing. Data blocks and disks are
continuously scanned for errors and replaced automatically.
Increased Availability -
You also benefit from enhanced database availability when running Multi-AZ
deployments. If an Availability Zone failure or DB Instance failure occurs, your
availability impact is limited to the time automatic failover takes to complete:
typically under one minute for Amazon Aurora and one to two minutes for other
database engines (see the RDS FAQ for details).
The availability benefits of Multi-AZ deployments also extend to planned
maintenance and backups. In the case of system upgrades like OS patching or DB
Instance scaling, these operations are applied first on the standby, prior to
the automatic failover. As a result, your availability impact is, again, only the
time required for automatic failover to complete.
Unlike Single-AZ deployments, I/O activity is not suspended on your primary
during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL
engines, because the backup is taken from the standby. However, note that you may
still experience elevated latencies for a few minutes during backups for Multi-
AZ deployments.
On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-
AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you
have created in any of three Availability Zones. If no Amazon Aurora Replicas have
been provisioned, in the case of a failure, Amazon RDS will attempt to create a new
Amazon Aurora DB instance for you automatically.
Q 101
The EBS volumes attached to an EC2 instance always have to remain in the same
Availability Zone as the EC2 instance. A likely reason for this is that EBS volumes
live outside the host machine and instances have to reach them over the network; if
the EBS volumes were outside the Availability Zone, there could be latency issues
and subsequent performance degradation.
What you can do in such a scenario is take a snapshot of the EBS volume (a snapshot
incrementally captures the state of your EBS volume and stores it in an S3 bucket;
friendly reminder that this will cost you). After that you have two options: you can
either create an EBS volume from the snapshot in your desired Availability Zone, or
you can create an AMI from the snapshot and then launch a new EC2 instance from it
in the desired Availability Zone.
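A minimal boto3 sketch of the first option: snapshot the volume, wait for the snapshot,
then restore it into a different AZ in the same Region. The volume ID and target AZ are
hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Snapshot the existing volume (incremental, stored in S3 behind the scenes).
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0", Description="move to another AZ")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Restore the snapshot as a new volume in the target AZ.
new_vol = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1b",   # hypothetical target AZ
    VolumeType="gp3",
)
print(new_vol["VolumeId"])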
Q 113
Q: What types of licensing options are available with Amazon RDS for Oracle?
There are two types of licensing options available for using Amazon RDS for Oracle:
Bring Your Own License (BYOL): In this licensing model, you can use your existing
Oracle Database licenses to run Oracle deployments on Amazon RDS. To run a DB
instance under the BYOL model, you must have the appropriate Oracle Database
license (with Software Update License & Support) for the DB instance class and
Oracle Database edition you wish to run. You must also follow Oracle's policies for
licensing Oracle Database software in the cloud computing environment. DB instances
reside in the Amazon EC2 environment, and Oracle's licensing policy for Amazon EC2
is located here.
License Included: In the "License Included" service model, you do not need
separately purchased Oracle licenses; the Oracle Database software has been
licensed by AWS. "License Included" pricing is inclusive of software, underlying
hardware resources, and Amazon RDS management capabilities.
Q 720
AWS API Gateway Endpoint types:
• An API endpoint type refers to the hostname of the API. The API
endpoint type can be edge-optimized, regional, or private, depending on where the
majority of your API traffic originates from. An edge-optimized API endpoint is
best for geographically distributed clients. API requests are routed to the nearest
CloudFront Point of Presence (POP). This is the default endpoint type for API
Gateway REST APIs. A regional API endpoint is intended for clients in the same
region. When a client running on an EC2 instance calls an API in the same region,
or when an API is intended to serve a small number of clients with high demands, a
regional API reduces connection overhead. A private API endpoint is an API endpoint
that can only be accessed from your Amazon Virtual Private Cloud (VPC) using an
interface VPC endpoint, which is an endpoint network interface (ENI) that you
create in your VPC.
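A minimal boto3 sketch of choosing the endpoint type when creating a REST API: here a
regional endpoint instead of the default edge-optimized one. The API name is hypothetical.

import boto3

apigw = boto3.client("apigateway")

api = apigw.create_rest_api(
    name="internal-orders-api",                      # hypothetical API name
    endpointConfiguration={"types": ["REGIONAL"]},   # or "EDGE" / "PRIVATE"
)
print(api["id"])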
Q 895
Q: Can I change my file system’s storage capacity and throughput capacity?
A: Yes, you can increase the storage capacity, and increase or decrease the
throughput capacity of your file system – while continuing to use it – at any time
by clicking "Update storage" or "Update throughput" in the Amazon FSx Console, or
by calling “update-file-system” in the AWS CLI/API and specifying the desired
level.
Q 551
D. Create a master account for billing using Organizations, and create each team's
account from that master account. Create a security account for logs and
cross-account access. Apply service control policies on each account, and grant the
Security team cross-account access to all accounts. Security will create IAM
policies for each account to maintain least privilege access.
Q 630
you'd have a limit of 5,000 TGW attachments (can be increased), or 10k static
routes per TGW (one for each VPC CIDR), or 50Gbps throughput, or the VPN throughput
of your firewalls.
Q 640
Create a portfolio for each business unit and add products to the portfolios using
AWS CloudFormation in AWS Service Catalog.
Q 672
You cannot use an SSM document to scan.
Administrators use deny list SCPs in the root of the organization to manage access
to restricted services.
Q 688
Aurora multi-master is a Regional feature, so there is no cross-Region multi-master for Aurora yet.
Q 692
We cannot modify the IPv4 CIDR of a subnet, so we need to delete and recreate it.
When you create an account, AWS Organizations initially assigns a long (64
characters), complex, randomly generated password to the root user. You can't
retrieve this initial password. To access the account as the root user for the
first time, you must go through the process for password recovery.
Web ACL
To avoid affecting legitimate traffic, set the rule action to Count first.
A: AWS Global Accelerator and Amazon CloudFront are separate services that
use the AWS global network and its edge locations around the world. CloudFront
improves performance for both cacheable content (such as images and videos) and
dynamic content (such as API acceleration and dynamic site delivery). Global
Accelerator improves performance for a wide range of applications over TCP or UDP
by proxying packets at the edge to applications running in one or more AWS Regions.
Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT
(MQTT), or Voice over IP, as well as for HTTP use cases that specifically require
static IP addresses or deterministic, fast regional failover. Both services
integrate with AWS Shield for DDoS protection.
AWS Global Accelerator is a service that improves the
availability and performance of applications over the internet by routing user
traffic to the optimal AWS region for that user, using the AWS global network. It
is particularly useful for applications that use non-HTTP protocols, such as UDP,
as it can proxy and route packets at the edge to applications running in multiple
AWS regions. Amazon CloudFront is a content delivery network (CDN) service that
improves the performance of web applications by caching content at edge locations
around the world. It can be used to improve performance for both cacheable content
(such as images and videos) and dynamic content (such as API acceleration and
dynamic site delivery). Both services can be used together to improve the overall
performance and availability of an application, but Global Accelerator is better
suited for UDP traffic and non-HTTP use cases.
Q 407
The maximum quota is 125 peering connections per VPC.
The quota for virtual private gateways per Region is 5.
Q 454
Elasticsearch (Amazon OpenSearch Service) cannot be used as a source for DMS; it can only be used as a target.
Placement group
Within the same Region, instances can be moved into a placement group without
being terminated (stop, modify, and restart).
If a different Region is needed, the instance has to be terminated and relaunched.
Q 503
The default AWS Organizations policy for a new organization is "FullAWSAccess", set on
each OU. It gives full access to every operation. Users from the master account can
assume a role (set during the invitation process) in each connected account to get
full admin access.
Lambda@Edge
Lambda@Edge does not cache data. Lambda@Edge is a feature of Amazon
CloudFront that lets you run code closer to users of your application, which
improves performance and reduces latency. With Lambda@Edge, you don't have to
provision or manage infrastructure in multiple locations around the world.
S3 event notifications
Q 526
WAF can't be attached directly to an Auto Scaling group; it needs an ALB or CloudFront distribution in front.
Q 563
Between executionRoleArn (option C) and taskRoleArn (D), only the latter is used to
interact with DynamoDB. The former is used to download images or write logs to
CloudWatch.
Q 567
Optimal performance with Athena is achieved with columnar storage and partitioning
the data.
Q 569
DynamoDB does not support strongly consistent reads ACROSS REGIONS.
Q 589
Elastic Beanstalk is a Regional service. It CANNOT provide an "automatically scaling
web server environment that spans two separate Regions".
Q 605
Lambda aliases
Lambda aliases are a way to point to multiple versions of a Lambda function.
This can be useful for testing new versions of a function before rolling them out
to production, or for running multiple versions of a function in parallel to test
different approaches.
When you create a Lambda alias, you can specify a routing configuration. This
configuration determines how traffic is routed between the alias and the function
versions it points to.
There are two types of routing configurations:
Static routing: With static routing, you specify a fixed percentage of
traffic that is routed to each function version.
Canary routing: With canary routing, you specify a starting percentage
of traffic that is routed to a new function version. Over time, the percentage of
traffic routed to the new function version increases, while the percentage of
traffic routed to the old function version decreases.
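A minimal boto3 sketch of the weighted routing configuration described above: an alias
that points at version 1 but sends 10% of traffic to version 2. The function name and
version numbers are hypothetical.

import boto3

lam = boto3.client("lambda")

lam.create_alias(
    FunctionName="order-processor",   # hypothetical function
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},  # 10% of traffic to version 2
)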
Q 617
AWS SSO does not support mobile apps.
Q 648
Aurora Global Database supports only one writer Region, so the other Regions do not
support writes.
Q 696
We cannot use a public certificate from AWS Certificate Manager directly on an EC2 instance (public ACM certificates are not exportable).
Q 697
Transit Gateway doesn't support routing between VPC with identical CIDRs
Q 704
To allow an IAM user or role to connect to your DB cluster, you must create an IAM
policy. After that, you attach the policy to an IAM user or role.
Q 711
The CloudWatch embedded metric format is a JSON specification used to instruct
CloudWatch Logs to automatically extract metric values embedded in structured log
events. You can use CloudWatch to graph and create alarms on the extracted metric
values.
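A minimal sketch of emitting a log line in the embedded metric format from Python;
CloudWatch Logs extracts "ProcessedRecords" as a metric automatically. The namespace,
dimension, and metric names are hypothetical.

import json, time

def emit_metric(count):
    # Printing to stdout in a Lambda function lands in CloudWatch Logs,
    # where this EMF structure is parsed into a metric.
    print(json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "MyApp",                      # hypothetical namespace
                "Dimensions": [["Service"]],
                "Metrics": [{"Name": "ProcessedRecords", "Unit": "Count"}],
            }],
        },
        "Service": "ingest",
        "ProcessedRecords": count,
    }))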
Q 740
EFS Cross-Region Replication
Q 752
API gateway usage plans
Q 773
S3 Replication Time Control is designed to replicate 99.99% of objects within 15
minutes after upload, with the majority of those new objects replicated in seconds.
Q 776
Global Accelerator does not support client IP address preservation for Network Load
Balancer and Elastic IP address endpoints.
Q 789
DynamoDB On-demand mode is a good option if any of the following are true:
You create new tables with unknown workloads.
You have unpredictable application traffic.
You prefer the ease of paying for only what you use.
Q 791
AWS Config is for monitoring and alerting; it doesn't prevent changes.
Q 796
1. Because WAF sits at the front end, it is responsible for handling threats, not the
ALB or EC2; by the time traffic reaches them it is too late for threat analysis.
2. Inspector doesn't analyze the ALB, only EC2 and ECR.
3. WAF logging integrates with S3, Kinesis Data Firehose, and CloudWatch, whereas ALB
access logs go only to S3 (no Kinesis Data Firehose).
4. WAF Marketplace rules help with threat detection.
Q 803
Resources can be shared only if the accounts are within the same organization.
Q 813
You can't attach VPCs from multiple Regions to one Transit Gateway.
Q 824
You can't associate more than one SSL or Transport Layer Security (TLS) certificate
to an individual CloudFront distribution. However, certificates provided by AWS
Certificate Manager (ACM) support up to 10 subject alternative names, including
wildcards. To turn on SSL or HTTPS for multiple domains served through one
CloudFront distribution, assign a certificate from ACM that includes all the
required domains.
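A minimal boto3 sketch of requesting one ACM certificate that covers multiple domains via
subject alternative names, for use on a single CloudFront distribution. Certificates for
CloudFront must be requested in us-east-1; the domain names are hypothetical.

import boto3

acm = boto3.client("acm", region_name="us-east-1")  # CloudFront requires us-east-1 certs

cert = acm.request_certificate(
    DomainName="example.com",                                    # hypothetical domains
    SubjectAlternativeNames=["www.example.com", "*.app.example.com"],
    ValidationMethod="DNS",
)
print(cert["CertificateArn"])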
Q 832
Peering connections allow connectivity between VPCs but do not provide the ability
to route internet traffic through a central egress VPC.
Q 834
There is no SQS agent; you need to use the SDK to build producers and consumers.
Q 843
conditional access -> attribute-based access control (ABAC).
Q 857
S3 Glacier Deep Archive bulk retrieval time is max 48 hours.
Q 919
A Client VPN endpoint does not require additional resources such as a transit
gateway or additional VPN connections.
Q 924
SNS fan-out pattern
Q 940
If an API request exceeds the API request rate for its category, the request
returns the RequestLimitExceeded error code. To prevent this error, ensure that
your application doesn't retry API requests at a high rate. You can do this by
using care when polling and by using exponential backoff retries.
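A minimal boto3 sketch of letting the SDK handle throttling (RequestLimitExceeded) with
exponential backoff instead of retrying at a high rate; the retry settings shown are
illustrative values.

import boto3
from botocore.config import Config

# Adaptive retry mode backs off exponentially and adapts to throttling responses.
ec2 = boto3.client(
    "ec2",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)
instances = ec2.describe_instances()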
Q 948
Increasing the health check grace period for the Auto Scaling group would give the
instances more time to run the user data scripts and download critical content from
the S3 bucket, which would prevent them from being terminated due to health check
failures.
Q 953
If you receive a capacity error when launching an instance in a placement group
that already has running instances, stop and start all of the instances in the
placement group, and try the launch again.
Q 964
The Kinesis Producer Library (KPL) aims to help developers achieve high write
throughput into a Kinesis data stream.
Q 1005
On demand capacity is much more expensive than provisioned/reserved capacity
Q 491
CloudWatch does not provide cost-savings suggestions.
Q 511
Cache control should be done at CloudFront, not at the API Gateway stage.
Lambda@Edge does not cache data.
Bonus Q
A financial services company uses Amazon RDS for Oracle with Transparent Data
Encryption (TDE). The company is required to encrypt its data at rest at all times.
The key required to decrypt the data has to be highly available, and access to the
key must be limited. As a regulatory requirement, the company must have the ability
to rotate the encryption key on demand. The company must be able to make the key
unusable if any potential security breaches are spotted. The company also needs to
accomplish these tasks with minimum overhead.
What should the database administrator use to set up the encryption to meet these
requirements?
A. AWS CloudHSM
B. AWS Key Management Service (AWS KMS) with an AWS managed key
C. AWS Key Management Service (AWS KMS) with server-side encryption
D. AWS Key Management Service (AWS KMS) CMK with customer-provided material
Exam
-----
sharing ec2 with AWS RAM
storage version 4 (FSX?)
JSON-supported DB? / which DBs or services don't use JSON?
endpoint for code commit / code artifact