
CSEC-608 Cloud Security

AWS Academy Lab Project - Cloud Security Builder

000554427

Sreegar Prasad Ravi

Table of Contents

Securing and Monitoring Resources with AWS
Phase 1: Securing data in Amazon S3
    Description
    Observation
    Task 1.1: Create a bucket, apply a bucket policy, and test access
    Task 1.2: Enable versioning and object-level logging on a bucket
    Task 1.3: Implement the S3 Inventory feature on a bucket
    Task 1.4: Confirm that versioning works as intended
    Task 1.5: Confirm object-level logging and query the access logs by using Athena
Phase 2: Securing VPCs
    Description
    Observation
    Task 2.1: Review LabVPC and its associated resources
    Task 2.2: Create a VPC flow log
    Task 2.3: Access the WebServer instance from the internet and review VPC flow logs in CloudWatch
    Task 2.4: Configure route table and security group settings
    Task 2.5: Secure the WebServerSubnet with a network ACL
    Task 2.6: Review NetworkFirewallVPC and its associated resources
    Task 2.7: Create a network firewall
    Task 2.8: Create route tables
    Task 2.9: Configure logging for the network firewall
    Task 2.10: Configure the firewall policy and test access
Phase 3: Securing AWS resources by using AWS KMS
    Description
    Observation
    Task 3.1: Create a customer-managed key and configure key rotation
    Task 3.2: Update the AWS KMS key policy and analyze an IAM policy
    Task 3.3: Use AWS KMS to encrypt data in Amazon S3
    Task 3.4: Use AWS KMS to encrypt the root volume of an EC2 instance
    Task 3.5: Use AWS KMS envelope encryption to encrypt data in place
    Task 3.6: Use AWS KMS to encrypt a Secrets Manager secret
Phase 4: Monitoring and logging
    Description
    Observation
    Task 4.1: Use CloudTrail to record Amazon S3 API calls
    Task 4.2: Use CloudWatch Logs to monitor secure logs
    Task 4.3: Create a CloudWatch alarm to send notifications for security incidents
    Task 4.4: Configure AWS Config to assess security settings and remediate the configuration of AWS resources
Reflection
Certificate

Securing and Monitoring Resources with AWS

Phase 1: Securing data in Amazon S3


Description

In this phase, I secure customer data in Amazon S3 so that only the right people can see it.
I first create a new bucket and apply a policy that lets only certain users read and write.
Then I turn on versioning so I can track every change to each file. I also enable server
access logging so that every request is recorded. Next, I set up S3 Inventory to get a daily
report of all objects. By the end, the bucket holds encrypted files and shows who did what:
only the account manager, Paulo, can read it, while Mary is blocked.

Observation
Task 1.1: Create a bucket, apply a bucket policy, and test access
I created a new S3 bucket (data-bucket-0df941711a7d39f06), uploaded a test
file, and then wrote a two-statement policy: an "Allow" for our IAM role plus two test
users (Paulo and Sofia), and a "Deny" for everyone else. Switching to Paulo's login in an
incognito window, I confirmed that Paulo could list and download objects only from the
data-bucket, and then switched to Mary to verify that her access was blocked.
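The policy itself was written in the console policy editor; the sketch below shows roughly what it looked like, applied with the AWS CLI. The account ID, principal ARNs, and the exact action list are placeholders rather than the lab's exact values.

cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPauloSofiaAndLabRole",
      "Effect": "Allow",
      "Principal": {"AWS": [
        "arn:aws:iam::111122223333:user/Paulo",
        "arn:aws:iam::111122223333:user/Sofia",
        "arn:aws:iam::111122223333:role/voclabs"
      ]},
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::data-bucket-0df941711a7d39f06",
        "arn:aws:s3:::data-bucket-0df941711a7d39f06/*"
      ]
    },
    {
      "Sid": "DenyEveryoneElse",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::data-bucket-0df941711a7d39f06",
        "arn:aws:s3:::data-bucket-0df941711a7d39f06/*"
      ],
      "Condition": {"StringNotLike": {"aws:PrincipalArn": [
        "arn:aws:iam::111122223333:user/Paulo",
        "arn:aws:iam::111122223333:user/Sofia",
        "arn:aws:iam::111122223333:role/voclabs*"
      ]}}
    }
  ]
}
EOF
# Attach the policy to the bucket
aws s3api put-bucket-policy --bucket data-bucket-0df941711a7d39f06 --policy file://policy.json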

Fig 1.1 Bucket Policy created for data-bucket.

Fig 1.2 Paulo can access the bucket and the objects in it.

Task 1.2: Enable versioning and object-level logging on a bucket


I turned on versioning on data-bucket so that every change is tracked. Then I enabled
server access logging, directing the logs to a dedicated s3-objects-access-log bucket with a
prefix. Finally, I verified the log bucket's policy to confirm that S3 is allowed to write logs there.
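Both settings were applied in the console; a rough CLI equivalent (the log-bucket name and prefix are taken from this report, not verified against the lab):

# Turn on versioning for the data bucket
aws s3api put-bucket-versioning \
    --bucket data-bucket-0df941711a7d39f06 \
    --versioning-configuration Status=Enabled
# Point server access logging at the dedicated log bucket
aws s3api put-bucket-logging \
    --bucket data-bucket-0df941711a7d39f06 \
    --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "s3-objects-access-log", "TargetPrefix": "data-bucket/"}}'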

Fig 1.3 Server access logging is enabled.

Fig 1.4 The S3 bucket Access log policy is added.

Task 1.3: Implement the S3 Inventory feature on a bucket


I enabled the S3 Inventory feature to monitor changes to objects stored in the bucket.
For that, I configured S3 Inventory on data-bucket to deliver Parquet reports of all object
metadata, including object-level changes, to an s3-inventory bucket.
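As a sketch, the same inventory configuration could be pushed from the CLI; the configuration ID and the destination bucket ARN below are assumptions, not the lab's exact values:

aws s3api put-bucket-inventory-configuration \
    --bucket data-bucket-0df941711a7d39f06 \
    --id daily-inventory \
    --inventory-configuration '{
        "Id": "daily-inventory",
        "IsEnabled": true,
        "IncludedObjectVersions": "All",
        "Schedule": {"Frequency": "Daily"},
        "Destination": {"S3BucketDestination": {
            "Bucket": "arn:aws:s3:::s3-inventory-bucket",
            "Format": "Parquet"
        }}
    }'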

Fig 1.5 The S3 inventory feature is enabled on data-bucket.

Task 1.4: Confirm that versioning works as intended


I logged in as Paulo, uploaded the file customers.csv, checked its SSE-S3
encryption, and then modified and re-uploaded it. In the console, I saw two versions appear,
and I verified that the older version still held only the original two rows. Switching to Mary, I
confirmed that she could not access the bucket.
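A quick way to see both versions without the console is list-object-versions; a minimal sketch, using the bucket name from above:

# Shows every stored version of customers.csv
aws s3api list-object-versions \
    --bucket data-bucket-0df941711a7d39f06 \
    --prefix customers.csv \
    --query 'Versions[].{Key:Key,VersionId:VersionId,LastModified:LastModified,IsLatest:IsLatest}'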

Fig 1.6 Versioning is confirmed to be working as there are two versions visible.

Task 1.5: Confirm object-level logging and query the access logs by using
Athena
As the admin, I looked through the logs in s3-objects-access-log and used Athena
to create an external table over those logs. Using a SELECT query, I filtered for IAM-user
actions and saw an HTTP 200 status for Paulo and an HTTP 403 status for Mary.
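The table and query were built in the Athena console; as a sketch, a similar query could also be submitted from the CLI. The table name s3_access_logs, the column names (from the standard S3 access-log DDL), and the results bucket are assumptions:

aws athena start-query-execution \
    --query-string "SELECT requester, operation, key, httpstatus FROM s3_access_logs WHERE key = 'customers.csv'" \
    --result-configuration OutputLocation=s3://athena-query-results-bucket/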

Fig 1.7 The Athena query shows that Paulo was able to access the CSV file.

Phase 2: Securing VPCs
Description
In this phase, I review the LabVPC’s subnets, route table, and the IAM role for
flow logs. Then, I create a VPC flow log to send all traffic data to CloudWatch. I test
access by trying HTTP and SSH from the internet and watch the flow-log entries to see
rejects. Next, I fix connectivity by adding an internet-gateway route. After that, I apply
a subnet network ACL that blocks all traffic by default and then opens only ports 22 and 80.

In the NetworkFirewallVPC, I inspect its subnets and internet gateway. I create an
AWS Network Firewall and set up three route tables: one for the internet-gateway
ingress, one for the firewall subnet, and one for the WebServer2 subnet. I enable
CloudWatch logging for both alert and flow logs. Then I define a stateful rule group to
drop port 8080 and allow ports 80, 22, and 443, as well as ICMP. Finally, I test HTTP, SSH,
and ping connections to confirm that my firewall rules work as intended.

Observation
Task 2.1: Review LabVPC and its associated resources
I observed that the LabVPC is in us-east-1, and I noted its WebServerSubnet,
main route table, and IAM role for flow logs. On EC2, I confirmed the WebServer
instance had a public IP and a security group.

Fig 2.1 The LabVPC is available.

Fig 2.2 The IAM role for flow logs is available.

Task 2.2: Create a VPC flow log


I created a flow log named LabVPCFlowLogs to capture all traffic from LabVPC
into a new CloudWatch Logs group, using the existing VPCFlowLogsRole that I reviewed
in the previous task.
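The flow log was created in the VPC console; a rough CLI equivalent, with the VPC ID, account ID, and role ARN as placeholders:

aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-destination-type cloud-watch-logs \
    --log-group-name LabVPCFlowLogs \
    --deliver-logs-permission-arn arn:aws:iam::111122223333:role/VPCFlowLogsRole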

Fig 2.3 New VPC flow log is created.

Task 2.3: Access the WebServer instance from the internet and review VPC
flow logs in CloudWatch
From the Cloud9 IDE, I ran netcat against the server's HTTP (80) and SSH (22)
ports and saw that both attempts timed out. In CloudWatch Logs, under
LabVPCFlowLogs, entries appeared showing "REJECT" for my IP on those ports. This
confirmed that the flow log is capturing rejected traffic as expected.
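The tests were simple netcat probes followed by a search of the flow-log group; a sketch, with the public IP as a placeholder:

# Probe HTTP and SSH with a short timeout; both hung at this point in the lab
nc -zv -w 5 203.0.113.25 80
nc -zv -w 5 203.0.113.25 22
# Look for the rejected connection attempts in the flow log group
aws logs filter-log-events --log-group-name LabVPCFlowLogs --filter-pattern REJECT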

Fig 2.4 The CloudWatch logs show that HTTP and SSH netcat were rejected.

Task 2.4: Configure route table and security group settings


I added a 0.0.0.0/0 route to the internet gateway in the subnet's route table, then
updated the WebServer's security group to allow HTTP from anywhere and SSH only
from Cloud9's IP and the EC2 Instance Connect IP ranges. After this, the HTTP and SSH tests
passed, showing that both the route table and the security group rules must allow the traffic.
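Roughly the same changes expressed as CLI calls; the route table, gateway, and security group IDs and the Cloud9 IP are placeholders:

# Default route to the internet gateway
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-0123456789abcdef0
# HTTP from anywhere
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
# SSH only from the Cloud9 instance's IP
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 198.51.100.10/32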

Fig 2.5 The route table is configured by adding a 0.0.0.0/0 route to the internet gateway.

Fig 2.6 Inbound rules are added to the Webserver security group.

Task 2.5: Secure the WebServerSubnet with a network ACL


On the network ACL for WebServerSubnet, I first changed rule number 100 to deny all
traffic and confirmed that both HTTP and SSH were blocked. I then changed rule 100 to
allow only port 22 and later added an HTTP allow rule from anywhere as rule 90. Finally,
I confirmed access to the web server's webpage.
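The same ACL changes as CLI calls; the ACL ID is a placeholder, and protocol 6 is TCP:

# Rule 100: allow only SSH
aws ec2 replace-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 --ingress --rule-number 100 \
    --protocol 6 --port-range From=22,To=22 --cidr-block 0.0.0.0/0 --rule-action allow
# Rule 90: allow HTTP from anywhere
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 --ingress --rule-number 90 \
    --protocol 6 --port-range From=80,To=80 --cidr-block 0.0.0.0/0 --rule-action allow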

Fig 2.7 The network ACL’s inbound rules are modified.

Task 2.6: Review NetworkFirewallVPC and its associated resources
I examined the second VPC (NetworkFirewallVPC), including its two subnets, internet
gateway, and default ACLs, and verified that WebServer2 allowed HTTP, SSH, and traffic on port 8080.

Fig 2.8 Webserver2’s website works.

Fig 2.9 Another website on webserver2 that runs on 8080.

Task 2.7: Create a network firewall


I created a Network Firewall in NetworkFirewallVPC, chose the us-east-1a subnet,
disabled delete protection and subnet change protection, and waited for the firewall
status to show "Ready" before continuing to the next step.

Fig 2.10 The network firewall is created in NetworkFirewallVPC.

Task 2.8: Create route tables


I created three route tables in NetworkFirewallVPC: IGW-Ingress-Route-Table,
Firewall-Route-Table, and WebServer2-Route-Table. I added routes pointing to the Gateway
Load Balancer endpoint and the internet gateway and associated the tables with their respective subnets.
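The routing follows the standard Network Firewall deployment pattern; a sketch of the three key routes, with all IDs and the WebServer2 subnet CIDR as placeholders:

# IGW ingress route table: send traffic destined for the WebServer2 subnet to the firewall endpoint
aws ec2 create-route --route-table-id rtb-0aaaaaaaaaaaaaaaa \
    --destination-cidr-block 10.0.1.0/24 --vpc-endpoint-id vpce-0123456789abcdef0
# WebServer2 route table: send all outbound traffic to the firewall endpoint
aws ec2 create-route --route-table-id rtb-0bbbbbbbbbbbbbbbb \
    --destination-cidr-block 0.0.0.0/0 --vpc-endpoint-id vpce-0123456789abcdef0
# Firewall subnet route table: send outbound traffic to the internet gateway
aws ec2 create-route --route-table-id rtb-0cccccccccccccccc \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0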

Fig 2.11: The IGW-Ingress route table is created.

Fig 2.12: The Firewall route table is created.

Fig 2.13 Webserver2 route table is created.

Task 2.9: Configure logging for the network firewall


I created a CloudWatch log group called NetworkFirewallVPCLogs with a six-month
retention period. Then I enabled alert and flow logging for the firewall, pointing both at
that log group. I tested HTTP access by loading WebServer2's website, which generated
log entries for analysis.
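Creating the log group and its retention from the CLI is a two-liner; the firewall's alert and flow logs were then pointed at this group from the firewall's logging settings in the console:

aws logs create-log-group --log-group-name NetworkFirewallVPCLogs
# 180 days is the closest supported retention value to six months
aws logs put-retention-policy --log-group-name NetworkFirewallVPCLogs --retention-in-days 180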

Fig 2.14 The logs are being created for NetworkFirewall.

Task 2.10: Configure the firewall policy and test access


I added a stateful rule group with five rules: a drop action for port 8080 and pass actions
for ports 80, 22, and 443 and for ICMP. I tested HTTP, SSH, and ping and saw that
port 8080 was blocked correctly.

Fig 2.15 The stateful rules are working.

Phase 3: Securing AWS resources by using AWS KMS
Description
In Phase 3, I create a customer-managed key named MyKMSKey, grant my voclabs role
administrative and usage permissions, and enable automatic annual rotation. I then
switch an S3 bucket’s encryption to SSE-KMS, verify that authorized users can upload
while others can’t, and launch an EC2 instance with its root volume encrypted by the
key. On WebServer2, I practice envelope encryption by generating a data key with AWS
KMS generate-data-key, encrypting and decrypting a local text file via OpenSSL, and
confirming it works. Finally, I secure a Secrets Manager secret by creating “mysecret”
encrypted under MyKMSKey, then retrieve it over SSH with aws secretsmanager get-
secret-value to prove the integration.

Observation
Task 3.1: Create a customer-managed key and configure key rotation
I created an AWS KMS customer-managed key named MyKMSKey, granted the
voclabs role Key administrator and Key user permissions, and turned on annual
rotation.
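A rough CLI equivalent of the key setup; the key-policy grants for the voclabs role were made through the console wizard rather than from the CLI:

# Create the key, give it an alias, and turn on automatic annual rotation
KEY_ID=$(aws kms create-key --description "MyKMSKey" --query KeyMetadata.KeyId --output text)
aws kms create-alias --alias-name alias/MyKMSKey --target-key-id "$KEY_ID"
aws kms enable-key-rotation --key-id "$KEY_ID"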

Fig 3.1 MyKMSKey is created and rotation is set to one year.

Task 3.2: Update the AWS KMS key policy and analyze an IAM policy
I modified the key policy of MyKMSKey to also allow Sofia's IAM user to use the key. Then
I reviewed the PolicyForFinancialAdvisors IAM policy, noting that it grants full S3 access
plus KMS encrypt and decrypt permissions, and that Sofia is a member of the FinancialAdvisorGroup
IAM group.

Fig 3.2 Sofia is added as the user of the key.

Task 3.3: Use AWS KMS to encrypt data in Amazon S3


I changed the default encryption on data-bucket from SSE-S3 to SSE-KMS using
MyKMSKey. I logged in as Sofia and uploaded loan-data.csv. Then, as Paulo, I saw the
access fail because his IAM policy does not grant the AWS KMS permissions that Sofia has.
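The bucket-level change could also be made with put-bucket-encryption; a sketch, using the bucket and alias names from above:

aws s3api put-bucket-encryption \
    --bucket data-bucket-0df941711a7d39f06 \
    --server-side-encryption-configuration '{
        "Rules": [{"ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "alias/MyKMSKey"
        }}]
    }'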

Fig 3.3 loan-data.csv is encrypted.

Fig 3.4 The user Paulo can't access the file.

Task 3.4: Use AWS KMS to encrypt the root volume of an EC2 instance
I launched a new EC2 instance called EncryptedInstance, using the Amazon Linux 2 AMI
and the t2.micro instance type, and selected MyKMSKey to encrypt its root volume.
Inspecting the Storage tab confirmed that the volume was encrypted.
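The equivalent launch from the CLI passes the key in the block device mapping; a sketch (the AMI ID is a placeholder, and the root device name depends on the AMI):

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --block-device-mappings '[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"Encrypted": true, "KmsKeyId": "alias/MyKMSKey"}
    }]'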

Fig 3.5 The newly created instance is encrypted.

Task 3.5: Use AWS KMS envelope encryption to encrypt data in place
On WebServer2, I generated a data key using the AWS KMS generate-data-key command,
stored the CiphertextBlob, and decrypted it when needed. I then used OpenSSL with the
plaintext data key to encrypt and decrypt a text file called data_unencrypted.txt and
confirmed that the file was encrypted.
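The envelope-encryption steps, roughly as I ran them on WebServer2 (the exact lab commands may differ slightly):

# Ask KMS for a data key; the CiphertextBlob from the same response is kept so the
# plaintext key can be recovered later with `aws kms decrypt`
aws kms generate-data-key --key-id alias/MyKMSKey --key-spec AES_256 \
    --query Plaintext --output text | base64 --decode > datakey.bin
# Encrypt the file with the plaintext data key...
openssl enc -aes-256-cbc -salt -in data_unencrypted.txt -out data_encrypted.txt -pass file:./datakey.bin
# ...and decrypt it again to confirm the round trip
openssl enc -d -aes-256-cbc -in data_encrypted.txt -out data_decrypted.txt -pass file:./datakey.bin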

Fig 3.6 The file is confirmed to be encrypted.

Task 3.6: Use AWS KMS to encrypt a Secrets Manager secret


In Secrets Manager, I created a secret named "mysecret" containing the value "my secret
data" and encrypted it with MyKMSKey. Over SSH, I ran aws secretsmanager get-secret-value
and saw the secret's key-value pair returned in the results.
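The secret creation and retrieval, roughly as CLI calls:

aws secretsmanager create-secret \
    --name mysecret \
    --secret-string "my secret data" \
    --kms-key-id alias/MyKMSKey
# From WebServer2, over SSH
aws secretsmanager get-secret-value --secret-id mysecret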

Fig 3.7 The value of mysecret is retrieved from webserver2.

Phase 4: Monitoring and logging


Description
In this phase, I first enable CloudTrail to capture all S3 API calls and store them in
a dedicated bucket. I then use Athena to query those logs for specific PutObject and
GetObject actions, confirming who accessed which files. Next, on my EncryptedInstance,
I install the CloudWatch agent to ship the /var/log/secure log file to CloudWatch Logs.
I test by logging in as both a valid and an invalid user, watching entries like "Accepted
publickey" and "Invalid user" appear. To get immediate alerts, I create a metric filter for
"Invalid user" events and wire it to a CloudWatch alarm that sends an email via SNS when
too many failures occur. Finally, I turn on AWS Config to record all resources and add a
managed rule for S3 bucket logging. When my test
bucket shows noncompliance, I use the Config console to enable server access logs and
bring it back into compliance.

Observation
Task 4.1: Use CloudTrail to record Amazon S3 API calls
I created a trail called "data-bucket-reads-writes" that captures both
management and data events in the cloudtrail-logs bucket. After uploading customer-
data.csv, I used Athena to create an external table over those logs and ran queries for my
PutObject and GetObject events. The queries I used were:

 Step 5:
SELECT
    eventTime,
    userIdentity.principalId,
    requestParameters,
    eventName
FROM cloudtrail_logs_cloudtrail_logs_0df941711a7d39f06
WHERE eventName = 'PutObject'
    AND json_extract_scalar(requestParameters, '$.key') = 'customer.csv'
LIMIT 10;

 Step 6:
SELECT
    eventTime,
    sourceipaddress,
    useragent,
    userIdentity.principalId,
    requestParameters,
    eventName
FROM cloudtrail_logs_cloudtrail_logs_0df941711a7d39f06
WHERE eventName = 'GetObject'
    AND json_extract_scalar(requestParameters, '$.key') = 'customer.csv'
LIMIT 10;
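The trail itself was set up in the console; a rough CLI sketch of the same configuration, with the log-bucket name as a placeholder:

aws cloudtrail create-trail --name data-bucket-reads-writes --s3-bucket-name cloudtrail-logs-bucket
# Record S3 data events (object-level reads and writes) for the data bucket
aws cloudtrail put-event-selectors --trail-name data-bucket-reads-writes \
    --event-selectors '[{"ReadWriteType": "All", "IncludeManagementEvents": true,
        "DataResources": [{"Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::data-bucket-0df941711a7d39f06/"]}]}]'
aws cloudtrail start-logging --name data-bucket-reads-writes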

Fig 4.1: The cloud trail is created.

Fig 4.2 Athena query showing log information from when I opened the customer-data.csv file.

Task 4.2: Use CloudWatch Logs to monitor secure logs


On EncryptedInstance, I installed the CloudWatch agent and the collectd Linux
daemon, configured the agent to collect the /var/log/secure file, and started it. Then, from
Cloud9, I SSH'd in successfully as ec2-user and again with an invalid user, and saw both
"Accepted publickey" and "Invalid user" entries in the CloudWatch log group. This proved
that the CloudWatch agent is working correctly and that the secure logs are being shipped
to CloudWatch Logs.
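The agent configuration that ships /var/log/secure is a small JSON file; a sketch of the setup on Amazon Linux 2 (the config file name is my own choice, and the collectd install is omitted):

sudo yum install -y amazon-cloudwatch-agent
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/secure-logs.json <<'EOF'
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [{
          "file_path": "/var/log/secure",
          "log_group_name": "EncryptedInstanceSecureLogs",
          "log_stream_name": "{instance_id}"
        }]
      }
    }
  }
}
EOF
# Load the config and (re)start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/secure-logs.json -s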

Fig 4.3 Installed the Linux daemon (collectd) and the CloudWatch agent on the EncryptedInstance.

Fig 4.4 The unsuccessful login was captured in the log.

Task 4.3: Create a CloudWatch alarm to send notifications for security incidents
In the EncryptedInstanceSecureLogs CloudWatch log group, I added a metric
filter for "Invalid user". I created an alarm that fires whenever the NotValidUsers metric is
greater than or equal to 5 within 24 hours, publishing to an SNS email topic. Triggering
multiple bad SSH logins caused the alarm to go into the "In alarm" state and send me an email,
completing the alert setup.
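The filter and alarm as CLI calls; the metric namespace, alarm name, and SNS topic ARN are placeholders:

aws logs put-metric-filter \
    --log-group-name EncryptedInstanceSecureLogs \
    --filter-name InvalidUserFilter \
    --filter-pattern '"Invalid user"' \
    --metric-transformations metricName=NotValidUsers,metricNamespace=SSHMonitoring,metricValue=1
# Alarm when 5 or more invalid-user events occur within one day
aws cloudwatch put-metric-alarm \
    --alarm-name InvalidSSHUserAlarm \
    --namespace SSHMonitoring --metric-name NotValidUsers \
    --statistic Sum --period 86400 --evaluation-periods 1 \
    --threshold 5 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:SecurityAlerts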

Fig 4.5 Got an email regarding five unsuccessful login attempts.

Task 4.4: Configure AWS Config to assess security settings and remediate
the configuration of AWS resources
I enabled AWS Config to record all resources, added the AWS managed rule
named s3-bucket-logging-enabled, and saw my test bucket flagged as noncompliant. I
then walked through the AWS Config console to manually remediate the issue by
enabling server access logging on that bucket.
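Adding the managed rule and checking compliance could also be scripted; a sketch:

aws configservice put-config-rule --config-rule '{
    "ConfigRuleName": "s3-bucket-logging-enabled",
    "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_LOGGING_ENABLED"}
}'
# Check which resources the rule currently flags
aws configservice describe-compliance-by-config-rule --config-rule-names s3-bucket-logging-enabled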

Fig 4.6 AWS config is set up and a rule is added for S3 bucket logging.

Fig 4.7 Manual remediation action is set up in AWS config.

Fig 4.8 The compliance bucket is remediated and is now compliant.

Reflection
Over the course of this project, I deepened my understanding of AWS security
features such as fine-grained S3 bucket policies, versioning, and inventory, as well as
VPC flow logs, network ACLs, and AWS Network Firewall. I learned how to work with
KMS key creation, envelope encryption, and Secrets Manager integration, and how to use
CloudTrail, CloudWatch alarms, and AWS Config for continuous monitoring. One of the
challenges I faced was writing the JSON for the bucket policy and the Athena queries.

Certificate

