Group 6 – Milestone #3
Date: 03/05/2025
Introduction – Asif Sheriff Ahamadulla Sheriff
York University is in the midst of a significant shift—moving from its traditional, on-premises
IT infrastructure to a more flexible, cloud-based environment. This change isn’t just about
adopting new technology—it’s about addressing the real, day-to-day struggles students, faculty,
and staff face with outdated systems. Over the years, the university has experienced growing
pains, especially during peak periods like registration and exams, where slow performance and
system downtime have become far too common.
To improve this, several key systems are being prioritized for migration:
● Student Information System (SIS)
● eClass / Moodle Learning Management System
● Human Resources Information System (HRIS)
● MyFile
By moving these systems to platforms like AWS and Azure, the university hopes to unlock
better performance, security, and scalability—making it easier to serve the campus community
anytime, anywhere.
Of course, transitions like this come with their own set of challenges. Key assumptions include:
● There will be a learning curve, especially for staff and instructors who aren’t familiar
with cloud tools.
● Not everyone will be eager to change, especially departments deeply tied to current
legacy systems.
● Moving sensitive data and making sure it integrates well with existing tools could raise
some privacy and reliability concerns.
● Finally, regulatory compliance—especially when it comes to student financial or health
data—will require careful planning and possibly extra configurations before going live.
These are realistic concerns, but by acknowledging them early, it's possible to plan proactively
and help ensure a smoother transition across the university.
Cloud Architecture Design, Tools, and Methodologies – Sudarsh Venkat Jeyaraman Rajesh
and Dharshan Madhavan
Revised Architectural Diagram to add traffic flows, subnet labels, and compliance tags:
1. Network Segmentation and Subnet Labeling:
The architecture is designed with a clear separation of public and private subnets across multiple
Availability Zones to ensure high availability and fault tolerance. Each subnet is explicitly
labeled to reflect its role in the infrastructure:
● Public subnets: Application Load Balancer, NAT Gateway, Bastion Host
● Private application subnets: EC2 application servers and CI/CD agents
● Private data subnets: Amazon RDS (Multi-AZ)
This subnet labeling supports clear zoning, enforces security boundaries, and simplifies
maintenance and scalability.
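As a hedged illustration of this labeling, the Terraform sketch below tags one subnet per role; the resource names, CIDR ranges, and Availability Zone are placeholders rather than values taken from the project, and each role would be repeated in a second Availability Zone in practice.

# Illustrative sketch only: CIDRs, names, and the AZ are placeholders.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "york-vpc" }
}

# Public subnet: ALB, NAT Gateway, Bastion Host
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "ca-central-1a"
  map_public_ip_on_launch = true
  tags                    = { Name = "public-a", Role = "public-edge" }
}

# Private application subnet: EC2 app servers and CI/CD agents
resource "aws_subnet" "app_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.11.0/24"
  availability_zone = "ca-central-1a"
  tags              = { Name = "app-a", Role = "private-app" }
}

# Private data subnet: RDS (Multi-AZ)
resource "aws_subnet" "data_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.21.0/24"
  availability_zone = "ca-central-1a"
  tags              = { Name = "data-a", Role = "private-data" }
}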
2. Traffic Flow Illustration
The diagram now includes traffic flow arrows representing the direction and nature of
communication between components. Key flows include:
● User → ALB (HTTPS): Incoming traffic from users is routed via the ALB to ensure
secure and load-balanced access to the application layer.
● ALB → EC2 (HTTP): Internal traffic routes requests from ALB to EC2 app servers
hosted in private subnets.
● EC2 → RDS (MySQL over TCP 3306): The application communicates with the
database using encrypted and restricted internal traffic.
● EC2 → Internet (via NAT Gateway): Outbound internet access is granted to EC2
instances using NAT, ensuring secure updates and API calls.
● Admin Access → Bastion Host (SSH 22) → EC2: Secure shell access is allowed only
through the Bastion Host in the public subnet.
Each traffic flow is marked with the relevant protocol and port number to enhance security
visibility.
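These flows translate directly into security group rules. The minimal Terraform sketch below (group names and the VPC reference are assumptions, not project values) encodes the User to ALB, ALB to EC2, and EC2 to RDS rules with the ports listed above:

# Hypothetical security groups mirroring the traffic flows above.
resource "aws_security_group" "alb" {
  name   = "alb-sg"
  vpc_id = aws_vpc.main.id

  ingress {                       # User -> ALB (HTTPS)
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = aws_vpc.main.id

  ingress {                       # ALB -> EC2 (HTTP)
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }
}

resource "aws_security_group" "db" {
  name   = "db-sg"
  vpc_id = aws_vpc.main.id

  ingress {                       # EC2 -> RDS (MySQL, TCP 3306)
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.app.id]
  }
}

Scoping the database ingress to the application security group, rather than to a CIDR range, keeps the RDS tier reachable only from the app servers even if subnet layouts change later.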
To meet compliance and security requirements, relevant AWS services are tagged and visually
identified in the diagram:
● Encryption at Rest:
○ RDS, EBS volumes, and S3 buckets use AWS KMS keys for secure data storage.
● Encryption in Transit:
○ TLS 1.2 is enforced for all incoming and inter-service communications via ALB.
● IAM & RBAC:
○ IAM roles and policies are tagged on each compute resource to reflect the
principle of least privilege.
● Audit & Logging:
○ AWS CloudTrail and AWS Config icons indicate continuous compliance
monitoring and activity tracking.
● Network Security:
○ Security group, NACL, AWS WAF, and AWS Shield icons mark the protection boundaries between network zones.
These compliance tags ensure the solution adheres to AWS Well-Architected Framework
principles—especially Security, Reliability, and Operational Excellence.
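One way to keep compliance tags like these consistent is to set them at the provider level so every resource inherits them. The sketch below is illustrative only; the tag keys and values, the region, and the audit-log bucket reference are assumptions rather than the project's actual tagging scheme.

# Sketch: propagating compliance tags to every resource via provider default_tags.
provider "aws" {
  region = "ca-central-1"

  default_tags {
    tags = {
      DataClassification = "Confidential"
      Compliance         = "PIPEDA-FIPPA"
      ManagedBy          = "Terraform"
    }
  }
}

# Audit logging referenced by the Audit & Logging tag.
# Assumes a log bucket with an appropriate CloudTrail bucket policy already exists.
resource "aws_cloudtrail" "main" {
  name                          = "york-audit-trail"
  s3_bucket_name                = aws_s3_bucket.audit_logs.id
  is_multi_region_trail         = true
  enable_log_file_validation    = true
  include_global_service_events = true
}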
To ensure logical network segmentation and avoid IP conflicts, dedicated CIDR blocks are
allocated to each subnet tier within the VPC (an illustrative layout is sketched below).
This CIDR structure supports a clean separation between internet-facing, application, and data
layers.
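The original allocation table is not reproduced here. The locals block below is a purely hypothetical layout (all ranges are assumed) showing how a tiered, conflict-free CIDR plan of this kind can be recorded in Terraform:

# Hypothetical CIDR plan; the actual ranges used by the project may differ.
locals {
  vpc_cidr = "10.0.0.0/16"

  subnet_cidrs = {
    public_a = "10.0.1.0/24"    # ALB, NAT Gateway, Bastion Host
    public_b = "10.0.2.0/24"
    app_a    = "10.0.11.0/24"   # EC2 application servers
    app_b    = "10.0.12.0/24"
    data_a   = "10.0.21.0/24"   # RDS (Multi-AZ)
    data_b   = "10.0.22.0/24"
  }
}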
Different EC2 instance types are used based on workload requirements to ensure both
performance and cost-efficiency:
Tier              Component         Instance Type   Justification
Web/Application   EC2 App Servers   t3.medium       Balanced performance and cost for the web layer
Bastion Host      Admin Access      t3.micro        Low usage; used only for SSH access
Auto Scaling Groups dynamically adjust the number of EC2 instances during traffic spikes,
ensuring optimal performance.
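A minimal sketch of that behaviour is shown below, assuming a t3.medium launch template and a CPU target-tracking policy; the AMI variable, subnet references, capacity limits, and target value are placeholders rather than the project's actual settings.

# Sketch of the web/app tier Auto Scaling setup.
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.app_ami_id     # hypothetical AMI variable
  instance_type = "t3.medium"
}

resource "aws_autoscaling_group" "app" {
  name                = "app-asg"
  min_size            = 2
  max_size            = 10
  desired_capacity    = 2
  vpc_zone_identifier = [aws_subnet.app_a.id, aws_subnet.app_b.id]  # app_b defined like app_a

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}

# Keeps average CPU near the target; instances are added during spikes and removed afterwards.
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.app.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 70
  }
}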
6. Encryption Standards
● ALB (HTTPS): encryption at rest N/A; TLS 1.2 in transit, terminated at the ALB with ACM-issued certificates.
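A hedged Terraform sketch of this control follows; the domain name and the ALB and target group references are assumptions, and DNS validation of the certificate is omitted for brevity.

# Sketch: ACM-issued certificate and an HTTPS listener pinned to a TLS 1.2 security policy.
resource "aws_acm_certificate" "app" {
  domain_name       = "portal.example.yorku.ca"   # hypothetical domain
  validation_method = "DNS"
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.app.arn              # hypothetical ALB
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS-1-2-2017-01"
  certificate_arn   = aws_acm_certificate.app.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}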
7. VPC Segmentation (Color-Coded Zones):
Public Subnet Zone (Color: Light Blue)
● Application Load Balancer
● NAT Gateway
● Bastion Host
Key Characteristics: internet-facing entry points with tightly controlled inbound access.
Private Subnet Zone
● EC2 Application Servers
● RDS (Multi-AZ) Databases
● CI/CD Agents
Key Characteristics: private subnets are isolated from direct internet access and host internal
compute and data workloads.
8. AWS Tools & Services:
Amazon VPC: Like a private neighborhood in the cloud where you control who can enter and what they can do.
Subnets (Public/Private): Like different buildings in your cloud neighborhood; public ones face the internet, and private ones are protected inside.
EC2 Instances: Virtual computers that run your apps or websites in the cloud.
Auto Scaling Groups (ASG): A service that automatically adds or removes virtual computers based on how busy your app is, helping save money and ensure performance.
Elastic Load Balancer (ALB): Like a smart receptionist, it evenly directs website traffic to different servers to keep things running smoothly.
Amazon RDS (Multi-AZ): A managed database service that stores your app's data and automatically backs it up in multiple places for safety.
Amazon S3: A secure storage space where you can keep files, images, and backups, like cloud-based USB drives.
AWS Lambda: A way to run small bits of code automatically without needing a full server, great for background tasks.
AWS CloudFormation: Like a blueprint that builds your cloud setup the same way every time, automatically.
Terraform: Another tool for automating cloud setup, but it works across different cloud providers, not just AWS.
AWS CodePipeline: Like an assembly line that automatically tests and delivers new features for your app.
AWS CodeDeploy: Handles the rollout of new app versions to servers safely and consistently.
AWS WAF: Protects your app from hackers and bad traffic, like a security gatekeeper.
AWS Shield: Automatically defends your website from attacks that try to flood it with traffic (DDoS attacks).
IAM: Controls who can access what in your cloud, like ID cards for users and systems.
AWS KMS: Manages the "keys" used to lock and unlock your data, keeping it secure.
AWS Config: Tracks changes to your cloud setup and checks if everything still follows your security rules.
AWS CloudTrail: Keeps a record of everything that happens in your cloud, like a security camera log.
AWS CloudWatch: Monitors your cloud setup's health and alerts you if something goes wrong.
AWS Secrets Manager: Safely stores passwords and other sensitive information instead of hardcoding them in apps.
Amazon Route 53: A smart address book that helps users connect to your website quickly and reliably.
AWS Global Accelerator: Makes sure users get the fastest connection to your app, no matter where in the world they are.
AWS DMS: Helps you move your existing database to AWS with minimal downtime.
NAT Gateway: Lets servers in private areas of your cloud reach the internet securely without exposing them.
Bastion Host: A secure entry point for admins to access private cloud servers, like a guarded door.
9. Migration Strategy:
Phase 1: Database Migration
Tasks:
1. Assess source databases – Determine size, dependencies, and engine type (MySQL,
PostgreSQL, etc.).
2. Choose AWS RDS or Aurora – Based on compatibility, performance, and cost.
3. Provision RDS instance (Multi-AZ) – Set up database in AWS with high availability.
4. Set up AWS DMS – Configure replication tasks and source/target endpoints (sketched after this list).
5. Perform schema conversion – If needed, use the AWS Schema Conversion Tool.
6. Migrate initial data snapshot – Bulk load existing data into the cloud database.
7. Enable continuous data replication – Keep changes in sync until the final cutover.
8. Test migrated database – Validate structure, indexes, and data integrity.
9. Configure security – Set up IAM roles, encryption (KMS), and security groups.
10. Finalize cutover – Stop writes to the old DB, do a final sync, and point apps to the new
DB.
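Steps 4 to 7 can be sketched roughly as follows in Terraform; the endpoint names, credential variables, replication instance, RDS reference, and table-mapping rule are illustrative assumptions, not the project's actual configuration.

# Hedged sketch of the DMS pieces: source/target endpoints plus a full-load-and-CDC task.
resource "aws_dms_endpoint" "source" {
  endpoint_id   = "onprem-mysql"
  endpoint_type = "source"
  engine_name   = "mysql"
  server_name   = "onprem-db.example.yorku.ca"   # hypothetical on-prem host
  port          = 3306
  username      = var.dms_source_user
  password      = var.dms_source_password
}

resource "aws_dms_endpoint" "target" {
  endpoint_id   = "rds-mysql"
  endpoint_type = "target"
  engine_name   = "mysql"
  server_name   = aws_db_instance.main.address
  port          = 3306
  username      = var.dms_target_user
  password      = var.dms_target_password
}

# Bulk-loads the initial snapshot, then keeps changes in sync until the final cutover.
resource "aws_dms_replication_task" "migrate" {
  replication_task_id      = "sis-db-migration"
  migration_type           = "full-load-and-cdc"
  replication_instance_arn = aws_dms_replication_instance.main.replication_instance_arn
  source_endpoint_arn      = aws_dms_endpoint.source.endpoint_arn
  target_endpoint_arn      = aws_dms_endpoint.target.endpoint_arn

  table_mappings = jsonencode({
    rules = [{
      "rule-type"      = "selection"
      "rule-id"        = "1"
      "rule-name"      = "include-all"
      "object-locator" = { "schema-name" = "%", "table-name" = "%" }
      "rule-action"    = "include"
    }]
  })
}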
Phase 2: Application and Compute Migration
Tasks:
1. Inventory existing servers and apps – Identify which VMs and containers to migrate.
2. Set up VPC, subnets, and routing – Ensure networking is in place.
3. Provision EC2 instances – Choose instance types and AMIs, attach to Auto Scaling.
4. Configure security groups/NACLs – Ensure network segmentation.
5. Deploy apps to EC2 or EKS – Use AWS SMS for lift-and-shift or containerize and
deploy to EKS.
6. Set up CI/CD pipeline (CodePipeline, CodeDeploy) for automated deployments.
7. Configure load balancers (ALB) – Route external/internal traffic appropriately (see the target group sketch after this list).
8. Connect to database – Update application configs to point to AWS RDS.
9. Implement monitoring (CloudWatch, CloudTrail) – Track app health and logs.
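For step 7, a minimal sketch of an ALB target group with health checks, attached to the application Auto Scaling group, might look like the following; the health-check path and the VPC/ASG references are assumptions carried over from the earlier sketches.

# Sketch: target group with health checks, registered with the app Auto Scaling group.
resource "aws_lb_target_group" "app" {
  name     = "app-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path                = "/healthz"     # hypothetical health endpoint
    healthy_threshold   = 3
    unhealthy_threshold = 2
    interval            = 30
  }
}

resource "aws_autoscaling_attachment" "app" {
  autoscaling_group_name = aws_autoscaling_group.app.name
  lb_target_group_arn    = aws_lb_target_group.app.arn
}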
Phase 3: Cutover and Validation
Goal: Switch traffic to the AWS environment and confirm full functionality.
Tasks:
9. Notify stakeholders – Declare successful migration.
10. Decommission legacy infrastructure – After stable cutover and rollback window.
● Monitor system logs and traffic post-migration
● Notify stakeholders and gather approvals
Security & Compliance
As part of York University’s cloud migration strategy to Amazon Web Services (AWS), the
security and compliance architecture was strengthened to ensure data privacy, regulatory
compliance, and system resilience. The following section outlines the implementation of key
security mechanisms and compliance strategies that align with Canadian data protection laws,
specifically PIPEDA (Personal Information Protection and Electronic Documents Act) and
FIPPA (Freedom of Information and Protection of Privacy Act).
To maintain secure and controlled access to AWS resources, the team implemented a robust
Identity and Access Management (IAM) system:
● Role-Based Access Control (RBAC): IAM roles were defined for users and services,
applying the principle of least privilege. Each role was associated with specific policies
that limited access strictly to required AWS resources.
● Multi-Factor Authentication (MFA): Enabled for all administrative and privileged
users, requiring a second factor, such as a mobile device token or hardware key, at
login.
● Federated Authentication (AWS SSO): Integrated with York University's Active
Directory to streamline access through existing credentials. This feature reduces the
overhead of managing separate AWS accounts.
● Temporary Credentials: The AWS Security Token Service (STS) was used to grant
temporary, limited-duration access, preventing long-term exposure to access keys.
● IAM Access Analyzer & AWS Config: Used to continuously monitor and validate
permissions, ensuring they are not overly permissive or violating internal security
policies.
IAM in AWS functions like a digital security office at the university: only authorized students,
faculty, or staff can enter certain buildings or access confidential files based on their role.
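As a hedged illustration of least privilege, the sketch below grants an EC2 instance role read-only access to a single bucket and nothing else; the role, policy, and bucket names are hypothetical, not the project's actual identifiers.

# Minimal least-privilege sketch: an EC2 instance role limited to reading one application bucket.
resource "aws_iam_role" "app_server" {
  name = "app-server-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "app_s3_read" {
  name = "app-s3-read-only"
  role = aws_iam_role.app_server.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::york-app-assets",      # hypothetical bucket
        "arn:aws:s3:::york-app-assets/*"
      ]
    }]
  })
}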
PIPEDA/FIPPA Alignment:
● Fine-grained IAM controls help secure student records from unauthorized access.
● Integration with SSO ensures secure and traceable access to systems.
Data Encryption and Protection
● Encryption at Rest:
○ Data in Amazon S3, RDS databases, and EBS volumes is encrypted using AWS
Key Management Service (KMS).
○ Customer-Managed Keys (CMKs) were created with strict permissions to allow
only authorized roles.
○ Key rotation policies were configured to automate periodic encryption key
changes.
● Encryption in Transit:
○ TLS 1.2+ was enforced across all services for secure communication.
○ AWS Certificate Manager (ACM) was used to manage SSL/TLS certificates
for all public-facing applications.
○ AWS PrivateLink was enabled for service-to-service communication, avoiding
exposure to the public internet.
● Backup Encryption:
○ All backups created using AWS Backup are encrypted.
○ Snapshots of RDS and EC2 volumes are stored in encrypted S3 buckets and
Glacier for long-term archival.
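A minimal Terraform sketch of these encryption-at-rest controls is shown below, assuming a MySQL RDS instance and a generic records bucket; identifiers, sizes, and credential variables are placeholders.

# Customer-managed key with automatic rotation.
resource "aws_kms_key" "data" {
  description         = "CMK for university data stores"
  enable_key_rotation = true
}

# Encrypted Multi-AZ RDS instance.
resource "aws_db_instance" "main" {
  identifier        = "york-app-db"
  engine            = "mysql"
  instance_class    = "db.t3.medium"
  allocated_storage = 100
  username          = var.db_admin_user
  password          = var.db_admin_password
  multi_az          = true
  storage_encrypted = true
  kms_key_id        = aws_kms_key.data.arn
}

# Default KMS encryption on an S3 bucket (bucket reference is hypothetical).
resource "aws_s3_bucket_server_side_encryption_configuration" "records" {
  bucket = aws_s3_bucket.records.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.data.arn
    }
  }
}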
PIPEDA/FIPPA Alignment:
● Full data encryption ensures compliance with regulations for data confidentiality, both in
storage and during transfer.
● Logging and key rotation policies support audit and data integrity requirements.
The AWS environment was aligned to meet strict Canadian data protection standards. The
following compliance controls were enforced:
Control                 Purpose                                        AWS Services
Audit Logging           Record all changes and access attempts         AWS CloudTrail, CloudWatch Logs
Compliance Monitoring   Enforce best practices and track violations    AWS Config, Security Hub
Threat Detection        Identify unusual activities and attacks        GuardDuty, VPC Flow Logs
● IAM Policy Reviews: Scheduled to detect privilege escalation risks and inactive user
roles.
Monitoring Tasks:
● Weekly:
○ GuardDuty threat intelligence alerts
○ AWS Config non-compliance rules (e.g., open security groups)
○ IAM Access Analyzer findings on overly permissive access
● Monthly:
○ CloudTrail audit log analysis
○ Role review to remove unused permissions
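A small sketch of the continuous-monitoring plumbing follows; it assumes an AWS Config configuration recorder is already enabled and uses one AWS managed rule purely as an example.

# Enable GuardDuty threat detection for the account.
resource "aws_guardduty_detector" "main" {
  enable = true
}

# Example AWS Config managed rule flagging publicly readable S3 buckets.
resource "aws_config_config_rule" "no_public_s3_read" {
  name = "s3-bucket-public-read-prohibited"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_PUBLIC_READ_PROHIBITED"
  }
}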
A multi-tiered Virtual Private Cloud (VPC) approach was used to implement a secure network
design.
● Public Subnets: Host Application Load Balancers (ALBs), NAT Gateways, and Bastion
Hosts. These are accessible from the internet and tightly controlled.
● Private Subnets: Used for application servers and databases (EC2, RDS), inaccessible
from the public internet.
● Security Groups: Define inbound/outbound rules for each instance, such as allowing
HTTPS but blocking all other traffic.
● NACLs: Provide additional subnet-level restrictions to prevent unauthorized lateral
movement.
● Transit Gateway & VPN: Ensure encrypted connectivity between AWS and on-campus
systems.
● AWS WAF: Filters traffic to web applications, blocking malicious input like SQL
injection or cross-site scripting.
● AWS Shield Advanced: Protects the infrastructure from volumetric and targeted DDoS
attacks.
● Amazon GuardDuty: Analyzes DNS logs, VPC traffic, and CloudTrail logs for threat
detection.
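As an illustrative sketch (the web ACL name and the ALB reference are assumptions), AWS WAF can be attached to the ALB with an AWS managed rule group that covers common SQL injection and cross-site scripting patterns:

resource "aws_wafv2_web_acl" "app" {
  name  = "york-app-waf"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "aws-common-rules"
    priority = 1

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesCommonRuleSet"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "aws-common-rules"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "york-app-waf"
    sampled_requests_enabled   = true
  }
}

# Attach the web ACL to the application load balancer (ALB reference is hypothetical).
resource "aws_wafv2_web_acl_association" "alb" {
  resource_arn = aws_lb.app.arn
  web_acl_arn  = aws_wafv2_web_acl.app.arn
}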
PIPEDA/FIPPA Alignment:
● Segregated access ensures only appropriate users can reach sensitive systems.
● Real-time monitoring helps detect and mitigate breaches immediately.
● End-to-end encrypted communication supports data protection laws.
This security-first foundation enables York University to operate with confidence in a digital
environment that prioritizes privacy, resilience, and auditability while maintaining compliance
with PIPEDA and FIPPA regulations.
CI/CD & Implementation Steps – Amitha Sivaji
1. Implementation Overview
● VPC Configuration
○ Created a custom VPC with a CIDR block.
○ Set up public and private subnets across multiple AZs.
○ Configured NAT Gateway for outbound traffic from private subnets.
● Networking & DNS
○ Added route tables for subnet routing.
○ Integrated Amazon Route 53 for DNS management and disaster recovery failover.
● Compute Layer
○ Launched EC2 instances into private subnets.
○ Configured Elastic Load Balancers (ALB/NLB) for high availability.
○ Cluster provisioning via Terraform, configuration using Ansible.
● Access Control
○ Defined IAM roles & policies based on least privilege.
○ Set up AWS WAF and AWS Shield Advanced to mitigate DDoS and web
threats.
● Terraform
○ Defined infrastructure (VPCs, subnets, EC2, EKS) using IaC for repeatability.
● Ansible
○ Automated installation and configuration of Kubernetes components, app servers,
and security policies.
● AWS Lambda
○ Used for event-driven background automation, such as clearing temporary storage during incidents.
Instance Optimization
● Reserved Instances (RI): Used for long-term, predictable workloads (72% savings).
● Spot Instances: Used for CI builds and stateless services (90% cost reduction); see the launch-template sketch at the end of this section.
● Compute Savings Plans: Enabled for flexible cost savings across EC2, Lambda, and Fargate.
Storage Optimization
● S3 lifecycle policies move aging data to lower-cost storage classes (see Savings Implementation below).
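The Spot usage noted under Instance Optimization can be expressed in a launch template, sketched below with an assumed AMI variable and instance type; this is an illustration, not the project's actual CI configuration.

# Sketch: launch template requesting Spot capacity for CI build agents.
resource "aws_launch_template" "ci_spot" {
  name_prefix   = "ci-spot-"
  image_id      = var.ci_ami_id        # hypothetical AMI variable
  instance_type = "t3.large"

  instance_market_options {
    market_type = "spot"
    spot_options {
      spot_instance_type = "one-time"  # interruptible, suitable for stateless builds
    }
  }
}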
Pre-Migration Planning
● Used AWS DMS to migrate databases from on-prem to Amazon RDS with near-zero
downtime.
Application Containerization
A critical priority was to ensure continuous service availability during the go-live.
● Blue-Green Deployment:
○ Created two production environments (blue: current, green: new).
○ Deployed updates to green and ran smoke tests.
○ Switched traffic using Elastic Load Balancer and Route 53 once verified.
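A hedged sketch of the weighted switch is shown below; the hosted zone variable, record name, and blue/green load balancer references are placeholders. Shifting the weights gradually (for example 90/10, then 50/50, then 0/100) completes the cutover without downtime.

# Blue-green cutover sketch: weighted Route 53 alias records for the two environments.
resource "aws_route53_record" "blue" {
  zone_id        = var.zone_id
  name           = "portal.example.yorku.ca"   # hypothetical record name
  type           = "A"
  set_identifier = "blue"

  weighted_routing_policy {
    weight = 90
  }

  alias {
    name                   = aws_lb.blue.dns_name
    zone_id                = aws_lb.blue.zone_id
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "green" {
  zone_id        = var.zone_id
  name           = "portal.example.yorku.ca"
  type           = "A"
  set_identifier = "green"

  weighted_routing_policy {
    weight = 10
  }

  alias {
    name                   = aws_lb.green.dns_name
    zone_id                = aws_lb.green.zone_id
    evaluate_target_health = true
  }
}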
7. Post-Migration Validation
● Performance Testing: Conducted load tests and tracked performance metrics with CloudWatch.
● Disaster Recovery Testing: Simulated failover scenarios with Route 53 health checks and
AWS Backup restores.
Monitoring, Testing, & Optimization – Mohana Sarvani Palla
Established a multi-layered monitoring system tracking 47+ metrics across compute, storage, and
networking layers. The CloudWatch dashboard was configured with custom widgets to visualize:
● Real-Time Compute Metrics: CPU utilization (threshold: 80% for 5+ minutes),
memory pressure (alert at 85%), and disk I/O bottlenecks across 68 EC2 instances. Auto
Scaling policies were fine-tuned to add t3.large instances when CPU sustained >80% for
5 minutes and scale down when <40% for 30 minutes.
● Database Performance Monitoring: Amazon RDS read/write latency (alert threshold:
500 ms), connection count (max: 1,500 concurrent connections), and replication lag
(critical at >5 seconds). Custom alarms notified the database team via SNS when
anomalies were detected.
● Application Health Checks: Elastic Load Balancers were monitored for HTTP 5xx
errors (critical if >1% of requests), healthy host counts (minimum 3 instances per AZ),
and request latency (threshold: 2 seconds at 95th percentile). Failed health checks
automatically trigger AWS Systems Manager documents to restart unresponsive services.
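As one concrete example of these alarms, the sketch below wires the 500 ms RDS read-latency threshold to an SNS topic; the topic name and the database reference are assumptions.

# Notify the database team when RDS read latency stays above 500 ms.
resource "aws_sns_topic" "db_alerts" {
  name = "db-team-alerts"
}

resource "aws_cloudwatch_metric_alarm" "rds_read_latency" {
  alarm_name          = "rds-read-latency-high"
  namespace           = "AWS/RDS"
  metric_name         = "ReadLatency"
  statistic           = "Average"
  period              = 60
  evaluation_periods  = 5
  threshold           = 0.5            # seconds, i.e. 500 ms
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [aws_sns_topic.db_alerts.arn]

  dimensions = {
    DBInstanceIdentifier = aws_db_instance.main.identifier
  }
}

Email or chat subscriptions would then be attached to the SNS topic so on-call staff receive the notification, as shown in the figure below.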
Fig: AWS SNS for CloudWatch alarm email notifications.
Incident Response:
On March 15, 2025, at 9:15 AM, during final exam week, the Moodle Learning Management
System experienced the following:
● CPU spike to 92% across front-end instances
● Concurrent user sessions peaked at 8,742
● API latency increased to 4.3 seconds
Automated Response:
1. CloudWatch triggered Auto Scaling to add 6 t3.large instances by 9:18 AM
2. Lambda function cleared temporary storage (recovered 15% disk space)
3. Route 53 weighted routing shifted 20% of the traffic to the standby region
Resolution: The system stabilized by 9:22 AM with latency reduced to 1.2 seconds. Total
downtime: 0 minutes.
Performance Benchmarking
Faculty Feedback (52 participants across 6 departments):
● 89% reported "improved" or "significantly improved" system reliability
● Common qualitative feedback included: "Grade submission now completes in under 10
seconds during peak hours compared to previous 2-3 minute delays."
Savings Implementation
● Lifecycle policies archived 23TB of research data to Glacier Deep Archive
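A minimal sketch of such a lifecycle rule follows; the bucket reference and the 180-day transition are assumptions rather than the project's actual policy.

# Archive aging research data to Glacier Deep Archive after 180 days.
resource "aws_s3_bucket_lifecycle_configuration" "research_archive" {
  bucket = aws_s3_bucket.research.id   # hypothetical bucket

  rule {
    id     = "archive-old-research-data"
    status = "Enabled"

    filter {}                          # applies to all objects in the bucket

    transition {
      days          = 180
      storage_class = "DEEP_ARCHIVE"
    }
  }
}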
2025 Plan
Q2 Priorities:
● Implement predictive scaling using Amazon Forecast
● Migrate the remaining 23 on-prem backups to S3 Glacier
● Conduct AWS Well-Architected Review
Q3 Initiatives:
● Review AWS Trusted Advisor cost-optimization checks weekly to catch cost anomalies
● Train 100% of the IT staff on CloudWatch Insights and X-Ray
● Implement synthetic monitoring for the student portal
Achievements
During the implementation phase, several key milestones were reached that significantly
enhanced system performance, security, and efficiency across York University’s cloud
environment:
● AWS S3 with lifecycle policies was introduced for archiving old student records, cutting
long-term storage costs by 25%.
Challenges
● Some departments were slow to adopt new workflows due to limited cloud training,
leading to delays in the full-scale migration of the HRIS and MyFile systems.
● Legacy applications posed compatibility issues, especially those developed in-house
without modern APIs.
● Certain security tools like AWS GuardDuty were underutilized in the initial setup,
leaving minor blind spots in threat detection.
● Manual access reviews delayed user provisioning, highlighting a need for automation in
IAM auditing.
Future Recommendations
To build on the progress made and ensure long-term success, the following actions are
recommended:
● Expand cloud training for staff and faculty, covering core cloud tools, secure data
handling in the cloud, and incident response.
● Automate user provisioning and deprovisioning using AWS SSO or third-party
identity providers to streamline access management.
By focusing on regular reviews, automation, and user readiness, York University can continue
strengthening its cloud infrastructure while ensuring it remains agile, secure, and cost-effective.
Conclusion – Asif Sheriff Ahamadulla Sheriff
The cloud migration initiative delivers measurable improvements to York University’s digital
infrastructure. By modernizing critical systems such as the Student Information System, eClass,
and HRIS, the university enhances system reliability, reduces downtime, and optimizes resource
usage. The integration of scalable cloud services, automated monitoring, and robust identity
management strengthens data protection while improving operational efficiency.
These changes directly support a better student experience, providing faster access to academic
tools and more reliable platforms for learning and administration. Cost savings through resource
optimization and storage management contribute to York’s long-term sustainability. The
transition positions the university to respond effectively to the needs of its academic community,
ensuring secure, accessible, and modern digital services across campus.