DevOps Project
Before commencing the project, make sure you have a basic understanding of the following
topics, as they will simplify the implementation process.
PREREQUISITES:
1) AWS account
2) GitLab account
✓ Log in at https://gitlab.com
✓ You can sign in via GitHub/Gmail
✓ Verify your email and phone
✓ Fill out the questionnaire
✓ Provide a group name and project name of your choice
3) Terraform Installed
Check out the official website to install Terraform: https://developer.hashicorp.com/terraform/install
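On Ubuntu/Debian, the installation (following HashiCorp's apt repository instructions; verify the commands against the official docs for your OS) looks roughly like this:
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
terraform -version (verify the installation)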
Navigate to the IAM dashboard on AWS, then select "Users." Enter the username and proceed to
the next step
Assign permissions by attaching policies directly, opting for "AdministratorAccess", and then
create the user.
Within the user settings, locate "Create access key," and choose the command line interface
(CLI) option to generate an access key.
Upon creation, you can view or download the access key and secret access key either from the
console or via CSV download.
Now go to your terminal and follow the steps below:
sudo apt install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws configure (enter the access key ID and secret access key created above)
cat ~/.aws/config
cat ~/.aws/credentials
aws iam list-users (to list all IAM users in an AWS account)
Let’s begin with the project. This project is divided into two parts.
Part1: Write Terraform code -> Run Terraform commands -> Provision resources
Here, we write Terraform code, run Terraform commands, and create the infrastructure manually to
ensure everything works fine before automating.
Part2: Set up a CI/CD pipeline to automate the infrastructure deployment tasks performed manually in Part 1.
Step2: We will start writing our Terraform code in the “cicdtf” folder. The first step in writing
Terraform code is to define a provider. To do this, we will create a file called provider.tf with
the following content:
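A minimal provider.tf might look like the following; the region and version constraint are assumptions, so adjust them to your setup:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # assumption: pin to the provider major version you use
    }
  }
}

provider "aws" {
  region = "us-east-1"   # matches the us-east-1a subnet used later in this project
}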
We will be deploying a VPC, a security group, a subnet and an EC2 instance as part of the initial phase.
Files:
main.tf: Configures EC2 instance details, including AMI, instance type, and security groups.
variables.tf: Defines variables needed for EC2 instance customization.
outputs.tf: Outputs instance details like public IP, instance ID, etc.
Snap of folder structure:
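Since the original snap is not reproduced here, the structure reconstructed from the files described in this guide is roughly:
cicdtf/
├── provider.tf
├── main.tf (root module: wires the vpc and web modules together)
├── backend.tf (added in Step 8)
├── .gitignore
├── .gitlab-ci.yml (added in Part 2)
├── vpc/
│   ├── main.tf
│   └── outputs.tf
└── web/
    ├── main.tf
    ├── variables.tf
    └── outputs.tf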
The below Terraform script (main.tf) sets up an AWS Virtual Private Cloud (VPC) with a CIDR block of
10.0.0.0/16, enabling DNS support and hostnames. It creates a subnet (10.0.1.0/24) in us-east-1a with
public IP mapping. Additionally, it establishes a security group allowing inbound SSH (port 22) traffic
from any IP address and permitting all outbound traffic from the instances within the VPC.
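A hedged sketch of the VPC module's main.tf matching that description (the resource names are illustrative):
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true   # public IP mapping for instances in this subnet
}

resource "aws_security_group" "web_sg" {
  vpc_id = aws_vpc.main.id

  ingress {
    description = "SSH from any IP address"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"   # all protocols: permit all outbound traffic
    cidr_blocks = ["0.0.0.0/0"]
  }
}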
To know more about modules and the different parameters used in this project, check out the official
Terraform documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs
Reference repositories:
https://gitlab.com/N4si/cicdtf
https://gitlab.com/Sakeena19/cicdtf
Step3:
We will create an EC2 instance in the web module and use the security group and subnet
defined in the VPC module. This demonstrates how to share values between different modules
in Terraform.
Main Module (Root Module): The main.tf file acts as the parent module.
Child Modules: The VPC and web modules are child modules.
To share values from one child module to another, we follow these steps (a sketch of the VPC module's outputs follows the list):
Define Outputs: Specify the values (e.g., subnet ID, security group ID) as outputs in the VPC module.
Use Variables: Reference these outputs as variables in the web module.
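In the VPC module, this amounts to an outputs.tf along these lines (the output names pb_sn and sg match how the root module references them below; the resource names follow the sketch above):
output "pb_sn" {
  value = aws_subnet.public.id   # subnet ID consumed by the web module
}

output "sg" {
  value = aws_security_group.web_sg.id   # security group ID consumed by the web module
}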
The script in the main.tf file of the web module is as follows:
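The original snap is not reproduced here; below is a hedged sketch using the same variable names. The AMI ID and instance type are placeholders, so replace them with values valid for your region:
# web/variables.tf
variable "sn" {
  description = "Subnet ID passed in from the VPC module"
}

variable "sg" {
  description = "Security group ID passed in from the VPC module"
}

# web/main.tf
resource "aws_instance" "web" {
  ami                    = "ami-0abcdef1234567890"   # placeholder: use a valid AMI for your region
  instance_type          = "t2.micro"                # assumption
  subnet_id              = var.sn
  vpc_security_group_ids = [var.sg]
}

# web/outputs.tf
output "public_ip" {
  value = aws_instance.web.public_ip   # instance details surfaced by the module
}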
Step6: Now, to start using these modules, we have to define both vpc and web in the root
module (main.tf) as shown below.
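Based on the descriptions that follow, the root main.tf looks like:
module "vpc" {
  source = "./vpc"
}

module "web" {
  source = "./web"
  sn     = module.vpc.pb_sn   # subnet ID output from the VPC module
  sg     = module.vpc.sg      # security group ID output from the VPC module
}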
source = "./vpc": Specifies the path to the VPC module directory. This imports the VPC module defined
in the ./vpc folder.
source = "./web": Specifies the path to the web module directory. This imports the EC2 module
defined in the ./web folder.
sn = module.vpc.pb_sn: Passes the subnet ID output (pb_sn) from the VPC module to the EC2
module, assigning it to the variable sn.
sg = module.vpc.sg: Passes the security group ID output (sg) from the VPC module to the EC2
module, assigning it to the variable sg.
Step7:
Now, to check whether the code is working fine, let’s run the Terraform commands. Make sure to connect AWS
with Terraform (using aws configure) before running, and save all the files if you haven’t already.
To initialize Terraform, use the “terraform init” command, which sets up everything necessary for Terraform
to manage your infrastructure, such as modules, plugins, and backend config, as defined in your
configuration files.
The “terraform plan” command creates an execution plan so you can see what changes Terraform will
make to your infrastructure without actually applying them.
In the snap shown below, the plan is going to create four resources: the VPC, EC2 instance, subnet, and
security group.
You can then run the “terraform apply -auto-approve” command, which executes the Terraform
plan without requiring interactive approval and proceeds with the deployment.
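Putting the commands together, the local workflow is:
terraform init (set up plugins, modules, and backend config)
terraform plan (preview the changes)
terraform apply -auto-approve (create the resources without an interactive prompt)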
When we run apply, a terraform.tfstate file will be created. Keeping this file on a local machine is not good
practice, so in later steps we will set up a backend that stores the state in S3 and uses DynamoDB for state locking.
The apply will also create the VPC, subnet, security group, and EC2 instance, which can be verified in your AWS console.
Now that the code is working fine locally, we'll configure a backend on S3, push the code to GitLab, and
proceed with the second part of the project: setting up a CI/CD pipeline to automate the infrastructure
deployment tasks we previously performed manually.
Before this, delete everything using “terraform destroy -auto-approve” to proceed with automation.
Step8: Set up a backend using S3 and DynamoDB.
Follow the video or documentation below for a complete walkthrough of setting up the S3
bucket and DynamoDB table.
https://developer.hashicorp.com/terraform/language/settings/backends/s3
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket
https://youtu.be/o04xfWEouKM?si=OGNj1c9R2iqe9TOM
Once the code has been written, run the usual Terraform commands (init, plan, apply) to create the S3 bucket and DynamoDB
table.
This configuration will create an S3 bucket with versioning and server-side encryption enabled, as well as
a DynamoDB table named state-lock with on-demand billing and a string primary key LockID (the exact key name the S3 backend expects for state locking).
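A hedged sketch of that configuration (the bucket name is a placeholder; S3 bucket names must be globally unique):
resource "aws_s3_bucket" "state" {
  bucket = "my-terraform-state-bucket"   # placeholder: choose a globally unique name
}

resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = aws_s3_bucket.state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_dynamodb_table" "state_lock" {
  name         = "state-lock"
  billing_mode = "PAY_PER_REQUEST"   # on-demand billing
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"   # string primary key
  }
}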
After applying the changes, you can verify in your AWS console whether the S3 bucket and DynamoDB table
were created.
Now create a backend.tf file which will have your bucket details and DynamoDB table.
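A sketch of backend.tf, assuming the bucket and table from the previous step (in the CI pipeline, these values can instead be supplied via terraform init -backend-config=tfstate.config, as referenced later):
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # placeholder: your bucket name
    key            = "cicdtf/terraform.tfstate"    # assumption: path of the state object
    region         = "us-east-1"
    dynamodb_table = "state-lock"
    encrypt        = true
  }
}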
To track only the necessary files and ignore the rest, create a .gitignore file; a standard Terraform .gitignore can be found in GitHub's github/gitignore repository.
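A typical Terraform .gitignore looks like this (a minimal sketch):
# Local .terraform directories (providers, module cache)
**/.terraform/*
# State files must never be committed
*.tfstate
*.tfstate.*
# Crash logs
crash.log
crash.*.log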
The next step is to create a branch called "dev" because we cannot directly push our code to the main
branch. This allows us to make changes and test them safely before merging into the main branch, which
is best practice.
To create a branch, use “git checkout -b dev”, which creates the branch and switches to it in one step.
git add . (adds all changes from the current working directory to the staging area)
git commit -m “initial commit” (commits the staged changes to local repo)
git push -u origin dev (pushes code from local to remote repo i.e., GitLab in the branch named dev)
The dev branch should be created in your GitLab repo from which you can create merge request to
merge from dev to main.
After merging you can view your code available in main branch.
Now that the code is ready, let’s write a CI/CD pipeline script in GitLab.
The pipeline configuration file must be named “.gitlab-ci.yml” for GitLab to recognize it as the CI/CD
configuration file. This naming convention ensures that GitLab picks up and processes the
configuration defined within.
The main purpose of defining this file is to automate the Terraform commands so that whenever a person
makes any change to the infrastructure code, the pipeline triggers automatically.
As it is not best practice to hardcode the AWS secret and access keys in code, variables can be created
to store the access key and secret access key in your GitLab repository.
Navigate to your project repo -> Settings -> CI/CD -> Variables -> Add variable, and add variables for your
access key and secret access key.
Once the above changes are done, the pipeline will start triggering automatically, executing all the steps we
scripted in the .gitlab-ci.yml file.
This GitLab CI/CD pipeline script is designed to automate the deployment and management of
infrastructure using Terraform. The script uses a Docker image that has Terraform installed and sets
environment variables for AWS credentials and a GitLab token. It also caches Terraform plugins and
modules to improve efficiency. The pipeline is divided into four stages: validate, plan, apply, and
destroy.
In the validate stage, the script checks if the Terraform configuration files are correct. The plan stage
then generates a Terraform execution plan and saves it as an artifact called planfile. The apply stage
uses this plan to create or update the infrastructure, but this stage must be triggered manually to
execute. Similarly, the destroy stage, which is also manually triggered, tears down the Terraform-managed
resources.
Before running these stages, the script outputs the Terraform version and initializes Terraform with a
backend configuration specified in a tfstate.config file. By organizing the pipeline in this way, the script
ensures that infrastructure changes are validated, planned, and applied in a controlled and orderly
manner, with the option to manually control the application and destruction of infrastructure changes.
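A hedged reconstruction of the .gitlab-ci.yml described above; the image tag, cache paths, and token variable name are assumptions:
image:
  name: hashicorp/terraform:light
  entrypoint: [""]   # override the image's terraform entrypoint so GitLab can run shell commands

variables:
  AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}         # defined under Settings -> CI/CD -> Variables
  AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
  GITLAB_TOKEN: ${GITLAB_TOKEN}                   # assumption: the GitLab token mentioned above

cache:
  paths:
    - .terraform   # cache Terraform plugins and modules between jobs

stages:
  - validate
  - plan
  - apply
  - destroy

before_script:
  - terraform --version
  - terraform init -backend-config=tfstate.config

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=planfile
  artifacts:
    paths:
      - planfile   # saved plan consumed by the apply stage

apply:
  stage: apply
  script:
    - terraform apply -input=false planfile
  when: manual   # requires manual trigger, per the description above

destroy:
  stage: destroy
  script:
    - terraform destroy -auto-approve
  when: manual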
Whenever the pipeline executes, the validate and plan stages run automatically, while the apply and
destroy stages require manual execution, as defined in the script. This approach follows industry best
practices, allowing verification of changes and manual approval before they are applied or destroyed.