
DevOps Project to automate infrastructure on AWS using Terraform and GitLab CI/CD

Before commencing the project, make sure you have a basic understanding of the following
topics, as they will simplify the implementation process.

Basic Terraform Knowledge (resource)
Understanding of CI/CD (resource)
GitLab CI Knowledge (resource)

PREREQUISITES:

1) AWS account creation

Check out the official site to create an AWS account Here

2) GitLab account
✓ Log in to https://gitlab.com
✓ You can sign in via GitHub/Gmail
✓ Verify your email and phone
✓ Fill out the questionnaire
✓ Provide a group name and project name of your choice

3) Terraform Installed
Check out the official website to install Terraform Here

4) AWS CLI Installed

Navigate to the IAM dashboard on AWS, then select "Users." Enter a username and proceed to the next step.
Assign permissions by attaching policies directly, choosing "AdministratorAccess," and then create the user.

Within the user settings, locate "Create access key" and choose the command line interface (CLI) option to generate an access key.

Upon creation, you can view or download the access key ID and secret access key either from the console or via CSV download.
Now go to your terminal and follow the steps below:
sudo apt install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws configure (enter the access key ID and secret access key created above)
cat ~/.aws/config
cat ~/.aws/credentials
aws iam list-users (lists all IAM users in the AWS account)
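
After running aws configure, the two files should look roughly like the following (the values shown are placeholders, not real credentials; us-east-1 matches the region used later in this project):

~/.aws/config:
[default]
region = us-east-1
output = json

~/.aws/credentials:
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx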

5) Code editor (VS Code)

Download it from Here

Let’s begin with the project. This project is divided into two parts.
Part1:

Write Terraform code → Run Terraform commands → Provision resources

Here, we write the Terraform code, run Terraform commands, and create the infrastructure manually to ensure everything works fine before automating.

Part2:

Create a CI/CD pipeline script on GitLab to automate Terraform resource creation.


Step1: Create a new folder named “cicdtf” and open it in VS Code to start writing the code.

Step2: We will start writing our Terraform code in the “cicdtf” folder. The first step in writing
Terraform code is to define a provider. To do this, we will create a file called provider.tf with
the following content:
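
The snap of provider.tf is not reproduced here; a minimal sketch of what it typically contains is shown below (the provider version constraint is an assumption, and us-east-1 matches the availability zone used later in this project):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}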

We will be deploying a VPC, a security group, a subnet and an EC2 instance as part of the initial phase.

The folder structure is as follows:

1. VPC Module (vpc folder):

Files:

main.tf: Defines resources like the VPC, subnet, and security group.
variables.tf: Declares input variables for customization.
outputs.tf: Specifies outputs like the VPC ID, subnet ID, security group ID, etc.

2. EC2 Module (web folder):

Files:

main.tf: Configures EC2 instance details, including AMI, instance type, and security groups.
variables.tf: Defines variables needed for EC2 instance customization.
outputs.tf: Outputs instance details like public IP, instance ID, etc.
Snap of folder structure:
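
Rendered as text in place of the snap, the structure described above looks like this:

cicdtf/
├── provider.tf
├── main.tf
├── vpc/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── web/
    ├── main.tf
    ├── variables.tf
    └── outputs.tf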

Let’s start with defining the VPC.

The below Terraform script (main.tf) sets up an AWS Virtual Private Cloud (VPC) with a CIDR block of
10.0.0.0/16, enabling DNS support and hostnames. It creates a subnet (10.0.1.0/24) in us-east-1a with
public IP mapping. Additionally, it establishes a security group allowing inbound SSH (port 22) traffic
from any IP address and permitting all outbound traffic from the instances within the VPC.
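
The script itself appears as a snap in the original; a sketch that matches the description is shown below. The resource names pb_sn and sg are taken from the outputs referenced later in this document, while the VPC resource name main is an assumption:

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "pb_sn" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_security_group" "sg" {
  vpc_id = aws_vpc.main.id

  # Inbound SSH (port 22) from any IP address
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # All outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}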

To know more about modules and the different parameters used in this project, check out the official Terraform documentation Here.

Make use of the repositories below to check out the code.

https://gitlab.com/N4si/cicdtf

https://gitlab.com/Sakeena19/cicdtf
Step3:

We will create an EC2 instance in the web module and use the security group and subnet defined in the VPC module. This demonstrates how to share values between different modules in Terraform.

Main Module (Root Module): The main.tf file acts as the parent module.
Child Modules: The VPC and web modules are child modules.

To share values from one child module to another, we follow these steps:

Define Outputs: Specify the values (e.g., subnet ID, security group ID) as outputs in the VPC module.
Use Variables: Reference these outputs as variables in the web module.
The script in the main.tf file of the web module is as follows:
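
The original shows this as a snap; a sketch consistent with the variables defined in the next steps (sn and sg) follows. The AMI ID is a placeholder and the instance type is an assumption:

resource "aws_instance" "web" {
  ami                    = "ami-xxxxxxxxxxxxxxxxx" # placeholder: use a valid AMI for your region
  instance_type          = "t2.micro"              # assumption
  subnet_id              = var.sn                  # subnet ID from the VPC module
  vpc_security_group_ids = [var.sg]                # security group ID from the VPC module

  tags = {
    Name = "web"
  }
}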

Step4: Define the outputs.tf file in the VPC module.

output "pb_sn": Defines an output variable named pb_sn.


value = aws_subnet.pb_sn.id: This line assigns the ID of the subnet resource (aws_subnet.pb_sn) to the
output variable. This allows other modules to access the subnet ID. Similar for security group as well.
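
A sketch of vpc/outputs.tf based on this description (the sg output name matches the module.vpc.sg reference used in Step6):

output "pb_sn" {
  value = aws_subnet.pb_sn.id
}

output "sg" {
  value = aws_security_group.sg.id
}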

Step5: Define the variables.tf file in the web module.


These variables are used to pass the security group ID and subnet ID from the VPC module to the web
module.
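
A sketch of web/variables.tf (the descriptions are illustrative):

variable "sn" {
  description = "Subnet ID passed from the VPC module"
  type        = string
}

variable "sg" {
  description = "Security group ID passed from the VPC module"
  type        = string
}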

Step6: Now, to start using these modules, we have to define both vpc and web in the root module (main.tf) as shown below.

source = "./vpc": Specifies the path to the VPC module directory. This imports the VPC module defined
in the ./vpc folder.
source = "./web": Specifies the path to the web module directory. This imports the EC2 module
defined in the ./web folder.
sn = module.vpc.pb_sn: Passes the subnet ID output (pb_sn) from the VPC module to the EC2
module, assigning it to the variable sn.
sg = module.vpc.sg: Passes the security group ID output (sg) from the VPC module to the EC2
module, assigning it to the variable sg.
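
Assembled from the lines just described, the root main.tf looks like this:

module "vpc" {
  source = "./vpc"
}

module "web" {
  source = "./web"
  sn     = module.vpc.pb_sn
  sg     = module.vpc.sg
}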

Step7:

Now, to check whether the code is working fine, let's run the Terraform commands. Make sure AWS is connected to Terraform (using aws configure) and that all files are saved before running.

To initialize Terraform, use the “terraform init” command, which sets up everything necessary for Terraform to manage your infrastructure, such as modules, plugins, and backend configuration, as defined in your configuration files.

To check if our code is valid, use the “terraform validate” command.

The “terraform plan” command creates an execution plan to show what changes Terraform will make to your infrastructure without actually applying them.

As shown in the snap below, the plan is going to create 4 components: a VPC, an EC2 instance, a subnet, and a security group.
You can then run the “terraform apply -auto-approve” command, which executes the Terraform plan without requiring interactive confirmation and proceeds with the deployment.
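
In summary, the local workflow is:

terraform init                  # set up plugins, modules, and backend
terraform validate              # check that the configuration is valid
terraform plan                  # preview the changes
terraform apply -auto-approve   # create the resources without prompting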
When we run apply, a terraform.tfstate file is created. Keeping this file on a local machine is not good practice, so in later steps we will set up a backend that stores the state in an S3 bucket and uses a DynamoDB table for state locking.

Apply will also create the VPC, subnet, security group, and EC2 instance, which can be verified in your AWS console.

Now that the code is working fine locally, we'll configure a backend on S3, push the code to GitLab, and
proceed with the second part of the project: setting up a CI/CD pipeline to automate the infrastructure
deployment tasks we previously performed manually.

Before this, delete everything using “terraform destroy -auto-approve” to proceed with the automation.
Step8: Set up a backend using S3 and DynamoDB.

Follow the video or documentation below, which covers the complete process of setting up the S3 bucket and DynamoDB table in detail.
https://developer.hashicorp.com/terraform/language/settings/backends/s3
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket
https://youtu.be/o04xfWEouKM?si=OGNj1c9R2iqe9TOM

The code for creating the S3 bucket and DynamoDB table is as follows:
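
The code appears as a snap in the original; a sketch that matches the description below it (versioning, server-side encryption, on-demand billing, LockID key) follows. The bucket name is hypothetical, since S3 bucket names are globally unique:

resource "aws_s3_bucket" "state" {
  bucket = "cicdtf-state-bucket" # hypothetical name: choose your own globally unique name
}

resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = aws_s3_bucket.state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_dynamodb_table" "state_lock" {
  name         = "state-lock"
  billing_mode = "PAY_PER_REQUEST" # on-demand billing
  hash_key     = "LockID"          # the S3 backend expects this exact attribute name

  attribute {
    name = "LockID"
    type = "S"
  }
}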

Once the code has been written, run the Terraform commands below to create the S3 bucket and DynamoDB table.

terraform init (initialize your working directory)

terraform plan (plan the changes)

terraform apply (apply the changes)

This configuration will create an S3 bucket with versioning and server-side encryption enabled, as well as a DynamoDB table named state-lock with on-demand billing and a string primary key named LockID (the exact key name Terraform’s S3 backend expects).
After applying the changes, you can verify in your AWS console that the S3 bucket and DynamoDB table were created.
Now create a backend.tf file that contains your bucket details and DynamoDB table.
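
A sketch of backend.tf (the bucket name, key, and region are assumptions; note that the pipeline described later passes the bucket details via a tfstate.config file at terraform init time instead):

terraform {
  backend "s3" {
    bucket         = "cicdtf-state-bucket"      # your bucket name
    key            = "cicdtf/terraform.tfstate" # path of the state file inside the bucket
    region         = "us-east-1"
    dynamodb_table = "state-lock"               # enables state locking
  }
}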

Run “terraform init” to initialize the backend.


To automate all the above actions, let’s move to Part 2: create a GitLab repo, push the code to it, and set up the CI/CD pipeline.

Go to GitLab and create a new repository:


Click on new project -> create a blank project -> provide the project name and visibility, enable the README -> create project.
To push the code, the first step is to initialize the local repository (git init).

To include only the necessary files and ignore others, create a .gitignore file, which can be found Here.

To connect with your GitLab repo, use


git remote add origin https://gitlab.com/Sakeena19/cicdtf.git

The next step is to create a branch called "dev" because we cannot push our code directly to the main branch. This allows us to make changes and test them safely before merging into the main branch, which is the best practice.
To create a branch, use “git checkout -b dev”, which creates the branch and switches to it in one step.
git add . (adds all changes from the current working directory to the staging area)
git commit -m “initial commit” (commits the staged changes to the local repo)
git push -u origin dev (pushes the code from the local repo to the remote GitLab repo, into the branch named dev)

The dev branch should now exist in your GitLab repo, from which you can create a merge request to merge dev into main.

After merging, you can view your code in the main branch.
Now that the code is ready, let’s write a CI/CD pipeline script in GitLab.

The pipeline configuration file must be named ".gitlab-ci.yml" (note the leading dot) for GitLab to recognize it as the CI/CD configuration file. This naming convention ensures that GitLab finds and processes the configuration defined within.

The main purpose of defining this file is to automate the Terraform commands so that whenever someone changes the infrastructure code, the pipeline triggers automatically.

As it is not best practice to hardcode the AWS access key and secret access key in code, CI/CD variables can be created in your GitLab repository to store them.

Navigate to your project repo -> Settings -> CI/CD -> Variables -> Add variable -> add variables for your access key and secret access key.
Once the above changes are done, the pipeline will start triggering automatically, executing all the steps scripted in the .gitlab-ci.yml file.

CI/CD pipeline script explanation (.gitlab-ci.yml file):

This GitLab CI/CD pipeline script is designed to automate the deployment and management of
infrastructure using Terraform. The script uses a Docker image that has Terraform installed and sets
environment variables for AWS credentials and a GitLab token. It also caches Terraform plugins and
modules to improve efficiency. The pipeline is divided into four stages: validate, plan, apply, and
destroy.

In the validate stage, the script checks whether the Terraform configuration files are correct. The plan stage then generates a Terraform execution plan and saves it as an artifact called planfile. The apply stage uses this plan to create or update the infrastructure, but it must be triggered manually to execute. Similarly, the destroy stage, which is also manually triggered, destroys the Terraform-managed resources.

Before running these stages, the script outputs the Terraform version and initializes Terraform with a
backend configuration specified in a tfstate.config file. By organizing the pipeline in this way, the script
ensures that infrastructure changes are validated, planned, and applied in a controlled and orderly
manner, with the option to manually control the application and destruction of infrastructure changes.

Whenever the pipeline executes, the validate and plan stages run automatically, while the apply and
destroy stages require manual execution, as defined in the script. This approach follows industry best
practices, allowing verification of changes and manual approval before they are applied or destroyed.
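
The script itself appears as snaps in the original; a minimal sketch consistent with the description above is shown below. The image tag, entrypoint override, and cache path are assumptions, the GitLab token variable mentioned above is omitted, and the AWS credentials are picked up automatically from the CI/CD variables configured earlier:

image:
  name: hashicorp/terraform:light
  entrypoint: [""] # override the image entrypoint so GitLab can run job scripts

cache:
  paths:
    - .terraform # cache Terraform plugins and modules between jobs

stages:
  - validate
  - plan
  - apply
  - destroy

before_script:
  - terraform --version
  - terraform init -backend-config=tfstate.config

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=planfile
  artifacts:
    paths:
      - planfile # pass the saved plan to the apply stage

apply:
  stage: apply
  script:
    - terraform apply -input=false planfile
  when: manual # requires manual trigger, per best practice

destroy:
  stage: destroy
  script:
    - terraform destroy -auto-approve
  when: manual # requires manual trigger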

1) Logs from validate stage:


2) Logs from plan stage:
3) Logs from apply:
4) Logs from destroy:

All 4 stages have been executed.


Verify the full logs in the text file below:

The pipeline performs the following steps:

• Initializes Terraform with the specified backend configuration.
• Applies the Terraform plan to create the infrastructure resources (VPC, subnet, security group, and EC2 instance).
• Saves the .terraform directory to the cache for future use.
• Cleans up the environment after the job is completed.
