Training Notes

The document provides an overview of Python basics, file handling, version control systems, and Linux commands. Key topics include variable scope, data types, file operations, Git commands, and essential Linux commands for file management. It serves as a comprehensive guide for beginners in programming and version control.

DAY:1 Python Basics

1) Global and local scope:

=> Local: a variable created inside a function; it exists only while that function runs.
=> Global: a variable created in the main Python code; it is accessible everywhere (see the sketch at the end of this list).
2) Iterator:
=> An object that contains a countable number of values and can be iterated over.
3) __init__()
=> Executed automatically when the class is instantiated (an object is created); used to initialise the object.
4) list | tuple | set
Changeable (mutable) | unchangeable (immutable) | unchangeable items
Ordered | ordered | unordered
Indexed | indexed | unindexed
5) Upcast / downcast
class Animal: ...        # parent class
class Dog(Animal): ...   # child class
Upcast: treating a Dog object as an Animal (child → parent reference).
Downcast: treating that Animal reference back as a Dog (parent → child).
6) isalnum() => returns True if all characters in the string are alphanumeric.
7) Type casting: int('a') raises a ValueError because 'a' is not a numeric string (int('5') works).
8)primary built in data types
Numeric Types:
• Integers (int): Represent whole numbers (e.g., 10, -5, 0).
• Floating-point numbers (float): Represent numbers with decimal points
(e.g., 3.14, -2.5).
• Complex numbers (complex): Represent numbers with a real and
imaginary part (e.g., 2 + 3j).
Text Type:
• Strings (str): Represent sequences of characters (e.g., "hello", "world!").
Boolean Type:
• Booleans (bool): Represent truth values, either True or False.
Sequence Types:
• Lists (list): Ordered, mutable (changeable) collections of items (e.g., [1, 2,
"a"]).
• Tuples (tuple): Ordered, immutable (unchangeable) collections of items
(e.g., (1, 2, "a")).
• Sets (set): Unordered collections of unique items (e.g., {1, 2, 3}).
Mapping Type:
• Dictionaries (dict): Unordered collections of key-value pairs
(e.g., {"name": "Alice", "age": 30})
9) Membership operators:
in & not in
10) if statements and the importance of indentation
11) pass => placeholder to avoid an error when a block (e.g. an empty if body) has no statements
12) Arbitrary arguments => *args, used when the number of arguments is not known in advance
13)Inheritance
14)Block
15) is operator in Python is used to check if two variables refer to the same
object in memory. It's also known as the identity operator.
16) Copy a list => use list.copy(), list(...) or slicing [:]; plain assignment only copies the reference.
17)modules & Python Scripts
18) Class => blueprint for creating objects
19)copy()
20) Ranges using slice operators (e.g. seq[start:stop:step])
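
A minimal sketch of point 1 (global vs. local scope), assuming plain CPython; the names x and y are only illustrative:

x = "global"            # created in the main Python code -> global scope

def show_scope():
    y = "local"         # created inside a function -> local scope
    print(x, y)         # the function can read the global name

show_scope()            # prints: global local
print(x)                # prints: global
# print(y)              # would raise NameError: y exists only inside show_scope()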
DAY:2 FILE
What is file?

A container on a computer's storage device, used to store data.

File->open->r/w->close(freed)

File Handling in Python:

1)open

2)Read or write

3)Close

Mode                   Description
r (read)               If the file exists, it is opened for reading; error if it does not exist.
w (write)              If the file exists, it is overwritten (content in the file is cleared); otherwise it is created and written.
a (append)             If the file exists, new content is appended at the end of the file; otherwise it is created.
x                      If the file exists, the operation fails; otherwise the file is opened for exclusive creation.
t (default)            Open in text mode.
+ (reading & writing)  Open the file for updating.
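
A short sketch of the less common "x" and "w+" modes from the table above; the file names are just illustrative:

# "x" creates the file exclusively and fails if it already exists
try:
    with open("unique.txt", "x") as f:
        f.write("created exclusively")
except FileExistsError:
    print("unique.txt already exists")

# "w+" opens the file for updating (writing and reading)
with open("notes.txt", "w+") as f:
    f.write("hello")
    f.seek(0)             # move back to the start before reading
    print(f.read())       # prints: hello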

File Open:

f = open("demofile.txt")

The code above is the same as:

f = open("demofile.txt", "rt") r->read t->text

**Shows error when file is not found**


Reading (with an explicit close):

read()

file1 = open("filename", "r")
read_content = file1.read()
print(read_content)

Closing:
file1.close()

It doesn't raise any error even without close().

Without close (using a with block):

with open("myfile.txt", "r") as file1:
    file_content = file1.read()
    print(file_content)

**Memory can be freed without calling close() – the with block closes the file automatically**

Output:
This is for reading the file

Using try and except (with finally):

try:
    file1 = open("sample.txt", "r")
    read_content = file1.read()
    print(read_content)
    file1.close()
except:
    print("File not Found")
finally:
    print("File operation is completed")

Output:
File not Found
File operation is completed

Using try with finally only (no except):

try:
    file1 = open("sample.txt", "r")
    read_content = file1.read()
    print(read_content)
    file1.close()
finally:
    print("File operation is completed")

Output:
File operation is completed
Traceback (most recent call last):
  File "c:\Users\pmlba\OneDrive\Desktop\Cloud Training\Day 2\File Handling\read.py", line 3, in <module>
    file1=open("sample.txt","r")
FileNotFoundError: [Errno 2] No such file or directory: 'sample.txt'
Write:

with open("sample.txt","w") as file1:

file1.write("File is wriiten when not exist") =>File Created and content is written

filer=open("sample.txt","r")

content=filer.read()

print(content)

with open("myfile.txt","w") as file2:

file2.write("File is wriiten when exist") =>File content is overwritten

filers=open("myfile.txt","r")

content=filers.read()

print(content)

The two snippets above don't print anything, since the read happens before the with block has closed the file and flushed the written data.

with open("sample.txt","w") as file1:

file1.write("File is written when not exist")

filer=open("sample.txt","r")

content=filer.read()

print(content)

filer.close()

This code prints the content of the file.

Append:

with open("sample.txt","a") as file1:

file1.write("File is written when not exist") =>File Created and content is written

filer=open("sample.txt","r")

content=filer.read()

print(content)

filer.close()

with open("myfile.txt","a") as file2:


file2.write("File is written when exist") =>content is appended

filers=open("myfile.txt","r")

content=filers.read()

print(content)

filers.close()

Pandas is a Python library.

Pandas is used to analyse data.

Pandas on CSV (Comma-Separated Values) files:

import pandas as pd

df = pd.read_csv("data.csv")

print(df)

print(df.head())

print(df.tail())

print("Sum:", df.sum())    => dtype: float64

print("Mean:", df.mean())  => dtype: float64

print(df.iloc[:, 2])   → [rows, columns]; if only one value is given, it selects a row

print(df.loc[:, "Pulse"])
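
A tiny illustrative sketch of the difference between iloc (position-based) and loc (label-based) selection used above; the column names and values are made up for the example:

import pandas as pd

df = pd.DataFrame({"Duration": [60, 45, 30],
                   "Pulse": [110, 117, 103],
                   "Calories": [409, 479, 340]})

print(df.iloc[0])          # first row, selected by integer position
print(df.iloc[:, 2])       # third column (all rows), by integer position
print(df.loc[:, "Pulse"])  # "Pulse" column (all rows), by label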

DAY: 3 Version Control System


• Version Control System (VCS), also called SCM (Source Control Management)
• Linux
• File management
• Collaboration on code
• Keeps track of code and code changes
GitHub: a web-based Git repository hosting service

Configure Git
Steps
1. Open the command line
2. Set your username using git config --global user.name "Your Name"
3. Set your email address using git config --global user.email
"your_email@example.com"
4. Verify your configuration by displaying
$ git config --get user.name
git config --get user.email

Git Hub to Working Directory


git clone
$ git clone <https_repo_url>   => HTTPS clone does not need a public key (unlike SSH)
• Used to point to an existing repo and make a clone or copy of that repo in a new directory, at another location.
• The original repository can be located on the local filesystem or on a remote machine accessible over supported protocols. The git clone
command copies an existing Git repository.
Working Directory to Git Hub
• git status
Displays the state of the working directory and the staging area .
• git add “file_name”
To update a change in the working directory to the staging area. It tells
Git that you want to include updates to a particular file in the next
commit.
$ git commit -m "new added" →This is the result when the created file in local
[main a4f08da] new added directory to be updated with github repo
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 new
• git commit -m “message”
Git add doesn't really affect the repository in any significant way—
changes are not actually recorded until you run git commit.

• git push origin main


To upload local repository content to a remote repository.

→ We use the main branch to update the live website.

→ For any other updates, create another branch; it can later be merged into main.

Tracked:
• Files managed by the repository.
• Their status is checked immediately after cloning.

$ git status
On branch main
Your branch is up to date with 'origin/main'.

nothing to commit, working tree clean

Untracked:
• New or modified files that Git is not yet tracking, meaning they won't be included in commits unless explicitly added.
• Untracked → Tracked (after git add):

$ git status
On branch main
Your branch is up to date with 'origin/main'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   new

Deleting files in Github


1) You can delete the file in your workspace; this does not affect GitHub yet.
2) So, after deleting the file:
   git add "file_deleted_name"   => stages the deletion
3) $ git commit -m "delete new1"   → Result when the file deleted in the local directory is to be updated in the GitHub repo
   [main 5b3fdf6] delete new1
    1 file changed, 0 insertions(+), 0 deletions(-)
    delete mode 100644 new1
4) $ git push origin main
   The file will be deleted in the repo.
• git pull
Scenario: a new file is created in the repo; that file should be brought into the local directory.
Possible ways:
• git clone – clones all files in the repo, so all files are duplicated.
• git pull – pulls only the newly created changes into the local directory.

$ git pull
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
Unpacking objects: 100% (3/3), 993 bytes | 43.00 KiB/s, done.
From https://github.com/ervishnucs/training
5b3fdf6..5f41040 main -> origin/main
Updating 5b3fdf6..5f41040
Fast-forward
new1 | 1 +
1 file changed, 1 insertion(+)
create mode 100644 new1
Initialising Git Repo with working Directory
1. git init →Initialise the repo
2. git add <file_name>
3. git commit -m "new changes"
4. git remote -v → If the Repo URL presents here no need to add or set

5. git remote add origin "repo_url" → add the remote


$ git remote add origin https://github.com/vidhya-2711/newmain.git
error: remote origin already exists.

6. git remote set-url origin "repo_url" → set the remote's URL

$ git remote add origin https://github.com/Sakthi1205/Sample.git


error: remote origin already exists.
$ git remote remove https://github.com/Sakthi1205/Sakthi.git
error: No such remote: 'https://github.com/Sakthi1205/Sakthi.git'
$ git remote set-url origin https://github.com/Sakthi1205/Sample.git

7. git branch →shows all branches present in repo


* master →current branch
main
8. git branch "branch_name" → create a new branch
9. git push origin "current_branch" → push contents from the local branch to the repo (here the branch is master)
Error:
pmlba@VISHNU MINGW64 ~/OneDrive/Desktop/Cloud
Training/Day 3/Git/sample (master)
$ git push -u origin main
error: src refspec main does not match any
10. git push --set-upstream origin "branch_name" → pushes the branch and sets its upstream (here changing the branch to main)
remote:
To https://github.com/ervishnucs/sample.git
* [new branch] main -> main

Day: 4

Parameter comparison – Windows vs Linux

Customization – Windows: only the UI is customizable. Linux: UI, kernel & terminal can be modified.
User Interface – Windows: GUI focused (CMD available). Linux: CLI (command line) focused.
Performance – Windows: uses more CPU & RAM. Linux: lightweight & fast.
Security – Windows: needs a third-party security provider. Linux: no need for a third-party provider.
Software – Windows: paid. Linux: open source.
File System – Windows: NTFS. Linux: many file systems supported (ext4, NFS, etc.).
Best for – Windows: business and casual users. Linux: developers, hackers & server providers.
Why do AWS services need Linux?
As we know, AWS services can be accessed via the CLI.
Linux Commands:
1) man
• Used to display the documentation (manual page) for just about any Linux command that can be executed in the terminal.
• man ls displays the documentation for the "ls" command.

2)pwd
• print working directory
• This command in linux is used to display the path of current working
directory inside the terminal

[command] --help → describes the command and its uses

3)cd
It is used to change the current directory to the specified folder inside the
terminal
4)touch file_name_with_extension
Used to create file without content

5) cat
Used to read (print) the content of one or more files.
cat source_file >> destination_file
Appends (copies) the content of one file to another file.

6) mkdir
Used to make a new directory.

7) cd ~
Go to the home directory (cd - goes back to the previous directory).
8) cd ..
Go to the parent directory.
9) ls
Lists all files in the current directory.
ls -l
Long listing: permissions, timestamps and owner of each file are shown.
ls -a
Used to show all files in the directory, including hidden files.
ls -la
Shows all files including hidden ones, with permissions, timestamps and owner.
ls -S
Sorts the files in the directory by size.

10) cp
Used to copy files from one location to another from inside the terminal.
cp [options] source_file destination_path
11) mv
Moves files and directories from one directory to another:
→ mv file_name destination_path
Moves a file, or renames a file:
→ mv current_file_name new_file_name

12) rm
Removes files (and, with -r, non-empty directories); deleted data is not recoverable, so it can cause data loss.
13) rmdir
Used to remove empty directories in the file system.
If the directory is not empty and rmdir is called, it throws an error.

14) grep
The grep command in Linux searches through a specified file and prints all lines that match a given word or pattern.
1. Case-insensitive search
The -i option enables searching for a string case-insensitively in the given file. It matches words like "UNIX", "Unix", "unix".
grep -i "UNix" geekfile.txt

Output:

Case insensitive search


2. Displaying the Count of Number of Matches Using grep
We can find the number of lines that matches the given string/pattern
grep -c "unix" geekfile.txt

Output:
Displaying the count number of the matches
3. Display the File Names that Matches the Pattern Using grep
We can just display the files that contains the given string/pattern.
grep -l "unix" *

or
grep -l "unix" f1.txt f2.txt f3.xt f4.txt

Output:

The file name that matches the pattern


4. Checking for the Whole Words in a File Using grep
By default, grep matches the given string/pattern even if it is found as a
substring in a file. The -w option to grep makes it match only the whole words.
grep -w "unix" geekfile.txt

Output:

checking whole words in a file


5. Displaying only the matched pattern Using grep
By default, grep displays the entire line which has the matched string. We can
make grep display only the matched string by using the -o option.
grep -o "unix" geekfile.txt

Output:

Displaying only the matched pattern


6. Show Line Number While Displaying the Output Using grep -n
To show the line number of file with the line matched.
grep -n "unix" geekfile.txt
Output:

Show line number while displaying the output


7. Inverting the Pattern Match Using grep
You can display the lines that are not matched with the specified search string
pattern using the -v option.
grep -v "unix" geekfile.txt

Output:

Inverting the pattern match


8. Matching the Lines that Start with a String Using grep
The ^ regular expression pattern specifies the start of a line. This can be used in
grep to match the lines which start with the given string or pattern.
grep "^unix" geekfile.txt

Output:

Matching the lines that start with a string


9. Matching the Lines that End with a String Using grep
The $ regular expression pattern specifies the end of a line. This can be used in
grep to match the lines which end with the given string or pattern.
grep "os$" geekfile.txt

15) head
Prints the first 10 lines of the file's content; you can also specify the number of lines to be shown:
head -n number filename

16) tail
Prints the last 10 lines of the file's content; you can also specify the number of lines to be shown.
17)history
used to view the history of all the commands previously executed
18)optional feature

Intro to File System:


→What is file system?

• It is the system used by the OS to manage files.

• The file system controls how data is stored on and retrieved from the computer's hard drive.
• It organises data in a structured way.

Ex: ext4→Linux file system


→ Linux Filesystem Hierarchy Standard

• The FHS describes the directory structure and its contents in Linux and Linux-like operating systems.
• It explains where files and directories should be located and what they should contain.
ls -l /   → shows the top level of the file system
→Root Directory

• Linux uses a tree-like structure, starting from the root (/) directory. Everything in Linux
(files, directories, devices) is represented as a file.
• It is primary hierarchy of all file and root directory of the entire systems.
• Every single file start from root directory
/ → Root directory: The starting point of the Linux filesystem.
/boot → grub, kernel & initrd (initial RAM disk); contains the files needed for system booting.
/bin → contains user binaries and executables for common commands; the implementations of commands such as rm and mkdir live in /bin.
/sbin → contains executables, but unlike /bin it holds only system binaries which require root privilege for certain tasks and system maintenance (commands like fdisk, reboot, shutdown), used by administrators.

→ /etc Editable text configuration – contains configuration files required by all programs. It also contains startup and shutdown scripts used to start or stop programs.
→ /usr Unix System Resources – non-essential OS binaries and library files (including third-party files) that are not essential for booting.
→ /home contains personal directories for all users except the root user; files created by a user are stored under their home directory.
→ /var contents of files that are expected to grow can be found under this directory, e.g. system log files.
→ /opt contains add-on applications from individual software vendors.
→ /dev devices attached to the Linux system are available here as files.
→ /proc contains info about system processes. It is a virtual file system kept in RAM. Ex: /proc/meminfo, /proc/cpuinfo
→ /tmp used to store temporary files created by the system and users.
→ /mnt temporary mount directory where sysadmins mount file systems.

NFS vs SMB

Environment – NFS: primarily used in Linux environments. SMB: primarily used in Windows environments.
Performance – NFS: known for fast performance and low overhead. SMB: known for reliability, security, and compatibility.
Authentication – NFS: uses host-based authentication. SMB: uses user-based authentication.
Sharing – NFS: allows multiple users to share the same file. SMB: allows clients to share files with each other.
Printer sharing – NFS: doesn't provide printer sharing. SMB: supports printer sharing.

What is the difference between Linux and Windows files?


Linux has several directories under root (/), while Windows has relatively few. This is
because Windows keeps everything except applications under the C:\Windows directory.
Applications reside either in the Program Files or the Program Files (x86) directories. Linux
keeps its applications under the /usr directory. The Linux /home directory corresponds to
the Windows C:\Users directory.

NTFS
A 100 MB file is stored as a chain of 4 KB clusters: →|4 KB||4 KB||4 KB|→ stored (1 MB = 1024 KB).

A sandbox in Linux is a restricted environment that isolates processes or applications from the rest of
the system. This allows users to test programs without risking damage to the system.

Software development

Developers can use sandboxes to test new code, identify bugs, and validate changes.

How Web Application works?

A web application works by sending requests from a user's browser to a web server, which then
processes the request and sends back a response. The browser interprets the response and displays
the web application.

Steps

1. A user enters a URL or clicks a link to access the web application.

2. The web server receives the request and forwards it to the appropriate application.
3. The application generates a response in HTML, CSS, and JavaScript.

4. The web server sends the response back to the user's browser.

5. The browser interprets the response and displays the web application.

6. The user interacts with the web application, which may trigger client-side JavaScript.
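
To make the request/response cycle in steps 1-6 above concrete, here is a minimal sketch of a web server using Python's built-in http.server module; the port (8000) and the HTML it returns are arbitrary choices for illustration:

from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):                               # step 2: the server receives the request
        self.send_response(200)                     # steps 3-4: build and send the response
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Hello from a tiny web app</h1>")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), AppHandler).serve_forever()

Opening http://localhost:8000 in a browser performs steps 1, 5 and 6: the browser sends the request, receives the HTML and renders it.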

Features

Web applications can include features like data processing, user authentication, and real-time
updates. They can be accessed from anywhere with an internet connection.

Considerations

Web applications can be affected by internet speed, app performance, and security.

Examples

Examples of web applications include social media platforms, online banking systems, and e-
commerce sites.

DLL→

Class A , Class B ,Class C, Class D

Iss →

ISS and DLL are both file types that can be used in software. ISS may refer to ISS products that use
the iss-pam1.dll ICQ parser. DLL stands for Dynamic Link Library, a file that contains code and data
that can be used by multiple programs.

ISS

• ISS products use the iss-pam1.dll ICQ parser.

• A module that exploits a stack buffer overflow in the ISS products that use the iss-pam1.dll
ICQ parser has been identified.

DLL

• A DLL is a library that contains code and data that can be used by more than one program at
the same time.

• DLLs allow developers to write modular and efficient code and share resources among
different applications.

• DLLs can be loaded into memory and executed on demand.

• DLLs can contain exported functions that are intended to be called by other modules.

• DLL hijacking attacks can be stealthy, difficult to detect, and can potentially result in serious
consequences.

• Multiple versions of the same DLL can exist in different file system locations within an
operating system.
DNS

HTTtracker,godaddy,catalyst

Continuous request more than 100 people at same time how it handled?

If you're talking specifically about webservers, then no, your formula doesn't work, because
webservers are designed to handle multiple, simultaneous requests, using forking or threading.

This turns the formula into something far harder to quantify - in my experience, web servers can
handle LOTS (i.e. hundreds or thousands) of concurrent requests which consume little or no time,
but tend to reduce that concurrency quite dramatically as the requests consume more time.

That means that "average service time" isn't massively useful - it can hide wide variations, and it's
actually the outliers that affect you the most.

SDLC(Software Development Life Cycle)

• Planning, Develop, Test and Deploy
• Each phase includes a set of activities and deliverables
• Predefined time and cost
• Provides a mechanism for project tracking and control

Goals:
• Create high-quality software as per client expectations
• Satisfy the client and deliver bug-free software

SDLC Phases:

1)Gathering and Analysis


2)Planning
3)Designing
4)Developing
5)Testing
6)Deployment and maintenance
Phase-by-phase summary:

Gathering and Analysis: gather the requirements of the project from client interaction and from users; produce clear documents; define the scope of the current release and its purpose. Involves senior members, business analysts and domain experts.
SRS → Software Requirement Specification.

Planning: come up with the schedule and resources of the project with the client (e.g. number of days, cost); figure out how the requirements will be satisfied and verified; plan the testing; complete the requirements reports.

Designing: produce the DDS (Design Document Specification).
HDD (high-level detailed design): module names, descriptions, relationships, database design.
LDD (low-level detailed design): module logic, database size & type, interface details, i/o, errors.
Create a representation of the project, task by task or module based.

Developing: code the software as per the design, in any programming language; write documentation; pass the information to the developing team.

Testing: test the functionality and verify that the work matches the SRS; identify bugs and errors and check for issues before release; errors are reported, tracked, rectified & retested (unit testing).

Deployment & Maintenance: release; get feedback from the client; fix issues reported by the client and users; satisfy the client.
Maintenance: adaptive – driven by users; preventive – driven by (protection against) hackers.

Agile:
• Methodology used to create projects.
• Ability to move quickly.
• Responding swiftly to change.
Non-Agile methodology:
• The client can't see the output of the project until the completion of the project.
• Allocates extensive amounts of time.
Agile methodology:
It has sprints or iterations, shorter in duration, during which pre-determined features are developed.
Scrum:
Framework for managing work, with an emphasis on software development.
Software development → iterations (time-boxed); a sprint is typically two weeks.

Scrum roles: Scrum Master, Product Owner, Development Team.
Meetings: daily (stand-up) and weekly/per-sprint.
Advantages:
• The delivery of software is unremitting (continuous).
• Customers are satisfied because features are delivered incrementally; if the customer has any feedback or changes, they can be included in later releases.
• The daily changes required by the business people and users can be accommodated.

Disadvantages:
• Documentation is reduced. For each iteration documentation is done, but for the next iteration the previous documentation may no longer apply, as many changes can be made.
• So complete project documentation is not done.
• Requirements may not be clear; it's difficult to predict the expected results.
• Only the starting stages of the SDLC are predictable.

DevOps
DevOps is a set of practices, tools, and cultural philosophies that integrate software
development and IT operations. The goal of DevOps is to improve the speed, reliability, and
security of software.

Docker is a free, open-source platform that helps developers build, test, and run applications. It's used to package software into containers that contain everything needed to run the software.
• Build, Test, Deploy
• "Development Phase"
• But continuous change leads to manual difficulties in operations.
• No pipeline.

Jenkins is an open-source automation tool that's used to build, test, and deploy software. It's also used to manage and automate the software delivery process.
• Build, Test, Deploy
• "Development & Operation Phase"
• Automation pipeline (CI/CD).

DevOps Practices
Continuous Integration
Continuous integration is a software development practice where developers regularly merge their
code changes into a central repository, after which automated builds and tests are run. The key
goals of continuous integration are to find and address bugs quicker, improve software quality, and
reduce the time it takes to validate and release new software updates.

Continuous Delivery
Continuous delivery is a software development practice where code changes are automatically built,
tested, and prepared for a release to production. It expands upon continuous integration by
deploying all code changes to a testing environment and/or a production environment after the build
stage. When continuous delivery is implemented properly, developers will always have a
deployment-ready build artifact that has passed through a standardized test process.

Microservices
The microservices architecture is a design approach to build a single application as a set of small
services. Each service runs in its own process and communicates with other services through a
well-defined interface using a lightweight mechanism, typically an HTTP-based application
programming interface (API). Microservices are built around business capabilities; each service is
scoped to a single purpose. You can use different frameworks or programming languages to write
microservices and deploy them independently, as a single service, or as a group of services.

Infrastructure as Code
Infrastructure as code is a practice in which infrastructure is provisioned and managed using code
and software development techniques, such as version control and continuous integration. The
cloud’s API-driven model enables developers and system administrators to interact with
infrastructure programmatically, and at scale, instead of needing to manually set up and configure
resources. Thus, engineers can interface with infrastructure using code-based tools and treat
infrastructure in a manner similar to how they treat application code. Because they are defined by
code, infrastructure and servers can quickly be deployed using standardized patterns, updated with
the latest patches and versions, or duplicated in repeatable ways.
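
As an illustration of this idea (not AWS's own example), here is a minimal sketch that provisions infrastructure from code using boto3 and CloudFormation; the stack name and the one-bucket template are assumptions made up for the example:

import json
import boto3

# A tiny CloudFormation template that can live in version control: one S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {"Type": "AWS::S3::Bucket"}
    }
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="iac-demo-stack", TemplateBody=json.dumps(template))

# Wait until the stack (and therefore the bucket) has been created.
cfn.get_waiter("stack_create_complete").wait(StackName="iac-demo-stack")
print("Stack created; the bucket is now managed as code.")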

Configuration Management
Developers and system administrators use code to automate operating system and host
configuration, operational tasks, and more. The use of code makes configuration changes
repeatable and standardized. It frees developers and systems administrators from manually
configuring operating systems, system applications, or server software.

Policy as Code
With infrastructure and its configuration codified with the cloud, organizations can monitor and
enforce compliance dynamically and at scale. Infrastructure that is described by code can thus be
tracked, validated, and reconfigured in an automated way. This makes it easier for organizations to
govern changes over resources and ensure that security measures are properly enforced in a
distributed manner (e.g. information security or compliance with PCI-DSS or HIPAA). This allows
teams within an organization to move at higher velocity since non-compliant resources can be
automatically flagged for further investigation or even automatically brought back into compliance.

Monitoring and Logging


Organizations monitor metrics and logs to see how application and infrastructure performance
impacts the experience of their product’s end user. By capturing, categorizing, and then analyzing
data and logs generated by applications and infrastructure, organizations understand how changes or
updates impact users, shedding insights into the root causes of problems or unexpected changes.
Active monitoring becomes increasingly important as services must be available 24/7 and as
application and infrastructure update frequency increases. Creating alerts or performing real-time
analysis of this data also helps organizations more proactively monitor their services.

Communication and Collaboration


Increased communication and collaboration in an organization is one of the key cultural aspects of
DevOps. The use of DevOps tooling and automation of the software delivery process establishes
collaboration by physically bringing together the workflows and responsibilities of development
and operations. Building on top of that, these teams set strong cultural norms around information
sharing and facilitating communication through the use of chat applications, issue or project tracking
systems, and wikis. This helps speed up communication across developers, operations, and even
other teams like marketing or sales, allowing all parts of the organization to align more closely on
goals and projects.

DevOps Tools
The DevOps model relies on effective tooling to help teams rapidly and reliably deploy and innovate
for their customers. These tools automate manual tasks, help teams manage complex environments
at scale, and keep engineers in control of the high velocity that is enabled by DevOps. AWS provides
services that are designed for DevOps and that are built first for use with the AWS cloud. These
services help you use the DevOps practices described above.

DAY 5:

AWS ->Cloud computing platform provided by amazon

It provides a wide range of cloud-based services including computing power, storage, databases, networking, security and more.

It allows businesses and developers to build and manage applications without needing physical hardware.

Key Features:
• On-demand service – pay only for what you use.
• Scalability – easily scale up or down based on demand.
• Security – provides built-in security features and compliance standards.
• Global availability – AWS operates multiple data centres worldwide.

Core AWS Services:

Compute:

Amazon EC2 – virtual servers in the cloud.

AWS Lambda – serverless computing; runs code without managing servers.

Amazon ECS – Elastic Container Service; managed service for running Docker containers.

Storage:

S3 – Simple Storage Service – scalable object storage of files for backups and websites.

EBS – Elastic Block Store – persistent storage for EC2 instances.

Glacier – low-cost storage for long-term backups (not accessed frequently).

Databases:

RDS – managed relational (SQL) databases.

DynamoDB – NoSQL database for high-speed applications.


Data warehouse:

Redshift- fully managed cloud data warehouse that makes it simple and cost-effective to analyze all
your data.

Networking and Security:

VPC – Virtual Private Cloud → isolated network for your AWS resources.

IAM → Identity and Access Management.

Route 53 → DNS service; stores (caches) recent requests and their info for fast access.

CloudFront → content delivery network for fast website delivery.

Monitoring & Management:

Cloudwatch→monitor resources

Cloudtrail→logs

DevOps & Automation:

CloudFormation → Infrastructure as Code (IaC)

DAY:6

Security

IAM (Identity and Access Management) is a framework used to manage users, roles, and permissions
within an organization or system. It ensures that the right individuals have the appropriate access to
resources while maintaining security and compliance.

It is responsible for securing the underlying infrastructure that supports the cloud.

Amazon Simple Storage Service (Amazon S3)

Amazon S3 stores files of different types, like photos, audio and videos, as objects, providing scalability and security.

It allows the users to store and retrieve any amount of data at any point in time from anywhere on
the web.

Data Storage:

Backup and Recovery

Hosting Static Websites


Amazon S3 Buckets and Objects

S3 Versioning

Github : https://github.com/ervishnucs/S3_Quiz
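
A minimal boto3 sketch of the bucket/object/versioning ideas above; the bucket and file names are placeholders, and credentials/region are assumed to be configured already:

import boto3

s3 = boto3.client("s3")
bucket = "my-demo-bucket-123456"          # placeholder name; bucket names must be globally unique

s3.create_bucket(Bucket=bucket)           # in us-east-1; other regions need CreateBucketConfiguration

# Turn on S3 Versioning so overwritten objects keep their older versions
s3.put_bucket_versioning(Bucket=bucket,
                         VersioningConfiguration={"Status": "Enabled"})

# Upload a local file as an object, then list the stored versions of that key
s3.upload_file("photo.jpg", bucket, "photos/photo.jpg")
versions = s3.list_object_versions(Bucket=bucket, Prefix="photos/photo.jpg")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"])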

EC2 Instance -AMI ,OS ,EBS Snapshot,

Traditional servers:

Physical hardware → CPU, RAM, OS, disk, NIC (Network Interface Card)

EC2 Elastic Compute Cloud service :

EC2 Instance -Virtual Servers in AWS

(Billed by seconds)

Provision EC2 Instance (Based on our Demand)

You can create your Instance based on AMI

AMIs are pre-configured templates that include a base OS and any additional software.

An instance type provides, in a single package, a combination of CPU, memory, storage and networking capacity for provisioning the instance.

Factors Influenced in Billing:

• Type of instance
• How long you run it for
• Right-size your resources
• Choose the right IOPS for storage (input/output operations per second)
• Choose the right OS image
Creating Instances In EC2:
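
A hedged boto3 sketch of launching an instance from an AMI (the console flow the notes describe); the AMI ID, key pair and instance profile names are placeholders, not real values:

import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",               # placeholder AMI (pre-configured template)
    InstanceType="t2.micro",                       # CPU/memory/network combination
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                         # placeholder key pair for SSH/RDP access
    IamInstanceProfile={"Name": "ec2-s3-access"},  # role that lets EC2 reach S3 (see next section)
)
print(response["Instances"][0]["InstanceId"])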

Creating Role in IAM: To access S3 from EC2
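
A minimal sketch of creating such a role with boto3 (the console does the same thing); the role/profile names are placeholders, and the managed policy shown is just one possible choice:

import json
import boto3

iam = boto3.client("iam")

# Trust policy: allow the EC2 service to assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="ec2-s3-access",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Attach an AWS-managed S3 policy (read-only here; pick what the task needs)
iam.attach_role_policy(RoleName="ec2-s3-access",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")

# EC2 attaches roles through an instance profile
iam.create_instance_profile(InstanceProfileName="ec2-s3-access")
iam.add_role_to_instance_profile(InstanceProfileName="ec2-s3-access",
                                 RoleName="ec2-s3-access")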


Changing Instance Role :

Running in EC2 in windows .

Writing and Adding file in S3 using EC2


Adding text files in bucket:

Writing Shell commands for adding files to bucket:


DAY 7:
AWS Lambda

Serverless Computing

Runs code without provisioning or managing servers

Automatic Scaling

It runs when triggered

Event-driven execution: Lambda functions execute in response to triggers from AWS services like S3 and API Gateway.

Runtime environment → supports multiple languages: Python, Java, Go, Node.js.

Stateless → each execution is independent, so persistent storage must be handled using external services like S3 or DynamoDB.

Permissions via IAM → Lambda uses IAM roles to access other AWS services.

Scaling and concurrency → automatically scales based on the incoming request load.

Monitoring → integrated with AWS CloudWatch for logging and performance tracking.

Task :

When an image is added to the bucket, it should also be added to a copy bucket using AWS Lambda.

import json
import boto3

s3 = boto3.client('s3')  # Use the boto3 client instead of the resource

def lambda_handler(event, context):
    print("Event:", json.dumps(event, indent=2))

    for record in event['Records']:
        print("Processing Record:", json.dumps(record, indent=2))

        source_bucket = record['s3']['bucket']['name']
        file_key = record['s3']['object']['key']
        destination_bucket = source_bucket + "-copy"  # Ensure correct destination bucket name

        copy_source = {
            'Bucket': source_bucket,
            'Key': file_key
        }

        try:
            s3.copy_object(CopySource=copy_source, Bucket=destination_bucket, Key=file_key)
            print(f"File '{file_key}' copied from '{source_bucket}' to '{destination_bucket}' successfully.")
        except Exception as e:
            print(f"Error copying file: {str(e)}")

    return {
        'statusCode': 200,
        'body': json.dumps('Lambda executed successfully!')
    }
Aws Lambda

→ Read a CSV file and update DynamoDB


Add Event :

S3 Put

"Records": [

"eventVersion": "2.0",

"eventSource": "aws:s3",

"awsRegion": "us-east-1",

"eventTime": "1970-01-01T00:00:00.000Z",

"eventName": "ObjectCreated:Put",

"userIdentity": {

"principalId": "EXAMPLE"

},

"requestParameters": {

"sourceIPAddress": "127.0.0.1"

},

"responseElements": {

"x-amz-request-id": "EXAMPLE123456789",

"x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"

},

"s3": {

"s3SchemaVersion": "1.0",

"configurationId": "testConfigRule",

"bucket": {

"name": "vqgbn",

"ownerIdentity": {

"principalId": "EXAMPLE"

},

"arn": "arn:aws:s3:::vqgbn"
},

"object": {

"key": "new.csv",

"size": 1024,

"eTag": "0123456789abcdef0123456789abcdef",

"sequencer": "0A1B2C3D4E5F678901"

Lambda Function

import boto3

s3_client = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("MarkSheet")

def lambda_handler(event, context):
    bucket_name = event['Records'][0]['s3']['bucket']['name']
    s3_file_name = event['Records'][0]['s3']['object']['key']

    resp = s3_client.get_object(Bucket=bucket_name, Key=s3_file_name)
    data = resp['Body'].read().decode("utf-8")
    Students = data.split("\n")

    for stud in Students:
        print(stud)
        stud_data = stud.split(",")

        # add to dynamodb
        try:
            table.put_item(
                Item={
                    "Roll_no": int(stud_data[0]),
                    "Name": stud_data[1],
                    "Mark": stud_data[2]
                }
            )
        except Exception as e:
            print("End of file")

S3 Bucket:

DynamoDb:
Task2:

Create a Lambda function to add items to a CSV file from the table (DynamoDB)

Lambda Functions:

import boto3
import csv
import os

# Initialize AWS clients
dynamodb = boto3.resource("dynamodb")
s3_client = boto3.client("s3")

# DynamoDB table and S3 bucket details
DYNAMODB_TABLE = "MarkSheet"      # Your actual DynamoDB table name
S3_BUCKET_NAME = "qws34"          # Your actual S3 bucket name
CSV_FILE_NAME = "/tmp/new.csv"    # Temporary file in Lambda
S3_KEY = "new.csv"                # Path where file will be stored in S3

def lambda_handler(event, context):
    table = dynamodb.Table(DYNAMODB_TABLE)

    # Scan DynamoDB table to get all items
    response = table.scan()
    items = response.get("Items", [])

    if not items:
        print("No data found in DynamoDB table.")
        return {
            "statusCode": 404,
            "body": "No data found in DynamoDB."
        }

    # Correct headers based on DynamoDB fields
    headers = ["Roll_no", "Name", "Mark"]

    # Write data to a temporary CSV file
    with open(CSV_FILE_NAME, mode="w", newline="") as file:
        writer = csv.DictWriter(file, fieldnames=headers)
        writer.writeheader()  # Write CSV headers

        for item in items:
            writer.writerow({
                "Roll_no": item.get("Roll_no", " "),
                "Name": item.get("Name", ""),
                "Mark": item.get("Mark", " ")
            })

    print(f"CSV file created successfully: {CSV_FILE_NAME}")

    # Upload the CSV file to S3
    s3_client.upload_file(CSV_FILE_NAME, S3_BUCKET_NAME, S3_KEY)
    print(f"CSV file uploaded to S3: s3://{S3_BUCKET_NAME}/{S3_KEY}")

    # Clean up temporary file (optional)
    os.remove(CSV_FILE_NAME)

    return {
        "statusCode": 200,
        "body": f"CSV file successfully uploaded to s3://{S3_BUCKET_NAME}/{S3_KEY}"
    }

Event :

"Records": [

"eventVersion": "2.0",

"eventSource": "aws:s3",

"awsRegion": "us-east-1",

"eventTime": "1970-01-01T00:00:00.000Z",

"eventName": "ObjectCreated:Put",
"userIdentity": {

"principalId": "EXAMPLE"

},

"requestParameters": {

"sourceIPAddress": "127.0.0.1"

},

"responseElements": {

"x-amz-request-id": "EXAMPLE123456789",

"x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"

},

"s3": {

"s3SchemaVersion": "1.0",

"configurationId": "testConfigRule",

"bucket": {

"name": "qws34",

"ownerIdentity": {

"principalId": "EXAMPLE"

},

"arn": "arn:aws:s3:::qws34"

},

"object": {

"key": "new.csv",

"size": 1024,

"eTag": "0123456789abcdef0123456789abcdef",

"sequencer": "0A1B2C3D4E5F678901"

}
Task3:

Upload image metadata to the table when an image is uploaded to the bucket

import boto3
import datetime

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Image_meta_data')  # Change to your actual table name

def lambda_handler(event, context):
    # Extract S3 event information
    s3_event = event['Records'][0]['s3']
    bucket_name = s3_event['bucket']['name']
    object_key = s3_event['object']['key']
    file_size = s3_event['object']['size']

    # Current timestamp (upload time)
    upload_time = datetime.datetime.utcnow().isoformat()

    # Prepare metadata item
    metadata_item = {
        'file_id': object_key,   # Use S3 object key as primary key
        'filename': object_key,
        'size': file_size,
        'upload_date': upload_time
    }

    # Store metadata into DynamoDB
    table.put_item(Item=metadata_item)
    print(f"Metadata stored for {object_key} in DynamoDB.")

    return {
        'statusCode': 200,
        'body': f'Metadata stored for {object_key}'
    }

Event :

"Records": [

"eventVersion": "2.0",

"eventSource": "aws:s3",

"awsRegion": "us-east-1",

"eventTime": "1970-01-01T00:00:00.000Z",

"eventName": "ObjectCreated:Put",
"userIdentity": {

"principalId": "EXAMPLE"

},

"requestParameters": {

"sourceIPAddress": "127.0.0.1"

},

"responseElements": {

"x-amz-request-id": "EXAMPLE123456789",

"x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"

},

"s3": {

"s3SchemaVersion": "1.0",

"configurationId": "testConfigRule",

"bucket": {

"name": "vpimageprocessing ",

"ownerIdentity": {

"principalId": "EXAMPLE"

},

"arn": "arn:aws:s3:::vpimageprocessing "

},

"object": {

"key": "krish.jpg",

"size": 1024,

"eTag": "0123456789abcdef0123456789abcdef",

"sequencer": "0A1B2C3D4E5F678901"

}
Day 8:
Launching Apache Server:
1⃣ Check Security Group Rules

• Go to EC2 Console > Instances > Select your instance > Security tab > Security Groups.

• Ensure that there is an Inbound Rule allowing:

o HTTP (Port 80)

o Source: 0.0.0.0/0 (for global access)

o (For IPv6, you can add ::/0 to allow all IPv6 traffic.)

Example:

Type Protocol Port Range Source

HTTP TCP 80 0.0.0.0/0

2⃣ Check if Apache (httpd) is Running

Run the following command:


sudo systemctl status httpd

• If it's inactive (dead) or failed, start it:

sudo systemctl start httpd

sudo systemctl enable httpd

• Confirm it's running:


sudo systemctl status httpd

3⃣ Ensure File Exists in the Correct Directory

The default web root directory is:


/var/www/html/

Check if your index.html exists there:


ls -l /var/www/html/

If not, create one for testing:


echo "<h1>Hello from EC2 Web Server!</h1>" | sudo tee /var/www/html/index.html

4⃣ Check File and Directory Permissions

Make sure the /var/www/html directory and files have proper permissions:


sudo chmod -R 755 /var/www/html

sudo chown -R apache:apache /var/www/html


sudo yum update -y

sudo yum install -y httpd

sudo systemctl start httpd

sudo systemctl enable httpd

echo "<h1>Hello from EC2!</h1>" | sudo tee /var/www/html/index.html

sudo chmod -R 755 /var/www/html

sudo chown -R apache:apache /var/www/html

Why This Happens

• Your network administrator (or ISP) is blocking direct access to public IP addresses (like
http://3.91.123.45).

• This is common in corporate or educational networks, where direct access to IP-based URLs
is restricted (sometimes for security reasons, to block unregistered websites, or enforce
content filtering).

Day 9: Revision & Task Completion

Day 10 & 11:

API

An API Gateway is a server that acts as an entry point for client requests to access microservices or APIs in a system.

It provides a single interface to manage and route requests to the appropriate services, offering benefits such as:

• Request Routing – routes incoming requests to the correct backend service, based on URL paths, headers or other factors.
• Authentication and Authorization – it can handle security concerns by verifying and managing user identities and permissions before passing requests to backend services.
• Rate Limiting – the gateway can limit the number of requests from users or clients to prevent overload on backend services.
• Load Balancing – the gateway can distribute requests evenly across multiple instances of a service to ensure optimal load handling.
• Monitoring & Logging – provides centralized monitoring and logging for easier tracking of traffic, errors and performance.

Amazon API Gateway is a serverless, fully managed service that makes it easy for developers to create, publish, monitor, migrate and secure:

• RESTful APIs
• WebSocket APIs

Task: Creating an application utilizing AWS Lambda and API Gateway

Github : https://github.com/ervishnucs/SceneIt

Important terms: Stages, Routes, Integrations, CORS

ERRORS: "Failed to fetch" / innerHTML not updating → use the correct ids of the elements.

CORS → allow the origin and the allowed header content (a sketch follows).

For each change, deploy the stage manually.
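
A small sketch of the CORS fix mentioned above, for a Lambda behind API Gateway (proxy integration assumed); the allowed origin "*" and the response body are illustrative choices, not values from the project:

import json

def lambda_handler(event, context):
    # Returning these headers is what "allow origin / allow header content" means here
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": "*",
            "Access-Control-Allow-Headers": "Content-Type",
            "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
        },
        "body": json.dumps({"message": "CORS-enabled response"}),
    }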

Day 12 & 13 [6/3/25 -7/3/25]: AWS Project Submission


and Viva
GitHub: https://github.com/KIT-Devops-batch26/ecommerce-website-ervishnucs

Day 14 [8/3/25]:
DevOps

SRE → Site Reliability Engineer

Platform Engineer

Site Reliability Engineering
DevOps vs Platform Engineering

Software Delivery Models → Waterfall and Agile

IT Governance ,GDPR

GitLab & Bitbucket

git pull → fetches & downloads content from a remote repo & immediately updates the local repo

For each request we should document: what was obtained, the tools used, what was achieved, and what was done with the file.

This is similar to a README file.

Features→ Working branch /Testing

Environments:

Development

Testing

Pre-production / demo – some unhandled edge cases may remain; certain teams are allocated to this environment, where minor edge cases can be identified.

Hot storage: immediate retrieval

Cold storage: stored for long periods

Build ===

Repo: Nexus artifact repository

SonarQube → checks code for faults before it is deployed anywhere

Platform dept: maintains & manages with the help of these tools

DevOps → works with these tools

Pipeline ===

Code runs in parallel

Jenkins, GitLab

CI/CD Deploy ===

{ Microservice architecture → integration of different apps → deploy }  Build, test, deploy, run

Linux → tar files; Windows → zip files

Release ===

Monitoring via metrics
Logging and monitoring

Observability → comes with inference / prediction of events

Kernel-level monitoring (Splunk tool)

Security ===

Service-level authentication and authorization

Scan dependencies for vulnerabilities

What is Virtualization?

Virtualization is a technology that creates virtual versions of computer resources such as hardware
platforms, operating systems, storage devices, and network resources. It’s like creating a software-
based replica of a physical machine, allowing you to run multiple isolated environments on the same
hardware or across a distributed system.

• Imagine you have a powerful computer but you only use a small portion of its resources.

• Virtualization allows you to split that computer into several virtual machines (VMs), each
acting like a separate computer with its operating system and applications.

• Each virtual machine is isolated from the others, meaning issues in one virtual machine
won’t affect others.

• This allows you to optimize resource utilization, run multiple applications on a single
machine, and improve scalability by easily adding or removing virtual machines as needed.

What is Containerization?

Containerization is a lightweight form of virtualization that allows you to run applications and their
dependencies in isolated containers. Each container shares the same operating system kernel but is
isolated from other containers, providing a portable and consistent runtime environment for
applications.

• Containers provide process isolation, ensuring that applications running in one container do
not affect applications running in other containers.

• Containers encapsulate all dependencies and configuration required to run an application,


making them portable across different environments.

• Containers are lightweight compared to traditional virtual machines (VMs) because they
share the host operating system kernel.

• Containers are designed to be scalable, allowing you to quickly scale up or down based on
demand.

• Containers enable developers to build, test, and deploy applications more efficiently, leading
to faster release cycles and improved collaboration between development and operations
teams.
Virtualization Vs. Containerization

Below are the differences between virtualization and Containerization.

Isolation – Virtualization: each VM runs its own guest operating system. Containerization: containers share the host operating system kernel.
Resource Usage – Virtualization: each VM requires its own set of resources. Containerization: containers are lightweight and share host resources.
Performance – Virtualization: may have higher overhead due to multiple OS instances. Containerization: lower overhead as containers share the host OS kernel.
Portability – Virtualization: VMs are less portable due to varying guest OS. Containerization: containers are highly portable across different systems.
Deployment Speed – Virtualization: slower deployment times due to the OS boot process. Containerization: faster deployment times as containers start quickly.
Resource Utilization – Virtualization: requires more resources as each VM has its own OS. Containerization: more efficient resource utilization.
Ecosystem – Virtualization: mature ecosystem with various hypervisors and tools. Containerization: growing ecosystem with tools like Docker and Kubernetes.
Use Cases – Virtualization: ideal for running multiple applications on a single host. Containerization: ideal for microservices architectures and cloud-native applications.
Docker is a platform that allows you to package, ship, and run
applications within lightweight, isolated environments called
containers, ensuring consistent execution across different
systems. Docker is written in the Go programming language and
takes advantage of several features of the Linux kernel to deliver
its functionality. Docker uses a technology called namespaces to
provide the isolated workspace called the container. When you
run a container, Docker creates a set of namespaces for that
container.

The Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as
images, containers, networks, and volumes.

The Docker client

The Docker client (docker) is the primary way that many Docker users interact with Docker. When
you use commands such as docker run, the client sends these commands to dockerd, which carries
them out.

Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and
Docker looks for images on Docker Hub by default. You can even run your own private registry.

When you use the docker pull or docker run commands, Docker pulls the required images from your
configured registry. When you use the docker push command, Docker pushes your image to your
configured registry.

Images

An image is a read-only template with instructions for creating a Docker container. Often, an image is
based on another image, with some additional customization.

WSL for Docker Desktop

Docker Desktop for Windows, WSL (Windows Subsystem for Linux) 2 is a backend that allows you to
run Linux containers on Windows, leveraging a full Linux kernel and improving file system sharing
and boot times, according to Docker Docs.

Here's a more detailed explanation:


• What is WSL?

WSL is a feature in Windows that allows you to run a Linux environment (including command-line
tools and GUI apps) directly on Windows, without needing a virtual machine or dual boot.

• Why use WSL 2 with Docker?

Docker Desktop for Windows uses WSL 2 as a backend to run the Docker daemon and containers,
enabling you to run both Linux and Windows containers on the same machine.

Docker should be run with the latest image.

Day :15 10/3/25


Docker Commands

Docker Run command

This command is used to run a container from an image. The docker run command is a combination of the docker create and docker start commands: it creates a new container from the specified image and starts that container. If the Docker image is not present locally, docker run pulls it first.

$ docker run <image_name>


To give the container a name:
$ docker run --name <container_name> <image_name>

Docker Pull

This command allows you to pull any image which is present in the official registry of docker, Docker
hub. By default, it pulls the latest image, but you can also mention the version of the image.

$ docker pull <image_name>


Docker PS

This command (by default) shows us a list of all the running containers. We can use various flags with
it.

• -a flag: shows us all the containers, stopped or running.

• -l flag: shows us the latest container.

• -q flag: shows only the Id of the containers.

$ docker ps [options..]

Docker Stop

This command allows you to stop a container if it has crashed or you want to switch to another one.

$ docker stop <container_ID>

Docker Start

Suppose you want to start the stopped container again, you can do it with the help of this command.

$ docker start <container_ID>

Docker rm

To delete a container. By default when a container is created, it gets an ID as well as an imaginary


name such as confident_boyd, heuristic_villani, etc. You can either mention the container name or its
ID.
Some important flags:

• -f flag: remove the container forcefully.

• -v flag: remove the volumes.

• -l flag: remove the specific link mentioned.

$ docker rm {options} <container_name or ID>

Docker RMI

To delete the image in docker. You can delete the images which are useless from the docker local
storage so you can free up the space

docker rmi <image ID/ image name>

Docker Images

Lists all the pulled images which are present in our system.

$ docker images

Docker exec

This command allows us to run new commands in a running container. It only works while the container is running; after the container restarts, the exec'd command is not restarted.

Some important flags:

• -d flag: for running the commands in the background.

• -i flag: it will keep STDIN open even when not attached.

• -e flag: sets the environment variables

$ docker exec {options}


Docker Ports (Port Mapping)

In order to access the docker container from the outside world, we have to map the port on our
host( Our laptop for example), to the port on the container. This is where port mapping comes into
play.

$ docker run -d -p <port_on_host>:<port_on_container> <image_name>

So these were the 9 most basic docker commands that every beginner must know. Containerization
is a very vast topic but you can start from the very basic commands and by practicing them daily you
can master them.
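
For the same run / ps / stop / rm workflow driven from Python, here is a hedged sketch using the Docker SDK for Python (docker-py), assuming it is installed (pip install docker) and the Docker daemon is running; the image, container name and port are arbitrary:

import docker

client = docker.from_env()

# Equivalent of: docker run -d -p 8080:80 --name web nginx:latest
container = client.containers.run("nginx:latest", name="web",
                                   detach=True, ports={"80/tcp": 8080})

print(client.containers.list())   # like `docker ps`

container.stop()                  # like `docker stop web`
container.remove()                # like `docker rm web`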

Docker Login

The Docker login command will help you to authenticate with the Docker hub by which you can push
and pull your images.

docker login

It will ask you to enter the username and password after that you will authenticate with DockerHub
and you can perform the tasks.

Docker Push

Once you have built your own customized image using a Dockerfile, you need to store the image in a remote registry (Docker Hub). For that, you push your image using the following command.

docker push <Image name/Image ID>

Docker Build

The docker build command is used to build the docker images with the help of Dockerfile.

docker build -t image_name:tag .


In place of image_name, use the name you want to build the image with and give a tag; the trailing "." (dot) represents the current directory (the build context).

Docker Stop

You can stop and start the docker containers where you can do the maintenance for containers. To
stop and start specific containers you can use the following commands.

docker stop container_name_or_id

Stop Multiple Containers

Instead of stopping a single container. You can stop multiple containers at a time by using the
following commands.

docker stop container1 container2 container3

Docker Restart

While running containers in Docker you may face errors and containers may fail to start. You can restart the containers to resolve this by using the following command.

docker restart container_name_or_id

Docker Inspection

Docker containers can run into errors in real time; to debug a container's errors you can use the following command.

docker inspect container_name_or_id

Docker Commit command

After running a container from the current image, you can make updates by interacting with the container; from that container you can then create a new image using the following command.

docker commit container_name_or_id new_image_name:tag

Docker Basic Command

Following are the some of the docker basic commands

1. docker images: lists all the images which have been pulled or built on that Docker host.

2. docker pull: pulls Docker images from Docker Hub.

3. docker run: runs a Docker image as a container.

4. docker ps: lists all the containers which are running on the Docker host.

5. docker stop: stops a Docker container which is already running.

6. docker rm: removes containers which are in the stopped state.

Docker Commands List


Following are the docker commands, listed from building a Docker image, to running it as a Docker container, to attaching Docker volumes to it.

Docker Image Command

1. docker build command: It will build Docker images by using the Dockerfile.

2. docker pull command: Docker pull command will pull the Docker image which is available on Docker Hub.

3. docker images command: It will list all the images which are pulled or built on the docker host.

4. docker inspect command: It helps to debug the docker image if any errors occurred while building or pulling the image.

5. docker push command: Docker push command will push the docker image to Docker Hub.

6. docker save command: It will save the docker image as a tar archive (which can later be reloaded with docker load).

7. docker rmi command: It will remove the docker image.

Docker File:

Stages of Creating Docker Image from Dockerfile

The following are the stages of creating docker image form Dockerfile:

1. Create a file named Dockerfile.

2. Add instructions in Dockerfile.

3. Build Dockerfile to create an image.

4. Run the image to create a container.

Dockerfile commands/Instructions

1. FROM

• Represents the base image(OS), which is the command that is executed first before any other
commands.

Syntax

FROM <ImageName>

Example: The base image will be ubuntu:19.04 Operating System.

FROM ubuntu:19.04

2. COPY

• The copy command is used to copy the file/folders to the image while building the image.

Syntax:

COPY <Source> <Destination>


Example: Copying the .war file to the Tomcat webapps directory

COPY target/java-web-app.war /usr/local/tomcat/webapps/java-web-app.war

3. ADD

• While creating the image, we can download files from remote HTTP/HTTPS sources using the ADD command.

Syntax

ADD <source/URL> <destination>

Example: downloading the Jenkins WAR file using the ADD command (the destination path shown here is only an example):

ADD https://get.jenkins.io/war/2.397/jenkins.war /opt/jenkins.war

4. RUN

• Scripts and commands are run with the RUN instruction. The execution of RUN commands or
instructions will take place while you create an image on top of the prior layers (Image).

Syntax

RUN < Command + ARGS>

Example

RUN touch file

5. CMD

• The main purpose of the CMD command is to start the process inside the container and it
can be overridden.

Syntax

CMD [command + args]

Example: Starting Jenkins

CMD ["java","-jar", "Jenkins.war"]

6. ENTRYPOINT

• A container that will function as an executable is configured by ENTRYPOINT. When you start
the Docker container, a command or script called ENTRYPOINT is executed.

• It can’t be overridden at run time. The only difference between CMD and ENTRYPOINT is that CMD can be overridden by arguments passed to docker run, while ENTRYPOINT cannot.

Syntax

ENTRYPOINT [command + args]

Example: Executing the echo command.

ENTRYPOINT ["echo","Welcome to GFG"]

7. MAINTAINER
• By using the MAINTAINER command we can identify the author/owner of the Dockerfile and
we can set our own author/owner for the image.

Syntax:

MAINTAINER <NAME>

Example: Setting the author for the image as a GFG author.

MAINTAINER GFG author
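Putting several of these instructions together, here is a minimal sketch of writing, building and running a Dockerfile from the shell (the app.py script, the flask dependency and the port number are assumptions for illustration; python:3.9-alpine is a public base image):

cat > Dockerfile <<'EOF'
FROM python:3.9-alpine
COPY app.py /app/app.py
RUN pip install --no-cache-dir flask
CMD ["python", "/app/app.py"]
EOF

docker build -t my-python-app:1.0 .           # build the image from the Dockerfile in the current directory
docker run -d -p 5000:5000 my-python-app:1.0  # run it, mapping the app's port to the host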

Docker Networking

Networking allows containers to communicate with each other and with the host system. Containers
run isolated from the host system and need a way to communicate with each other and with the
host system.

By default, Docker provides two network drivers for you, the bridge and the overlay drivers.

docker network ls

NETWORK ID NAME DRIVER

xxxxxxxxxxxx none null

xxxxxxxxxxxx host host

xxxxxxxxxxxx bridge bridge

Bridge Networking

The default network mode in Docker. It creates a private network between the host and containers,
allowing containers to communicate with each other and with the host system.
If you want to secure your containers and isolate them from the default bridge network you can also
create your own bridge network.

docker network create -d bridge my_bridge

Now, if you list the docker networks, you will see a new network.

docker network ls

NETWORK ID NAME DRIVER

xxxxxxxxxxxx bridge bridge

xxxxxxxxxxxx my_bridge bridge

xxxxxxxxxxxx none null

xxxxxxxxxxxx host host

This new network can be attached to the containers, when you run these containers.

docker run -d --net=my_bridge --name db training/postgres

This way, you can run multiple containers on a single host platform where one container is attached
to the default network and the other is attached to the my_bridge network.

These containers are completely isolated with their private networks and cannot talk to each other.

However, you can at any point of time, attach the first container to my_bridge network and enable
communication

docker network connect my_bridge web
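After the connect command above, both containers sit on my_bridge and can reach each other by container name. A quick sketch to verify this (assumes the getent utility is available inside the web container's image):

docker network inspect my_bridge   # lists the containers attached to this network
docker exec web getent hosts db    # the name "db" now resolves from inside "web"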


Host Networking

This mode allows containers to share the host system's network stack, providing direct access to the
host system's network.

To attach a host network to a Docker container, you can use the --network="host" option when
running a docker run command. When you use this option, the container has access to the host's
network stack, and shares the host's network namespace. This means that the container will use the
same IP address and network configuration as the host.

Here's an example of how to run a Docker container with the host network:

docker run --network="host" <image_name> <command>

Keep in mind that when you use the host network, the container is less isolated from the host
system, and has access to all of the host's network resources. This can be a security risk, so use the
host network with caution.

Additionally, not all Docker image and command combinations are compatible with the host
network, so it's important to check the image documentation or run the image with the --
network="bridge" option (the default network mode) first to see if there are any compatibility issues.

Overlay Networking

This mode enables communication between containers across multiple Docker host machines,
allowing containers to be connected to a single network even when they are running on different
hosts.

Macvlan Networking

This mode allows a container to appear on the network as a physical host rather than as a container.
Docker Volumes

## Problem Statement

It is a very common requirement to persist the data in a Docker container beyond the lifetime of the container. However, the file system of a Docker container is deleted/removed when the container dies.

## Solution

There are 2 different ways how docker solves this problem.

1. Volumes

2. Bind Directory on a host as a Mount

### Volumes

Volumes aim to solve this problem by providing a way to store data on the host file system, separate from the container's file system, so that the data can persist even if the container is deleted and recreated.

![image](https://user-images.githubusercontent.com/43399466/218018334-286d8949-d155-4d55-80bc-24827b02f9b1.png)

Volumes can be created and managed using the docker volume command. You can create a new
volume using the following command:

```

docker volume create <volume_name>

```
Once a volume is created, you can mount it to a container using the -v or --mount option when
running a docker run command.

For example:

```

docker run -it -v <volume_name>:/data <image_name> /bin/bash

```

This command will mount the volume <volume_name> to the /data directory in the container. Any data written to the /data directory inside the container will be persisted in the volume on the host file system.
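A few related commands for managing volumes (the volume name is illustrative):

```

docker volume ls                  # list all volumes
docker volume inspect my_data     # show where the volume lives on the host
docker volume rm my_data          # remove the volume once no container uses it

```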

### Bind Directory on a host as a Mount

Bind mounts also aim to solve the same problem, but in a completely different way. Using this approach, a user can mount a directory from the host file system into a container. Bind mounts behave the same way as volumes, but are specified using a host path instead of a volume name.

For example,

```

docker run -it -v <host_path>:<container_path> <image_name> /bin/bash

```

## Key Differences between Volumes and Bind Directory on a host as a Mount

Volumes are created, managed, mounted and deleted using the Docker API. Volumes are also more flexible than bind mounts, as they can be managed and backed up separately from the host file system, and can be moved between containers and hosts.

In a nutshell, bind mounts (binding a directory on the host as a mount) are appropriate for simple use cases where you need to mount a directory from the host file system into a container, while volumes are better suited for more complex use cases where you need more control over the data being persisted in the container.

Versioning: Python 3.9

Semantic versioning uses MAJOR.MINOR.PATCH, e.g. 1.0.1:

1 → major version

0 → minor version → vulnerability fixes and minute features are enabled here

1 → patch version → bugs in unstable or visible features are fixed here

Python 3.9-alpine → a compressed (Alpine-based) variant of the Python image, used to make lightweight containers.
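A quick way to see the size difference between the two image variants (both tags are public on Docker Hub):

docker pull python:3.9
docker pull python:3.9-alpine
docker images python    # the alpine tag is typically several times smaller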

Day 15:
Artifactory is a branded term to refer to a repository manager that organizes all of your binary
resources. These resources can include remote artifacts, proprietary libraries, and other third-party
resources. A repository manager pulls all of these resources into a single location.

Incident response (IR) refers to an organization's processes and technologies for detecting, containing, and recovering from cybersecurity incidents, breaches, or cyberattacks.

Root cause analysis (RCA) is defined as a collective term that describes a wide range of approaches,
tools, and techniques used to uncover causes of problems.

How to write a pipeline?

Stage: Build

BuildImage → docker build

Stage: Push

Push to Docker Hub → docker push nginx:1.9.0 (the tag indicates the version built on your local machine)

Stage: Deploy

DeployToEC2 → docker run

Stage: Test

App test → run a .sh test script
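A rough shell-level sketch of what those stages would run (the image name, tag, registry account and test script are illustrative assumptions):

# Stage: Build
docker build -t myuser/myapp:1.9.0 .

# Stage: Push
docker login
docker push myuser/myapp:1.9.0

# Stage: Deploy (on the EC2 host)
docker run -d -p 80:80 myuser/myapp:1.9.0

# Stage: Test
./app_test.sh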

Tools in DevOps

1) Repository hosting: GitHub, GitLab & Bitbucket

2) Artifact repository: JFrog Artifactory, Nexus

3) CI: GitHub Actions, Jenkins, GitLab CI/CD

4) Continuous Deployment (CD): Argo CD

5) Code quality: SonarQube

6) Secrets management: HashiCorp Vault, AWS / Azure key vault services

7) Monitoring: Prometheus, Grafana

Need for Kubernetes

As container requests increase, a container gets overloaded and stops. Scaling helps you manage the request load.

What is Kubernetes?

A container orchestration (container handling) service.

Self-healing → stops or removes failed tasks

Auto-healing → a crashed container is rebooted within a few seconds

YAML:

---
doe: "a deer, a female deer"
ray: "a drop of golden sun"
pi: 3.14159
xmas: true
french-hens: 3
calling-birds:
  - huey
  - dewey
  - louie
  - fred
xmas-fifth-day:
  calling-birds: four
  french-hens: 3
  golden-rings: 5
  partridges:
    count: 1
    location: "a pear tree"
  turtle-doves: two        # nested keys are indented with two spaces only

JSON

{
  "doe": "a deer, a female deer",
  "ray": "a drop of golden sun",
  "pi": 3.14159,
  "xmas": true,
  "french-hens": 3,
  "calling-birds": [
    "huey",
    "dewey",
    "louie",
    "fred"
  ],
  "xmas-fifth-day": {
    "calling-birds": "four",
    "french-hens": 3,
    "golden-rings": 5,
    "partridges": {
      "count": 1,
      "location": "a pear tree"
    },
    "turtle-doves": "two"
  }
}

Arrays:

---
items:
  - 1
  - 2
  - 3
  - 4
  - 5
names:
  - "one"
  - "two"
  - "three"
  - "four"

String :

---

foo: this is a normal string

---

foo: "this is not a normal string\n"

bar: this is not a normal string\n

Nested:

---
foo:
  bar:
    - bar
    - rab
    - plop

Kubernetes:

Nodes → VMs (worker & master nodes)

kube-proxy, CoreDNS → network-level components

Kube-proxy is the routing layer used by Kubernetes to route traffic between nodes in a cluster. Built
on iptables, kube-proxy operates at Layer 4, e.g., it routes TCP, UDP, and SCTP.

CoreDNS is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS. When you
launch an Amazon EKS cluster with at least one node, two replicas of the CoreDNS image are
deployed by default, regardless of the number of nodes deployed in your cluster.
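To see these components in a running cluster (a sketch; they normally live in the kube-system namespace, though exact pod names vary by distribution):

kubectl get pods -n kube-system    # kube-proxy and CoreDNS pods are listed here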

Kubelet →Container level

the primary node agent that runs on each node and is responsible for managing and maintaining
containers, ensuring they match the desired state specified in pod definitions

Day 16:
Kubernetes – Architecture

Kubernetes – Cluster Architecture

Kubernetes comes with a client-server architecture. It consists of master and worker nodes, with the
master being installed on a single Linux system and the nodes on many Linux workstations.

The master node contains components such as the API Server, controller manager, scheduler, and the etcd database for state storage. Each worker node runs a kubelet to communicate with the master, kube-proxy for networking, and a container runtime such as Docker to manage containers.

Kubernetes Components

Kubernetes is composed of a number of components, each of which plays a specific role in the
overall system. These components can be divided into two categories:

• nodes: Each Kubernetes cluster requires at least one worker node, which is a collection of
worker machines that make up the nodes where our container will be deployed.

• Control plane: The worker nodes and any pods contained within them will be under the
control plane.
Control Plane Components

It is basically a collection of various components that help us in managing the overall health of a
cluster. For example, if you want to set up new pods, destroy pods, scale pods, etc. Basically, 4
services run on Control Plane:

1. Kube-API server

The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. It is
like an initial gateway to the cluster that listens to updates or queries via CLI like Kubectl. Kubectl
communicates with API Server to inform what needs to be done like creating pods or deleting pods
etc. It also works as a gatekeeper. It generally validates requests received and then forwards them to
other processes. No request can be directly passed to the cluster, it has to be passed through the API
Server.

2. Kube-Scheduler

When API Server receives a request for Scheduling Pods then the request is passed on to the
Scheduler. It intelligently decides on which node to schedule the pod for better efficiency of the
cluster.

3. Kube-Controller-Manager

The kube-controller-manager is responsible for running the controllers that handle the various
aspects of the cluster’s control loop. These controllers include the replication controller, which
ensures that the desired number of replicas of a given application is running, and the node
controller, which ensures that nodes are correctly marked as “ready” or “not ready” based on their
current state.

4. etcd

It is a key-value store of a Cluster. The Cluster State Changes get stored in the etcd. It acts as the
Cluster brain because it tells the Scheduler and other processes about which resources are available
and about cluster state changes.

Node Components (Data Plane)


These are the nodes where the actual work happens. Each Node can have multiple pods and pods
have containers running inside them. There are 3 processes in every Node that are used to Schedule
and manage those pods.

The following are the some of the components related to Node:

1. Container runtime

A container runtime is needed to run the application containers inside a pod. Example → Docker.

2. kubelet

kubelet interacts with both the container runtime as well as the Node. It is the process responsible
for starting a pod with a container inside.

3. kube-proxy

It is the process responsible for forwarding the request from Services to the pods. It has intelligent
logic to forward the request to the right pod in the worker node.

Pod: a group of containers, or a single container

It is the smallest indivisible unit in Kubernetes

It behaves much like a container

Pods add security features (such as a security context) on top of containers
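As a quick sketch, a single pod can also be created imperatively from the CLI (the pod name is illustrative; nginx is a public image):

kubectl run nginx-pod --image=nginx
kubectl get pods
kubectl delete pod nginx-pod    # a bare pod like this is not recreated after deletion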

To view a list of all the pods in the cluster, you can use the following command:

kubectl get pods


To view a list of all the nodes in the cluster, you can use the following command:

kubectl get nodes

To view a list of all the services in the cluster, you can use the following command:

kubectl get services

What Does Minikube Do?

Minikube is a one-node Kubernetes cluster that runs in VirtualBox (or another driver, such as Docker) on your local machine so that you can test Kubernetes on your local setup. A production cluster needs multiple master nodes and multiple worker nodes, each with their own separate responsibilities. Testing something locally against such a cluster, for example deploying a new application or a new component, would be very difficult and would consume a lot of resources like CPU, memory, etc.

Minikube comes into play here. Minikube is an open-source tool that is basically a one-node cluster where the master processes and the worker processes both run on one node. This node must have a Docker container runtime pre-installed to run the containers, or pods with containers, on this node.

Deploying a Service Using Minikube and Kubectl

Kubectl is the Kubernetes CLI tool. Follow these steps to deploy a service using Minikube and Kubectl:

Step 1. Enter the following command to start the Kubernetes cluster:

minikube start

If you are logged in as a root user you might get an error message:

Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.

This is because the default driver is the Docker driver, and it should not be used with root privileges. To fix this, we should log in as a user other than the root user.
Enter the following command to add a new user

adduser [USERNAME]

Now enter the following command to login as a new user

su - [USERNAME]

Now upon entering the same minikube start command

minikube start

you will get a similar output, wait for some time until the installation of dependencies gets
completed:

Step 2. Now if we check the number of nodes running, we will get the minikube node running.

kubectl get nodes

this will give you the following output:

or you can also check the status by running the following command:

minikube status

you will get a similar output on the terminal:

Step 3. To find out the services that are currently running on Kubernetes, enter the following command:

kubectl get services

you will only see the default Kubernetes service running.

Step 4. Enter the following command on your terminal to create a deployment

kubectl create deployment gggy --image=kicbase/echo-server:1.0

Step 5. Enter the following command to expose the deployment on port 6000:

kubectl expose deployment gggy --type=NodePort --port=6000

Step 6. Now when you check out the list of services


kubectl get services

you will find the gggy service:

Step 7. Now to run the service, enter the following minikube command:

minikube service gggy

It will display the following result:

You have to just copy the address (http://127.0.0.1:42093/ for me) and paste it to your browser to
see the application website. It looks like the following:

Some common Minikube command

1. Deleting The Minikube Cluster

You just have to simply enter the following command to delete the minikube cluster.

minikube delete

2. To Pause Kubernetes Without Impacting Deployed Applications.

minikube pause

3. To Unpause The Instance

minikube unpause

4. To Stop The Cluster

minikube stop

5. To Delete All Minikube Clusters

minikube delete --all

C:\Users\pmlba>minikube start

* minikube v1.35.0 on Microsoft Windows 11 Home Single Language 10.0.22631.4602 Build


22631.4602

* Using the docker driver based on existing profile


* Starting "minikube" primary control-plane node in "minikube" cluster

* Pulling base image v0.0.46 ...

* Restarting existing docker container for "minikube" ...

! Failing to connect to https://registry.k8s.io/ from inside the minikube container

* To pull new external images, you may need to configure a proxy:


https://minikube.sigs.k8s.io/docs/reference/networking/proxy/

* Preparing Kubernetes v1.32.0 on Docker 27.4.1 ...

* Verifying Kubernetes components...

- Using image gcr.io/k8s-minikube/storage-provisioner:v5

* Enabled addons: default-storageclass, storage-provisioner

* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

C:\Users\pmlba>kubectl get nodes

NAME STATUS ROLES AGE VERSION

minikube Ready control-plane 7d1h v1.32.0

C:\Users\pmlba>minikube status

minikube

type: Control Plane

host: Running

kubelet: Running

apiserver: Running

kubeconfig: Configured

C:\Users\pmlba>kubectl get srvice

error: the server doesn't have a resource type "srvice"

C:\Users\pmlba>kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d1h


C:\Users\pmlba>kubectl create deployment vishnu --image=kicbase/echo-server:1.0

deployment.apps/vishnu created

C:\Users\pmlba>minikube service vishnu

X Exiting due to SVC_NOT_FOUND: Service 'vishnu' was not found in 'default' namespace.

You may select another namespace by using 'minikube service vishnu -n <namespace>'. Or list out all
the services using 'minikube service list'

C:\Users\pmlba>kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d1h

C:\Users\pmlba>kubectl expose deployment vishnu --type=NodePort --port=6000

service/vishnu exposed

C:\Users\pmlba>kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d1h

vishnu NodePort 10.110.106.135 <none> 6000:30668/TCP 3s

C:\Users\pmlba>minikube service vishnu

|-----------|--------|-------------|---------------------------|

| NAMESPACE | NAME | TARGET PORT | URL |

|-----------|--------|-------------|---------------------------|

| default | vishnu | 6000 | http://192.168.49.2:30668 |

|-----------|--------|-------------|---------------------------|

* Starting tunnel for service vishnu.

|-----------|--------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |

|-----------|--------|-------------|------------------------|

| default | vishnu | | http://127.0.0.1:50317 |

|-----------|--------|-------------|------------------------|

* Opening service default/vishnu in default browser...

! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

With a bare pod, if we delete the pod it stays deleted; nothing recreates it.

This is the file to create the pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:                          # specification for the pod
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
Even if you create duplicate pods, Kubernetes gives each pod a unique name.

The scheduler spreads pods round-robin style across different nodes in different zones.

However, we still use only a single URL: a Service provides one endpoint in front of n pods.

To automate pod re-creation when a pod is deleted, use a Deployment:

apiVersion: apps/v1
kind: Deployment                 # this is what makes the difference for automation; the controller maintains the desired state even when pods are deleted
metadata:
  name: nginx-deployment
spec:                            # specification for the deployment
  replicas: 3                    # always keeps 3 pods running in the backend; if one is deleted it is replaced
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
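To apply this manifest and watch the self-healing behaviour (a sketch; assumes the manifest is saved as nginx-deployment.yaml):

kubectl apply -f nginx-deployment.yaml
kubectl get pods                              # three nginx pods are running
kubectl delete pod <one-of-the-pod-names>
kubectl get pods                              # the Deployment has already created a replacement pod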
Day 17 & 18:

Task 1: Create a Next.js app on an EC2 server and deploy it using Jenkins

# 1. Update System Packages

sudo apt update && sudo apt upgrade -y

# 2. Install Node.js & npm (LTS version)

curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -

sudo apt install -y nodejs

node -v

npm -v

# 3. Install PM2 (Process Manager)

sudo npm install -g pm2

# 4. Upload Next.js Project

git clone https://github.com/your-repo.git

cd your-repo

npm install

npm run build

# 5. Start Next.js App with PM2

pm2 start npm --name "nextjs-app" -- start

pm2 save

pm2 startup

# 6. Install & Configure Nginx as Reverse Proxy

sudo apt install nginx -y

sudo vi /etc/nginx/sites-available/default
# Add this config:

server {

listen 80;

server_name YOUR_SERVER_IP;

location / {

proxy_pass http://localhost:3000;

proxy_http_version 1.1;

proxy_set_header Upgrade $http_upgrade;

proxy_set_header Connection 'upgrade';

proxy_set_header Host $host;

proxy_cache_bypass $http_upgrade;
    }
}

# Restart Nginx

sudo systemctl restart nginx

# Check Logs

pm2 logs nextjs-app

sudo tail -f /var/log/nginx/error.log

# 7. Install Jenkins (CI/CD Automation)

sudo apt update

sudo apt install openjdk-21-jdk -y

sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key

echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable


binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

sudo apt update

sudo apt install jenkins -y


sudo systemctl status jenkins

# 8. Get Jenkins Initial Admin Password

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

# 9. Set Permissions for Jenkins

sudo su - jenkins

sudo passwd jenkins

sudo vi /etc/sudoers

# Add this line:

jenkins ALL=(ALL) NOPASSWD: ALL

sudo chown -R jenkins:jenkins /var/lib/jenkins/workspace

# 10. Set Up GitHub Webhook (In GitHub Settings)

Payload URL: http://your-jenkins-server/github-webhook/

Content Type: application/json

Event Trigger: Just the push event

SSL Verification: Enable

# 11. Restart Everything

pm2 restart nextjs-app

sudo systemctl restart nginx

Jenkins: Freestyle vs. Pipeline Jobs Setup

1⃣ Freestyle Job Setup

A Freestyle Job is a simple, GUI-based approach for running builds.

Steps to Set Up a Freestyle Job

1. Login to Jenkins

o Open your browser and go to: http://your-server-ip:8080


o Use the initial admin password (sudo cat
/var/lib/jenkins/secrets/initialAdminPassword)

2. Create a New Job

o Click New Item → Select Freestyle Project

o Name it: nextjs-deploy → Click OK

3. Connect GitHub Repository

o Go to Source Code Management (SCM) → Select Git

o Enter your GitHub repo URL:

o https://github.com/your-username/your-nextjs-repo.git

o Add Git Credentials (if private repo)

4. Add Build Steps

o Click Add Build Step → Select Execute Shell

o Paste this script to install dependencies & build:

  npm install
  npm run build

5. Deploy the Next.js App with PM2

o Click Add Build Step → Execute Shell

o Add this script:

  pm2 restart nextjs-app || pm2 start npm --name "nextjs-app" -- start
  pm2 save

6. Set Up Post-Build Actions

o Click Add Post-Build Action → Git Publisher (optional for tagging builds)

7. Save & Build

o Click Save → Click Build Now

Freestyle Job Summary

• Trigger: Manual or GitHub Webhook

• Deployment: Uses shell scripts

• Ease of Use: GUI-based setup

2⃣ Pipeline Job Setup

A Pipeline Job is a script-based approach using Jenkinsfile for CI/CD automation.


Steps to Set Up a Pipeline Job

1. Create a New Pipeline Job

o Click New Item → Select Pipeline

o Name it: nextjs-deploy-pipeline → Click OK

2. Connect GitHub Repository

o Under Pipeline Definition, select Pipeline script from SCM

o Choose Git → Enter your repo URL

o Set Script Path to Jenkinsfile

3. Create a Jenkinsfile in Your Repo


Add a Jenkinsfile in your repository root with this content:

pipeline {
    agent any

    stages {
        stage('Checkout Code') {
            steps {
                git 'https://github.com/your-username/your-nextjs-repo.git'
            }
        }

        stage('Install Dependencies') {
            steps {
                sh 'npm install'
            }
        }

        stage('Build Project') {
            steps {
                sh 'npm run build'
            }
        }

        stage('Deploy with PM2') {
            steps {
                sh '''
                pm2 restart nextjs-app || pm2 start npm --name "nextjs-app" -- start
                pm2 save
                '''
            }
        }
    }

    post {
        success {
            echo "Deployment Successful"
        }
        failure {
            echo "Deployment Failed"
        }
    }
}

4. Save & Run the Pipeline

o Click Save → Click Build Now

Pipeline Job Summary

• Trigger: Manual, GitHub Webhook, or Auto-Scheduled

• Deployment: Defined in a Jenkinsfile

• Flexibility: Easier to modify & version control

3⃣ Setting Up a GitHub Webhook (For Auto Deployment)

1. Go to Your GitHub Repo → Settings → Webhooks

2. Click "Add Webhook"

3. Set Payload URL:


4. http://your-jenkins-server/github-webhook/

5. Content Type: application/json

6. Triggers: Choose "Just the push event"

7. Save Webhook

Steps to Deploy "Hello World" in EC2 and Integrate with Jenkins & GitHub (CI/CD Pipeline)

1. Set Up EC2 Instance

o Launch an EC2 instance (Amazon Linux/Ubuntu).

o Install Git, Python, and Jenkins on the EC2 instance.

2. Create a Python File (hello.py) in EC2

o Write a simple "Hello, World!" Python script and store it in a GitHub repository.

3. Install & Configure Jenkins

o Install Jenkins on the EC2 instance.

o Set up a Jenkins Pipeline Job to pull the latest code from GitHub and deploy it.

4. Integrate Jenkins with GitHub

o Configure GitHub Webhook to trigger Jenkins on updates.

o Ensure Jenkins has permission to update the GitHub repository.

Step-by-Step Implementation

Step 1: Launch EC2 & Install Required Tools

1. SSH into your EC2 instance:

   ssh -i your-key.pem ec2-user@your-ec2-ip

2. Install required packages:

   sudo yum update -y        # For Amazon Linux
   sudo yum install git python3 -y

3. Install Jenkins:

   sudo yum install java-11-openjdk -y
   sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo
   sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
   sudo yum install jenkins -y
   sudo systemctl enable jenkins
   sudo systemctl start jenkins

4. Get the Jenkins initial password:

   sudo cat /var/lib/jenkins/secrets/initialAdminPassword

5. Open http://<EC2-PUBLIC-IP>:8080 in a browser and complete the setup.

Step 2: Create the hello.py Script

1. Clone your GitHub repository:

   git clone https://github.com/your-username/your-repo.git
   cd your-repo

2. Create a Python script:

   echo 'print("Hello, World!")' > hello.py

3. Commit & push to GitHub:

   git add hello.py
   git commit -m "Initial hello world script"
   git push origin main

Step 3: Configure Jenkins Job

1. Open Jenkins → Create a New Item → Choose Pipeline.

2. Go to Pipeline → Choose Pipeline Script from SCM.

3. Set GitHub Repository URL and add credentials if needed.

4. Use the following Jenkinsfile (store it in your repo as Jenkinsfile):

pipeline {
    agent any

    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/your-username/your-repo.git'
            }
        }

        stage('Run Python Script') {
            steps {
                sh 'python3 hello.py'
            }
        }

        stage('Update GitHub if Modified') {
            steps {
                script {
                    def changes = sh(script: 'git status --porcelain', returnStdout: true).trim()
                    if (changes) {
                        sh '''
                        git config --global user.email "your-email@example.com"
                        git config --global user.name "Jenkins CI"
                        git add .
                        git commit -m "Automated update from Jenkins"
                        git push origin main
                        '''
                    }
                }
            }
        }
    }
}

Step 4: Automate with GitHub Webhook

1. Go to GitHub Repo → Settings → Webhooks.

2. Add Jenkins Webhook URL: http://<EC2-PUBLIC-IP>:8080/github-webhook/

3. Choose Just the push event.


Day 19
Terraform : Infrastructure as code

Task1 :Creating EC2 instance

provider "aws" {

region = "us-east-1"

resource "aws_instance" "ec2_instance" {

ami = "ami-0c55b159cbfafe1f0" # Replace with a valid AMI ID

instance_type = "t2.micro"

key_name = aws_key_pair.my_key.key_name

security_groups = [aws_security_group.ec2_sg.name]

tags = {

Name = "Terraform-EC2-With-SG"

Task 2: Creating S3 Buckets

provider "aws" {

region = "us-east-1"

resource "aws_s3_bucket" "my_bucket" {

bucket = "my-terraform-bucket-${random_id.bucket_id.hex}"

tags = {

Name = "MyTerraformS3Bucket"

}
resource "random_id" "bucket_id" {

byte_length = 4

Task 3: Creating an EC2 instance along with a .pem file and attaching it to the instance

provider "aws" {

region = "us-east-1"

# Generate a new SSH private key

resource "tls_private_key" "my_ssh_key" {

algorithm = "RSA"

rsa_bits = 2048

# Save the private key locally

resource "local_file" "private_key" {

content = tls_private_key.my_ssh_key.private_key_pem

filename = "${path.module}/my-terraform-key.pem"

file_permission = "0600"

# Upload the generated public key to AWS as a Key Pair

resource "aws_key_pair" "my_key" {

key_name = "my-terraform-key"

public_key = tls_private_key.my_ssh_key.public_key_openssh

# Create an EC2 instance using the generated key


resource "aws_instance" "ec2_instance" {

ami = "ami-0c55b159cbfafe1f0" # Replace with a valid AMI ID

instance_type = "t2.micro"

key_name = aws_key_pair.my_key.key_name # Attach the AWS key pair

tags = {

Name = "Terraform-EC2-With-Generated-Key"

terraform init

terraform validate

terraform plan

terraform apply

Task4:Creating EC2 instance along with security group.

provider "aws" {

region = "us-east-1"

resource "aws_security_group" "ec2_sg" {

name = "ec2-security-group"

description = "Allow SSH & HTTP access"

ingress {

from_port = 22

to_port = 22

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"] # Allow SSH from anywhere (restrict this in production)

}
ingress {

from_port = 80

to_port = 80

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"] # Allow HTTP traffic

egress {

from_port = 0

to_port =0

protocol = "-1"

cidr_blocks = ["0.0.0.0/0"]

Breakdown of Each Block

1⃣ Creating a Security Group

resource "aws_security_group" "ec2_sg" {

name = "ec2-security-group"

description = "Allow SSH & HTTP access"

• aws_security_group: Defines a new security group.

• name: Assigns a name to the security group.

• description: Provides a description of what the SG does.

2⃣ Ingress Rules (Inbound Traffic)

Controls traffic coming into the EC2 instance.


ingress {
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

Allows SSH access (port 22) from anywhere (0.0.0.0/0)


Security Concern: Allowing SSH access from all IPs is unsafe. Instead, restrict access to specific IPs,
e.g., your own public IP.

ingress {
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

Allows HTTP traffic (port 80) from anywhere (useful for web servers). If you need HTTPS (secure traffic), add a similar ingress rule for port 443.

3⃣ Egress Rules (Outbound Traffic)

Controls traffic going out from the EC2 instance.

egress {
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]
}

Allows all outbound traffic (no restrictions).


This means your EC2 instance can reach the internet, necessary for software updates, APIs, etc.
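A typical end-to-end workflow for applying and tearing down these configurations, run from the directory containing the .tf files (a sketch):

terraform init                # download the aws, tls, local and random providers
terraform validate            # check the configuration syntax
terraform plan -out=tfplan    # preview the resources that will be created
terraform apply tfplan        # create the instance, key pair, bucket and security group
terraform destroy             # tear everything down when finished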

Day 20
