Batch2 Notes
shell scripting
linux
git -> source mgt tool, version controlling; we use github as the remote or central
repo
jenkins -> integration mgt tool, where we actually integrate the required plugins,
like git, maven, ansible, k8s, docker, and so on
maven -> it's a build tool for java applications (npm for reactjs/nodejs
applications, gradle for android applications)
ansible -> configuration tool; roles, inventory file and modules; an ansible
controller and ansible worker nodes (ip addresses of target servers), ssh connection,
yaml files, ansible playbooks
docker -> containerization platform, where we create containers and
containerized applications using docker images;
to create the docker images, we need a Dockerfile
docker compose file -> using a compose file we can run multiple containers as a
single service
docker swarm -> container mgt tool, meant for docker containers (see also podman)
stages :
stage 1 -> developers push the source code into the remote or
central repo, github
stage 2 -> pull the code into your local repo (git)
stage 3 -> converting the source code into artifacts using the maven build: mvn
install, mvn package, mvn clean, mvn test -> jar or war
stage 4 -> sonarqube -> testing the source code quality
stage 5 -> jfrog or nexus as artifact repos, s3 bucket in aws
stage 6 -> docker -> converting artifacts into docker images, converting the images
into docker containers or containerized applications
stage 7 -> loading the containerized applications into the k8s cluster, helm charts
stage 8 -> prometheus and grafana (dashboard) monitoring the containerized
application, whether the application is running as expected
jenkins
devops
ssh-keygen
forking repo
README file
git api
PAT creation
Branching strategies
linux - open-source operating system, where you can update and upgrade applications
for free
----------------------------------------------------------
pom.xml (project object model) -> mvn install -> compiles and builds the
artifacts defined in pom.xml
it contains all the dependencies and plugins which are used to build the artifacts:
war, jar
maven project
<project>
  <groupId>com.dbsbank</groupId>              <!-- organization name -->
  <artifactId>maven-java-project</artifactId> <!-- project name -->
  <version>1.0.0</version>
  <packaging>jar</packaging>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.kie.kogito</groupId>
      <artifactId>kogito-springboot-starter</artifactId>
      <version>1.22.1.Final</version>
    </dependency>
  </dependencies>
</project>
scope of dependencies:
-----------------------
test
compile
runtime
provided
Testing Framework:
------------------
JUnit--->Java
NUnit--->.Net
PyTest--->Python
NodeJs/Angular---> Jasmine
-----------------------------------------------------------------------------------
#Install Java:
amazon-linux-extras install java-openjdk11 -y
#Install GIT:
#Install Maven:
sudo wget https://dlcdn.apache.org/maven/maven-3/3.9.4/binaries/apache-maven-3.9.4-bin.tar.gz
#Create a symlink for Apache maven directory to update the maven versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#cd /usr/lib/jvm/
#eg: /usr/lib/jvm/java-11-openjdk-11.0.13.0.8-1.amzn2.0.3.x86_64
#java-11-openjdk-11.0.16.0.8-1.amzn2.0.1.x86_64
#java-11-openjdk-11.0.19.0.7-1.amzn2.0.1.x86_64
#java-11-openjdk-11.0.22.0.7-1.amzn2.0.1.x86_64
export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-11.0.22.0.7-1.amzn2.0.1.x86_64"
export MAVEN_HOME=/opt/apache-maven-3.9.6
export M2=/opt/apache-maven-3.9.6/bin
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$MAVEN_HOME:$M2
source ~/.bash_profile
source ~/.bashrc
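the extract and symlink steps themselves aren't shown above; a minimal sketch, assuming the apache-maven tarball was downloaded to /opt (adjust the version to match what you downloaded):
cd /opt
sudo tar -xzf apache-maven-3.9.6-bin.tar.gz        # extract the downloaded tarball (version assumed)
sudo ln -s /opt/apache-maven-3.9.6 /opt/maven      # symlink; repoint it here when upgrading maven versions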
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
jenkins node config -> hostname (ip of server), username, ssh credentials, home path
ssh credential -> key pair (public key and private key) -> jenkins stores the private key
for dockerhub or github -> user credentials -> username and token id
#Add User :
su - devopsadmin
ssh-keygen
ls ~/.ssh
#id_rsa.pub - public key
#cat id_rsa & copy the private key and paste it into the jenkins node config
"enter private key directly" field. Then,
pipeline {
    agent {
        label 'slave1'
    }
    tools {
        maven 'maven-3.9.6'
    }
    stages {
        stage('SCM Checkout') {
            steps {
                // Get some code from a GitHub repository
                git url: 'https://github.com/rajilingam/banking_web_application.git'
            }
        }
        stage('Maven Build') {
            steps {
                // Run Maven on a Unix agent.
                sh "mvn -Dmaven.test.failure.ignore=true clean package"
            }
        }
    }
}
-----------------------------------------------------------------------------------
ssh clients -> MobaXterm, PuTTY, SuperPuTTY, Git Bash, default terminals in linux os
or macOS
--------------------------------------------------------------------
devops
-------------------------------------------
aug7
-------------------------------------------
to run any web-based application, you need a web server -> nginx
reverse proxy -> it sits between the internet and the web servers (protects them
from direct exposure / hacking):
client/user - internet - nginx proxy server - web server of amazon
forward proxy -> client/user - nginx proxy server - internet - web server of amazon
client 1.1.1.1 -> proxy -> internet -> web server
a=rama    # no spaces around = in shell assignments
echo $a
fedora, centos, redhat, amazon linux -> yum package manager - to install packages and
execute the commands
useradd -> to create a user or to add the user
alpine -> apk
almalinux -> yum
tomcat-> 8080
jenkins-> 8080
-----------------------------------------------------------------------------------
three tier architecture / typical web application architecture / multi tier architecture
-----------------------------------------------------------------------------------
0 - 65535
Front-end/web server - User Interface - HTML, CSS, images, and client-side scripts
like JavaScript - nginx 80
-----------------------------------------------------------------------------------
A web server is a program that uses the HTTP or HTTPS (Hypertext Transfer Protocol)
protocol to serve web content (HTML and static content) to users.
Examples: nginx, Apache HTTP Server (httpd)
An application server is a container upon which you can build and expose business
logic and processes to client applications through various protocols including HTTP
in an n-tier architecture
Examples
JBoss/WildFly - RedHat
WebLogic - Oracle
WebSphere Application Server - IBM
WebSphere Liberty Profile - IBM
GlassFish
A database server is responsible for storing and managing data for the web application.
It stores structured data, such as user information, product details, or any other
data necessary for the application to function. When the application needs to
retrieve or store data, it communicates with the database server using queries.
Examples: MySQL, PostgreSQL, Oracle Database
-----------------------------------------------------------------------------------
Apache HTTP Server (HTTPD): port 80
Apache HTTP Server, commonly known as Apache, is one of the most popular and widely
used web servers in the world
-----------------------------------------------------------------------------------
Tomcat Server!!!! 8080
Tomcat or Apache Tomcat is a lightweight, open-source web container used to deploy
and run java-based web applications,
developed by the Apache Software Foundation (ASF).
-----------------------------------------------------------------------------------
#Install & configure Tomcat server :
#https://tomcat.apache.org/
sudo -i
yum update -y
#Install JDK
#Install epel Package:
amazon-linux-extras install epel -y
#Install Java:
export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.amzn2.0.1.x86_64"
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
source ~/.bashrc
source ~/.bash_profile
***********************************************************************************
#https://dlcdn.apache.org/tomcat/tomcat-8/v8.5.85/bin/apache-tomcat-8.5.85.tar.gz
https://dlcdn.apache.org/tomcat/tomcat-10/v10.1.11/bin/apache-tomcat-10.1.11.tar.gz
cd /opt
sudo wget https://dlcdn.apache.org/tomcat/tomcat-10/v10.1.11/bin/apache-tomcat-10.1.11.tar.gz
sudo tar -xzf apache-tomcat-10.1.11.tar.gz    # extract before renaming
mv apache-tomcat-10.1.11 tomcat
cd /opt/tomcat/bin
./startup.sh
***********************************************************************************
su - devopsadmin
ssh-keygen
ls ~/.ssh
cd /home/devopsadmin/.ssh
***********************************************************************************
Go to the conf directory and open server.xml; you will find lines like the ones below.
cd ..
cd conf/
vim server.xml
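a typical Connector entry in server.xml looks like this (exact attributes vary by Tomcat version); change port="8080" to move Tomcat to another port:
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />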
code quality ?
once you have built the artifact using mvn, you have to check the quality of the
source code
------
tomcat -> to change the port number -> conf/server.xml
sonarqube -> to change the port number -> conf/sonar.properties file
-------------------------------------------------
nexus - 8081
it's used to store the build artifacts and retrieve the build artifacts whenever
required
2 editions (OSS and Pro)
usually we store release and snapshot artifacts
the default port is defined in:
etc/nexus-default.properties
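a sketch of the relevant lines in etc/nexus-default.properties (default values shown); edit application-port to change the Nexus port:
# /opt/nexus/etc/nexus-default.properties
application-port=8081
application-host=0.0.0.0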
prerequisites
-------------------------
nexus installation
tomcat- apache
sonarqube -apache
source : https://help.sonatype.com/repomanager3/product-information/download
free -h
java -version
amazon-linux-extras list
Now, you need to set the JAVA_HOME and update the PATH in both the .bashrc
and .bash_profile files.
vim ~/.bashrc
export JAVA_HOME="/usr/lib/jvm/jre-1.8.0-openjdk"
export PATH="$JAVA_HOME/bin:$PATH"
Save and exit the text editor.
source ~/.bash_profile
sudo -i
cd /opt
# https://download.sonatype.com/nexus/3/nexus-3.59.0-01-unix.tar.gz
wget https://download.sonatype.com/nexus/3/nexus-3.59.0-01-unix.tar.gz
tar -xzf nexus-3.59.0-01-unix.tar.gz    # extract before renaming
mv /opt/nexus-3.59.0-01 /opt/nexus
as a good security practice, it's not advised to run the nexus service as the root
user, so create a new user called nexusadmin and grant sudo access to manage nexus
services as follows
useradd nexusadmin
visudo
open the /opt/nexus/bin/nexus.rc file, uncomment the run_as_user parameter and set it
to the nexusadmin user
vim /opt/nexus/bin/nexus.rc
run_as_user="nexusadmin"
ln -s /opt/nexus/bin/nexus /etc/init.d/nexus
su - nexusadmin
#open the file as root user
vim /etc/systemd/system/nexus.service
------
[Unit]
Description=Nexus Repository Manager
After=network.target
[Service]
Type=forking
LimitNOFILE=65536
User=nexusadmin
ExecStart=/opt/nexus/bin/nexus start
ExecStop=/opt/nexus/bin/nexus stop
Restart=on-abort
[Install]
WantedBy=multi-user.target
------
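after saving the unit file, reload systemd and enable/start the service (standard systemd commands):
sudo systemctl daemon-reload
sudo systemctl enable nexus
sudo systemctl start nexus
sudo systemctl status nexus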
nexus 3.x
username : admin
password : admin123 (older versions); newer versions generate an initial password:
cat /opt/sonatype-work/nexus3/admin.password
---------------------------------------------------------------------------------
nexus artifact repos
snapshot -> used to store the ongoing versions or development patches: v1 ->
v1.1, v1.2
release -> used to store the released, stable versions
you have to define the release repo and snapshot repo in pom.xml
/opt/maven/conf/settings.xml
you have to modify the nexus server name, username and password in the server tag, and
modify the remote repo in the mirror tag (change only the ip address of maven central)
in the settings.xml file
<settings xmlns="http://maven.apache.org/SETTINGS/1.2.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.2.0
                              https://maven.apache.org/xsd/settings-1.2.0.xsd">
  <!-- ... (other settings) ... -->
  <!-- servers
       Define your server credentials here.
  -->
  <servers>
    <server>
      <id>nexus</id>
      <username>admin</username>
      <password>563856</password>
    </server>
  </servers>
</settings>
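the mirror tag mentioned above would look roughly like this (the url is a placeholder; replace the ip with your nexus server's address and repository name):
<mirrors>
  <mirror>
    <id>nexus</id>
    <mirrorOf>*</mirrorOf>
    <url>http://<nexus-server-ip>:8081/repository/maven-public/</url>
  </mirror>
</mirrors>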
-----------------------------------------------------------------------------------
jenkins-> 8080
jenkins - integration mgt tool, which is used to automate your end-to-end cicd
pipeline
master/slave architecture
jenkins master (master node) -> it instructs the jenkins slaves to execute the
tasks/projects/items/jobs
jenkins slave (agent or slave node) -> runs the build tool (maven); the slave executes
the tasks assigned by the jenkins master
-----------------------------------------------------------------------------------
jenkins installation
#Launch AWS Linux Instance with port 8080 & Name Tag
sudo -i
yum update -y
#Install Jenkins :
source : https://pkg.jenkins.io/redhat-stable/
#Install Java:
export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.amzn2.0.1.x86_64"
export PATH="$JAVA_HOME/bin:$PATH"
source ~/.bashrc
#Start Jenkins:
# access the jenkins server over browser using public ip of the instance followed
by the default port number 8080
publicip:8080
cat /var/lib/jenkins/secrets/initialAdminPassword
admin password :
------
manage jenkins
System Configuration:
system configure
plugins -> to install multiple software plugins, which are used to create the end-to-end
cicd pipeline
global tools config -> configuring the plugins/tools for the jenkins server
Nodes -> to create and configure the slave servers
security :
credentials -> to store the credentials (ssh keys, usernames/tokens) used by jobs
---------------------
The main difference between Jenkins Freestyle projects and Pipeline is the usage
of GUI vs scripting.
Freestyle projects: GUI-based; useful for testing jobs before implementing them in a
pipeline
Pipeline jobs: use code (Groovy language - this is a Java-like language) for giving
instructions; since everything is in one script you can keep it in source control,
revert to an earlier version at any time, and keep track of changes made to the script
Declarative Pipeline -> the script starts with pipeline -> the most widely used
style for building end-to-end cicd pipelines
Scripted Pipeline -> the script starts with node -> this is the traditional and
more complex pipeline style (see the sketch below)
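for contrast, a minimal scripted pipeline sketch (same clone-and-build flow as the declarative examples in these notes, starting with node instead of pipeline):
node {
    stage('SCM Checkout') {
        // clone the repo used earlier in these notes
        git url: 'https://github.com/rajilingam/banking_web_application.git'
    }
    stage('Maven Build') {
        // assumes maven is on the agent's PATH
        sh 'mvn clean package'
    }
}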
clone the project: git url <github project repo name>
build the code --> mvn clean package, mvn install, mvn test
test the code
jenkins master ---- jenkins slave1 --- build tool for a java web-based application --
maven
in slave machine
#Add User :
su - devopsadmin
ssh-keygen
ls ~/.ssh
#cat id_rsa & copy the private key and paste it into the jenkins node config
"enter private key directly" field. Then,
----
GIT PARAMETER - TO SELECT THE BRANCH
BUILD EXECUTORS: 3
BUILD QUEUE
CLONING STAGE
BUILDING STAGE
TESTING STAGE
POST BUILD - DEPLOY STAGE
------------------------------------------
pipeline {
    agent any
    tools {
        // Install the Maven version configured as "M2" and add it to the path.
        maven "maven-3.9.4"
    }
    stages {
        stage('cloning the project') {
            steps {
                // Get some code from a GitHub repository
                git url: 'https://github.com/manju65char/insurance-web-application.git'
            }
        }
        stage('Maven Build') {
            steps {
                // Run Maven on a Unix agent.
                sh "mvn -Dmaven.test.failure.ignore=true clean package"
            }
        }
        stage('Maven test') {
            steps {
                // Run Maven on a Unix agent.
                sh "mvn test"
            }
        }
    }
}
---
deploying your source code into tomcat -> publish over ssh --> the plugin which we
use to deploy your code from the source server to the target server (see the sketch below)
-> whenever you get an unstable error -> make sure the target server is up
and running and check the user credentials
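a hedged sketch of the Publish Over SSH pipeline step (server name, file pattern and remote directory are placeholders; generate the exact step with the pipeline Snippet Generator):
sshPublisher(publishers: [
    sshPublisherDesc(configName: 'tomcat-server',       // SSH server defined under Manage Jenkins
        transfers: [sshTransfer(
            sourceFiles: 'target/*.war',                // artifact to copy
            remoteDirectory: '/opt/tomcat/webapps')])]) // deploy dir on the target server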
-------------------------------------
integration of sonarqube and nexus with jenkins
sonar-token - 68ad9e25dfd5621c15551d9c84963ae91b9cac2e
username: admin
password: 563856
sonar scanner for the jenkins-sonarqube integration -> quality gate, code smells
jacoco plugin - java code coverage - it's used to get the code coverage of your
project (source code)
in nexus repo
create a repo
plugins required
sonatype nexus
username: admin
password: 563856
---
mvn clean package or mvn clean install -> removes the previously built artifacts
(jar or war file) from the target directory, then rebuilds them per pom.xml
pipeline {
agent any
tools {
// Install the Maven version configured as "M2" and add it to the path.
maven "maven-3.9.4"
}
stages {
stage('Cloning the project') {
steps {
// Get some code from a GitHub repository
git url: 'https://github.com/manju65char/insurance-web-application.git'
}
}
stage('Maven Build') {
steps {
// Run Maven on a Unix agent.
sh "mvn -Dmaven.test.failure.ignore=true clean package"
}
}
stage('Test & Jacoco Static Analysis') {
steps {
script {
junit 'target/surefire-reports/**/*.xml'
jacoco()
}
}
}
stage('Sonar Scanner Coverage') {
steps {
script {
sh """export MAVEN_OPTS="-Xmx1024m"
mvn sonar:sonar -
Dsonar.login=68ad9e25dfd5621c15551d9c84963ae91b9cac2e
-Dsonar.host.url=http://35.169.20.187:9000"""
}
}
}
stage('Publish to Nexus') {
steps {
nexusPublisher nexusInstanceId: 'nexus-server',
nexusRepositoryId: 'amazon-releases',
packages: [
[$class: 'MavenPackage',
mavenAssetList: [
[classifier: '', extension: '', filePath: './target/insure-me-1.0.jar']
],
mavenCoordinate: [
artifactId: 'insure-me',
groupId: 'com.project.manjuDOE',
packaging: 'jar',
version: '1.0'
]
]
]
}
}
stage('deploy into tomcat') {
    steps {
        script {
            // ssh publisher step: generate it with the Snippet Generator (see the sketch above)
        }
    }
}
}
}
------------------------
as a devops engineer, you need to hide the project-related information and credentials
in the pipeline
shared library
in jenkins
create jenkins credentials for shared libraries to call the functions from the shared
library repository through the jenkins pipeline to execute the code
to hide the code where you use the variables and functions
in github
shared library -> to store the variables; you are going to create the functions
and store the variables (code) in those functions
github or gitlab ---> create a repo in github, then clone the repo into your local
machine, then create a vars dir and create functions with groovy
pipeline {
agent any
tools {
// Install the Maven version configured as "M2" and add it to the path.
maven "maven-3.9.4"
}
environment {
REPOSITORY_NAME = "insurance-web-application"
REPOSITORY_URL = "https://github.com/manju65char/${env.REPOSITORY_NAME}.git"
}
stages {
stage('Cloning the Project') {
steps {
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
script {
withCredentials([
string(credentialsId: 'github-credentials', variable: 'github-credentials')
]) {
library identifier: 'jsl@master', retriever: modernSCM([
    $class: 'GitSCMSource',
    remote: "https://github.com/manju65char/shared-libraries.git",
    credentialsId: 'github-credentials'
])
}
cloneProjectStage(REPOSITORY_URL, params.BRANCH_NAME)
}
}
}
}
stage('Maven Build') {
steps {
script {
try {
buildCodeStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to build code')
}
}
}
}
stage('Publish to Nexus') {
steps {
nexusPublisher nexusInstanceId: 'nexus-server',
nexusRepositoryId: 'amazon-releases',
packages: [
[$class: 'MavenPackage',
mavenAssetList: [
[classifier: '', extension: '', filePath: './target/insure-me-1.0.jar']
],
mavenCoordinate: [
artifactId: 'insure-me',
groupId: 'com.project.manjuDOE',
packaging: 'jar',
version: '1.0'
]
]
]
}
}
    }
}
===================================================================================
pipeline {
agent any
parameters {
choice(
choices: ['master', 'develop', 'feature/branch-1', 'feature/branch-2'],
description: 'Select the branch to build',
name: 'BRANCH_NAME'
)
}
environment {
REPOSITORY_NAME = "insurance-web-application"
REPOSITORY_URL = "https://github.com/manju65char/${env.REPOSITORY_NAME}.git"
}
stages {
stage('Cloning the Project') {
steps {
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
script {
withCredentials([
string(credentialsId: 'latest-github-credentials', variable: 'latest-github-credentials')
]) {
library identifier: 'jsl@master', retriever: modernSCM([
    $class: 'GitSCMSource',
    remote: "https://github.com/manju65char/shared-libraries.git",
    credentialsId: 'latest-github-credentials'
])
}
cloneProjectStage(REPOSITORY_URL, params.BRANCH_NAME)
}
}
}
}
stage('buildCodeStage') {
steps {
script {
try {
buildCodeStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to build code')
}
}
}
}
stage('mvn test') {
steps {
script {
try {
testCodeStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to test code')
}
}
}
}
stage('publish to nexus') {
steps {
script {
try {
artifactStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to publish to nexus')
}
}
}
}
stage('dockerbuild') {
steps {
script {
try {
dockerBuildStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to build Docker container')
}
}
}
}
stage('Deploy to k8s') {
steps {
script {
try {
deploymentStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to deploy to k8s')
}
}
}
}
}
}
======================================================
buildCodeStage.groovy
def call() {
sh 'echo "maven code build and created artifacts"'
}
===============================
testCodeStage.groovy
def call() {
sh 'echo "test the code"'
}
===============================
artifactStage.groovy
def call() {
sh 'echo "deploying the code into nexus server"'
}
===============================
dockerBuildStage.groovy
def call() {
sh 'echo "building the image "'
}
===============================
deploymentStage.groovy
def call() {
sh 'echo "deployed the containerized app into k8s cluster"'
}
===================================================================================
slack
collaboration tool
slack with jenkins
requirements
slack account
create a channel in slack
install jenkins server -- and install the slack notification plugin
source : https://slack.com/intl/en-in/
===============================
def cloneProjectStage(repositoryUrl, branchName) {
    git branch: branchName, credentialsId: 'latest-github-credentials', url: repositoryUrl
}
pipeline {
agent any
parameters {
choice(
choices: ['master', 'develop', 'feature/branch-1', 'feature/branch-2'],
description: 'Select the branch to build',
name: 'BRANCH_NAME'
)
}
environment {
REPOSITORY_NAME = "insurance-web-application"
REPOSITORY_URL = "https://github.com/manju65char/${env.REPOSITORY_NAME}.git"
}
stages {
stage('Cloning the Project') {
steps {
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
script {
withCredentials([
string(credentialsId: 'latest-github-credentials', variable: 'latest-github-credentials')
]) {
library identifier: 'jsl@master', retriever: modernSCM([
    $class: 'GitSCMSource',
    remote: "https://github.com/manju65char/shared-libraries.git",
    credentialsId: 'latest-github-credentials'
])
}
cloneProjectStage(REPOSITORY_URL, params.BRANCH_NAME)
}
}
}
}
stage('buildCodeStage') {
steps {
script {
try {
buildCodeStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to build code')
}
}
}
}
stage('mvn test') {
steps {
script {
try {
testCodeStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to test code')
}
}
}
}
stage('publish to nexus') {
steps {
script {
try {
artifactStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to publish to nexus')
}
}
}
}
stage('dockerbuild') {
steps {
script {
try {
dockerBuildStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to build Docker container')
}
}
}
}
stage('Deploy to k8s') {
steps {
script {
try {
deploymentStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to deploy to k8s')
}
}
}
}
stage('Send Slack Notification') {
steps {
script {
slackSend (
    color: currentBuild.resultIsBetterOrEqualTo('SUCCESS') ? 'good' : 'danger',
    message: "Pipeline for ${env.JOB_NAME} has completed (${currentBuild.result}).",
    channel: '#build-notifications',          // Replace with your Slack channel name
    teamDomain: 'manjusoftware-v6b3769',      // Replace with your Slack team name
    tokenCredentialId: 'slack-credential-id'  // Create a secret text credential for your Slack token
)
}
}
}
}
}
=====================
pipeline {
agent any
parameters {
choice(
choices: ['master', 'develop', 'feature/branch-1', 'feature/branch-2'],
description: 'Select the branch to build',
name: 'BRANCH_NAME'
)
}
environment {
REPOSITORY_NAME = "insurance-web-application"
REPOSITORY_URL = "https://github.com/manju65char/${env.REPOSITORY_NAME}.git"
}
stages {
stage('Cloning the Project') {
steps {
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
script {
withCredentials([
string(credentialsId: 'latest-github-credentials', variable: 'latest-github-credentials')
]) {
library identifier: 'jsl@master', retriever: modernSCM([
    $class: 'GitSCMSource',
    remote: "https://github.com/manju65char/shared-libraries.git",
    credentialsId: 'latest-github-credentials'
])
}
cloneProjectStage(REPOSITORY_URL, params.BRANCH_NAME)
}
}
slackSend (
    color: currentBuild.resultIsBetterOrEqualTo('SUCCESS') ? 'good' : 'danger',
    message: "Stage 'Cloning the Project' has completed (${currentBuild.result}).",
    channel: '#build-notifications',          // Replace with your Slack channel name
    teamDomain: 'manjusoftware-v6b3769',      // Replace with your Slack team name
    tokenCredentialId: 'slack-credential-id'  // Create a secret text credential for your Slack token
)
}
}
stage('buildCodeStage') {
steps {
script {
try {
buildCodeStage()
} catch (Exception e) {
currentBuild.result = 'FAILURE'
error('Failed to build code')
}
slackSend (
    color: currentBuild.resultIsBetterOrEqualTo('SUCCESS') ? 'good' : 'danger',
    message: "Stage 'buildCodeStage' has completed (${currentBuild.result}).",
    channel: '#build-notifications',          // Replace with your Slack channel name
    teamDomain: 'manjusoftware-v6b3769',      // Replace with your Slack team name
    tokenCredentialId: 'slack-credential-id'  // Create a secret text credential for your Slack token
)
}
}
        }
    }
}
===================================================================================
DOCKER
before Docker
app1 app2
server 8gb server 8gb
vm model :
virtual box
vm ware
docker model
artifacts --> docker image --> docker container -- pushing the docker images into
the docker hub
docker commit
docker pull
------
DOCKER
it's an open-source containerization platform,
where you are going to create docker images and containers, run containers and
delete containers
it's more advanced than virtualization
docker swarm manages docker containers at scale (1000s) -- a k8s pod is the smallest
unit of k8s -- it's a collection of containers
container -> it's like a vm, but it does not carry a full OS; it's lightweight
os-level virtualization
virtualization -> a process that allows for more efficient utilization of physical
computer hardware components; the basis of cloud computing
====================================================
project
docker client --> it's the initial way that many docker users interact with the docker
daemon or docker engine
docker host --> the machine where docker is installed and where the docker
images and containers live
docker daemon --> the main component of the docker architecture; it runs on the host os
and is responsible for creating, running and deleting the containers and images -- it
manages the containers and images
dockerhub (docker registry) -> it's like a remote repo, where we store all the
images
build server --> maven --> mvn clean package ---> jar or war --> docker -->
docker images -> built from a Dockerfile, or pulled from docker hub
docker pull <image name> --> to pull an image from docker hub
docker build -t <image name>:tag --> to build or create a docker image
docker run <image id or image name> --> to create a container from the image
(docker run = docker create + docker start)
docker run -d --name <custom container name> <image name> --> run a container in
detached mode with a custom name
docker rename <existing container name> <new container name> --> to change the
container name
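putting the commands above together, a typical container lifecycle looks like this (image and container names are placeholders):
docker pull nginx:latest                       # fetch the image from docker hub
docker run -d --name web1 -p 8080:80 nginx     # detached container, host 8080 -> container 80
docker ps -a                                   # list containers
docker logs web1                               # view container output
docker exec -it web1 /bin/bash                 # open a shell inside the running container
docker stop web1 && docker rm web1             # stop and remove the container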
==============================================================================
dockerfile : it's a text file which contains a set of instructions that are used to
build the application along with its dependencies;
it's a text file whose instructions automate the docker image creation
the file name Dockerfile starts with a capital D
the instructions are also written in capitals: FROM, COPY, ADD, ARG, ENTRYPOINT,
CMD, WORKDIR, RUN
======================================
FROM -- is used to define the base image; this instruction must be at the top of the file
RUN -- for the commands you want to execute in the container image, you use
the RUN instruction with executable commands
RUN echo "this is my image" >> file1.txt
COPY --> this instruction is used to copy the files (jar or war) from the local machine to
the container; you have to mention the source and destination
ADD -->
ADD is the same as COPY: it copies files from your local machine to the
container, but it has two extra capabilities:
ADD can download a file directly from the internet into the container,
and it can auto-extract local tar archives
=====================================
ex: if you want to run jenkins in a container, the default port for jenkins is 8080,
so you specify that port in the EXPOSE instruction
the EXPOSE instruction is used to tell the users of the image which ports and
protocols the image/container opens. It does not have any functionality; it is
used only to convey information.
FROM almalinux
EXPOSE 8080
WORKDIR /home/manjunathachar/dockerdemo/tem
if you want to land in a specific dir by default inside the container, you can
specify the dir name so that you directly get into it.
ex: if you want to get into the tem dir inside the container,
specify tem in the WORKDIR instruction, so that as soon as you open the
container, by default you are in the tem directory
ENV is the instruction to provide environment variables to the image and container. You
can override env variables at runtime
VOLUME : it's used to create a volume in the container --> why a volume? to attach or
mount the container's volume data to the host machine (host volume), so that if the
container exits, the container's data is still available in the host volume.
VOLUME /mydir
LABEL --> it's used to add some metadata information, just like tags, as
key-value pairs.
Use the LABEL instruction to add metadata to your Docker image. Metadata in the form of
key-value pairs can provide information about the image, such as version,
maintainer, description, and more.
ex: like products from different providers grouped together, you can easily filter
images and containers based on labels (see the sketch below)
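a small sketch of LABEL plus label-based filtering (the label keys/values are made up for illustration):
FROM almalinux
LABEL maintainer="devops-team" project="insure-me" version="1.0"
# after building, filter images by label:
# docker images --filter "label=project=insure-me"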
vim Dockerfile
===============================
CMD VS ENTRYPOINT
FROM openjdk:11
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"] 8081
------------------------------
FROM ubuntu
COPY requirement.txt /app
COPY devops /app
RUN apt-get update && apt-get install -y python3
ENTRYPOINT ["python3","manage.py","runserver","0.0.0.0:8080"]   # for the qa/testing team
----------
FROM ubuntu
COPY requirement.txt /app
COPY devops /app
RUN apt-get update && apt-get install -y python3
CMD ["python3","manage.py","runserver","0.0.0.0:8080"]
----------------
FROM ubuntu
COPY requirement.txt /app
COPY devops /app
RUN apt-get update && apt-get install -y python3
ENTRYPOINT ["python3"] --> non changeable
CMD ["manage.py","runserver","0.0.0.0:8080"] ---> changeable
-------------
FROM ubuntu
COPY requirement.txt /app
COPY devops /app
RUN apt-get update && apt-get install -y python3
CMD ["python3"] --> changeable
ENTRYPOINT ["manage.py","runserver","0.0.0.0:8080"] ---> non changeable
=======
FROM ubuntu
COPY requirement.txt /app
COPY devops /app
RUN apt-get update && apt-get install -y python3
CMD ["python3","manage.py","runserver","0.0.0.0:8080"]
--
CMD ["java","-jar","/app.jar"]
--
===================================================================================
CMD :
important point : your dockerfile needs a CMD (or ENTRYPOINT) if you want the
container to keep running; without a command the container exits immediately
CMD: sets default parameters that can be overridden from the docker command line
interface (CLI) while running a docker container.
ex:
FROM ubuntu
CMD echo s1
docker build -t manjunathachar/sampleap:latest .
RUN vs CMD
FROM almalinux
RUN yum install nginx -y
CMD ["nginx", "-g", "daemon off;"]
CMD ["sleep", "20"]      # only the last CMD in a Dockerfile takes effect
# to get container ids -> docker ps -a -q
ENTRYPOINT is also used to run the container, just like CMD. But there are a few
differences.
-------------
cmd vs entry point
FROM ubuntu
COPY requirement.txt /app
COPY devops /app
RUN apt-get update && apt-get install -y python3
ENTRYPOINT ["python3"] --> non changeable
CMD ["manage.py","runserver","0.0.0.0:8080"] --> changeable
always keep the main executable in the entrypoint [python3] --> i don't want my
python3 changed to a nodejs runtime, because this is not a nodejs application
=============
creation of docker images using dockerfiles :
FROM ubuntu
RUN apt-get update -y
RUN mkdir myfirstdir
RUN echo "my first dockerfile content" >> file1.txt
ex : ENV
===========================================
FROM ubuntu:latest
=================
FROM openjdk:11
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
============================
FROM openjdk:11
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
---
FROM ubuntu
WORKDIR /opt
ADD https://download.sonatype.com/nexus/3/nexus-3.59.0-01-unix.tar.gz /opt/
RUN tar -zxvf nexus-3.59.0-01-unix.tar.gz
RUN mv /opt/nexus-3.59.0-01 /opt/nexus
VOLUME /opt/nexus
CMD ["/opt/nexus/bin/nexus", "run"]
================================
# Start Nexus
CMD ["/opt/nexus/bin/nexus", "run"]
====================
TASKS
task1
clone the insureme project
modify the dockerfile, replace the ENTRYPOINT with a CMD instruction and specify a
different port number in order to run the application.
------------------
task2
size
ex : original image is ---> mynewimage latest cdcdf0e349e9 2 days ago 695MB
reduce the image size using multi-stage builds (separate build and run stages in the
dockerfile) or use a slimmer distro base image.
=========================
Docker Compose
===========
pre-requisites
docker
sudo yum update -y
sudo amazon-linux-extras install docker -y
sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker
sudo usermod -a -G docker ec2-user
=======================
source : https://medium.com/clarusway/how-can-we-easily-and-visually-explain-the-docker-compose-53df77e9f046
Docker Compose is a tool that facilitates composing multiple containers into a
single service
docker compose is a tool that allows you to run multi-container applications
docker compose uses yaml files to define the services, networks and volumes that
make up your application, and then uses those files to create and manage the
containers needed to run your applications
to build the entire app --> front end - html --> backend business logic -
java/python/nodejs --> database - mysql
or
Docker Compose uses YAML files to describe what your applications need, like the
different parts they consist of (services), how they connect (networks), and where
they store data (volumes). It then takes these descriptions and sets up and
oversees the containers needed to make your applications work.
-> here containers run in isolation but can interact with each other
-> in docker compose , a user can start all services(containers) using a single
command
ex:
if you want to run a web application where dependencies like a user
interface, an application server (tomcat) and a database (mysql) are required, then
instead of running all the containers individually, we can create a docker-compose
(yaml) file which runs all the containers as one service
commands
docker-compose --help -> to get all commands which help to use docker-compose
commands efficiently
docker-compose up -d -> to create and start the containers
docker-compose stop -> to stop the services
installation of docker-compose
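the download step itself isn't listed here; the usual approach is to fetch the binary into /usr/local/bin (the version below is only an example - pick a current release):
sudo curl -L "https://github.com/docker/compose/releases/download/v2.20.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version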
vim .bash_profile
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$MAVEN_HOME:$M2:/usr/local/bin
vim .bashrc
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$MAVEN_HOME:$M2:/usr/local/bin
source ~/.bash_profile
source ~/.bashrc
===============
version: '3'
services:
  webserver:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf  # customize nginx by providing a custom nginx.conf file
  git:
    image: alpine/git:latest
    container_name: git
    command: ["tail", "-f", "/dev/null"]  # keeps the container running without exiting
dockerfile
FROM mysql:5.7
ENV MYSQL_ROOT_PASSWORD=manju65char
ENV MYSQL_DATABASE=wordpress
EXPOSE 3306
version: '3'
services:
  webserver:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "9999:8081"
    image: "manjunathchar/myapp"
docker-compose up -d
=================
vim docker-compose.yaml
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - ./mysql-data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: manju65char
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306   # the service name "db" acts as the hostname
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - ./wp-content:/var/www/html/wp-content
===================================================================================
docker volume :
----
docker volume --> it's a persistent or permanent volume, a host volume --> located on
the docker host
Volume ???
Storage blocks/units.
dir1/file1.txt
1. Stateless Application
2. Stateful Application
stateless app :
whenever running the app, it doesn't require any volume in the target environment
or on the local machine
ex : google drive -> once we close g-drive we don't keep a local trace of its execution.
stateful application :
some applications require a volume: whenever you run the app, it enters the target
environment, executes some tasks and leaves some trace; the trace can be in
the form of logs and reports
ex : if you created a webapp trying to monitor the cpu of a machine and produce
some results in the form of reports
-> earlier (10 years ago), containers were used only for stateless applications
Micro-service1 (User-Login-Screen)            Micro-service2 (Application Landing Page)
container1                 --username,password-->                 container2
(Container1 Volume)        <------- copy ------->                 (Container2 Volume)
containers don't have their own persistent filesystem by default; they use the host
file system. to see where the container data is stored:
/var/snap/docker/common/var-lib-docker/overlay2/
/var/lib/docker/overlay2/
----
docker volume
-----------------------------------------------------------------------------------
alias dvl='docker volume ls'  (you can add this line to your shell's configuration
file (e.g., .bashrc or .bash_profile) to make the alias available every
time you open a new terminal session; then just run: dvl)
docker volume ls
docker volume inspect volume1 --> to get complete info of a volume / the location of the volume
-------
create a dir and host it:
mkdir test-data
docker run -d -v /root/test-data:/usr/share/nginx/html -p 8989:80 --name ng2 nginx:latest
cd test-data
echo "hello guys" >> me.html
http://34.203.77.138:8989/me.html
docker volume ls --> you won't see test-data here, because it's a bind mount (an
anonymous host path) - it's not managed by docker; as a best practice you are supposed
to use the docker volume command to create a volume - that's recommended.
-p host-port:container-port
-v host-path:container-path - the path inside the container where you want to mount
the volume or data
this host directory can be mounted to any path inside the container
                                    mounted with
docker host  <------------------------------------------------------->  container
/var/lib/docker/volumes/volume1/nginx_data                  /usr/share/nginx/html
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
----------
lets take a mysql image
once you run it, your mysql container will be in exited status, so check the logs:
docker logs <container id>
you need to specify the password as an environment variable (MYSQL_ROOT_PASSWORD),
i.e. you need to give a password when initializing the database
mysql -u root -p
enter password : 12345
exit;
cd /var/lib/docker/volumes/mysql-data/_data
-----
important : how to show the same data to another container, i.e. how to
attach/mount the same volume to multiple containers (see the sketch below)
docker volume ls
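a minimal sketch answering the question above: create one named volume and mount it into two containers (volume and container names are placeholders):
docker volume create sharedvol
docker run -d --name c1 -v sharedvol:/usr/share/nginx/html nginx
docker run -d --name c2 -v sharedvol:/usr/share/nginx/html nginx
# both containers now read/write the same /var/lib/docker/volumes/sharedvol/_data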
Volume Mapping/Binding/attaching/mounting
whenever you run microservices, if one relies on another microservice, it must be
able to transfer data from one service to another
here microservice1's output is input for microservice2, so microservice2 depends
on microservice1; to store the data of each microservice we need a volume
we can't copy the data from container1 to container2 directly, so we use a
HOST VOLUME, which helps us transfer data from one container to another
container
as soon as the container's task is over, it exits or stops automatically (the data
inside a container is not permanent, it's temporary), so to overcome this we use a
host volume
---
docker volume ls
docker volume inspect -> inspect a volume - to get complete information on the
volume and its mountpoint or location
docker volume prune --> to remove unused volumes; if you delete a container,
the attached volume is not deleted, so in that case you can use this
command to delete all rarely used or unused volumes
as soon as you create a volume, the docker host creates a volume folder;
once you get into the container and create some files, those files are
reflected in the host volume, and once you delete the container, the data is
still available inside the host volume.
"Mountpoint": "/var/snap/docker/common/var-lib-docker/volumes/testvol1/_data",
--> volume path , it's not mounted/mapped to any container lets mount it
ls
where you can see the volume1 dir --> create some files
if you want to transfer data from the host volume to container2's volume
-----
now just remove the container --> docker rm -f <container id> --> the data still
exists in the volume; refresh http://34.203.77.138/hi.html, then execute the same
docker run command to attach the same host volume path to another container
docker networks
docker run -d -p 8989:80 --name mynginx nginx:latest --> default network - bridge
network
3 tier -> roboshop_nw (custom network for a 3-tier app)
bridge network
host network
none network
---
there are 3 types
-------
bridge
whenever you create a container, by default it's created on the bridge network
if you want to expose your application, we use the docker host ip with a host port and
container port
for every container we use a different host port number to expose the app in the
browser (8080:80, 8081:80, 8888:80 etc...)
docker network inspect bridge --> to see complete information about the bridge network;
you will see which containers are attached to this network.
EX :
--------
host
if you create a container with the host network, the container shares the host's
network -- now we can reach the application over the internet with the host ip or
ec2 public ip without specifying a port mapping
------
none
if you create a container with the none network, it cannot communicate internally or
externally; it's used for testing purposes, where you can't reach the application
internally or over the browser
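a short sketch of a user-defined bridge network, reusing the roboshop_nw name from above (container names are placeholders); containers on the same custom bridge can reach each other by name:
docker network create roboshop_nw
docker run -d --name web --network roboshop_nw nginx
docker run -d --name app --network roboshop_nw almalinux sleep infinity
docker exec app getent hosts web     # name resolution works on user-defined bridges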
-----------------------------------------------------------------------------------
docker networking,
docker volumes
-----------------------------------------------------------------------------------
using the docker engine --> create, delete , update the images and containers
k8s architecture :
kubectl
k8s cluster :
1. master node
kube scheduler
etcd
controller manager
2. worker nodes
kubelet
kubeproxy
cri-docker engine
==
k8s manifest file / k8s yaml files
apiVersion: apps/v1     # e.g. for a Deployment
kind: Deployment
metadata:
spec:
---
apiVersion: v1          # e.g. for a Pod
kind: Pod
metadata:
spec:
===================================================================================
oss or cloud
-----------------------------------------------------------------------------------
k8s :
===
k8s architecture
kubectl
yaml file
master node :
api server
scheduler
etcd
controller manager
worker node :
kubelet
kubeproxy
cri- docker
pods...
------------
cluster : a collection of nodes, including master nodes and worker nodes, that run
containerized applications
kubectl : a command line tool that allows us to interact with the k8s cluster
api server ?
The API Server is a critical part of the Kubernetes control plane, and it serves as
a front-end interface through which users, administrators, and various Kubernetes
components interact with the cluster's control plane.
whenever you submit a file to the k8s cluster, the request is initially received by
the api server, so the role of the api server
is to receive incoming requests and verify the user: who submitted it, whether they
are authorized, and what actions they may perform in the cluster
the API server updates all the information (of containers and nodes) in etcd, the
cluster's key-value database
scheduler :
once etcd is updated, the scheduler interacts with the api server, collects the
latest information from etcd, and identifies the healthy nodes on which to deploy
the containers.
NEXT STEP : with this information the api server interacts with the corresponding
worker nodes
kubelet : it's the interface between the worker node and the master node
docker engine
---
kube proxy --> it enables and manages the network connections between containers and
nodes in the k8s cluster.
kubelet monitors the running containers and always ensures that the containers are
running as expected ...
---
k8s distributions /technologies/Kubernetes Development Tools:
k3s
Skaffold: Command line tool that facilitates continuous development for Kubernetes
applications.
Kompose: Conversion tool for all Docker Compose users to help them
move to Kubernetes.
-----
Services --> used to expose your containerized application outside the k8s
cluster or over the internet (see the NodePort sketch below)
types
ClusterIP : internal-only virtual ip, reachable inside the cluster
NodePort : exposes the service on a static port of every node
LoadBalancer : provisions a cloud load balancer in front of the service
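a minimal NodePort service sketch for the nginx pod defined later in these notes (selector and port values assumed):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: nginx            # matches the pod label app: nginx
  ports:
  - port: 80              # service port inside the cluster
    targetPort: 80        # container port
    nodePort: 30080       # exposed on every node (allowed range 30000-32767)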
-----
#https://kubernetes.io/docs/setup/
#Add Port: 0 - 65535 / default cidr block is class b /16 --> ipv4 = 32 bits / 4
octets of 8 bits; 2^8 = 256 -> 16 host bits -> 256*256 = 65536 addresses
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On Both Master and Worker Nodes:
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
sudo -i
yum update -y
swapoff -a
#The Kubernetes scheduler determines the best available node on which to deploy
newly created pods. If memory swapping is allowed to occur on a host system, this
can lead to performance and stability issues within Kubernetes.
setenforce 0
#Disabling the SElinux makes all containers can easily access host filesystem.
sysctl --system
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#export KUBECONFIG=/etc/kubernetes/kubelet.conf
#We need to install a flannel network plugin to run coredns to start pod network
communication.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#Execute the below command on the worker nodes, to join all the worker nodes with the
master :
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
su - devopsadmin
ssh-keygen
ls ~/.ssh
cd /home/devopsadmin/.ssh
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
k8s objects/k8s resources / workloads
Pod:
A Pod is the smallest and simplest unit in the Kubernetes object model. It
represents a single instance of a running process in a cluster. A Pod can contain
one or more tightly related containers, such as an application container and a
sidecar container (a supporting container for the main application container).
Example: Imagine a Pod called "web-app" that runs a web server (e.g., Nginx) and a
sidecar container responsible for logging.
Deployment / deployment set :
its default deployment strategy is rolling update; it creates replica sets and pods,
with auto-scaling and auto-healing features
replicas : 3
Example: A Deployment named "web-deployment" that ensures there are always three
replicas of the "web-app" Pod running.
Service:
types
cluster ip
NodePort
LoadBalancer
Example: A Service called "web-service" that exposes the "web-app" Pods, allowing
external users to access the web server.
Volume:
A Volume is used to persist data beyond the lifetime of a Pod. It enables sharing
data between containers in the same Pod and provides a way to store data that
survives Pod restarts.
Example: A Volume named "data-volume" attached to the "web-app" Pod to store user
uploads and preserve them across Pod restarts.
ConfigMap: stores non-sensitive configuration data as key-value pairs, so that
configuration is decoupled from the container image.
Secret:
Similar to ConfigMaps, Secrets store sensitive data like passwords, tokens, or API
keys, providing a more secure way to handle confidential information.
Example: A Secret called "db-credentials" storing the database password for the
application to access the database.
Namespace:
A Namespace is a virtual cluster within a physical Kubernetes cluster, allowing you
to partition resources and create isolated environments.
DaemonSet:
A DaemonSet ensures that a copy of a Pod runs on each node in the cluster. It's
typically used for system-level tasks or agents that should be deployed on every
node.
------------------------------------------
5. StatefulSets:
StatefulSets are used for stateful applications that require unique network
identities and persistent storage for each Pod.
Example: A StatefulSet for a database application, where each Pod has a stable
hostname and corresponding storage.
9. PersistentVolumes (PV):
PersistentVolumes are storage resources in the cluster that exist independently of
Pods.
12. Jobs:
Jobs create Pods to run tasks until completion, ensuring the task is completed
successfully.
Example: A Job that runs a backup script once and terminates after the backup is
done.
13. CronJobs:
CronJobs are used to schedule Jobs to run periodically at specified intervals using
cron syntax.
-------------------------------------------------
types of deployment strategies in k8s
Rolling Update:
Rolling Update is the default deployment strategy in Kubernetes. It updates the
Pods in a Deployment or ReplicaSet gradually, one at a time, while ensuring the
desired number of replicas is maintained throughout the process. This strategy
ensures minimal downtime during updates (see the strategy stanza sketch after this list).
Recreate:
The Recreate strategy terminates all existing Pods before creating new ones with
the updated configuration. This results in a brief period of downtime during the
deployment process.
Blue-Green Deployment:
In a Blue-Green Deployment, you have two identical environments (blue and green).
You switch traffic from the old environment (blue) to the new one (green) after the
update is complete. This strategy allows for instant rollback in case of issues.
Canary Deployment:
Canary Deployment gradually introduces the new version of an application to a
subset of users while still serving the old version to the majority. It allows you
to test the new version's performance and stability before rolling it out to
everyone.
A/B Testing:
A/B Testing is similar to Canary Deployment, but instead of testing versions, it
allows you to test different features or configurations for a subset of users.
Shadow Deployment:
Shadow Deployment sends a copy of production traffic to the new version (shadow
deployment) without serving it to end-users. This allows you to monitor the
behavior of the new version before making it fully active.
Rollback:
Rollback is not a deployment strategy itself, but it's a crucial feature in
Kubernetes that allows you to revert to a previous stable version if an update
causes issues.
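as referenced in the Rolling Update item above, the strategy is set on the Deployment spec; a minimal sketch (the maxSurge/maxUnavailable values are assumptions):
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra pod during the update
      maxUnavailable: 1      # at most 1 pod unavailable during the update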
--------------------------------------------------
Scheduling:
---------
Scheduling in Kubernetes is the process of assigning Pods to nodes in the cluster
based on resource requirements, constraints, and other policies. The Kubernetes
scheduler handles this responsibility, aiming to optimize resource utilization and
distribute workloads evenly across the cluster. The scheduler makes decisions based
on the current state of the cluster, such as node capacity, affinity/anti-affinity
rules, resource constraints, and Pod priorities.
Components of Scheduling:
Kube-scheduler:
The kube-scheduler is the core component responsible for making scheduling
decisions. It watches for new or pending Pods and assigns them to suitable nodes
based on configurable scheduling policies. The default scheduler in Kubernetes is
the "DefaultScheduler."
Node Readiness:
The scheduler considers node readiness when making scheduling decisions to avoid
placing Pods on nodes that are not ready.
Maintenance:
-----------
Maintenance in Kubernetes refers to the process of keeping the cluster healthy and
operational. It involves monitoring the cluster, ensuring its components are
running correctly, and taking necessary actions to address failures or prevent
disruptions.
Components of Maintenance:
kubelet:
The kubelet is a node-level agent that runs on each node and ensures that Pods are
running as expected. It communicates with the Kubernetes master node and manages
the containers associated with each Pod.
Cluster AutoScaler:
The Cluster AutoScaler automatically adjusts the number of nodes in the cluster
based on resource demands. It scales the cluster up or down to meet the workload
requirements efficiently.
HorizontalPodAutoscaler (HPA):
While primarily a scaling component, the HPA also plays a role in maintenance by
dynamically adjusting the number of replicas to meet resource demands and maintain
optimal performance.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
k8s cmds :
kubectl delete node <node_name> --> delete the given node from the cluster.
kubectl describe nodes | grep ALLOCATED -A 5 --> describe all the nodes in
verbose mode.
kubectl get pods -o wide | grep <node_name> --> list all pods in the current
namespace, with more details.
kubectl get no -o wide --> list all the nodes with more details.
kubectl annotate node <node_name> --> add an annotation to the given node.
pod commands : po
kubectl get pods --all-namespaces --> to get all pods from all namespaces
kubectl get po --> to list the available pods in the default namespace.
kubectl describe pod <pod_name> --> to list the detailed description of a pod.
kubectl delete pod <pod_name> --> to delete a pod by name.
kubectl run <pod_name> --image=<image> --> to create a pod by name.
kubectl get pod -n <name_space> --> to list all the pods in a namespace.
kubectl run <pod_name> --image=<image> -n <name_space> --> to create a pod by name
in a namespace.
kubectl run myweb --image=nginx ---> create a pod from the command line (this will
create a pod with the name myweb)
kubectl delete pod myweb ---> to destroy the pod (this will delete the pod with the
name myweb)
kubectl apply -f <yaml_file_name> --> to apply a yaml/config file to create a
resource
kubectl describe -n devops pod <pod_name> --> to describe any resource
kubectl describe deploy <name of deploy> --> to get complete info on a deployment
======
3. Namespaces : ns
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pod creation
# nginx-pod.yaml or .yml
vim firstpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
*******************************************************************
kubectl create -f firstpod.yaml
*******************************************************************
---
apiVersion: v1
kind: Pod
metadata:
  name: multi
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
  - name: almalinux-container
    image: almalinux
    command: ["sleep", "infinity"]   # added: keeps the almalinux container running; the bare image would exit immediately