Cloud Computing
Course Outline
a. Resource Sharing − Resources such as data, hardware, and software can be shared among the connected systems.
b. Open-to-all − The software is openly designed and can be shared across the systems.
c. Fault Detection − Errors or failures in the system are detected and can be corrected.
Apart from these functions, the main disadvantage is that all the systems have to be in the same location. This limitation is overcome by the following systems −
Mainframe Computing
Cluster Computing
Grid Computing
1. Mainframe Computing
Developed in 1951, it provides powerful features. Mainframe computing is still in existence due to its ability to deal with large amounts of data. It is preferred by companies that need to access and share vast amounts of data. Among the four types of computers, the mainframe computer performs very fast and lengthy computations easily.
The services they handle include bulk processing of data and exchanging large-sized hardware.
Despite this performance, mainframe computing is very expensive.
2. Cluster Computing In cluster computing, several computers are connected so that they act as a single computing system. Tasks are performed concurrently by the individual computers, also known as nodes, which are connected over a network. The activities performed by any single node are known to all the nodes of the cluster, which can increase performance, transparency, and processing speed.
Cluster computing came into existence to reduce the cost of mainframe systems, and a cluster can be resized by adding or removing nodes.
3. Grid Computing It was introduced in the 1990s. Like the other computing structures, it consists of different computers or nodes, but here the nodes are placed in different geographical locations while being connected to the same network over the internet.
The computing methods seen so far use homogeneous nodes located in the same place, whereas in grid computing the nodes may belong to different organizations. Grid computing reduced the problems of cluster computing, but the distance between the nodes raised new problems.
2. Web 2.0 This computing lets users generate their own content and collaborate with other people or share information using social media, for example Facebook, Twitter, and Orkut. Web 2.0 combines the second generation of the World Wide Web (WWW) with web services, and it is the model in use today.
3. Virtualization It came into existence about 40 years ago and has become a core technique in IT firms. It places a software layer over the hardware and uses it to provide the customer with cloud-based services.
4. Utility Computing Utility computing provides resources based on the need of the user: users, companies, or clients can rent resources such as data storage as the business requires and pay only for what they use.
What Are the Types of Cloud Computing Services?
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)
Function as a Service (FaaS)
Identity as a Service (IDaaS)
Network as a Service (NaaS)
Infrastructure as a Service (IaaS)
Infrastructure-as-a-Service provides access to fundamental resources such as physical machines, virtual
machines, virtual storage, etc. Apart from these resources, the IaaS also offers:
Virtual machine disk storage
Virtual local area network (VLANs)
Load balancers
IP addresses
Software bundles
All of the above resources are made available to the end user via server virtualization. Moreover, these resources are accessed by the customers as if they own them.
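As a concrete illustration (not part of the original course text), the sketch below shows how an IaaS customer might provision a virtual machine and extra disk storage with the AWS CLI; the AMI ID, key pair name, and availability zone are placeholder assumptions.
# Launch a virtual machine (EC2 instance) on the provider's infrastructure
# (the AMI ID and key pair name below are hypothetical)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro \
    --key-name my-keypair \
    --count 1
# Attach additional virtual disk storage (a 20 GiB EBS volume)
aws ec2 create-volume --size 20 --availability-zone us-east-1a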
Benefits
IaaS allows the cloud provider to freely locate the infrastructure over the Internet in a cost-effective
manner. Some of the key benefits of IaaS are listed below:
Full control of the computing resources through administrative access to VMs.
Flexible and efficient renting of computer hardware.
Portability, interoperability with legacy applications.
Issues
Compatibility with legacy security vulnerabilities => Because IaaS allows the customer to run legacy software in the provider's infrastructure, it exposes the customer to all of the security vulnerabilities of such legacy software.
Virtual Machine sprawl => A VM can become out-of-date with respect to security updates because IaaS allows the customer to keep virtual machines in running, suspended, and off states. The provider can update such VMs automatically, but this mechanism is hard and complex.
Robustness of VM-level isolation => IaaS offers an isolated environment to individual customers through the hypervisor. The hypervisor is a software layer that uses hardware support for virtualization to split a physical computer into multiple virtual machines.
Data erase practices => The customer uses virtual machines that in turn use the common disk resources provided by the cloud provider. When the customer releases the resource, the cloud provider must ensure that the next customer to rent the resource does not observe data residue from the previous customer.
Characteristics
Virtual machines with pre-installed software.
Virtual machines with pre-installed operating systems such as Windows, Linux, and Solaris.
On-demand availability of resources.
Allows to store copies of particular data at different locations.
The computing resources can be easily scaled up and down.
Platform as a Service (PaaS)
Platform-as-a-Service offers the runtime environment for applications. It also offers development and
deployment tools required to develop applications. PaaS has a feature of point-and-click tools that
enables non-developers to create web applications.
Google App Engine and Force.com are examples of PaaS vendors. Developers may log on to these websites and use the built-in API to create web-based applications.
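As an illustrative sketch (not from the original course material), deploying to a PaaS such as Google App Engine typically reduces to a single CLI command once the application and its app.yaml descriptor exist; the project ID below is a placeholder.
# Deploy the application in the current directory to App Engine
# (my-sample-project is a hypothetical project ID)
gcloud app deploy --project my-sample-project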
Benefits of PaaS
Lower administrative overhead The customer need not worry about administration because it is the responsibility of the cloud provider.
Lower total cost of ownership The customer need not purchase expensive hardware, servers, power, and data storage.
Scalable solutions It is very easy to scale resources up or down automatically, based on demand.
More current system software It is the responsibility of the cloud provider to maintain software versions
and patch installations.
Issues
Lack of portability between PaaS clouds
Although standard languages are used, the implementations of platform services may vary. For example, file, queue, or hash table interfaces of one platform may differ from another, making it difficult to transfer workloads from one platform to another.
Event-based processor scheduling
PaaS applications are event-oriented, which poses resource constraints on them, i.e., they have to answer a request within a given interval of time.
Security engineering of PaaS applications
Since PaaS applications are dependent on network, they must explicitly use cryptography and manage
security exposures.
Characteristics
PaaS offers a browser-based development environment. It allows the developer to create a database and edit the application code either via an Application Programming Interface or point-and-click tools.
PaaS provides built-in security, scalability, and web service interfaces.
PaaS provides built-in tools for defining workflow, approval processes, and business rules.
It is easy to integrate PaaS with other applications on the same platform.
PaaS also provides web services interfaces that allow us to connect the applications outside the platform.
PaaS Types
Stand-alone development environments The stand-alone PaaS works as an independent entity for a specific function. It does not include licensing or technical dependencies on specific SaaS applications.
Application delivery-only environments The application-delivery PaaS includes on-demand scaling and application security.
Open platform as a service Open PaaS offers open-source software that helps a PaaS provider run applications.
Add-on development facilities The add-on PaaS allows customization of an existing SaaS platform.
Software as a Service (SaaS)
The Software-as-a-Service (SaaS) model provides software applications as a service to end users. It refers to software that is deployed on a hosted service and is accessible via the Internet. Several SaaS applications are listed below:
Billing and invoicing system
Customer Relationship Management (CRM) applications
Help desk applications
Human Resource (HR) solutions
Some SaaS applications are not customizable, such as the Microsoft Office Suite. However, SaaS provides an Application Programming Interface (API) that allows developers to build customized applications.
Characteristics
SaaS makes the software available over the Internet.
The software applications are maintained by the vendor.
The license to the software may be subscription based or usage based. And it is billed on recurring basis.
SaaS applications are cost-effective since they do not require any maintenance at end user side.
They are available on demand.
They can be scaled up or down on demand.
They are automatically upgraded and updated.
SaaS offers shared data model. Therefore, multiple users can share single instance of infrastructure. It is
not required to hard code the functionality for individual users.
All users run the same version of the software.
Benefits
1. Modest software tools
SaaS application deployment requires little or no client-side software installation, which results in the following benefits:
No requirement for complex software packages at the client side
Little or no risk of configuration at the client side
Low distribution cost
2. Efficient use of software licenses
The customer can have single license for multiple computers running at different locations which
reduces the licensing cost. Also, there is no requirement for license servers because the software
runs in the provider's infrastructure.
3. Centralized management and data
The cloud provider stores data centrally. However, the cloud providers may store data in a decentralized
manner for the sake of redundancy and reliability.
4. Platform responsibilities managed by providers
All platform responsibilities such as backups, system maintenance, security, hardware refresh, power
management, etc. are performed by the cloud provider. The customer does not need to bother about
them.
5. Multitenant solutions
Multitenant solutions allow multiple users to share single instance of different resources in virtual isolation.
Customers can customize their application without affecting the core functionality.
Issues
1. Browser based risks
If the customer visits a malicious website and the browser becomes infected, subsequent access to the SaaS application might compromise the customer's data.
To avoid such risks, the customer can use multiple browsers, dedicating a specific browser to SaaS applications, or can use a virtual desktop while accessing them.
2. Network dependence
The SaaS application can be delivered only when the network is continuously available. The network should also be reliable, but network reliability cannot be guaranteed either by the cloud provider or by the customer.
3. Lack of portability between SaaS clouds
Transferring workloads from one SaaS cloud to another is not easy because workflows, business logic, user interfaces, and support scripts can be provider specific.
Open SaaS and SOA
Open SaaS refers to SaaS applications developed using open-source programming languages. These SaaS applications can run on any open-source operating system and database. Open SaaS has several benefits, listed below:
No license required
Low deployment cost
Less vendor lock-in
More portable applications
More robust solutions
Identity as a Service
Identity refers to a set of attributes associated with something that make it recognizable. Different objects may have the same attributes, but their identities cannot be the same. A unique identity is assigned through a unique identification attribute.
There are several identity services that are deployed to validate services such as websites, transactions, transaction participants, clients, etc. Identity-as-a-Service may include the following:
Directory services
Federated services
Registration
Authentication services
Risk and event monitoring
Single sign-on services
Identity and profile management
Benefits
Increased site conversion rates
Access to greater user profile content
Fewer problems with lost passwords
Ease of content integration into social networking sites
Network as a Service
Network-as-a-Service (NaaS) allows us to access network infrastructure directly and securely. NaaS makes it possible to deploy custom routing protocols.
NaaS uses virtualized network infrastructure to provide network services to the customer. It is the responsibility of the NaaS provider to maintain and manage the network resources. Having a provider work for the customer decreases the customer's workload. Moreover, NaaS offers the network as a utility and is based on a pay-per-use model.
Benefits
Independence -->Each customer is independent and can segregate the network.
Bursting-->The customer pays for high-capacity network only on requirement.
Resilience-->The reliability treatments are available, which can be applied for critical applications.
Analytics-->The data protection solutions are available, which can be applied for highly sensitive
applications.
Ease of Adding New Service Elements-->It is very easy to integrate new service elements to the network.
Support Models-->A number of support models are available to reduce operation cost.
Isolation of Customer Traffic-->The customer traffic is logically isolated.
What Are Cloud Deployment Models?
The following are the Cloud Deployment Models:
1. Public Deployment model
2. Private deployment model
3. Hybrid deployment model
Private Deployment Model
Private Cloud allows systems and services to be accessible within an organization. The Private Cloud is
operated only within a single organization. However, it may be managed internally by the organization
itself or by third-party.
Advantages
1. High Security and Privacy- Private cloud operations are not available to the general public, and resources are shared from a distinct pool of resources. Therefore, it ensures high security and privacy.
2. More Control- The private cloud has more control on its resources and hardware than public
cloud because it is accessed only within an organization.
3. Cost and Energy Efficiency The private cloud resources are not as cost effective as resources in
public clouds but they offer more efficiency than public cloud resources.
Disadvantages
1. Restricted Area of Operation- The private cloud is only accessible locally and is very difficult to
deploy globally.
2. High Priced- Purchasing new hardware in order to fulfill the demand is a costly transaction.
3. Limited Scalability- The private cloud can be scaled only within capacity of internal hosted
resources.
4. Additional Skills- In order to maintain cloud deployment, organization requires skilled expertise.
Public Deployment Model
Public Cloud allows systems and services to be easily accessible to general public. The IT giants such as
Google, Amazon and Microsoft offer cloud services via Internet.
Advantages
1. Cost Effective- Since the public cloud shares the same resources with a large number of customers, it turns out to be inexpensive.
2. Reliability- The public cloud employs large number of resources from different locations. If any of
the resources fails, public cloud can employ another one.
3. Flexibility- The public cloud can smoothly integrate with private cloud, which gives customers a
flexible approach.
4. Location Independence- Public cloud services are delivered through Internet, ensuring location
independence.
5. Utility Style Costing- Public cloud is also based on pay-per-use model and resources are
accessible whenever customer needs them.
6. High Scalability- Cloud resources are made available on demand from a pool of resources, i.e.,
they can be scaled up or down according the requirement.
Disadvantages
1. Low Security- In the public cloud model, data is hosted off-site and resources are shared publicly; therefore, it does not ensure a high level of security.
2. Less Customizable-It is comparatively less customizable than private cloud.
Hybrid Deployment Model
Hybrid Cloud is a mixture of public and private cloud. Non-critical activities are performed using public
cloud while the critical activities are performed using private cloud.
Advantages
1. Scalability- It offers features of both, the public cloud scalability and the private cloud scalability.
2. Flexibility- It offers secure resources and scalable public resources.
3. Cost Efficiency- Public clouds are more cost effective than private ones. Therefore, hybrid clouds
can be cost saving.
4. Security- The private cloud in hybrid cloud ensures higher degree of security.
Disadvantages
1. Networking Issues- Networking becomes complex due to presence of private and public cloud.
2. Security Compliance- It is necessary to ensure that cloud services are compliant with security
policies of the organization.
3. Infrastructure Dependency- The hybrid cloud model is dependent on internal IT infrastructure;
therefore, it is necessary to ensure redundancy across data centers.
Characteristics Of Cloud Computing
1. Scalability: With Cloud hosting, it is easy to grow and shrink the number and size of servers
based on the need. This is done by either increasing or decreasing the resources in the cloud.
2. Save Money: An advantage of cloud computing is the reduction in hardware costs. Instead of
purchasing in-house equipment, hardware needs are left to the vendor.
3. Reliability: If one server goes offline it will have no effect on availability, as the virtual servers will
continue to pull resources from the remaining network of servers.
4. Physical Security: The underlying physical servers are still housed within data centers and so
benefit from the security measures that those facilities implement to prevent people from
accessing or disrupting them on-site.
5. Outsource Management: When you are managing the business, someone else manages your
computing infrastructure.
Top Reasons to Switch from On-premise to Cloud Computing
The following are the Top reasons to switch from on-premise to cloud computing:
1. Reduces cost: The ability to cut costs over time is one of the main advantages of cloud computing for businesses. By using cloud servers, businesses save and reduce costs because there is no need to employ a staff of technical support personnel to address server issues.
2. More storage: Cloud computing provides more servers, storage space, and computing power so that software and applications can execute as quickly and efficiently as possible. Many tools are available for cloud storage, such as Dropbox, OneDrive, Google Drive, and iCloud Drive.
3. Better work-life balance for employees: Cloud computing connects directly to both the work and personal lives of an enterprise's workers, and both can improve because of it.
Top leading Cloud Computing companies
1. Amazon Web Services (AWS)
One of the most successful cloud-based businesses is Amazon Web Services (AWS), an Infrastructure as a Service (IaaS) offering in which customers pay rent for virtual computers on Amazon's infrastructure.
2. Microsoft Azure Cloud Platform
Microsoft created the Azure platform, which enables .NET Framework applications to run over the internet as an alternative platform for Microsoft developers. This is the classic Platform as a Service (PaaS).
3. Google Cloud Platform (GCP)
Google has built a worldwide network of data centers to service its search engine. Through this service, Google has captured a large share of the world's advertising revenue. Using that revenue, Google offers free software to users, delivered from this infrastructure. This is called Software as a Service (SaaS).
Advantages of Cloud Computing
1. Cost Efficiency: Cloud computing offers flexible, pay-as-you-go pricing, which helps lessen capital expenditure on infrastructure, particularly for small and medium-sized businesses.
2. Flexibility and Scalability: Cloud services facilitate scaling resources based on demand, letting businesses handle varying workloads efficiently without large hardware investments that would sit idle during periods of low demand.
3. Collaboration and Accessibility: Cloud computing provides easy access to data and applications
from anywhere over the internet. This encourages collaborative team participation from different
locations through shared documents and projects in real-time resulting in quality and productive
outputs.
4. Automatic Maintenance and Updates: Cloud providers such as AWS take care of infrastructure management and automatically apply updates, keeping the software on the latest versions.
Disadvantages Of Cloud Computing
1. Security Concerns: Storing sensitive data on external servers raises security concerns, which is one of the main drawbacks of cloud computing.
2. Downtime and Reliability: Even though cloud services are usually dependable, they may have unexpected interruptions and downtime. These might arise because of server problems, network issues, or maintenance disruptions at the cloud provider, which negatively affect business operations and create issues for users accessing their apps.
3. Dependency on Internet Connectivity: Cloud computing services rely heavily on internet connectivity. Users need a stable, high-speed internet connection to access and use cloud resources. In regions with limited internet connectivity, users may face challenges in accessing their data and applications.
4. Cost Management Complexity: The pay-as-you-go pricing model is a key benefit of cloud services, but it also leads to cost management complexity. Without careful monitoring and resource optimization, organizations may end up with unexpected costs as their usage scales.
Cloud Sustainability
Energy Efficiency: Cloud providers optimize data center operations to minimize energy consumption and improve efficiency.
Renewable Energy: Providers are increasing the adoption of renewable energy sources like solar and wind power for data centers, reducing carbon emissions.
Virtualization: Server virtualization facilitates better utilization of hardware resources, reducing the need for physical servers and lowering energy consumption.
Use Cases of Cloud Computing
1. Scalable Infrastructure: Infrastructure as a Service (IaaS) enables organizations to scale
computing resources based on demand without investing in physical hardware.
2. Efficient Application Development: Platform as a Service (PaaS) simplifies application
development, offering tools and environments for building, deploying, and managing applications.
3. Streamlined Software Access: Software as a Service (SaaS) provides subscription-based access
to software applications over the internet, reducing the need for local installation and
maintenance.
4. Data Analytics: Cloud-based platforms facilitate big data analytics, allowing organizations to
process and derive insights from large datasets efficiently.
5. Disaster Recovery: Cloud-based disaster recovery solutions offer cost-effective data replication
and backup, ensuring quick recovery in case of system failures or disasters
CLOUD ARCHITECTURE AND TECHNOLOGIES
Objectives
Paravirtualization
VMware vSphere is a highly developed infrastructure that offers a management framework for virtualization. It virtualizes the system, storage, and networking hardware.
What is Containerization?
Containerization is a type of virtualization in which all the components of an application are bundled into a
single container image and can be run in isolated user space on the same shared operating system.
Containers are lightweight, portable, and highly conducive to automation. As a result, containerization has
become a cornerstone of development pipelines and application infrastructure for a variety of use cases.
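For example, a single container image can be run as several isolated containers on the same host operating system. The sketch below uses Docker with the public nginx image as an assumed stand-in application.
# Pull one image and start two isolated containers from it,
# each mapped to a different host port
docker pull nginx
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
# Both containers share the host OS kernel but run in separate user spaces
docker ps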
The Layers of Containerization
1. Hardware infrastructure: With any application, it all starts with physical compute resources
somewhere. Whether those resources are your own laptop or spread across multiple cloud
datacenters, they are a must-have for containers to work.
2. Host operating system: The next layer that sits atop the hardware layer is the host operating
system. As with the hardware layer, this could be as simple as the Windows or *nix operating
system running on your own computer or abstracted away completely by a cloud service provider.
3. Container engine: This is where things start to get interesting. Container engines run on top of
your host operating system and virtualize resources for containerized apps. The simplest
example of this layer is running Docker on your own computer.
4. Containerized apps: Containerized apps are units of code that include all the libraries, binaries,
and configuration an application requires to run.
The Benefits of Containerization
1. Portability: Containerization solves the problem of code behaving differently across environments because the exact same container images — which include all dependencies — can be run everywhere.
2. Speed: Containers tend to start up in a fraction of the time virtual machines or bare metal servers
take. While specific boot times will vary depending on resources and the size of an app, generally
speaking containers start up in seconds while virtual machines can take minutes.
3. Efficiency: Because containers only include what an app needs to run, they are significantly more
lightweight than virtual machines.
4. Simplicity of deployment: Because containers are portable and lightweight, they can easily be
deployed almost anywhere.
5. Scalability: Containerized applications start up quickly, don’t take up too much space, and are
easy to deploy. As a result, containerization makes it much easier to scale your deployments.
This is why containers have become a cornerstone of microservices and cloud-based
applications.
Specific Containerization Use Cases
1. Microservices: A microservices architecture is built around the idea of many small, independent,
and loosely coupled services working together. Because containers are a great way to deploy
isolated units of code, they have become the de-facto standard for deploying microservices.
2. CI/CD: Continuous integration/continuous deployment (CI/CD) is all about testing and deploying
reliable software fast. By bundling applications into portable, lightweight, and uniform units of
code, containerization enables better CI/CD because containers are automation friendly, reduce
dependency issues, and minimize resource consumption.
3. Modernizing legacy apps: Many teams are moving legacy monolithic applications to the cloud.
However, in order to do so, they need to be sure the app will actually run in the cloud. In many
cases, this means leveraging containerization to ensure the app can be deployed anywhere.
Kubernetes and Containers
Kubernetes, also known as K8s, is a popular tool to help scale and manage container deployments. Containerization software like Docker or LXC lacks the functionality to orchestrate larger container deployments, and K8s fills that gap.
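A minimal sketch of managing containers at scale with K8s, assuming a cluster is already available and using the public nginx image as a stand-in application (the deployment name and replica count are illustrative):
# Create a deployment and scale it across the cluster
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=5
# Watch the rollout and expose the deployment behind a load-balanced service
kubectl rollout status deployment/web
kubectl expose deployment web --port=80 --type=LoadBalancer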
What exactly can Kubernetes do?
1. Rollouts and rollbacks: K8s allows you to automate the creation and deployment of new
containers or removal of existing containers in a container cluster based on predefined rules
around resource utilization.
2. Storage mounting: With Kubernetes, you can automatically mount storage resources for your
containers.
3. Resource allocation: Balancing CPU and RAM consumption at scale is a challenging task. K8s
enables you to define CPU and RAM requirements and then it automatically handles optimal
deployment of your containers within the constraints of your resources (nodes).
4. Self-healing: With K8s, you can define health checks and if your containers do not meet the
requirements, they will be automatically restored or replaced.
5. Configuration management: K8s helps securely manage container configurations including
sensitive data such as tokens and SSH keys.
6. Load balancing: Kubernetes can automatically perform load balancing across multiple containers to enable efficient performance and resource utilization.
Dockers
Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker
packages software into standardized units called containers that have everything the software needs to
run including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale
applications into any environment and know your code will run.
Running Docker on AWS provides developers and admins a highly reliable, low-cost way to build, ship,
and run distributed applications at any scale.
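A typical build-ship-run workflow with the Docker CLI might look like the sketch below; the image name, registry address, and Dockerfile are assumptions for illustration.
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .
# Test it locally in a container
docker run -d -p 8080:80 myapp:1.0
# Tag and push the image to a registry so it can run in any environment
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0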
Microservices and serverless computing
Microservices: A microservices architecture is built around the idea of many small, independent, and
loosely coupled services working together. Because containers are a great way to deploy isolated units of
code, they have become the de-facto standard for deploying microservices.
Serverless computing is a cloud computing execution model that allocates machine resources on an as-
used basis. Under a serverless model, developers can build and run applications without having to
manage any servers and pay only for the exact amount of resources used. Instead, the cloud service
provider is responsible for provisioning, managing, and scaling the cloud infrastructure that runs the
application code
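As an illustrative sketch of the serverless model using AWS Lambda (the function name, role ARN, runtime, and packaged code below are placeholders, not values from this course), the provider runs the code on demand and bills only for what is used:
# Create a function from a packaged handler; the provider manages the servers
aws lambda create-function \
    --function-name hello-fn \
    --runtime python3.12 \
    --handler app.handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/lambda-exec-role
# Invoke it on demand; you pay only for this execution
aws lambda invoke --function-name hello-fn response.json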
Architecture Of Cloud Computing
Cloud computing architecture refers to the components and sub-components required for cloud
computing. These components typically refer to:
Front end (fat client, thin client)
Back-end platforms (servers, storage)
Cloud-based delivery and a network (Internet, intranet, intercloud)
Front End (User Interaction Enhancement)
The user interface of cloud computing consists of two categories of clients. Thin clients use web browsers, providing portable and lightweight access, while fat clients offer richer functionality for a stronger user experience.
Back-end Platforms (Cloud Computing Engine)
The core of cloud computing is built on back-end platforms with several servers for storage and processing. Application logic is managed by the servers, and effective data handling is provided by the storage. Together, these platforms provide the processing power and the capacity to manage and store data behind the cloud.
Cloud-Based Delivery and Network
On-demand access to compute resources is provided over the Internet, an intranet, or the intercloud. The Internet brings global accessibility, the intranet helps with internal communication of services within the organization, and the intercloud enables interoperability across various cloud services. This dynamic network connectivity is an essential component of cloud computing architecture, guaranteeing easy access and data transfer.
Cloud Security
Cloud security refers to the measures and practices designed to protect data, applications, and infrastructure in cloud computing environments. The following are some cloud security best practices:
1. Data Encryption: Encryption is essential for securing data stored in the cloud. It ensures that data remains unreadable to unauthorized users even if it is intercepted (see the sketch after this list).
2. Access Control: Implementing strict access controls and authentication mechanisms helps ensure
that only authorized users can access sensitive data and resources in the cloud.
3. Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to
provide multiple forms of verification, such as passwords, biometrics, or security tokens, before
gaining access to cloud services.
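As a small illustration of the encryption and access-control practices above (the bucket name, user name, and MFA serial number are assumptions, not course values), data can be encrypted server-side at upload time and an additional authentication factor can be enabled for a user:
# Upload a file with server-side encryption enabled
aws s3 cp report.pdf s3://example-secure-bucket/report.pdf --sse AES256
# Enable a virtual MFA device for an IAM user
aws iam enable-mfa-device \
    --user-name alice \
    --serial-number arn:aws:iam::123456789012:mfa/alice \
    --authentication-code1 123456 \
    --authentication-code2 789012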
DEVOPS FUNDAMENTALS
Objectives
DevOps creates efficiency across all tasks involved in the development, deployment, and maintenance of
software. Connecting development and operations leads to increased visibility, more accurate
requirements, improved communication, and faster time to market.
What makes DevOps different from other software development practices?
DevOps bridges the gap between development and operations, creating significant efficiencies across the
development and deployment of software. DevOps includes a strong emphasis on automation, helping
reduce the overall number of errors.
What is the philosophy of DevOps?
The philosophy of DevOps is to take end-to-end responsibility across all aspects of the project. Unlike
more traditional methods of developing software, DevOps bridges the gap between development and
operations teams—something that is often missing and can heavily impede the process of software
delivery.
Providing a comprehensive framework to develop and release software, DevOps connects development
and operations teams—a gap that can create challenges and inefficiencies in software delivery.
How do DevOps and agile relate to one another?
Although both DevOps and agile are software development practices, they each have a slightly different
focus. DevOps is a culture that focuses on creating efficiency for all stakeholders involved in the
development, deployment, and maintenance of software.
Agile is a delivery approach, related to lean manufacturing, that provides a production framework for software development.
Agile is often specific to the development team, where the scope of DevOps extends to all stakeholders
involved in the production and maintenance of software. DevOps and agile can be used together to
create a highly efficient software development environment.
Agile Methodologies and planning
Commonly used in software teams, agile development is a delivery approach that relates to lean
manufacturing. The development is completed in short, incremental sprints. Although it is different than
DevOps, the two approaches are not mutually exclusive—agile practices and tools can help drive
efficiencies within the development team, contributing to the overall DevOps culture.
Version control
With a team working together, version control is a crucial part of accurate, efficient software development.
A version control system—such as Git—takes a snapshot of your files, letting you permanently go back to
any version at any time. With a version control system, you can be confident you won’t run into conflicts
with the changes you’re working on.
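A minimal sketch of that snapshot-and-revert workflow with Git (file names and commit messages are illustrative):
# Record a snapshot of the current files
git add .
git commit -m "Add login feature"
# List previous snapshots and go back to any of them
git log --oneline
git checkout HEAD~1 -- app.py   # restore one file from the previous snapshot
git revert HEAD                 # undo the latest commit with a new commit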
CI/CD
Continuous integration is the process of automating builds and testing that occur as the code is
completed and committed to the system.
Once the code is committed, it follows an automated process that provides validation—and then commits
only tested and validated code into the main source code, which is often referred to as the master branch,
main, or trunk.
Continuous integration automates this process, which leads to significant efficiencies. Any bugs are
identified early on, prior to merging any code with the master branch. Continuous delivery is the
fundamental practice that occurs within DevOps enabling the delivery of fast, reliable software.
While the process is similar to the overarching concept of DevOps, continuous delivery is the framework
where every component of code is tested, validated, and committed as they are completed, resulting in
the ability to deliver software at any time. Continuous integration is a process that is a component of
continuous delivery.
DEVOPS TOOLS AND TECHNIQUES
Objectives
1. Ansible Ansible is an agentless configuration management tool that automates infrastructure tasks through YAML playbooks.
Example Ansible Playbook
- hosts: webservers
  become: true
  tasks:
    - apt:
        name: apache2
        state: latest
This playbook installs Apache on all servers in the webservers group.
2. Puppet Puppet is a configuration management tool that uses a declarative language to automate
infrastructure tasks. It is agent-based, meaning it requires an agent installed on each managed
machine.
Key Features of Puppet
Agent-based – Uses a master-agent model to push configurations to multiple servers.
Declarative Approach – Administrators define the desired state of systems, and Puppet enforces it.
Scalability – Suitable for managing thousands of servers in large organizations.
Strong Compliance & Security Features – Helps enforce policies across IT infrastructure.
Example Use Case An enterprise needs to ensure that firewall rules and security patches are consistently
applied across 500+ servers. Puppet automates these tasks, preventing misconfigurations.
Example Puppet Manifest
package { 'apache2':
  ensure => installed,
}
This manifest ensures that Apache is always installed on a system.
CI/CD Pipelines with Jenkins or GitLab
What is CI/CD?
Continuous Integration (CI) and Continuous Deployment (CD) are DevOps practices that automate
software build, test, and deployment processes. They help development teams deliver updates faster and
with fewer errors.
CI (Continuous Integration) – Developers push code frequently, triggering automated builds and tests.
CD (Continuous Deployment) – Once code passes tests, it is automatically deployed to production.
1. Jenkins Jenkins is an open-source automation server that facilitates CI/CD by building, testing,
and deploying software automatically. It integrates with Git, Docker, Kubernetes, and AWS,
making it highly versatile.
Key Features of Jenkins
Extensive Plugin Support – Over 1,500 plugins for integrating with cloud providers, databases, and
testing frameworks.
Pipeline as Code – Uses Jenkinsfile to define CI/CD workflows.
Scalability – Can distribute workloads across multiple machines for faster builds.
Example Jenkins Pipeline Script
pipeline {
agent any
stages {
stage('Build') {
steps {
sh 'mvn clean install'
}
}
stage('Test') {
steps {
sh 'mvn test'
}
}
stage('Deploy') {
steps {
sh 'scp target/*.jar user@server:/deploy'
}
}
}
}
This Jenkinsfile defines a CI/CD pipeline that builds, tests, and deploys a Java application.
2. GitLab CI/CD GitLab provides built-in CI/CD pipelines that automate code integration and
deployment. Unlike Jenkins, it does not require additional installation, making it easier to set up.
Key Features of GitLab CI/CD
Built into GitLab – No need for a separate CI/CD tool.
YAML-based configuration – Uses .gitlab-ci.yml to define pipeline steps.
Docker & Kubernetes Integration – Ideal for containerized applications.
Example GitLab CI/CD Pipeline
stages:
- build
- test
- deploy
build:
stage: build
script:
- mvn clean install
test:
stage: test
script:
- mvn test
deploy:
stage: deploy
script:
- scp target/*.jar user@server:/deploy
This script automates the process of building, testing, and deploying an application.
Monitoring and Logging Tools
Why is Monitoring Important?
Monitoring tools help detect performance issues, track system health, and prevent failures. Logging tools
allow DevOps teams to analyze system logs for debugging and security purposes.
1. Prometheus Prometheus is an open-source monitoring and alerting tool that collects time-series
data from servers, applications, and containers.
Key Features of Prometheus
Metrics Collection – Monitors CPU, memory, network, and application health.
PromQL (Prometheus Query Language) – Analyzes and filters monitoring data.
Kubernetes Integration – Tracks containerized applications in real time.
Alerting Support – Sends notifications when system performance degrades.
Example Prometheus Query
rate(http_requests_total[5m])
This query tracks the number of HTTP requests over the last 5 minutes.
2. Grafana Grafana is a visualization tool that displays data from Prometheus, Elasticsearch, and
InfluxDB using customizable dashboards.
Key Features of Grafana
Beautiful Dashboards – Displays real-time system performance metrics.
Alerting System – Sends alerts via Slack, email, or SMS.
Multi-Data Source Support – Integrates with multiple monitoring tools.
Example Grafana Use Case A DevOps team sets up Grafana dashboards to monitor CPU usage, server
uptime, and database response times.
Conclusion
DevOps relies on powerful tools to automate infrastructure, streamline CI/CD workflows, and monitor
application health.
Configuration Management Tools (Ansible, Puppet) help manage infrastructure at scale.
CI/CD Pipelines (Jenkins, GitLab) enable automated software development and deployment.
Monitoring & Logging Tools (Prometheus, Grafana) provide visibility into system performance.
By mastering these tools, DevOps teams can build resilient, high-performing applications with minimal
manual intervention.
AUTOMATION IN CLOUD AND DEVOPS
Objectives
This document covers three key areas of automation in cloud and DevOps:
1. Infrastructure as Code (IaC) for provisioning infrastructure
2. Automated testing and deployment in CI/CD pipelines
3. Scaling and optimizing cloud resources
Automating Infrastructure Provisioning (Infrastructure as Code - IaC)
What is Infrastructure as Code (IaC)?
Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure (servers,
databases, networking, etc.) using machine-readable configuration files instead of manual processes.
Benefits of IaC
Consistency – Eliminates configuration drift
Scalability – Deploy multiple environments quickly
Version Control – Uses Git for tracking infrastructure changes
Efficiency – Reduces manual provisioning efforts
Popular IaC Tools
1. Terraform
Terraform is an open-source IaC tool that allows declarative provisioning of cloud infrastructure
across AWS, Azure, and Google Cloud.
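A typical Terraform workflow from the command line looks like the sketch below (it assumes a working directory that already contains .tf configuration files for the target cloud):
# Download the required provider plugins
terraform init
# Preview the changes that would be made to the infrastructure
terraform plan
# Apply the configuration to create or update the declared resources
terraform apply
# Tear everything down when it is no longer needed
terraform destroy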
2. Ansible
Ansible is an agentless configuration management and IaC tool that provisions remote servers through YAML playbooks. As shown in the earlier example, a short playbook installs the Apache Web Server on remote servers.
Automated Testing and Deployment
Why Automate Testing & Deployment?
Automating testing and deployment catches defects early, reduces manual errors, and lets teams release reliable software faster and more consistently.
1. Jenkins As shown in the earlier Jenkinsfile example, a Jenkins pipeline automates this through Build, Test, and Deploy stages; that pipeline builds, tests, and deploys a Java application.
2. GitHub Actions GitHub Actions automates testing and deployment directly from GitHub repositories through YAML workflow files stored in the repository.
3. GitLab CI/CD As covered in the previous chapter, GitLab defines its pipeline in a .gitlab-ci.yml file, for example:
build:
stage: build
script:
- mvn clean install
test:
stage: test
script:
- mvn test
deploy:
stage: deploy
script:
- scp target/*.jar user@server:/deploy
This pipeline automates the software development lifecycle.
Scaling and Optimizing Cloud Resources
Why Scale Cloud Resources?
Scaling matches compute capacity to actual demand, so applications stay responsive during traffic spikes while idle resources are released to control cost. Common mechanisms include AWS Auto Scaling and the Kubernetes Horizontal Pod Autoscaler (HPA).
Example Kubernetes HPA Command
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10
This scales pods between 1 and 10 based on CPU utilization.
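For comparison, a sketch of the equivalent idea with AWS Auto Scaling (the group name and sizes are assumptions):
# Keep between 2 and 10 instances in the group, currently targeting 4
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --min-size 2 --max-size 10
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name my-asg --desired-capacity 4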
Conclusion Automation in Cloud and DevOps improves efficiency, security, and scalability.
Infrastructure as Code (IaC) tools like Terraform, Ansible, and CloudFormation automate infrastructure
setup.
CI/CD Pipelines with Jenkins, GitHub Actions, and GitLab streamline software delivery.
Cloud Scaling using AWS Auto Scaling and Kubernetes HPA optimizes resources.
By mastering these tools, DevOps teams can accelerate deployments, reduce costs, and improve system
reliability.
ADVANCED TOPICS
Objectives
Cost Overruns – Lack of visibility across multiple providers
Solution: Use FinOps tools like AWS Cost Explorer and Azure Cost Management
Hybrid and Multicloud Tools
Kubernetes – Manages containerized applications across clouds
Anthos (Google Cloud) – Unifies management of hybrid/multicloud deployments
AWS Outposts – Brings AWS services to on-premises data centers
Azure Arc – Extends Azure services to hybrid environments
- uses: actions/checkout@v2
- name: Run Snyk Security Scan
  run: snyk test --all-projects
This workflow automatically scans code for vulnerabilities when changes are pushed to GitHub.
Use Spot and Reserved Instances – Save up to 70% compared to on-demand pricing
Monitor Cloud Spending – Use AWS Cost Explorer, Azure Cost Management
Shut Down Unused Resources – Automate termination of idle instances
Right-Sizing – Adjust server sizes to match actual workload needs
Use Serverless Computing – Pay only for what you use (AWS Lambda, Azure Functions)
Example: AWS Cost Explorer Query
{
"TimePeriod": {
"Start": "2023-01-01",
"End": "2023-01-31"
},
"Granularity": "MONTHLY",
"Metrics": ["BlendedCost"]
}
This query retrieves AWS cost data for January 2023.
Conclusion
Mastering Hybrid Cloud, DevSecOps, and Performance Optimization is essential for modern
DevOps teams.