DevOps Notes
UNIT - I
Introduction, Agile development model, DevOps, and ITIL. DevOps process and Continuous Delivery, Release management, Scrum, Kanban, delivery pipeline, bottlenecks, examples.
UNIT - II
UNIT - III
UNIT - IV
Build systems, Jenkins build server, Managing build dependencies, Jenkins plugins, and file system layout, The
host server, Build slaves, Software on the host, Triggers, Job chaining and build pipelines, Build servers and
infrastructure as code, Building by dependency order, Build phases, Alternative build servers, Collating quality
measures.
UNIT - V
TEXT BOOKS:
1. Joakim Verona. Practical DevOps, Second Edition. Ingram Short Title; 2nd edition (2018). ISBN-10: 1788392574.
2. Deepak Gaikwad, Viral Thakkar. DevOps Tools from Practitioner's Viewpoint. Wiley Publications. ISBN: 9788126579952.
REFERENCE BOOK:
1. Len Bass, Ingo Weber, Liming Zhu. DevOps: A Software Architect's Perspective. Addison Wesley; ISBN-10.
Unit 1: Introduction
A software life cycle model (also termed process model) is a pictorial and diagrammatic representation of the
software life cycle. A life cycle model represents all the methods required to make a software product transit through
its life cycle stages. It also captures the structure in which these methods are to be undertaken.
Stage 1: Planning and Requirement Analysis:
Requirement analysis is the most important and necessary stage in the SDLC. The senior members of the team perform it with inputs from all the stakeholders and domain experts or SMEs in the industry. Planning for the quality assurance requirements and identification of the risks associated with the project is also done at this stage.
The business analyst and project organizer set up a meeting with the client to gather all the data, such as what the customer wants to build, who the end users will be, and what the objective of the product is. Before creating a product, a core understanding or knowledge of the product is very necessary.
For example, a client wants to have an application which concerns money transactions. In this case, the requirements have to be precise, such as what kind of operations will be done, how they will be done, in which currency they will be done, etc. Once the required functions are captured, an analysis is done to audit the feasibility of developing the product. In case of any ambiguity, a flag is raised for further discussion. Once the requirement is understood, the SRS (Software Requirement Specification) document is created. The developers should thoroughly follow this document, and it should also be reviewed by the customer for future reference.
Stage 2: Defining Requirements:
Once the requirement analysis is done, the next stage is to clearly represent and document the software requirements and get them accepted by the project stakeholders. This is accomplished through the "SRS" (Software Requirement Specification) document, which contains all the product requirements to be constructed and developed during the project life cycle.
Stage 3: Designing the Software:
The next phase is to bring together all the knowledge of requirements, analysis, and design of the software project. This phase is the product of the last two, combining inputs from the customer and the requirement-gathering work.
Stage 4: Developing the Project:
In this phase of the SDLC, the actual development begins and the program is built. The implementation of the design begins with writing code. Developers have to follow the coding guidelines described by their management, and programming tools like compilers, interpreters, debuggers, etc. are used to develop and implement the code.
Stage 5: Testing:
After the code is generated, it is tested against the requirements to make sure that the product is solving the needs addressed and gathered during the requirements stage. During this stage, unit testing, integration testing, system testing, and acceptance testing are done.
Stage 6: Deployment:
Once the software is certified and no bugs or errors are reported, it is deployed. Then, based on the assessment, the software may be released as it is or with suggested enhancements. After the software is deployed, its maintenance begins.
Stage 7: Maintenance:
Once the client starts using the developed system, the real issues come up and need to be solved from time to time. This procedure, where care is taken of the developed product, is known as maintenance.
Waterfall model:
Winston Royce introduced the Waterfall Model in 1970. This model has five phases: requirements analysis and specification; design; implementation and unit testing; integration and system testing; and operation and maintenance. The phases always follow this order and do not overlap. The developer must complete each phase before the next phase begins. This model is named the "Waterfall Model" because its diagrammatic representation resembles a cascade of waterfalls.
1. Requirements Analysis and Specification Phase:
The aim of this phase is to understand the exact requirements of the customer and to document them properly. Both the customer and the software developer work together to document all the functions, performance, and interfacing requirements of the software. It describes the "what" of the system to be produced and not the "how". In this phase, a large document called the Software Requirement Specification (SRS) document is created, which contains a detailed description of what the system will do in common language.
2. Design Phase:
This phase aims to transform the requirements gathered in the SRS into a suitable form which permits
further coding in a programming language. It defines the overall software architecture together with high level and
detailed design. All this work is documented as a Software Design Document (SDD).
3. Implementation and Unit Testing Phase:
During this phase, the design is implemented. If the SDD is complete, the implementation or coding phase proceeds smoothly, because all the information needed by the software developers is contained in the SDD. During testing, the code is thoroughly examined and modified. Small modules are tested in isolation initially. After that, these modules are tested by writing some overhead code to check the interaction between these modules and the flow of intermediate output.
4. Integration and System Testing Phase:
This phase is highly crucial, as the quality of the end product is determined by the effectiveness of the testing carried out. Better testing leads to satisfied customers, lower maintenance costs, and accurate results. Unit testing determines the efficiency of individual modules. However, in this phase, the modules are tested for their interactions with each other and with the system.
5. Operation and Maintenance Phase:
Maintenance is the task performed after the software has been delivered to the customer, installed, and made operational.
Advantages of Waterfall model
This model is simple to implement, and the number of resources required for it is minimal.
The requirements are simple and explicitly declared; they remain unchanged during the entire project development.
The start and end points for each phase are fixed, which makes it easy to track progress.
The release date for the complete product, as well as its final cost, can be determined before development.
It provides easy control and clarity for the customer due to a strict reporting system.
Disadvantages of Waterfall model
In this model, the risk factor is higher, so this model is not suitable for large and complex projects.
This model cannot accommodate changes in requirements during development.
It becomes tough to go back to a previous phase. For example, if the application has moved to the coding phase and there is a change in requirements, it becomes tough to go back and change it.
Since the testing is done at a later stage, it does not allow identifying the challenges and risks in the earlier phases, so a risk reduction strategy is difficult to prepare.
Introduction
DevOps is a combination of two words, one is Development and the other is Operations. It is a culture to promote the development and operations processes working collectively.
What is DevOps?
DevOps is a continuous process.
Before going further, we need to understand why we need DevOps over the other methods.
The operations and development teams worked in complete isolation.
After the design-build phase, the testing and deployment were performed respectively. That is why they consumed more time than the actual build cycles.
Without DevOps, team members spend a large amount of time on designing, testing, and deploying instead of building the project.
Manual code deployment leads to human errors in production.
The coding and operations teams have their separate timelines and are not in sync, causing further delays.
DevOps History:
In 2009, the first conference named DevOpsdays was held in Ghent, Belgium. Belgian consultant Patrick Debois founded the conference.
In 2012, the State of DevOps report was launched and conceived by Alanna Brown at Puppet.
In 2014, the annual State of DevOps report was published by Nicole Forsgren, Jez Humble, Gene Kim, and others. They found that DevOps adoption was accelerating in 2014 as well.
In 2015, Nicole Forsgren, Gene Kim, and Jez Humble founded DORA (DevOps Research and Assessment).
In 2017, Nicole Forsgren, Gene Kim, and Jez Humble published "Accelerate: Building and Scaling High Performing Technology Organizations".
Agile development:
1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable
software. Customer satisfaction and quality deliverables are the focus.
2. Welcome changing requirements, even late in development. Agile processes harness change for the
customer’s competitive advantage. Don’t fight change, instead learn to take advantage of it.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference
to the shorter timescale. Continually provide results throughout a project, not just at its culmination.
4. Business people and developers must work together daily throughout the project. Collaboration is key.
5. Build projects around motivated individuals. Give them the environment and support they need, and
trust them to get the job done. Bring talented and hardworking members to the team and get out of their
way.
6. The most efficient and effective method of conveying information to and within a development team
is face-to-face conversation. Eliminate as many opportunities for miscommunication as possible.
7. Working software is the primary measure of progress. It doesn’t need to be perfect, it needs to work.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able
to maintain a constant pace indefinitely. Slow and steady wins the race.
9. Continuous attention to technical excellence and good design enhances agility. Don’t forget to pay
attention to the small stuff.
10. Simplicity—the art of maximizing the amount of work not done—is essential. Trim the fat.
11. The best architectures, requirements, and designs emerge from self-organizing teams. Related to
Principle 5, you’ll get the best work from your team if you let them figure out their own roles.
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its
behavior accordingly. Elicit and provide feedback, absorb the feedback, and adjust where needed.
ITIL:
ITIL is an abbreviation of Information Technology Infrastructure Library. It is a framework which helps IT professionals deliver the best IT services. This framework is a set of best practices to create and improve the process of ITSM (IT Service Management). It provides a framework within an organization which helps in planning, measuring, and implementing IT services. The main motive of this framework is that the resources are used in such a way that the customer gets better services and the business gets the profit. It is not a standard but a collection of best-practice guidelines.
1. Service Strategy.
2. Service Design.
3. Service Transition.
4. Service Operation.
5. Continual Service Improvement.
Service Strategy:
Service Strategy is the first and initial stage in the lifecycle of the ITIL framework. The main aim of this stage is to offer a strategy, on the basis of the current market scenario and business perspective, for the IT services. This stage mainly defines the plans, positions, patterns, and perspectives which are required for a service provider. It establishes the principles and policies which guide the whole lifecycle of an IT service. Following are the various essential services or processes which come under the Service Strategy stage:
Financial Management
Demand Management
Service Portfolio Management
Business Relationship Management
Strategy Management
Strategy Management: The aim of this management process is to define the offerings, rivals, and capabilities of a
service provider to develop a strategy to serve customers. According to the version 3 (V3) of ITIL, this process
includes the following activities for IT services:
1. Identification of Opportunities
2. Identification of Constraints
3. Organizational Positioning
4. Planning
5. Execution
Following are the three sub-processes which come under this management process:
Financial Management: This process helps in determining and controlling all the costs which are associated with the services of an IT organization. It also contains the following three basic activities:
1. Accounting
2. Charging
3. Budgeting
Following are the four sub-processes which comes under this management process:
1. Financial Management Support
2. Financial Planning
3. Financial Analysis and Reporting
4. Service Invoicing
Demand Management: This management process is critical and most important in this stage. It helps the
service providers to understand and predict the customer demand for the IT services. Demand management is a
process which also works with the process of Capacity Management. Following are basic objectives of this process:
This process balances the resources demand and supply.
It also manages or maintains the quality of service.
According to the version 3 (V3) of ITIL, this process performs the following 3 activities:
Following are the two sub-processes which comes under this management process:
1. Demand Prognosis
2. Demand Control.
Business Relationship Management: This management process is responsible for maintaining a positive and good relationship between the service provider and its customers. It also identifies the needs of a customer and then ensures that the services are implemented by the service provider to meet those requirements. This process was released as a new process in ITIL 2011. According to version 3 (V3) of ITIL, this process performs the following various activities:
This process is used to represent the service provider to the customer in a positive manner.
This process identifies the business needs of a customer.
It also acts as a mediator if there is any case of conflicting requirements from the different businesses.
Following are the six sub-processes which come under this management process:
1. Maintain Customer Relationships
2. Identify Service Requirements
3. Sign up Customers to Standard Services
4. Customer Satisfaction Survey
5. Handle Customer Complaints
6. Monitor Customer Complaints
Service Portfolio Management: This management process defines the set of customer-oriented services which are
provided by a service provider to meet the customer requirements. The primary goal of this process is to maintain
the service portfolio. Following are the three types of services under this management process:
1. Live Services
2. Retired Services
3. Service Pipeline.
Following are the three sub-processes which comes under this management process:
1. Define and Analyse the new services or changed services of IT.
2. Approve the changes or new IT services
3. Service Portfolio review.
Service Design:
It is the second phase or a stage in the lifecycle of a service in the framework of ITIL. This stage provides
the blueprint for the IT services. The main goal of this stage is to design the new IT services. We can also change the
existing services in this stage. Following are the various essential services or processes which comes under the
Service Design stage:
Service Level Management
Capacity Management
Availability Management
Risk Management
Service Continuity Management
Service Catalogue Management
Information Security Management
Supplier Management
Compliance Management
Architecture Management
Service Level Management: In this process, the Service Level Manager is the process owner. This process was fully redesigned in ITIL 2011. Service Level Management deals with the following two different types of agreements:
1. Operational Level Agreement
2. Service Level Agreement
According to the version 3 (V3) of ITIL, this process performs the following activities:
It manages and reviews all the IT services to match service Level Agreements.
It determines, negotiates, and agrees on the requirements for the new or changed IT services.
Following are the four sub-processes which comes under this management process:
1. Maintenance of SLM framework
2. Identifying the requirements of services
3. Agreements sign-off and activation of the IT services
4. Service level Monitoring and Reporting.
Capacity Management: This management process is accountable for ensuring that the capacity of the IT service
can meet the agreed capacity in a cost-effective and timely manner. This management process is also working with
other processes of ITIL for accessing the current infrastructure of IT. According to the version 3 (V3) of ITIL, this
process performs the following activities:
It manages the performance of the resources so that the IT services can easily meet their SLA targets.
It creates and maintains the capacity plan which aligns with the strategic plan of an organization.
It reviews the performance of a service and the capacity of current service periodically.
It understands the current and future demands of customer for the resources of IT.
Following are the four sub-processes which comes under this management process:
Availability Management: In this process, the Availability Manager is the owner. This management process has a
responsibility to ensure that the services of IT meet the agreed availability goals. This process also confirms that the
services which are new or changed do not affect the existing services. It is used for defining, planning, and
analyzing all the availability aspects of the services of IT. According to the version 3 (V3) of ITIL, this process
contains the following two activities:
1. Reactive Activity
2. Proactive Activity
Following are the four sub-processes which comes under this management process:
Risk Management: In this process, the Risk Manager is the owner. This management process allows the risk manager to check, assess, and control business risks. If any risk is identified in a business process, an entry for that risk is created in the ITIL Risk Register. According to version 3 (V3) of ITIL, this process performs the following activities in the given order:
It identifies the threats.
It finds the probability and impact of risk.
It checks the way for reducing those risks.
It always monitors the risk factors.
Following are the four sub-processes which comes under the Risk process:
Supplier Management : In this process, the Supplier Manager plays a role as an owner. The supplier manager is
responsible to verify that all the suppliers meet their contractual commitments. It also works with the Financial and
knowledge management, which helps in selecting the suppliers on the basis of previous knowledge. Following are
the various activities which are involved in this process:
It manages the sub-contracted suppliers.
It manages the relationship with the suppliers.
It helps in implementing the supplier policy.
It also manages the supplier policy and supports the SCMIS.
It also manages or maintains the performance of the suppliers.
According to the version 3 (V3) of ITIL, following are the six sub-processes which comes under this management
process:
Provide the Framework of Supplier Management
Evaluation and selection of new contracts and suppliers
Establish the new contracts and suppliers
Process the standard orders
Contract and Supplier Review
Contract Renewal or Termination.
Compliance Management : In this process, the Compliance Manager plays a role as an owner. This management
process allows the compliance manager to check and address all the issues which are associated with regulatory and
non-regulatory compliances. Under this compliance management process, no sub-process is specified or defined.
Here, the role of the Compliance Manager is to certify whether the guidelines, legal requirements, and standards are being followed properly. This manager works in parallel with the following three managers:
Information Security Manager
Financial Manager
Service Design Manager.
Architecture Management: In this process, the Enterprise Architect plays a role as an owner. The main aim of
Enterprise Architect is to maintain and manage the architecture of the Enterprise. This management process helps
the Enterprise Architect by verifying that all the deployed services and products operate according to the specified
architecture baseline in the Enterprise. This process also defines and manages a baseline for the future technological
development. Under this Architecture management process, no sub-process is specified or defined.
Service Transition :
Service Transition is the third stage in the lifecycle of the ITIL Management Framework. The main goal of
this stage is to build, test, and develop the new or modified services of IT. This stage of service lifecycle manages
the risks to the existing services. It also certifies that the value of a business is obtained. This stage also makes sure
that the new and changed IT services meet the expectation of the business as defined in the previous two stages of
service strategy and service design in the life cycle.
It also manages the transition of new or modified IT services from the Service Design stage to the Service Operation stage. The following essential services or processes come under the Service Transition stage:
1. Change Management
2. Release and Deployment Management
3. Service Asset and Configuration Management
4. Knowledge Management
5. Project Management (Transition Planning and Support)
6. Service Validation and Testing
7. Change Evaluation
Change Management: In this process, the Change Manager plays the role of owner. The Change Manager controls and manages the service lifecycle of all changes. It also allows the Change Manager to implement all the essential changes that are required, with the least disruption of IT services. This management process also allows its owner to recognize and stop any unintended change activity. This management process is tightly bound with the process "Service Asset and Configuration Management". Following are the three types of changes which are defined by ITIL:
Normal Change
Standard Change
Emergency Change
All these changes are also known as the Change Models. According to version 3 (V3) of ITIL, following are the eleven sub-processes which come under this Change Management process:
Change Management Support
RFC (Request for Change) Logging and Review
Change Assessment by the Owner (Change Manager)
Assess and Implement the Emergency Changes
Assessment of change Proposals
Change Scheduling and Planning
Change Assessment by the CAB
Change Development Authorization
Implementation or Deployment of Change
Minor Change Deployment
Post Implementation Review and Change Closure
Release and Deployment Management: In this process, the Release Manager plays the role of owner. Sometimes, this process is also known as the 'ITIL Release Management Process'. This process allows the Release Manager to manage, plan, and control the updates and releases of IT services to the live environment. Following are the three types of releases which are defined by ITIL:
Minor release
Major Release
Emergency Release
According to version 3 (V3) of ITIL, following are the six sub-processes which come under this management process:
Release Management Support
Release Planning
Release build
Release Deployment
Early Life Support
Release Closure
Service Asset and Configuration Management: In this process, the Configuration Manager plays a role as an
owner. This management process is a combination of two implicit processes:
Asset Management
Configuration Management
The aim of this management process is to manage the information about the (CIs) Configuration Items which are
needed to deliver the services of IT. It contains information about versions, baselines, and the relationships between
assets. According to version 3 (V3) of ITIL, following are the five sub-processes which come under this management process:
Planning and Management
Configuration Control and Identification
Status Accounting and reporting
Audit and Verification
Manage the Information
Knowledge Management: In this process, the Knowledge Manager plays a role as an owner. This management
process helps the Knowledge Manager by analyzing, storing and sharing the knowledge and the data or information
in an entire IT organization. Under this Knowledge Management Process, no sub-process is specified or defined.
Transition Planning and Support: In this process, the Project Manager plays a role as an owner. This management
process manages the service transition projects. Sometimes, this process is also known as the Project Management
Process. In this process, the project manager is accountable for planning and coordinating resources to deploy IT
services within time, cost, and quality estimates. According to the version 3 (V3) of ITIL, this process performs the
following activities:
It manages the issues and risks.
It defines the tasks and activities which are to be performed by the separate processes.
It makes a group with the same type of releases.
It manages each individual deployment as a separate project.
According to version 3 (V3) of ITIL, following are the four sub-processes which come under this Project Management process:
Initiate the Project
Planning and Coordination of a Project
Project Control
Project Communication and Reporting
Service Validation and Testing: In this process, the Test Manager plays a role as an owner. The main goal of this
management process is that it verifies whether the deployed releases and the resulting IT service meet the customer
expectations. It also checks whether the operations of IT are able to support the new IT services after the
deployment. This process allows the Test Manager to remove or delete the errors which are observed at the first
phase of the service operation stage in the lifecycle. It provides the quality assurance for both the services and
components. It also identifies the risks, errors and issues, and then they are eliminated through this current stage.
This management process has been released in the version 3 of ITIL as a new process. Following are the various
activities which are performed under this process:
Validation and Test Management
Planning and Design
Verification of Test Plan and Design
Preparation of the Test Environment
Testing
Evaluate Exit Criteria and Report
Clean up and closure
According to the version 3 (V3) of ITIL, following are the four sub-processes which comes under this management
process:
Test Model Definition
Release Component Acquisition
Release Test
Service Acceptance Testing
Change Evaluation: In this process, the Change Manager plays a role as an owner. The goal of this
management process is to avoid the risks which are associated with the major changes for reducing the chances of
failures. This process is started and controlled by the change management and performed by the change manager.
Following are the various activities which are performed under this process:
It can easily identify the risks.
It evaluates the effects of a change.
According to version 3 (V3) of ITIL, following are the four sub-processes which come under this management process:
Change Evaluation prior to Planning
Change Evaluation prior to Build
Change Evaluation prior to Deployment
Change Evaluation after Deployment
Service Operations:
Service Operations is the fourth stage in the lifecycle of ITIL. This stage provides the guidelines about how
to maintain and manage the stability in services of IT, which helps in achieving the agreed level targets of service
delivery. This stage is also responsible for monitoring the services of IT and fulfilling the requests. In this stage, all
the plans of transition and design are measured and executed for the actual efficiency. It is also responsible for
resolving the incidents and carrying out the operational tasks. There are following various essential services or
processes which comes under the stage of Service Operations:
1. Event Management
2. Access Management
3. Problem Management
4. Incident Management
5. Application Management
6. Technical Management
Event Management: In this process, the IT Operations Manager plays a role as an owner. The main goal of this
management process is to make sure that the services of IT and CIs are constantly monitored. It also helps in
categorizing the events so that appropriate action can be taken if needed. In this Management process, the process
owner takes all the responsibilities of processes and functions for the multiple service operations. Following are the
various purposes of Event Management Process:
It allows the IT Operations Manager to decide the appropriate action for the events.
It also provides the trigger for the execution of management activities of many services.
It helps in providing the basis for service assurance and service improvement.
The event monitoring tools are divided into two types, which are defined by Version 3 (V3) of ITIL:
Active Monitoring Tool
Passive Monitoring Tool
Following are the three types of events which are defined by the ITIL:
Warning
Informational
Exception
According to version 3 (V3) of ITIL, following are the four sub-processes which come under this management process:
Event Monitoring and Notification
First level Correlation and Event Filtering
Second level Correlation and Response Selection
Event Review and Closure.
Access Management: In this process, the Access Manager plays the role of owner. This type of management process is also sometimes called 'Identity Management' or 'Rights Management'. The role of the process manager is to provide the rights to use the services to authorized users. In this management process, the owner of the process follows the policies and guidelines which are defined by Information Security Management (ISM). Following are the six activities which come under this management process and are followed sequentially:
Request Access
Verification
Providing Rights
Monitoring or Observing the Identity Status
Logging and Tracking Status
Restricting or Removing Rights
According to the version 3 (V3) of ITIL, following are the two sub-processes which comes under this management
process:
Maintenance of Catalogue of User Roles and Access profiles
Processing of User Access Requests.
Problem Management: In this process, the Problem Manager plays the role of owner. The main goal of this management process is to manage the lifecycle of all the problems which happen in the IT services. In the ITIL framework, a problem is referred to as "the unknown cause of one or more incidents". This process helps in finding the root cause of the problem. It also helps in maintaining information about the problems. Following are the ten activities which come under this management process and are followed sequentially. These ten activities are also called the lifecycle of Problem Management:
Problem Detection
Problem Logging
Categorization of a Problem
Prioritization of a Problem
Investigation and Diagnosis of a Problem
Identify Workaround
Raising a Known Error Record
Resolution of a Problem
Problem Closure
Major Problem Review
Incident Management: In this process, the Incident Manager plays the role of owner. The main goal of this management process is to manage the lifecycle of all the incidents which happen in the IT services. An incident is defined as the failure of any Configuration Item (CI) or a reduction in the quality of IT services. This management process maintains the satisfaction of users by managing the quality of the IT service. It also increases the visibility of incidents. According to version 3 (V3) of ITIL, following are the nine sub-processes which come under this management process:
1. Incident Management Support
2. Incident Logging and Categorization
3. Pro-active User Information
4. First Level Support for Immediate Incident Resolution
5. Second Level Support for Incident Resolution
6. Handling of Major Incidents
7. Incident Monitoring and Escalation
8. Closure and Evaluation of Incident
9. Management Reporting of Incident
Application Management: In this function, the Application Analyst plays the role of owner. This management function maintains and improves the applications throughout the entire service lifecycle. This function plays an important and essential role in application and system management. Under this management function, no sub-process is specified or defined, but the function is divided into the following six activities or stages:
Define
Design
Build
Deploy
Operate
Optimize
Technical Management: In this function, the Technical Analyst plays the role of owner. This function acts as a standalone function in IT organizations and basically consists of technical people and teams. The main goal of this function is to provide or offer technical expertise, and it also supports the maintenance and management of the IT infrastructure throughout the entire lifecycle of a service. The role of the Technical Analyst is to develop the skills which are required to operate the day-to-day operations of the IT infrastructure. Under this management function, no sub-process is specified or defined.
Continual Service Improvement (CSI):
It is the fifth stage in the lifecycle of an ITIL service. This stage helps to identify and implement strategies which are used for providing better services in the future. Following are the various objectives or goals under this CSI stage:
It improves the quality of services by learning from past failures.
It also helps in analyzing and reviewing the improvement opportunities in every phase of the service lifecycle.
It also evaluates the service level achievement results.
It also describes the best guidelines to achieve large-scale improvements in the quality of service.
It also helps in describing the concept of KPIs, which are process metrics for evaluating and reviewing the performance of the services.
There are following various essential services or processes which come under the stage of CSI:
Service Review
Process Evaluation
Definition of CSI Initiatives
Monitoring of CSI Initiatives
This stage follows the following six-step approach (pre-defined question) for planning, reviewing, and implementing
the improvement process:
Service Review: In this process, the CSI Manager plays the role of owner. The main aim of this management process is to review the business services and infrastructure services on a regular basis. Sometimes, this process is also called "ITIL Service Review and Reporting". Under this management process, no sub-process is specified or defined.
Process Evaluation: In this process, the Process Architect plays the role of owner. The main aim of this management process is to evaluate the processes of IT services on a regular basis. This process accepts inputs from the process of Service Review and provides its output to the process of Definition of CSI Initiatives. In this process, the process owner is responsible for maintaining and managing the process architecture and also ensures that all the processes of the services cooperate in a seamless way. According to version 3 (V3) of ITIL, following are the five sub-processes which come under this management process:
Process Management Support
Process Benchmarking
Process Maturity Assessment
Process Audit
Process Control and Review
Definition of CSI Initiatives: In this process, the CSI Manager plays the role of owner. This management process is also known as "Definition of Improvement Initiatives". It is a process used for describing the particular initiatives whose aim is to improve the quality of IT services and processes. In this process, the CSI Manager (process owner) is accountable for managing and maintaining the CSI register and also helps in taking good decisions regarding improvement initiatives. Under this management process, no sub-process is specified or defined.
Monitoring of CSI Initiatives: In this process, the CSI Manager plays the role of owner. This management process is also called "CSI Monitoring". Under this management process, no sub-process is specified or defined.
DevOps Process & Continuous Development:
Continuous development is an umbrella term that describes the iterative process for developing software to be delivered to customers. It involves continuous integration, continuous testing, continuous delivery, and continuous deployment. By implementing a continuous development strategy and its associated sub-strategies, businesses can achieve faster delivery of new features or products that are of higher quality and lower risk, without running into significant bandwidth barriers.
Continuous integration: Continuous integration (CI) is a software development practice commonly applied in the DevOps process flow. Developers regularly merge their code changes into a shared repository where those updates are automatically tested. Continuous integration ensures the most up-to-date and validated code is always readily available to developers. CI helps prevent costly delays in development by allowing multiple developers to work on the same source code with confidence, rather than waiting to integrate separate sections of code all at once on release day. This practice is a crucial component of the DevOps process flow, which aims to combine speed and agility with reliability and security.
Continuous testing: Continuous testing is a verification process that allows developers to ensure the code actually works the way it was intended to in a live environment. Testing can surface bugs and particular aspects of the product that may need fixing or improvement, and these can be pushed back to the development stages for continued improvement.
Continuous monitoring and feedback: Throughout the development pipeline, your team should have measures in place for continuous monitoring and feedback of the products and systems. Again, the majority of the monitoring process should be automated to provide continuous feedback. This process allows IT operations to identify issues and notify developers in real time. Continuous feedback ensures higher security and system reliability as well as more agile responses when issues do arise.
Continuous deployment: For the seasoned DevOps organization, continuous deployment may be the better option over CD. Continuous deployment is the fully automated version of CD with no human (i.e., manual) intervention necessary. In a continuous deployment process, every validated change is automatically released to users. This process eliminates the need for scheduled release days and accelerates the feedback loop. Smaller, more frequent releases allow developers to get user feedback quickly and address issues with more agility and accuracy. Continuous deployment is a great goal for a DevOps team, but it is best applied after the DevOps process has been ironed out. For continuous deployment to work well, organizations need to have a rigorous and reliable automated testing environment. If you're not there yet, starting with CI and CD will help you get there.
Continuous delivery: Continuous delivery (CD) is the next logical step from CI. Code changes are automatically built, tested, and packaged for release into production. The goal is to release updates to the users rapidly and sustainably. To do this, CD automates the release process (building on the automated testing in CI) so that new builds can be released at the click of a button.
Continuous delivery is an approach where teams release quality products frequently and predictably from the source code repository to production in an automated fashion. Some organizations release products manually by handing them off from one team to the next, which is illustrated in the diagram below. Typically, developers are at the left end of this spectrum and operations personnel are at the receiving end. This creates delays at every hand-off, which leads to frustrated teams and dissatisfied customers. The product eventually goes live through a tedious and error-prone process that delays revenue generation.
Release Management:
Release management is the process of overseeing the planning, scheduling, and controlling of software builds throughout each stage of development and across various environments. Release management typically includes the testing and deployment of software releases as well.
Release management has had an important role in the software development lifecycle since before it was known as release management. Deciding when and how to release updates was its own unique problem even when software saw physical disc releases, with updates occurring as seldom as every few years.
Now that most software has moved from hard and fast release dates to the software as a service (SaaS) business model, release management has become a constant process that works alongside development. This is especially true for businesses that have converted to utilizing continuous delivery pipelines that see new releases occurring at blistering rates. DevOps now plays a large role in many of the duties that were originally considered to be under the purview of release management roles; however, DevOps has not resulted in the obsolescence of release management.
Minimize downtime:
DevOps is about creating an ideal customer experience. Likewise, the goal of release management is to minimize the amount of disruption that customers feel with updates. Strive to consistently reduce customer impact and downtime with active monitoring, proactive testing, and real-time collaborative alerts that enable you to be notified quickly of issues during a release. A good release manager will be able to identify any problems before the customer does. The team can resolve incidents quickly and experience a successful release when proactive efforts are combined with a collaborative response plan.
The staging environment requires constant upkeep. Maintaining an environment that is as close as possible to your production one ensures smoother and more successful releases. From QA to product owners, the whole team must maintain the staging environment by running tests and combing through staging to find potential issues with deployment. Identifying problems in staging before deploying to production is only possible with the right staging environment. Maintaining a staging environment that is as close as possible to production will enable DevOps teams to confirm that all releases will meet acceptance criteria more quickly.
Scrum:
Scrum is a framework used by teams to manage work and solve problems collaboratively in short cycles. Scrum implements the principles of Agile as a concrete set of artifacts, practices, and roles.
The Scrum lifecycle
The diagram below details the iterative Scrum lifecycle. The entire lifecycle is completed in fixed time periods called sprints. A sprint is typically one to four weeks long.
Scrum roles
There are three key roles in Scrum: the product owner, the Scrum master, and the Scrum team.
Product owner: The product owner is responsible for what the team builds, and why they build it. The product owner is responsible for keeping the backlog of work up to date and in priority order.
Scrum master: The Scrum master ensures that the Scrum process is followed by the team. Scrum masters are continually on the lookout for how the team can improve, while also resolving impediments and other blocking issues that arise during the sprint. Scrum masters are part coach, part team member, and part cheerleader.
Scrum team: The members of the Scrum team actually build the product. The team owns the engineering of the product, and the quality that goes with it.
Product backlog
The product backlog is a prioritized list of work the team can deliver. The product owner is responsible for adding, changing, and reprioritizing the backlog as needed. The items at the top of the backlog should always be ready for the team to execute on.
Sprint retrospective - The team takes time to reflect on what went well and which areas need improvement. The outcome of the retrospective is a set of actions for the next sprint.
Increment - The product of a sprint is called the increment or potentially shippable increment. Regardless of the term, a sprint's output should be of shippable quality, even if it's part of something bigger and can't ship by itself. It should meet all the quality criteria set by the team and product owner.
Repeat, learn, improve - The entire cycle is repeated for the next sprint. Sprint planning selects the next items on the product backlog and the cycle repeats. While the team executes the sprint, the product owner ensures the items at the top of the backlog are ready to execute in the following sprint. This shorter, iterative cycle provides the team with lots of opportunities to learn and improve. A traditional project often has a long lifecycle, say 6-12 months. While a team can learn from a traditional project, the opportunities are far fewer than for a team that executes in two-week sprints, for example. This iterative cycle is, in many ways, the essence of Agile.
Scrum is very popular because it provides just enough framework to guide teams while giving them flexibility in how they execute. Its concepts are simple and easy to learn. Teams can get started quickly and learn as they go. All of this makes Scrum a great choice for teams just starting to implement Agile principles.
Kanban:
Kanban is a Japanese term that means signboard or billboard. An industrial engineer named Taiichi Ohno developed Kanban at Toyota Motor Corporation to improve manufacturing efficiency. Although Kanban was created for manufacturing, software development shares many of the same goals, such as increasing flow and throughput. Software development teams can improve their efficiency and deliver value to users faster by using Kanban guiding principles and methods.
Kanban principles:
Adopting Kanban requires adherence to some fundamental practices that might vary from teams' previous methods.
Visualize work
Understanding development team status and work progress can be challenging. Work progress and current state are easier to understand when presented visually rather than as a list of work items or a document. Visualization of work is a key principle that Kanban addresses primarily through Kanban boards. These boards use cards organized by progress to communicate overall status. Visualizing work as cards in different states on a board helps to easily see the big picture of where a project currently stands, as well as identify potential bottlenecks that could affect productivity.
Use a pull model:
Historically, stakeholders requested functionality by pushing work onto development teams, often with tight deadlines. Quality suffered if teams had to take shortcuts to deliver the functionality within the timeframe. Kanban focuses on maintaining an agreed-upon level of quality that must be met before considering work done. To support this model, stakeholders don't push work on teams that are already working at capacity. Instead, stakeholders add requests to a backlog that a team pulls into their workflow as capacity becomes available.
Impose a WIP limit
Teams that try to work on too many things at once can suffer from reduced productivity due to frequent and costly context switching. The team is busy, but work doesn't get done, resulting in unacceptably high lead times. Limiting the number of backlog items a team can work on at a time helps increase focus while reducing context switching. The items the team is currently working on are called work in progress (WIP). Teams decide on a WIP limit, or maximum number of items they can work on at one time. A well-disciplined team makes sure not to exceed their WIP limit. If teams exceed their WIP limits, they investigate the reason and work to address the root cause. A minimal sketch of this pull-and-limit rule is shown below.
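The following Python sketch is illustrative only; the class, column names, and example items are assumptions rather than part of any Kanban tool. It shows how a pull model with a WIP limit can be enforced: work is pulled from the backlog into the in-progress column only while the WIP count is below the agreed limit.

```python
from collections import deque
from typing import Optional

class KanbanBoard:
    """Toy Kanban board: items are pulled from the backlog into 'Doing'
    only when the work-in-progress (WIP) limit allows it."""

    def __init__(self, wip_limit: int):
        self.wip_limit = wip_limit
        self.backlog = deque()   # To-do column (no WIP limit)
        self.doing = []          # in-progress column (WIP limited)
        self.done = []           # Done column (no WIP limit)

    def add_request(self, item: str) -> None:
        # Stakeholders add requests to the backlog; they never push onto 'Doing'.
        self.backlog.append(item)

    def pull_next(self) -> Optional[str]:
        # The team pulls work only when capacity is available.
        if len(self.doing) >= self.wip_limit or not self.backlog:
            return None
        item = self.backlog.popleft()
        self.doing.append(item)
        return item

    def finish(self, item: str) -> None:
        # Completing an item frees capacity so the next item can be pulled.
        self.doing.remove(item)
        self.done.append(item)

# Usage: with a WIP limit of 2, only two items can be in progress at once.
board = KanbanBoard(wip_limit=2)
for request in ["login page", "payment API", "search fix"]:
    board.add_request(request)
print(board.pull_next(), board.pull_next(), board.pull_next())  # third pull returns None
```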
Measure continuous improvement
To practice continuous improvement, development teams need a way to measure effectiveness and throughput. Kanban boards provide a dynamic view of the states of work in a workflow, so teams can experiment with processes and more easily evaluate impact on workflows. Teams that embrace Kanban for continuous improvement use measurements like lead time and cycle time.
Kanban boards
The Kanban board is one of the tools teams use to implement Kanban practices. A Kanban board can be a physical board or a software application that shows cards arranged into columns. Typical column names are To-do, Doing, and Done, but teams can customize the names to match their workflow states. For example, a team might prefer to use New, Development, Testing, UAT, and Done. Software development-based Kanban boards display cards that correspond to product backlog items. The cards include links to other items, such as tasks and test cases. Teams can customize the cards to include information relevant to their process. On a Kanban board, the WIP limit applies to all in-progress columns. WIP limits don't apply to the first and last columns, because those columns represent work that hasn't started or is completed. Kanban boards help teams stay within WIP limits by drawing attention to columns that exceed the limits. Teams can then determine a course of action to remove the bottleneck.
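As a small illustration of the lead time and cycle time measurements mentioned above (the timestamps and field names are invented for this sketch), both can be computed from a work item's history: lead time runs from request creation to completion, while cycle time runs from the start of active work to completion.

```python
from datetime import datetime

# Hypothetical work-item history: when it was requested, started, and finished.
item = {
    "created":  datetime(2024, 3, 1, 9, 0),   # added to the backlog
    "started":  datetime(2024, 3, 4, 10, 0),  # pulled into 'Doing'
    "finished": datetime(2024, 3, 6, 16, 0),  # moved to 'Done'
}

lead_time = item["finished"] - item["created"]   # customer-visible wait
cycle_time = item["finished"] - item["started"]  # time spent actively working

print(f"Lead time:  {lead_time}")
print(f"Cycle time: {cycle_time}")
```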
Cumulative flow diagrams
A common addition to software development-based Kanban boards is a chart called a cumulative flow diagram (CFD). The CFD illustrates the number of items in each state over time, typically across several weeks. The horizontal axis shows the timeline, while the vertical axis shows the number of product backlog items. Colored areas indicate the states or columns the cards are currently in.
The CFD is particularly useful for identifying trends over time, including bottlenecks and other disruptions to progress velocity. A good CFD shows a consistent upward trend while a team is working on a project. The colored areas across the chart should be roughly parallel if the team is working within their WIP limits. A bulge in one or more of the colored areas usually indicates a bottleneck or impediment in the team's flow. In the following CFD, the completed work in green is flat, while the testing state in blue is growing, probably due to a bottleneck.
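A minimal sketch of the data behind a CFD follows; the counts are fabricated purely to illustrate the structure. For each day, the number of items in each state is recorded, and plotting these counts per state over time produces the colored bands described above.

```python
# Daily snapshot of how many backlog items are in each Kanban state.
# Values are invented for illustration only.
cfd_data = {
    "2024-03-04": {"To-do": 12, "Doing": 3, "Testing": 2, "Done": 5},
    "2024-03-05": {"To-do": 11, "Doing": 3, "Testing": 4, "Done": 5},
    "2024-03-06": {"To-do": 10, "Doing": 3, "Testing": 6, "Done": 5},
}

for day, counts in cfd_data.items():
    total = sum(counts.values())
    # 'Done' staying flat while 'Testing' grows signals a testing bottleneck.
    print(day, counts, "total items:", total)
```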
While broadly fitting under the umbrella of Agile development, Scrum and Kanban are quite different.
Scrum focuses on fixed-length sprints, while Kanban is a continuous flow model.
Scrum has defined roles, while Kanban doesn't define any team roles.
Scrum uses velocity as a key metric, while Kanban uses cycle time.
Teams commonly adopt aspects of both Scrum and Kanban to help them work most effectively. Regardless of which characteristics they choose, teams can always review and adapt until they find the best fit. Teams should start simple and not lose sight of the importance of delivering value regularly to users.
Kanban with GitHub
GitHub offers a Kanban experience through project boards (classic). These boards help you organize and prioritize work for specific feature development, comprehensive roadmaps, or release checklists. You can automate project boards (classic) to sync card status with associated issues and pull requests.
Delivery Pipeline:
A DevOps pipeline is a set of automated processes and tools that allows both developers and operations professionals to work cohesively to build and deploy code to a production environment. While a DevOps pipeline can differ by organization, it typically includes build automation/continuous integration, automation testing, validation, and reporting. It may also include one or more manual gates that require human intervention before code is allowed to proceed. Continuous is a differentiating characteristic of a DevOps pipeline. This includes continuous integration, continuous delivery/deployment (CI/CD), continuous feedback, and continuous operations. Instead of one-off tests or scheduled deployments, each function occurs on an ongoing basis.
Considerations for building a DevOps pipeline
Since there isn't one standard DevOps pipeline, an organization's design and implementation of a DevOps pipeline depends on its technology stack, a DevOps engineer's level of experience, budget, and more. A DevOps engineer should have wide-ranging knowledge of both development and operations, including coding, infrastructure management, system administration, and DevOps toolchains. Plus, each organization has a different technology stack that can impact the process. For example, if your codebase is Node.js, factors include whether you use a local proxy npm registry, whether you download the source code and run `npm install` at every stage in the pipeline, or do it once and generate an artifact that moves through the pipeline. Or, if an application is container-based, you need to decide whether to use a local or remote container registry, build the container once and move it through the pipeline, or rebuild it at every stage.
While every pipeline is unique, most organizations use similar fundamental components. Each step is evaluated for success before moving on to the next stage of the pipeline. In the event of a failure, the pipeline is stopped, and feedback is provided to the developer.
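The sketch below (illustrative Python; the stage names are assumptions rather than taken from any specific tool) mirrors that description: stages run in order, each one is evaluated before the next starts, and a failure stops the pipeline and reports feedback.

```python
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> bool:
    """Run pipeline stages in order; stop and report feedback on the first failure."""
    for name, stage in stages:
        print(f"Running stage: {name}")
        if not stage():
            print(f"Pipeline stopped: stage '{name}' failed. Feedback sent to the developer.")
            return False
    print("Pipeline succeeded: build is ready for release.")
    return True

# Placeholder stages; real ones would call build, test, and deploy tooling.
stages = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("integration tests", lambda: True),
    ("deploy to staging", lambda: True),
]
run_pipeline(stages)
```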
Continuous deployment entails having a level of continuous testing and operations that is so robust that new versions of software are validated and deployed into a production environment without requiring any human intervention. This is rare and in most cases unnecessary. It is typically only the unicorn businesses, who have hundreds or thousands of developers and many releases each day, that require, or even want to have, this level of automation. To simplify the difference between continuous delivery and continuous deployment, think of delivery as the FedEx person handing you a box, and deployment as you opening that box and using what's inside. If a change to the product is required between the time you receive the box and when you open it, the manufacturer is in trouble!
Continuous feedback
The single biggest pain point of the old waterfall method of software development, and consequently why agile methodologies were designed, was the lack of timely feedback. When new features took months or years to go from idea to implementation, it was almost guaranteed that the end result would be something other than what the customer expected or wanted. Agile succeeded in ensuring that developers received faster feedback from stakeholders. Now with DevOps, developers receive continuous feedback not only from stakeholders, but from systematic testing and monitoring of their code in the pipeline.
Continuous testing is a critical component of every DevOps pipeline and one of the primary enablers of continuous feedback. In a DevOps process, changes move continuously from development to testing to deployment, which leads not only to faster releases, but to a higher quality product. This means having automated tests throughout your pipeline, including unit tests that run on every build change, smoke tests, functional tests, and end-to-end tests.
Continuous monitoring is another important component of continuous feedback. A DevOps approach entails using continuous monitoring in the staging, testing, and even development environments. It is sometimes useful to monitor pre-production environments for anomalous behavior, but in general this is an approach used to continuously assess the health and performance of applications in production. Numerous tools and services exist to provide this functionality, and this may involve anything from monitoring your on-premises or cloud infrastructure, such as server resources and networking, to the performance of your application or its API interfaces.
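As an illustrative sketch only (the endpoint URL is a placeholder, and real setups use dedicated monitoring and alerting tools), continuous monitoring can be reduced to its simplest form: periodically checking a service's health endpoint and raising an alert when it fails.

```python
import urllib.request

def check_health(url: str) -> bool:
    """Return True if the service answers with HTTP 200 within 5 seconds."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except Exception:
        return False

# Hypothetical endpoint; a real setup would cover many services and metrics.
SERVICE_URL = "http://localhost:8080/health"

if not check_health(SERVICE_URL):
    # In practice this would page the on-call engineer via an alerting tool.
    print(f"ALERT: {SERVICE_URL} is unhealthy or unreachable")
else:
    print("service healthy")
```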
Continuous operations is a relatively new and less common term, and definitions vary. One way to interpret it is as "continuous uptime". For example, in the case of a blue/green deployment strategy, you have two separate production environments, one that is "blue" (publicly accessible) and one that is "green" (not publicly accessible). In this situation, new code would be deployed to the green environment, and when it was confirmed to be functional, a switch would be flipped (usually on a load balancer) and traffic would switch from the "blue" system to the "green" system. The result is no downtime for the end users. Another way to think of continuous operations is as continuous alerting. This is the notion that engineering staff are on call and notified if any performance anomalies in the application or infrastructure occur. In most cases, continuous alerting goes hand in hand with continuous monitoring.
continuousmonitoring.One of the main goals of DevOps is to improve the overall workflow in the
softwaredevelopment life cycle (SDLC). The flow of work is often described as WIP or work in progress.Improving
WIP can be accomplished by a variety of means. In order to effectively remove bottlenecks that decrease the flow of
WIP, one must first analyze the people, process, andtechnology aspects of the entire SDLC. These are the 11
bottlenecks that have the biggest impact on the flow of work .
1. Inconsistent Environments
In almost every company I have worked for or consulted with, a huge amount of waste exists because the various environments (dev, test, stage, prod) are configured differently. I call this "environment hell". How many times have you heard a developer say "it worked on my laptop"? As code moves from one environment to the next, software breaks because of the different configurations within each environment. I have seen teams waste days and even weeks fixing bugs that are due to environmental issues rather than errors within the code. Inconsistent environments are the number one killer of agility.
Create standard infrastructure blueprints and implement continuous delivery to ensure all environments are identical.
2. Manual Intervention
Manual intervention leads to human error and non-repeatable processes. Two areas where manual intervention can disrupt agility the most are testing and deployments. If testing is performed manually, it is impossible to implement continuous integration and continuous delivery in an agile manner (if at all). Manual testing also increases the chance of producing defects, creating unplanned work. When deployments are performed fully or partially manually, the risk of deployment failure increases significantly, which lowers quality and reliability and increases unplanned work.
Automate the build and deployment processes and implement a test automation methodology like test-driven development (TDD).
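As a hedged illustration of the TDD rhythm, the sketch below writes the test first for a hypothetical slugify() helper and then adds the minimal implementation that makes it pass; the function and its expected behaviour are invented for the example.

import unittest

# Test written first (the "red" step): it describes the behaviour we want
# from a hypothetical slugify() helper before that helper exists.
class SlugifyTests(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello DevOps World"), "hello-devops-world")

# Minimal implementation added afterwards to make the test pass ("green").
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

if __name__ == "__main__":
    unittest.main()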
3. SDLC Maturity
The maturity of a team's software development lifecycle (SDLC) has a direct impact on its ability to deliver software. There is nothing new here; SDLC maturity has plagued IT for decades. In the age of DevOps, where we strive to deliver software in shorter increments with a high degree of reliability and quality, it is even more critical for a team to have a mature process. Some companies I visit are still practicing waterfall methodologies. These companies struggle with DevOps because they don't have any experience with agile. But not all companies that practice agile do it well. Some are early in their agile journey, while others have implemented what I call "Wagile": waterfall tendencies with agile terminology sprinkled in. I have seen teams who have implemented Kanban but struggle with the prioritization and control of WIP. I have seen scrum teams struggle to complete the story points that they promised. It takes time to get really good at agile.
Invest in training and hold blameless post mortems to continuously solicit feedback and improve.
4. Legacy
7. Automating waste
A very common pattern I run into is the automation of waste. This occurs when a team declares itself a DevOps team, or a person declares themselves a DevOps engineer, and immediately starts writing hundreds or thousands of lines of Chef or Puppet scripts to automate their existing processes. The problem is that many of the existing processes are bottlenecks and need to be changed. Automating waste is like pouring concrete around unbalanced support beams: it makes bad design permanent.
Automate processes only after the bottlenecks are removed.
Agile
Lean
Waterfall
Iterative
Spiral
DevOps
Each of these approaches varies in some ways from the others, but all have a common purpose: to help teams deliver high-quality software as quickly and cost-effectively as possible.
1. Agile
The Agile model first emerged in 2001 and has since become the de facto industry standard. Some businesses value the Agile methodology so much that they apply it to other types of projects, including non-tech initiatives. In the Agile model, fast failure is a good thing. This approach produces ongoing release cycles, each featuring small, incremental changes from the previous release. At each iteration, the product is tested. The Agile model helps teams identify and address small issues on projects before they evolve into more significant problems, and it engages business stakeholders to give feedback throughout the development process. As part of their embrace of this methodology, many teams also apply an Agile framework known as Scrum to help structure more complex development projects. Scrum teams work in sprints, which usually last two to four weeks, to complete assigned tasks. Daily Scrum meetings help the whole team monitor progress throughout the project. And the Scrum Master is tasked with keeping the team focused on its goal.
2. Lean
The Lean model for software development is inspired by "lean" manufacturing practices and principles. The seven Lean principles (in this order) are: eliminate waste, amplify learning, decide as late as possible, deliver as fast as possible, empower the team, build in integrity, and see the whole. The Lean process is about working only on what must be worked on at the time, so there's no room for multitasking. Project teams are also focused on finding opportunities to cut waste at every turn throughout the SDLC process, from dropping unnecessary meetings to reducing documentation. The Agile model is actually a Lean method for the SDLC, but with some notable differences. One is how each prioritizes customer satisfaction: Agile makes it the top priority from the outset, creating a flexible process where project teams can respond quickly to stakeholder feedback throughout the SDLC. Lean, meanwhile, emphasizes the elimination of waste as a way to create more overall value for customers, which, in turn, helps to enhance satisfaction.
3. Waterfall
Some experts argue that the Waterfall model was never meant to be a process model for real projects. Regardless, Waterfall is widely considered the oldest of the structured SDLC methodologies. It's also a very straightforward approach: finish one phase, then move on to the next. No going back. Each stage relies on information from the previous stage and has its own project plan. The downside of Waterfall is its rigidity. Sure, it's easy to understand and simple to manage. But early delays can throw off the entire project timeline. With little room for revisions once a stage is completed, problems can't be fixed until you get to the maintenance stage. This model doesn't work well if flexibility is needed or if the project is long-term and ongoing. Even more rigid is the related Verification and Validation model, or V-shaped model. This linear development methodology sprang from the Waterfall approach. It's characterized by a corresponding testing phase for each development stage. Like Waterfall, each stage begins only after the previous one has ended. This SDLC model can be useful, provided your project has no unknown requirements.
4. Iterative
The Iterative model is repetition incarnate. Instead of starting with fully known requirements, project teams implement a set of software requirements, then test, evaluate, and pinpoint further requirements. A new version of the software is produced with each phase, or iteration. Rinse and repeat until the complete system is ready. An advantage of the Iterative model over other common SDLC methodologies is that it produces a working version of the project early in the process and makes it less expensive to implement changes. One disadvantage: repetitive processes can consume resources quickly. One example of an Iterative model is the Rational Unified Process (RUP), developed by IBM's Rational Software division. RUP is a process product, designed to enhance team productivity for a wide range of projects and organizations. RUP divides the development process into four phases:
Inception, when the idea for a project is set
Elaboration, when the project is further defined and resources are evaluated
Construction, when the project is developed and completed
Transition, when the product is released
Each phase of the project involves business modeling, analysis and design, implementation, testing, and deployment.
5. Spiral
One of the most flexible SDLC methodologies, Spiral takes a cue from the Iterative model and its repetition. The project passes through four phases (planning, risk analysis, engineering, and evaluation) over and over in a figurative spiral until completed, allowing for multiple rounds of refinement. The Spiral model is typically used for large projects. It enables development teams to build a highly customized product and incorporate user feedback early on. Another benefit of this SDLC model is risk management. Each iteration starts by looking ahead to potential risks and figuring out how best to avoid or mitigate them.
6. DevOps
The DevOps methodology is a relative newcomer to the SDLC scene. It emerged from two trends: the application of Agile and Lean practices to operations work, and the general shift in business toward seeing the value of collaboration between development and operations staff at all stages of the SDLC process.
In a DevOps model, Developers and Operations teams work together closely, and sometimes as one team, to accelerate innovation and the deployment of higher-quality and more reliable software products and functionalities. Updates to products are small but frequent. Discipline, continuous feedback and process improvement, and automation of manual development processes are all hallmarks of the DevOps model.
Amazon Web Services describes DevOps as the combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity, evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. So, like many SDLC models, DevOps is not only an approach to planning and executing work, but also a philosophy that demands a nontraditional mindset in an organization.
Choosing the right SDLC methodology for your software development project requires careful thought. But keep in mind that a model for planning and guiding your project is only one ingredient for success. Even more important is assembling a solid team of skilled talent committed to moving the project forward through every unexpected challenge or setback.
DevOps Lifecycle
DevOps defines an agile relationship between operations and development. It is a process that is practiced by the development team and operational engineers together from the beginning to the final stage of the product.
Learning DevOps is not complete without understanding the DevOps lifecycle phases. The DevOps lifecycle includes seven phases, as given below:
1) Continuous Development
This phase involves the planning and coding of the software. The vision of the project is decided during the planning phase, and the developers begin developing the code for the application. There are no DevOps tools required for planning, but there are several tools for maintaining the code.
2) Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which the developers are required to commit changes to the source code more frequently, which may be on a daily or weekly basis. Every commit is then built, which allows early detection of problems if they are present. Building the code involves not only compilation but also unit testing, integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing code; therefore, there is continuous development of the software. The updated code needs to be integrated continuously and smoothly with the systems to reflect changes to the end users. Jenkins is a popular tool used in this phase. Whenever there is a change in the Git repository, Jenkins fetches the updated code and prepares a build of that code, which is an executable file in the form of a WAR or JAR. This build is then forwarded to the test server or the production server.
3) Continuous Testing
In this phase, the developed software is continuously tested for bugs. For constant testing, automated testing tools such as TestNG, JUnit, and Selenium are used. These tools allow QAs to test multiple code bases thoroughly in parallel to ensure that there are no flaws in the functionality. In this phase, Docker containers can be used for simulating the test environment.
Selenium does the automation testing, and TestNG generates the reports. This entire testing phase can be automated with the help of a continuous integration tool called Jenkins.
Automation testing saves a lot of time and effort compared with executing the tests manually. Apart from that, report generation is a big plus: the task of evaluating the test cases that failed in a test suite gets simpler. We can also schedule the execution of the test cases at predefined times. After testing, the code is continuously integrated with the existing code.
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps process, where important information about the use of the software is recorded and carefully processed to find out trends and identify problem areas. Usually, the monitoring is integrated within the operational capabilities of the software application. It may occur in the form of documentation files, or it may produce large-scale data about the application parameters when the application is in continuous use. System errors such as "server not reachable" or low memory are resolved in this phase. Monitoring maintains the security and availability of the service.
5) Continuous Feedback
The application's development is consistently improved by analyzing the results from the operations of the software. This is carried out by placing the critical phase of constant feedback between the operations and the development of the next version of the current software application. Continuity is the essential factor in DevOps, as it removes the unnecessary steps otherwise required to take a software application from development, use it to find its issues, and then produce a better version. Skipping this feedback loop undermines the efficiency the application could achieve and reduces the number of interested customers.
6) Continuous Deployment
In this phase, the code is deployed to the production servers. It is also essential to ensure that the code is correctly used on all the servers.
The new code is deployed continuously, and configuration management tools play an essential role in executing tasks frequently and quickly. Some popular tools used in this phase are Chef, Puppet, Ansible, and SaltStack.
Containerization tools also play an essential role in the deployment phase. Vagrant and Docker are popular tools used for this purpose. These tools help to produce consistency across the development, staging, testing, and production environments. They also help in scaling instances up and down smoothly.
Containerization tools help to maintain consistency across the environments where the application is tested, developed, and deployed. There is little chance of errors or failure in the production environment because they package and replicate the same dependencies and packages used in the testing, development, and staging environments. This makes the application easy to run on different computers.
Devops influence on Architecture
DevOps Model
The DevOps model goes through several phases governed by cross-discipline teams. Those phases are as follows:
Planning, Identify, and Track Using the latest in project management tools and agile practices, track ideas and workflows visually. This gives all important stakeholders a clear pathway to prioritization and better results. With better oversight, project managers can ensure teams are on the right track and aware of potential obstacles and pitfalls. All applicable teams can better work together to solve any problems in the development process.
Development Phase Version control systems help developers continuously code, ensuring one patch connects seamlessly with the master branch. Each complete feature triggers the developer to submit a request that, if approved, allows the changes to replace existing code. Development is ongoing.
Testing Phase After a build is completed in development, it is sent to QA testing. Catching bugs is important to the user experience, and in DevOps bug testing happens early and often. Practices like continuous integration allow developers to use automation to build and test as a cornerstone of continuous development.
Deployment Phase In the deployment phase, most businesses strive to achieve continuous delivery. This means enterprises have mastered the art of manual deployment. After bugs have been detected and resolved, and the user experience has been perfected, a final team is responsible for the manual deployment. By contrast, continuous deployment is a DevOps approach that automates deployment after QA testing has been completed.
Management Phase During the post-deployment management phase, organizations monitor and maintain the DevOps architecture in place. This is achieved by reading and interpreting data from users, and by ensuring security, availability, and more.
A properly implemented DevOps approach comes with a number of benefits. These include the following, which we have selected to highlight:
Decreased Cost Operational cost is a primary concern for businesses, and DevOps helps organizations keep their costs low. Because efficiency gets a boost with DevOps practices, software production increases and businesses see decreases in overall production cost.
Increased Productivity and Release Time With shorter development cycles and streamlined processes, teams are more productive and software is deployed more quickly.
Customers are Served User experience, and by extension user feedback, is important to the DevOps process. By gathering information from clients and acting on it, those who practice DevOps ensure that clients' wants and needs are honored and customer satisfaction reaches new highs.
It Gets More Efficient with Time DevOps simplifies the development lifecycle, which in previous iterations had become increasingly complex. This ensures greater efficiency throughout a DevOps organization, as does the fact that gathering requirements also gets easier. In DevOps, requirements gathering is a streamlined process; a culture of accountability, collaboration, and transparency makes requirements gathering a smooth team effort where no stone is left unturned.
Monolithic software is designed to be self-contained, wherein the program's components or functions are tightly coupled rather than loosely coupled, as in modular software programs. In a monolithic architecture, each component and its associated components must all be present for code to be executed or compiled and for the software to run.
Monolithic applications are single-tiered, which means multiple components are combined into one large application. Consequently, they tend to have large codebases, which can be cumbersome to manage over time.
Furthermore, if one program component must be updated, other elements may also require rewriting, and the whole application has to be recompiled and tested. The process can be time-consuming and may limit the agility and speed of software development teams. Despite these issues, the approach is still in use because it does offer some advantages. Also, many early applications were developed as monolithic software, so the approach cannot be completely disregarded when those applications are still in use and require updates.
A monolithic architecture is the traditional unified model for the design of a software program. Monolithic, in this context, means "composed all in one piece." According to the Cambridge dictionary, the adjective monolithic also means both "too large" and "unable to be changed."
There are benefits to monolithic architectures, which is why many applications are still created using this development paradigm. For one, monolithic programs may have better throughput than modular applications. They may also be easier to test and debug because, with fewer elements, there are fewer testing variables and scenarios that come into play.
At the beginning of the software development lifecycle, it is usually easier to go with the monolithic architecture since development can be simpler during the early stages. A single codebase also simplifies logging, configuration management, application performance monitoring, and other development concerns. Deployment can also be easier, by copying the packaged application to a server. Finally, multiple copies of the application can be placed behind a load balancer to scale it horizontally.
That said, the monolithic approach is usually better for simple, lightweight applications. For more complex applications with frequent expected code changes or evolving scalability requirements, this approach is not suitable.
Generally, monolithic architectures suffer from drawbacks that can delay application development and deployment. These drawbacks become especially significant when the product's complexity increases or when the development team grows in size.
The code base of monolithic applications can be difficult to understand because it may be extensive, which can make it difficult for new developers to modify the code to meet changing business or technical requirements. As requirements evolve or become more complex, it becomes difficult to implement changes correctly without hampering the quality of the code and affecting the overall operation of the application.
Following each update to a monolithic application, developers must compile the entire codebase and redeploy the full application rather than just the part that was updated. This makes continuous or regular deployments difficult, which then affects the application's and the team's agility.
The application's size can also increase startup time and add to delays. In some cases, different parts of the application may have conflicting resource requirements. This makes it harder to find the resources required to scale the application.
1. There is always a bottleneck. Even in a serverless system, or one you think will "infinitely" scale, pressure will always be created elsewhere. For example, if your API scales, does your database also scale? If your database scales, does your email system? In modern cloud systems, there are so many components that scalability is not always the goal. Throttling systems are sometimes the best choice.
2. Your data model is linked to the scalability of your application. If your table design is garbage, your queries will be cumbersome, so accessing data will be slow. When designing a database (NoSQL or SQL), carefully consider your access pattern and what data you will have to filter. For example, with DynamoDB, you need to consider which "key" you will use to retrieve data. If that field is not set as the partition or sort key, it will force you to use a scan rather than a faster query (see the sketch after this list).
3. Scalability is mainly linked with cost. When you get to a large scale, consider systems where this relationship does not track linearly. If, like many, you have systems on RDS and ECS, these will scale nicely. But the downside is that as you scale, you will pay directly for that increased capacity. It's common for these workloads to cost $50,000 per month at scale. The solution is to migrate these workloads to serverless systems proactively.
4. Favour systems that require little tuning to make fast. The days of configuring your own servers are over. AWS, GCP and Azure all provide fantastic systems that don't need expert knowledge to achieve outstanding performance.
5. Use infrastructure as code. Terraform makes it easy to build repeatable and version-controlled infrastructure. It creates an ethos of collaboration and reduces errors by defining resources in code rather than "missing" a critical checkbox.
6. Use a PaaS if you're at less than 100k MAUs. With Heroku, Fly and Render, there is no need to spend hours configuring AWS and messing around with your application build process. Platform-as-a-service should be leveraged to deploy quickly and focus on the product.
7. Outsource systems outside of the market you are in. Don't roll your own CMS or Auth, even if it costs you tonnes. If you go to the pricing page of many third-party systems, the enterprise-scale cost is insane - think $10,000 a month for an authentication system! "I could make that in a week," you think. That may be true, but it doesn't consider the long-term maintenance and the time you cannot spend on your core product. Where possible, buy off the shelf.
8. You have three levers: quality, cost and time. You have to balance them accordingly. You have, at best, 100 "points" to distribute between the three. Of course, you always want to maintain quality, so the other levers to pull are time and cost.
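To illustrate the query-versus-scan point from item 2 above, here is a small sketch using boto3 against a hypothetical DynamoDB table named "orders" with partition key customer_id and sort key order_date; it assumes AWS credentials are already configured.

import boto3
from boto3.dynamodb.conditions import Attr, Key

# Hypothetical table: partition key "customer_id", sort key "order_date".
table = boto3.resource("dynamodb").Table("orders")

# Query: uses the key schema, so DynamoDB reads only the matching items.
orders = table.query(
    KeyConditionExpression=Key("customer_id").eq("c-123")
)["Items"]

# Scan: reads the whole table and filters afterwards -- far slower and more
# expensive at scale, which is what poor key design forces you into.
orders_via_scan = table.scan(
    FilterExpression=Attr("customer_id").eq("c-123")
)["Items"]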
Separation of concerns is a software architecture design pattern/principle for separating an application into distinct sections, so each section addresses a separate concern. At its essence, separation of concerns is about order. The overall goal of separation of concerns is to establish a well-organized system where each part fulfills a meaningful and intuitive role while maximizing its ability to adapt to change.
Separation of concerns in software architecture is achieved by the establishment of boundaries. A boundary is any logical or physical constraint which delineates a given set of responsibilities. Some examples of boundaries would include the use of methods, objects, components, and services to define core behavior within an application; projects, solutions, and folder hierarchies for source organization; and application layers and tiers for processing organization.
1. Lack of duplication and singularity of purpose of the individual components render the overall system easier to maintain.
2. The system becomes more stable as a byproduct of the increased maintainability.
3. The strategies required to ensure that each component only concerns itself with a single set of cohesive responsibilities often result in natural extensibility points.
4. The decoupling which results from requiring components to focus on a single purpose leads to components which are more easily reused in other systems, or in different contexts within the same system.
5. The increase in maintainability and extensibility can have a major impact on the marketability and adoption rate of the system.
There are several flavors of separation of concerns: horizontal separation, vertical separation, data separation, and aspect separation. In this article, we will restrict ourselves to horizontal and aspect separation of concerns.
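A minimal sketch of horizontal separation follows, with a data access layer, a business layer, and a thin presentation function; the class and method names are illustrative only.

class OrderRepository:
    """Data access layer: only knows how to store and fetch orders."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id: str, total: float) -> None:
        self._orders[order_id] = total

    def get(self, order_id: str) -> float:
        return self._orders[order_id]

class OrderService:
    """Business layer: applies rules, knows nothing about presentation."""
    def __init__(self, repository: OrderRepository):
        self._repository = repository

    def place_order(self, order_id: str, total: float) -> None:
        if total <= 0:
            raise ValueError("order total must be positive")
        self._repository.save(order_id, total)

    def order_total(self, order_id: str) -> float:
        return self._repository.get(order_id)

def handle_request(service: OrderService) -> str:
    """Presentation layer: formats a response and delegates the work."""
    service.place_order("o-1", 99.5)
    return f"order o-1 stored with total {service.order_total('o-1')}"

if __name__ == "__main__":
    print(handle_request(OrderService(OrderRepository())))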
Database schemas define the structure and interrelations of data managed by relational databases. While it is important to develop a well-thought-out schema at the beginning of your projects, evolving requirements make changes to your initial schema difficult or impossible to avoid. And since the schema manages the shape and boundaries of your data, changes must be carefully applied to match the expectations of the applications that use it and to avoid losing data currently held by the database system.
Migrations manage incremental, often reversible, changes to data structures in a programmatic way. The goals of database migration software are to make database changes repeatable, shareable, and testable without loss of data. Generally, migration software produces artifacts that describe the exact set of operations required to transform a database from a known state to the new state. These can be checked into and managed by normal version control software to track changes and share them among team members.
While preventing data loss is generally one of the goals of migration software, changes that drop or destructively modify structures that currently house data can result in deletion. To cope with this, migration is often a supervised process involving inspecting the resulting change scripts and making any modifications necessary to preserve important information.
Migrations are helpful because they allow database schemas to evolve as requirements change. They help developers plan, validate, and safely apply schema changes to their environments. These compartmentalized changes are defined on a granular level and describe the transformations that must take place to move between various "versions" of the database.
In general, migration systems create artifacts or files that can be shared, applied to multiple database systems, and stored in version control. This helps construct a history of modifications to the database that can be closely tied to accompanying code changes in the client applications. The database schema and the application's assumptions about that structure can evolve in tandem.
Some other benefits include being allowed (and sometimes required) to manually tweak the process by separating the generation of the list of operations from their execution. Each change can be audited, tested, and modified to ensure that the correct results are obtained while still relying on automation for the majority of the process.
State based migration software creates artifacts that describe how to recreate the desired database state from scratch. The files that it produces can be applied to an empty relational database system to bring it fully up to date.
After the artifacts describing the desired state are created, the actual migration involves comparing the generated files against the current state of the database. This process allows the software to analyze the difference between the two states and generate a new file or files to bring the current database schema in line with the schema described by the files. These change operations are then applied to the database to reach the goal state.
Like almost all migrations, state based migration files must be carefully examined by knowledgeable developers to oversee the process. Both the files describing the desired final state and the files that outline the operations to bring the current database into compliance must be reviewed to ensure that the transformations will not lead to data loss. For example, if the generated operations attempt to rename a table by deleting the current one and recreating it with its new name, a knowledgeable human must recognize this and intervene to prevent data loss.
State based migrations can feel rather clumsy if there are frequent major changes to the database schema that require this type of manual intervention. Because of this overhead, the technique is often better suited to scenarios where the schema is well thought out ahead of time and fundamental changes occur infrequently.
However, state based migrations do have the advantage of producing files that fully describe the database state in a single context. This can help new developers onboard more quickly and works well with workflows in version control systems, since conflicting changes introduced by code branches can be resolved easily.
The major alternative to state based migrations is a change based migration system. Change based migrations also produce files that alter the existing structures in a database to arrive at the desired state. Rather than discovering the differences between the desired database state and the current one, this approach builds off of a known database state to define the operations that bring it into the new state. Successive migration files are produced to modify the database further, creating a series of change files that can reproduce the final database state when applied consecutively.
Because change based migrations work by outlining the operations required to get from a known database state to the desired one, an unbroken chain of migration files is necessary from the initial starting point. This system requires an initial state (which may be an empty database system or a file describing the starting structure), the files describing the operations that take the schema through each transformation, and a defined order in which the migration files must be applied.
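The following is a minimal sketch of a change based migration runner, assuming a directory of ordered SQL files (for example 001_create_users.sql, 002_add_email.sql) and an SQLite database; established tools follow the same basic pattern of a tracking table plus ordered change files.

import pathlib
import sqlite3

# Hypothetical directory holding ordered migration files.
MIGRATIONS_DIR = pathlib.Path("migrations")

def apply_migrations(db_path: str = "app.db") -> None:
    connection = sqlite3.connect(db_path)
    # The tracking table records which migration files have already been run.
    connection.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (filename TEXT PRIMARY KEY)"
    )
    applied = {
        row[0] for row in connection.execute("SELECT filename FROM schema_migrations")
    }
    for path in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if path.name in applied:
            continue  # already applied in an earlier run
        connection.executescript(path.read_text())
        connection.execute(
            "INSERT INTO schema_migrations (filename) VALUES (?)", (path.name,)
        )
        connection.commit()
        print(f"applied {path.name}")
    connection.close()

if __name__ == "__main__":
    apply_migrations()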
Change based migrations trace the provenance of the database schema design back to the original structure through the series of transformation scripts that they create. This can help illustrate the evolution of the database structure, but it is less helpful for understanding the complete state of the database at any one point, since the changes described in each file modify the structure produced by the last migration file.
Since the previous state is so important to change based systems, the system often uses a tracking table within the database itself to record which migration files have been applied. This helps the software understand what state the system is currently in without having to analyze the current structure and compare it against the desired state, known only by compiling the entire series of migration files.
The disadvantage of this approach is that the current state of the database isn't described in the code base after the initial point. Each migration file builds off of the previous one, so while the changes are nicely compartmentalized, the entire database state at any one point is much harder to reason about. Furthermore, because the order of operations is so important, it can be more difficult to resolve conflicts produced by developers making conflicting changes.
Change based systems, however, do have the advantage of allowing for quick, iterative changes to the database structure. Instead of the time-intensive process of analyzing the current state of the database, comparing it to the desired state, creating files to perform the necessary operations, and applying them to the database, change based systems assume the current state of the database based on the previous changes. This generally makes changes more lightweight, but it does make out-of-band changes to the database especially dangerous, since migrations can leave the target systems in an undefined state.
Microservices
Microservices, often referred to as microservices architecture, is an architectural approach that involves dividing large applications into smaller, functional units capable of functioning and communicating independently.
This approach arose in response to the limitations of monolithic architecture. Because monoliths are large containers holding all software components of an application, they are severely limited: inflexible, unreliable, and often slow to develop.
With microservices, however, each unit is independently deployable but can communicate with the others when necessary. Developers can now achieve the scalability, simplicity, and flexibility needed to create highly sophisticated software.
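A microservice can be illustrated with a deliberately tiny, single-purpose HTTP service. The sketch below uses only the Python standard library; the port, route, and price data are invented for the example.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative data owned entirely by this one service.
PRICES = {"book": 12.5, "pen": 1.2}

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        item = self.path.strip("/")          # e.g. GET /book
        if item in PRICES:
            body = json.dumps({"item": item, "price": PRICES[item]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown item"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A hypothetical port; each microservice runs and scales on its own.
    HTTPServer(("0.0.0.0", 8001), PriceHandler).serve_forever()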
Microservices architecture presents developers and engineers with a number of benefits that monoliths cannot provide. Here are a few of the most notable.
Smaller development teams can work in parallel on different components to update existing functionalities. This makes it significantly easier to identify hot services, scale them independently from the rest of the application, and improve the application.
2. Improved scalability
Microservices launch individual services independently, developed in different languages or technologies; all tech stacks are compatible, allowing DevOps teams to choose the most efficient tech stacks without fearing whether they will work well together. These small services run on relatively less infrastructure than monolithic applications by scaling precisely the selected components per their requirements.
3. Independent deployment
Each microservice constituting an application needs to be a full stack. This enables microservices to be deployed independently at any point. Since microservices are granular in nature, development teams can work on one microservice, fix errors, then redeploy it without redeploying the entire application.
Microservice architecture is agile and thus does not need a congressional act to modify the program by adding or changing a line of code or adding or eliminating features. The approach helps streamline business structures through improved resilience and fault isolation.
4. Error isolation
In monolithic applications, the failure of even a small component of the overall application can make it inaccessible. In some cases, determining the error can also be tedious. With microservices, isolating the problem-causing component is easy, since the entire application is divided into standalone, fully functional software units. If errors occur, other non-related units will still continue to function.
With microservices, developers have the freedom to pick the tech stack best suited to a particular microservice and its functions. Instead of opting for one standardized tech stack encompassing all of an application's functions, they have complete control over their options.
Put simply: microservices architecture makes app development quicker and more efficient. Agile deployment capabilities combined with the flexible application of different technologies drastically reduce the duration of the development cycle. The following are some of the most vital applications of microservices architecture.
Data processing
Since applications running on microservice architecture can handle more simultaneous requests, microservices can process large amounts of information in less time. This allows for faster and more efficient application performance.
Media content
Companies like Netflix and Amazon Prime Video handle billions of API requests daily. Services such as OTT platforms offering users massive media content will benefit from deploying a microservices architecture. Microservices will ensure that the plethora of requests for different subdomains worldwide is processed without delays or errors.
Website migration
Website migration involves a substantial change and redevelopment of a website's major areas, such as its domain, structure, user interface, etc. Using microservices will help you avoid business-damaging downtime and ensure your migration plans execute smoothly without any hassles.
Microservices are perfect for applications handling high payment and transaction volumes and generating invoices for the same. The failure of an application to process payments can cause huge losses for companies. With the help of microservices, the transaction functionality can be made more robust without changing the rest of the application.
Microservices tools
Building a microservices architecture requires a mix of tools and processes to perform the core building tasks and
support the overall framework. Some of these tools are listed below.
1. Operating system
The most basic tool required to build an application is an operating system (OS). One operating system that allows great flexibility in development and use is Linux. It offers a largely self-contained environment for executing program code and a series of options for large and small applications in terms of security, storage, and networking.
2. Programming languages
One of the benefits of using a microservices architecture is that you can use a variety of programming languages across applications for different services. Different programming languages have different utilities, deployed based on the nature of the microservice.
3. API management and testing tools
The various services need to communicate when building an application using a microservices architecture. This is accomplished using application programming interfaces (APIs). For APIs to work optimally and desirably, they need to be constantly monitored, managed, and tested, and API management and testing tools are essential for this.
4. Messaging tools
Messaging tools enable microservices to communicate both internally and externally. RabbitMQ and Apache Kafka are examples of messaging tools deployed as part of a microservice system.
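As a hedged example of how a service might publish an event to a message broker, the sketch below uses the pika client for RabbitMQ; it assumes a broker running on localhost and a queue named "orders", both of which are illustrative choices.

import json
import pika  # third-party Python client for RabbitMQ

# Connect to a hypothetical local broker and declare a durable queue.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)

# Publish an illustrative order event for other services to consume.
channel.basic_publish(
    exchange="",                 # default exchange routes by queue name
    routing_key="orders",
    body=json.dumps({"order_id": 42, "item": "book"}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
print("order event published")
connection.close()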
5. Toolkits
Toolkits in a microservices architecture are tools used to build and develop applications. Different toolkits are available to developers, and these kits fulfill different purposes. Fabric8 and Seneca are some examples of microservices toolkits.
6. Architectural frameworks
Microservices architectural frameworks offer convenient solutions for application development and usually contain a library of code and tools to help configure and deploy an application.
7. Orchestration tools
A container is a set of executables, code, libraries, and files necessary to run a microservice. Container orchestration tools provide a framework to manage and optimize containers within microservices architecture systems.
8. Monitoring tools
Once a microservices application is up and running, you must constantly monitor it to ensure everything is working smoothly and as intended. Monitoring tools help developers stay on top of the application's work and avoid potential bugs or glitches.
9. Serverless tools
Serverless tools further add flexibility and mobility to the various microservices within an application by eliminating server dependency. This helps in the easier rationalization and division of application tasks.
With monolithic architectures, all processes are tightly coupled and run as a single service. This means that if one process of the application experiences a spike in demand, the entire architecture must be scaled. Adding or improving a monolithic application's features becomes more complex as the code base grows. This complexity limits experimentation and makes it difficult to implement new ideas. Monolithic architectures add risk for application availability because many dependent and tightly coupled processes increase the impact of a single process failure.
With a microservices architecture, an application is built as independent components that run each application process as a service. These services communicate via a well-defined interface using lightweight APIs. Services are built for business capabilities, and each service performs a single function. Because they are independently run, each service can be updated, deployed, and scaled to meet demand for specific functions of an application.
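Because each service exposes a lightweight API, a consumer depends only on that interface, not on the service's internals. The sketch below is a hypothetical client for the small price service sketched earlier in this unit.

import json
import urllib.request

# Hypothetical base URL; in production this would come from service discovery
# or configuration rather than being hard-coded.
def fetch_price(item: str, base_url: str = "http://localhost:8001") -> float:
    with urllib.request.urlopen(f"{base_url}/{item}", timeout=5) as response:
        payload = json.loads(response.read().decode())
    return payload["price"]

if __name__ == "__main__":
    print(fetch_price("book"))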
Data tier
The data tier in DevOps refers to the layer of the application architecture that is responsible for storing, retrieving, and processing data. The data tier is typically composed of databases, data warehouses, and data processing systems that manage large amounts of structured and unstructured data.
In DevOps, the data tier is considered an important aspect of the overall application architecture and is typically managed as part of the DevOps process. This includes:
1. Data management and migration: Ensuring that data is properly managed and migrated as part of the software delivery pipeline.
2. Data backup and recovery: Implementing data backup and recovery strategies to ensure that data can be recovered in case of failures or disruptions.
3. Data security: Implementing data security measures to protect sensitive information and comply with regulations.
4. Data performance optimization: Optimizing data performance to ensure that applications and services perform well, even with large amounts of data.
5. Data integration: Integrating data from multiple sources to provide a unified view of data and support business decisions.
By integrating data management into the DevOps process, teams can ensure that data is properly managed and protected, and that data-driven applications and services perform well and deliver value to customers.
The operations side consists of the administrative processes, services, and support for the software. When development and operations are combined and collaborating, the DevOps architecture is the solution that closes the gap between deployment and operations; therefore, delivery can be faster.
DevOps architecture is used for applications hosted on cloud platforms and for large distributed applications. Agile development is used in the DevOps architecture so that integration and delivery can be continuous. When the development and operations teams work separately from each other, it is time-consuming to design, test, and deploy, and if the teams are not in sync with each other, it may cause a delay in the delivery. So DevOps enables the teams to fix their shortcomings and increases productivity.
Below are the various components that are used in the DevOps architecture:
DevOps Components
1. Build
Without DevOps, the cost of resource consumption was evaluated based on pre-defined individual usage with fixed hardware allocation. With DevOps, the usage of cloud and the sharing of resources come into the picture, and the build is dependent upon the user's need, which is a mechanism to control the usage of resources or capacity.
2. Code
Good practices such as Git enable the code to be used effectively: they ensure the code is written for the business, help to track changes, notify developers about the reasons behind differences between the actual and the expected output, and, if necessary, allow reverting to the original code. The code can be appropriately arranged in files and folders, and it can be reused.
3. Test
The application will be ready for production after testing. In the case of manual testing, more time is consumed in testing and in moving the code to the output. Testing can be automated, which decreases the time for testing so that the time to deploy the code to production can be reduced, as automating the running of the scripts removes many manual steps.
4. Plan
DevOps uses the Agile methodology to plan the development. With the operations and development teams in sync, it helps in organizing the work and planning accordingly to increase productivity.
5. Monitor
Continuous monitoring is used to identify any risk of failure. It also helps in tracking the system accurately so that the health of the application can be checked. Monitoring becomes more comfortable with services where the log data may be monitored through many third-party tools such as Splunk.
6. Deploy
Many systems support schedulers for automated deployment. A cloud management platform enables users to capture accurate insights and view the optimization scenario and analytics on trends through the deployment of dashboards.
7. Operate
DevOps changes the traditional approach of developing and testing separately. The teams operate in a collaborative way where both teams actively participate throughout the service lifecycle. The operations team interacts with developers, and they come up with a monitoring plan which serves the IT and business requirements.
8. Release
Deployment to an environment can be done by automation. But when the deployment is made to the production environment, it is done by manual triggering. Many processes involved in release management are commonly used to perform the deployment in the production environment manually, to lessen the impact on customers.
DevOps resilience
DevOps resilience refers to the ability of a DevOps system to withstand and recover from failures and disruptions. This means ensuring that the systems and processes used in DevOps are robust, scalable, and able to adapt to changing conditions. Some of the key components of DevOps resilience include:
1. Infrastructure automation: Automating infrastructure deployment, scaling, and management helps to ensure that systems are deployed consistently and are easier to manage in case of failures or disruptions.
2. Monitoring and logging: Monitoring systems, applications, and infrastructure in real time and collecting logs can help detect and diagnose issues quickly, reducing downtime.
3. Disaster recovery: Having a well-designed disaster recovery plan and regularly testing it can help ensure that systems can quickly recover from disruptions.
4. Continuous testing: Continuously testing systems and applications can help identify and fix issues before they become critical.
5. High availability: Designing systems for high availability helps to ensure that systems remain up and running even in the event of failures or disruptions.
By focusing on these components, DevOps teams can create a resilient and adaptive DevOps system that is able to deliver high-quality applications and services, even in the face of failures and disruptions.
Unit 3 Introduction to project management
Source code control (also known as version control) is an essential part of DevOps practices. Here are a few reasons why:
Collaboration: Source code control allows multiple team members to work on the same codebase simultaneously and track each other's changes.
Traceability: Source code control systems provide a complete history of changes to the code, enabling teams to trace bugs, understand why specific changes were made, and roll back to previous versions if necessary.
Branching and merging: Teams can create separate branches for different features or bug fixes, then merge the changes back into the main codebase. This helps to ensure that different parts of the code can be developed independently, without interfering with each other.
The history of source code management (SCM) in DevOps dates back to the early days of software development. Early SCM systems were simple and focused on tracking changes to source code over time.
In the late 1990s and early 2000s, the open-source movement and the rise of the internet led to a proliferation of new SCM tools, including CVS (Concurrent Versions System), Subversion, and Git. These systems made it easier for developers to collaborate on projects, manage multiple versions of code, and automate the build, test, and deployment process.
As DevOps emerged as a software development methodology in the mid-2000s, SCM became an integral part of the DevOps toolchain. DevOps teams adopted Git as their SCM tool of choice, leveraging its distributed nature, branch and merge capabilities, and integration with CI/CD pipelines.
Today, Git is the most widely used SCM system in the world, and it is a critical component of DevOps practices. With the rise of cloud-based platforms, modern SCM systems also offer features like collaboration, code reviews, and integrated issue tracking.
In DevOps, roles and code play a critical role in the development, delivery, and operation of software.
Roles:
Code:
Code is the backbone of DevOps and represents the software that is being developed, tested, deployed, and maintained.
Code is managed using source code control systems like Git, which provide a way to track changes to the code over time, collaborate on the code with other team members, and automate the build, test, and deployment process.
Code is continuously integrated and tested, ensuring that any changes to the code do not cause unintended consequences in the production environment.
In conclusion, both roles and code play a critical role in DevOps. Teams work together to ensure that code is developed, tested, and delivered quickly and reliably to production, while operations teams maintain the code in production and respond to any issues that arise.
Overall, SCM has been an important part of the evolution of DevOps, enabling teams to collaborate, manage code changes, and automate the software delivery process.
Source code management system and migrations
A source code management (SCM) system is a software application that provides version control for source code. It tracks changes made to the code over time, enabling teams to revert to previous versions if necessary, and helps ensure that code can be collaborated on by multiple team members.
SCM systems typically provide features such as version tracking, branching and merging, change history, and rollback capabilities. Some popular SCM systems include Git, Subversion, Mercurial, and Microsoft Team Foundation Server.
Source code management (SCM) systems are often used to manage code migrations, which are the process of moving code from one environment to another. This is typically done as part of a software development project, where code is moved from a development environment to a testing environment and finally to a production environment.
SCM systems provide a number of benefits for managing code migrations, including:
Version control
Branching and merging
Rollback
Collaboration
Automation
Version control: SCM systems keep a record of all changes to the code, enabling teams to track the code as it moves through different environments.
Version control integrates the work that is done simultaneously by different members of the team. In some rare cases, when conflicting edits are made by two people to the same line of a file, the version control system requests human assistance in deciding what should be done.
Version control provides access to the historical versions of a project. This is insurance against computer crashes or data loss. If any mistake is made, you can easily roll back to a previous version. It is also possible to undo specific edits without losing the work done in the meantime. It can be easily known when, why, and by whom any part of a file was edited.
Version control also:
Improves productivity, expedites product delivery, and leverages the skills of employees through better communication and assistance;
Reduces the possibility of errors and conflicts during project development through traceability of every small change;
Allows employees or contributors to the project to contribute from anywhere, irrespective of their geographical locations;
Maintains, for each contributor, a different working copy that is not merged into the main file unless the working copy is validated. The most popular examples are Git, Helix Core, and Microsoft TFS;
Informs us about who, what, when, and why changes have been made.
Local Version Control Systems: This is one of the simplest forms and has a database that keeps all the changes to files under revision control. RCS is one of the most common VCS tools. It keeps patch sets (differences between files) in a special format on disk. By adding up all the patches it can then re-create what any file looked like at any point in time.
Centralized Version Control Systems: Centralized version control systems contain just one repository globally, and every user needs to commit for their changes to be reflected in the repository. It is possible for others to see your changes by updating. Two things are required to make your changes visible to others:
You commit
They update
The benefit of CVCS (Centralized Version Control Systems) is that it enables collaboration amongst developers and provides, to a certain extent, insight into what everyone else is doing on the project. It allows administrators fine-grained control over who can do what.
It has some downsides as well, which led to the development of DVCS. The most obvious is the single point of failure that the centralized repository represents: if it goes down, then during that period collaboration and saving versioned changes are not possible. And what if the hard disk of the central database becomes corrupted, and proper backups haven't been kept? You lose absolutely everything.
Distributed version control systems contain multiple repositories. Each user has their own repository and working copy. Just committing your changes will not give others access to them, because a commit only reflects those changes in your local repository; you need to push them in order to make them visible on the central repository. Similarly, when you update, you do not get others' changes unless you have first pulled those changes into your repository. To make your changes visible to others, four things are required:
You commit
You push
They pull
They update
The most popular distributed version control systems are Git and Mercurial. They help us overcome the problem of a single point of failure.
Branching and merging: Teams can create separate branches of code for different environments, making it easier to manage the migration process.
Branching and merging are key concepts in Git-based version control systems, and are widelyused in DevOps to
manage the development of software.
Branching in Git allows developers to create a separate line of development for a new feature or bug fix. This
allows developers to make changes to the code without affecting the main branch, and to collaborate with others on
the same feature or bug fix.
Merging in Git is the process of integrating changes made in one branch into another branch. InDevOps, merging is
often used to integrate changes made in a feature branch into the main branch, incorporating the changes into the
codebase.
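A short command-line sketch of this branch-and-merge workflow (the branch name feature/login is only an example):

# create and switch to a feature branch
git checkout -b feature/login

# work on the feature, then record the changes on that branch
git add .
git commit -m "Add login form"

# integrate the finished feature back into the main branch
git checkout main
git merge feature/login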
Improved collaboration: By allowing multiple developers to work on the same codebase at the same time, branching and merging facilitate collaboration and coordination among team members.
Improved code quality: By isolating changes made in a feature branch, branching and merging make it easier to thoroughly review and test changes before they are integrated into the main codebase, reducing the risk of introducing bugs or other issues.
Increased transparency: By tracking all changes made to the codebase, branching and merging provide a clear audit trail of how code has evolved over time.
Overall, branching and merging are essential tools in the DevOps toolkit, helping to improve collaboration, code quality, and transparency in the software development process.
Rollback: In the event of a problem during a migration, teams can quickly revert to a previous version of the code.
Rollback in DevOps refers to the process of reverting a change or returning to a previous version of a system, application, or infrastructure component.
Rollback is an important capability in DevOps, as it provides a way to quickly and efficiently revert changes that have unintended consequences or cause problems in production.
1. Version control: By using a version control system, such as Git, DevOps teams can revert to a previous version of the code by checking out an earlier commit.
2. Infrastructure as code: By using infrastructure as code tools, such as Terraform or Ansible, DevOps teams can roll back changes to their infrastructure by re-applying an earlier version of the code.
3. Continuous delivery pipelines: DevOps teams can use continuous delivery pipelines to automate the rollback process, by automatically reverting changes to a previous version of the code or infrastructure if tests fail or other problems are detected.
4. Snapshots: DevOps teams can use snapshots to quickly restore an earlier version of a system or infrastructure component.
Overall, rollback is an important capability in DevOps, providing a way to quickly revert changes that have unintended consequences or cause problems in production. By using a combination of version control, infrastructure as code, continuous delivery pipelines, and snapshots, DevOps teams can ensure that their systems and applications can be quickly and easily rolled back to a previous version if needed.
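For the version-control case, a rollback sketch with Git might look like this (the commit hashes are placeholders for real commit identifiers):

# create a new commit that undoes a single bad commit, keeping history intact
git revert <bad-commit-hash>

# or check out an earlier known-good commit to inspect or rebuild from it
git checkout <known-good-commit-hash>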
Collaboration:
SCM systems enable teams to collaborate on code migrations, with team members working on different aspects of the migration process simultaneously.
Collaboration is a key aspect of DevOps, as it helps to bring together development, operations, and other teams to work together towards a common goal of delivering high-quality software quickly and efficiently. In DevOps, collaboration is facilitated by a range of tools and practices, including:
Version control systems: By using a version control system, such as Git, teams can collaborate on code development, track changes to source code, and merge code changes from multiple contributors.
Continuous integration and continuous deployment (CI/CD): By automating the build, test, and deployment of code, CI/CD pipelines help to streamline the development process and reduce the risk of introducing bugs or other issues into the codebase.
Code review: By using code review tools, such as pull requests, teams can collaborate on code development, share feedback, and ensure that changes are thoroughly reviewed and tested before they are integrated into the codebase.
Issue tracking: By using issue tracking tools, such as JIRA or GitHub Issues, teams can collaborate on resolving bugs, tracking progress, and managing the development of new features.
Communication tools: By using communication tools, such as Slack or Microsoft Teams, teams can collaborate and coordinate their work, share information, and resolve problems quickly and efficiently.
Overall, collaboration is a critical component of DevOps, helping teams to work together effectively and efficiently to deliver high-quality software. By using a range of tools and practices to facilitate collaboration, DevOps teams can improve the transparency, speed, and quality of their software development processes.
Automation: Many SCM systems integrate with continuous integration and delivery (CI/CD) pipelines, enabling teams to automate the migration process.
In conclusion, SCM systems play a critical role in managing code migrations. They provide a way to track code changes, collaborate on migrations, and automate the migration process, enabling teams to deliver code quickly and reliably to production.
Shared authentication
Shared authentication in DevOps refers to the practice of using a common identity management system to control access to the various tools, resources, and systems used in software development and operations.
This helps to simplify the process of managing users and permissions and ensures that everyone has the necessary access to perform their jobs. Examples of shared authentication systems include Active Directory, LDAP, and SAML-based identity providers.
Hosted Git servers are online platforms that provide Git repository hosting services for software development teams. They are widely used in DevOps to centralize version control of source code, track changes, and collaborate on code development. Some popular hosted Git servers include GitHub, GitLab, and Bitbucket. These platforms offer features such as pull requests, code reviews, issue tracking, and continuous integration/continuous deployment (CI/CD) pipelines. By using a hosted Git server, DevOps teams can streamline their development processes and collaborate more efficiently on code projects.
GitHub: One of the largest Git repository hosting services, GitHub is widely used by developers for version control, collaboration, and code sharing.
GitLab: An open-source Git repository management platform that provides version control, issue tracking, code review, and more.
Bitbucket: A web-based Git repository hosting service that provides version control, issue tracking, and project management tools.
Gitea: An open-source Git server that is designed to be lightweight, fast, and easy to use.
Gogs: Another open-source Git server, Gogs is designed for small teams and organizations and provides a simple, user-friendly interface.
GitBucket: A Git server written in Scala that provides a wide range of features, including issue tracking, pull requests, and code reviews.
Organizations can choose the Git server implementation that best fits their needs, taking into account factors such as cost, scalability, and security requirements.
Docker intermission
Docker is an open-source project with a friendly-whale logo that facilitates the deployment of applications in software containers. It is a set of PaaS products that deliver containers (software packages) using OS-level virtualization. It builds on the resource isolation features of the Linux kernel but offers a friendly API.
In simple words, Docker is a tool or platform designed to simplify the process of creating, deploying, packaging, and shipping applications along with their parts, such as libraries and other dependencies. Its primary purpose is to automate the deployment of applications, providing:
Rapid deployment
Faster configurations
Seamless portability
A Virtual Machine is an application environment that imitates dedicated hardware by providing an emulation of the computer system. Docker and VMs both have their own sets of benefits and uses, and when it comes to running applications in multiple environments, both can be utilized. So which one wins? Let's get into a quick Docker vs. VM comparison.
OS support and memory: A VM requires a full guest operating system and therefore a lot of memory when installed, whereas Docker containers share the host OS and occupy less space.
Performance: Running several VMs can affect performance, whereas Docker containers are run by a single Docker engine; thus, they provide better performance.
Boot-up time: VMs have a longer boot-up time compared to Docker.
Efficiency: VMs have lower efficiency than Docker.
Scaling: VMs are difficult to scale up, whereas Docker is easy to scale up.
Space allocation: You cannot share data volumes with VMs, but you can share and reuse them among various Docker containers.
Portability: With VMs, you can face compatibility issues while porting across different platforms; Docker is easily portable.
For these criteria, Docker is a hands-down winner.
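A few basic Docker commands illustrating this workflow (a sketch; the nginx image and the container name web are just examples):

# download an image and start an isolated container from it
docker pull nginx
docker run -d --name web -p 8080:80 nginx

# list running containers, then stop and remove the one we started
docker ps
docker stop web
docker rm web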
Gerrit:
Gerrit is a web-based code review tool which is integrated with Git and built on top of the Git version control system (it helps developers work together and maintain the history of their work). It allows changes to be merged into the Git repository once the code reviews are done. Gerrit was developed by Shawn Pearce at Google and is written in Java, Servlet, and GWT (Google Web Toolkit). The stable release of Gerrit referred to here is 2.12.2, published on March 11, 2016, licensed under Apache License v2.
You can easily find errors in the source code using Gerrit.
You can work with Gerrit if you have a regular Git client; there is no need to install any Gerrit client.
Gerrit acts as a repository, which allows pushing the code and creates the review for your commit.
Advantages of Gerrit
Gerrit provides access control for Git repositories and a web frontend for code review.
You can push the code without using additional command line tools.
Gerrit can allow or decline permission at the repository level and down to the branch level.
Disadvantages of Gerrit
Reviewing, verifying, and resubmitting code commits slows down the time to market.
Gerrit is slow and it's not possible to change the sort order in which changes are listed.
Gerrit is a highly extensible and configurable tool for web-based code review and repository management for projects using the Git version control system. Gerrit is equally useful where all users are trusted committers, as may be the case with closed-source commercial development.
It is used to store the merged code base and the changes under review that have not been merged yet. Gerrit has the limitation of a single repository per project.
Gerrit is first and foremost a staging area where changes can be checked over before becoming part of the code base. It is also an enabler for this review process, capturing notes and comments about the changes to enable discussion of the change. This is particularly useful with distributed teams where this conversation cannot happen face to face.
Knowledge exchange:
The code review process allows newcomers to see the code of other more experienced developers.
Developers can get feedback on their suggested changes.
Experienced developers can help to evaluate the impact on the whole code.
Shared code ownership: by reviewing the code of other developers, the whole team gains a solid knowledge of the complete code base.
A pull request is a feature of Git-based version control systems that allows developers to propose changes to a Git repository and request feedback or approval from other team members. It is widely used in DevOps to facilitate collaboration and code review in the software development process.
In the pull request model, a developer creates a new branch in a Git repository, makes changes to the code, and then opens a pull request to merge the changes into the main branch. Other team members can then review the changes, provide feedback, and approve or reject the request.
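A rough sketch of the contributor's side of this model (the repository URL and branch name are placeholders):

# clone your fork of the central repository and create a topic branch
git clone https://github.com/<your-user>/<project>.git
cd <project>
git checkout -b fix-typo

# commit the change and publish the branch to your fork
git commit -am "Fix typo in README"
git push origin fix-typo

# then open the pull request from fix-typo in the web interface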
Pull Requests are a mechanism popularized by GitHub, used to help facilitate merging of work, particularly in the context of open-source projects. A contributor works on their contribution in a fork (clone) of the central repository. Once their contribution is finished they create a pull request to notify the owner of the central repository that their work is ready to be merged into the mainline. Tooling supports and encourages code review of the contribution before accepting the request. Pull requests have become widely used in software development, but critics are concerned by the addition of integration friction which can prevent continuous integration.
Pull requests essentially provide convenient tooling for a development workflow that existed in many open-source projects, particularly those using a distributed source-control system (such as Git). This workflow begins with a contributor creating a new logical branch, either by starting a new branch in the central repository, cloning into a personal repository, or both. The contributor then works on that branch, typically in the style of a Feature Branch, pulling any updates from Mainline into their branch. When they are done they communicate with the maintainer of the central repository indicating that they are done, together with a reference to their commits. This reference could be the URL of a branch that needs to be integrated, or a set of patches in an email.
Once the maintainer gets the message, she can then examine the commits to decide if they are ready to go into mainline. If not, she can then suggest changes to the contributor, who then has the opportunity to adjust their submission. Once all is ok, the maintainer can then merge, either with a regular merge/rebase or by applying the patches from the final email.
GitHub's pull request mechanism makes this flow much easier. It keeps track of the clones through its fork mechanism, and automatically creates a message thread to discuss the pull request, together with behavior to handle the various steps in the review workflow. These conveniences were a major part of what made GitHub successful and led to "pull request" becoming a fundamental part of the developer's lexicon.
So that's how pull requests work, but should we use them, and if so how? To answer that question, I like to step back from the mechanism and think about how it works in the context of a source code management workflow. To help me think about that, I wrote down a series of patterns for managing source code branching. I find understanding these (specifically the Base and Integration patterns) clarifies the role of pull requests.
In terms of these patterns, pull requests are a mechanism designed to implement a combination of Feature Branching and Pre-Integration Reviews. Thus to assess the usefulness of pull requests we first need to consider how applicable those patterns are to our situation. Like most patterns, they are sometimes valuable, and sometimes a pain in the neck - we have to examine them based on our specific context. Feature Branching is a good way of packaging together a logical contribution so that it can be assessed, accepted, or deferred as a single unit. This makes a lot of sense when contributors are not trusted to commit directly to mainline. But Feature Branching comes at a cost, which is that it usually limits the frequency of integration, leading to complicated merges and deterring refactoring. Pre-Integration Reviews provide a clear place to do code review at the cost of a significant increase in integration friction. [1]
That's a drastic summary of the situation (I need a lot more words to explain this further in the feature branching article), but it boils down to the fact that the value of these patterns, and thus the value of pull requests, rests mostly on the social structure of the team. Some teams work better with pull requests, some teams would find pull requests a severe drag on their effectiveness. I suspect that since pull requests are so popular, a lot of teams are using them by default when they would do better without them.
While pull requests are built for Feature Branches, teams can use them within a Continuous Integration environment. To do this they need to ensure that pull requests are small enough, and the team responsive enough, to follow the CI rule of thumb that everybody does Mainline Integration at least daily. (And I should remind everyone that Mainline Integration is more than just merging the current mainline into the feature branch.) Using the ship/show/ask classification can be an effective way to integrate pull requests into a more CI-friendly workflow.
The wide usage of pull requests has encouraged a wider use of code review, since pull requests provide a clear point for Pre-Integration Review, together with tooling that encourages it. Code review is a Good Thing, but we must remember that a pull request isn't the only mechanism we can use for it. Many teams find great value in the continuous review afforded by Pair Programming. To avoid reducing integration frequency we can carry out post-integration code review in several ways. A formal process can record a review for each commit, or a tech lead can examine risky commits every couple of days. Perhaps the most powerful form of code review is one that's frequently ignored. A team that takes the attitude that the codebase is a fluid system, one that can be steadily refined with repeated iteration, carries out Refinement Code Review every time a developer looks at existing code. I often hear people say that pull requests are necessary because without them you can't do code reviews - that's rubbish. Pre-integration code review is just one way to do code reviews, and for many teams it isn't the best choice.
Improved code quality: Pull requests encourage collaboration and code review, helping to catch potential bugs and issues before they make it into the main codebase.
Increased transparency: Pull requests provide a clear audit trail of all changes made to the code, making it easier to understand how code has evolved over time.
Better collaboration: Pull requests allow developers to share their work and get feedback from others, improving collaboration and communication within the development team.
Overall, the pull request model is an important tool in the DevOps toolkit, helping to improve the quality, transparency, and collaboration of software development processes.
GitLab
GitLab is an open-source Git repository management platform that provides a wide range of features for software development teams. It is commonly used in DevOps for version control, issue tracking, code review, and continuous integration/continuous deployment (CI/CD) pipelines.
GitLab provides a centralized platform for teams to manage their Git repositories, track changes to source code, and collaborate on code development. It offers a range of tools to support code review and collaboration, including pull requests, code comments, and merge request approvals.
In addition, GitLab provides a CI/CD pipeline tool that allows teams to automate the process of building, testing, and deploying code. This helps to streamline the development process and reduce the risk of introducing bugs or other issues into the codebase.
Overall, GitLab is a comprehensive Git repository management platform that provides a wide range of tools and features for software development teams. By using GitLab, DevOps teams can improve the efficiency, transparency, and collaboration of their software development processes.
What is Git?
Git is a distributed version control system, which means that a local clone of the project is a complete version control repository. These fully functional local repositories make it easy to work offline or remotely. Developers commit their work locally, and then sync their copy of the repository with the copy on the server. This paradigm differs from centralized version control, where clients must synchronize code with a server before creating new versions of code.
Git's flexibility and popularity make it a great choice for any team. Many developers and college graduates already know how to use Git. Git's user community has created resources to train developers, and Git's popularity makes it easy to get help when needed. Nearly every development environment has Git support, and Git command-line tools are implemented on every major operating system.
Git basics
Every time work is saved, Git creates a commit. A commit is a snapshot of all files at a point in time. If a file hasn't changed from one commit to the next, Git uses the previously stored file.
This design differs from other systems that store an initial version of a file and keep a record of deltas over time.
Commits create links to other commits, forming a graph of the development history. It's possible to revert code to a previous commit, inspect how files changed from one commit to the next, and review information such as where and when changes were made. Commits are identified in Git by a unique cryptographic hash of the contents of the commit. Because everything is hashed, it's impossible to make changes, lose information, or corrupt files without Git detecting it.
Branches
Each developer saves changes to their own local code repository. As a result, there can be many different changes based off the same commit. Git provides tools for isolating changes and later merging them back together. Branches, which are lightweight pointers to work in progress, manage this separation. Once work created in a branch is finished, it can be merged back into the team's main (or trunk) branch.
Files and commits
Files in Git are in one of three states: modified, staged, or committed. When a file is first modified, the changes exist only in the working directory. They aren't yet part of a commit or the development history. The developer must stage the changed files to be included in the commit. The staging area contains all changes to include in the next commit. Once the developer is happy with the staged files, the files are packaged as a commit with a message describing what changed. This commit becomes part of the development history.
Staging lets developers pick which file changes to save in a commit in order to break down large changes into a series of smaller commits. By reducing the scope of commits, it's easier to review the commit history to find specific file changes.
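A short sketch of the modified → staged → committed flow (the file name is illustrative):

# see which files are modified, staged, or untracked
git status

# stage only the files that belong in this commit
git add src/app.c

# commit the staged snapshot with a message describing what changed
git commit -m "Handle empty input in the parser"

# review the resulting history
git log --oneline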
Benefits of Git
Simultaneous development
Everyone has their own local copy of code and can work simultaneously on their own branches. Git works offline since almost every operation is local.
Faster releases
Branches allow for flexible and simultaneous development. The main branch contains stable, high-quality code from which you release. Feature branches contain work in progress, which are merged into the main branch upon completion. By separating the release branch from development in progress, it's easier to manage stable code and ship updates more quickly.
Built-in integration
Due to its popularity, Git integrates into most tools and products. Every major IDE has built-in Git support, and many tools support continuous integration, continuous deployment, automated testing, work item tracking, metrics, and reporting feature integration with Git. This integration simplifies the day-to-day workflow.
Using Git with a source code management tool increases a team's productivity by encouraging collaboration, enforcing policies, automating processes, and improving visibility and traceability of work. The team can settle on individual tools for version control, work item tracking, and continuous integration and deployment. Or, they can choose a solution like GitHub or Azure DevOps that supports all of these tasks in one place.
Pull requests
Use pull requests to discuss code changes with the team before merging them into the main branch. The discussions in pull requests are invaluable for ensuring code quality and increasing knowledge across your team. Platforms like GitHub and Azure DevOps offer a rich pull request experience where developers can browse file changes, leave comments, inspect commits, view builds, and vote to approve the code.
Branch policies
Teams can configure GitHub and Azure DevOps to enforce consistent workflows and processes across the team. They can set up branch policies to ensure that pull requests meet requirements before completion. Branch policies protect important branches by preventing direct pushes, requiring reviewers, and ensuring clean builds.
Unit 4 Integrating the system
What is Jenkins?
Jenkins is an open-source automation tool written in the Java programming language that allows continuous integration.
Jenkins builds and tests our software projects continuously, making it easier for developers to integrate changes to the project and easier for users to obtain a fresh build.
It also allows us to continuously deliver our software by integrating with a large number of testing and deployment technologies.
Jenkins offers a straightforward way to set up a continuous integration or continuous delivery environment for almost any combination of languages and source code repositories using pipelines, as well as automating other routine development tasks.
With the help of Jenkins, organizations can speed up the software development process through automation. Jenkins adds development life-cycle processes of all kinds, including build, document, test, package, stage, deploy, static analysis and much more.
Jenkins achieves CI (Continuous Integration) with the help of plugins. Plugins are used to allow the integration of various DevOps stages. If you want to integrate a particular tool, you have to install the plugins for that tool. For example: Maven 2 Project, Git, HTML Publisher, Amazon EC2, etc.
For example: if any organization is developing a project, then Jenkins will continuously test your project builds and show you the errors in the early stages of your development.
Jenkins Applications
Jenkins helps to automate and accelerate the software development process. Here are some of the most common applications of Jenkins:
1. Increased Code Coverage
Code coverage is determined by the number of lines of code a component has and how many of them get executed. Jenkins increases code coverage, which ultimately promotes a transparent development process among the team members.
2. No Broken Code
Jenkins ensures that the code is good and tested well through continuous integration. The final code is merged only when all the tests are successful. This makes sure that no broken code is shipped into production.
What are the Jenkins Features?
Jenkins offers many attractive features for developers:
Easy Installation
Jenkins is a platform-agnostic, self-contained Java-based program, ready to run with packages for Windows, Mac
OS, and Unix-like operating systems.
Easy Configuration
Jenkins is easily set up and configured using its web interface, featuring error checks and a built-in help function.
Available Plugins
There are hundreds of plugins available in the Update Center, integrating with every tool in the CI and CD toolchain.
Extensible
Jenkins can be extended by means of its plugin architecture, providing nearly endless possibilities for what it can do.
Easy Distribution
Jenkins can easily distribute work across multiple machines for faster builds, tests, and deployments across multiple platforms.
Jenkins is a popular open-source automation server that helps developers automate parts of the software development process. A Jenkins build server is responsible for building, testing, and deploying software projects.
A Jenkins build server is typically set up on a dedicated machine or a virtual machine, and is used to manage the continuous integration and continuous delivery (CI/CD) pipeline for a software project. The build server is configured with all the necessary tools, dependencies, and plugins to build, test, and deploy the project.
The build process in Jenkins typically starts with code being committed to a version control system (such as Git), which triggers a build on the Jenkins server. The Jenkins server then checks out the code, builds it, runs tests on it, and if everything is successful, deploys the code to a staging or production environment.
Jenkins has a large community of developers who have created hundreds of plugins that extend its functionality, so it's easy to find plugins to support specific tools, technologies, and workflows. For example, there are plugins for integrating with cloud infrastructure, running security scans, deploying to various platforms, and more.
Overall, a Jenkins build server can greatly improve the efficiency and reliability of the software development process by automating repetitive tasks, reducing the risk of manual errors, and enabling developers to focus on writing code.
Managing build dependencies is an important aspect of continuous integration and continuous delivery (CI/CD) pipelines. In software development, dependencies refer to external libraries, tools, or resources that a project relies on to build, test, and deploy. Proper management of dependencies can ensure that builds are repeatable and that the build environment is consistent and up-to-date.
Here are some common practices for managing build dependencies in Jenkins:
Dependency Management Tools: Utilize tools such as Maven, Gradle, or npm to manage dependencies and automate the process of downloading and installing required dependencies for a build.
Version Pinning: Specify exact versions of dependencies to ensure builds are consistent and repeatable.
Caching: Cache dependencies locally on the build server to improve build performance and reduce the time it takes to download dependencies.
Continuous Monitoring: Regularly check for updates and security vulnerabilities in dependencies to ensure the build environment is secure and up-to-date.
Automated Testing: Automated testing can catch issues related to dependencies early in the development process.
By following these practices, you can effectively manage build dependencies and maintain the reliability and consistency of your CI/CD pipeline.
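As a small illustration of version pinning and repeatable installs, assuming an npm-based project (the package name and version are only examples):

# pin an exact dependency version instead of a floating range
npm install --save-exact lodash@4.17.21

# on the build server, install strictly from the lock file for repeatable builds
npm ci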
Jenkins plugins
Jenkins plugins are packages of software that extend the functionality of the Jenkins automation server. Plugins allow you to integrate Jenkins with various tools, technologies, and workflows, and can be easily installed and configured through the Jenkins web interface. Some popular Jenkins plugins include:
Git Plugin: This plugin integrates Jenkins with the Git version control system, allowing you to pull code changes, build and test them, and deploy the code to production.
Maven Plugin: This plugin integrates Jenkins with Apache Maven, a build automation tool commonly used in Java projects.
Amazon Web Services (AWS) Plugin: This plugin allows you to integrate Jenkins with Amazon Web Services (AWS), making it easier to run builds, tests, and deployments on AWS infrastructure.
Slack Plugin: This plugin integrates Jenkins with Slack, allowing you to receive notifications about build status, failures, and other important events in your Slack channels.
Blue Ocean Plugin: This plugin provides a new and modern user interface for Jenkins, making it easier to use and navigate.
Pipeline Plugin: This plugin provides a simple way to define and manage complex CI/CD pipelines in Jenkins.
Jenkins plugins are easy to install and can be managed through the Jenkins web interface. There are hundreds of plugins available, covering a wide range of tools, technologies, and use cases, so you can easily find the plugins that best meet your needs. By using plugins, you can greatly improve the efficiency and automation of your software development process, and make it easier to integrate Jenkins with the tools and workflows you use.
Git Plugin
The Git Plugin is a popular plugin for Jenkins that integrates the Jenkins automation server with the Git version control system. This plugin allows you to pull code changes from a Git repository, build and test the code, and deploy it to production.
With the Git Plugin, you can configure Jenkins to automatically build and test your code whenever changes are pushed to the Git repository. You can also configure it to build and test code on a schedule, such as once a day or once a week.
The Git Plugin provides a number of features for managing code changes, including:
Branch and Tag builds: You can configure Jenkins to build specific branches or tags from your Git repository.
Pull Requests: You can configure Jenkins to build and test pull requests from your Git repository, allowing you to validate code changes before merging them into the main branch.
Build Triggers: You can configure Jenkins to build and test code changes whenever changes are pushed to the Git repository or on a schedule.
Code Quality Metrics: The Git Plugin integrates with tools such as SonarQube to provide code quality metrics, allowing you to track and improve the quality of your code over time.
Notification and Reporting: The Git Plugin provides notifications and reports on build status, failures, and other important events. You can configure Jenkins to send notifications via email, Slack, or other communication channels.
By using the Git Plugin, you can streamline your software development process and make it easier to manage code changes and collaborate with other developers on your team.
File system layout
In DevOps, the file system layout refers to the organization and structure of files and directories on the systems and servers used for software development and deployment. A well-designed file system layout is critical for efficient and reliable operations in a DevOps environment. Here are some common elements of a file system layout in DevOps:
Code Repository: A central code repository, such as Git, is used to store and manage source code, configuration files, and other artifacts.
Build Artifacts: Build artifacts, such as compiled code, are stored in a designated directory for easy access and management.
Dependencies: Directories for storing dependencies, such as libraries and tools, are designated for easy management and version control.
Configuration Files: Configuration files, such as YAML or JSON files, are stored in a designated directory for easy access and management.
Log Files: Log files generated by applications, builds, and deployments are stored in a designated directory for easy access and management.
Backup and Recovery: Directories for storing backups and recovery data are designated for easy management and to ensure business continuity.
Environment-specific Directories: Directories are designated for each environment, such as development, test, and production, to ensure that the correct configuration files and artifacts are used for each environment.
By following a well-designed file system layout in a DevOps environment, you can improve the efficiency, reliability, and security of your software development and deployment processes.
In Jenkins, a host server refers to the physical or virtual machine that runs the Jenkins automation server. The host server is responsible for running the Jenkins process and providing resources, such as memory, storage, and CPU, for executing builds and other tasks.
The host server can be either a standalone machine or part of a network or cloud-based infrastructure. When running Jenkins on a standalone machine, the host server is responsible for all aspects of the Jenkins installation, including setup, configuration, and maintenance.
When running Jenkins on a network or cloud-based infrastructure, the host server is responsible for providing resources for the Jenkins process, but the setup, configuration, and maintenance may be managed by other components of the infrastructure.
By providing the necessary resources and ensuring the stability and reliability of the host server, you can ensure the efficient operation of Jenkins and the success of your software development and deployment processes.
Install Jenkins: You can install Jenkins on a server by downloading the Jenkins WAR file, deploying it to a servlet container such as Apache Tomcat, and starting the server.
Configure Jenkins: Once Jenkins is up and running, you can access its web interface to configure and manage the build environment. You can install plugins, set up security, and configure build jobs.
Create a Build Job: To build your project, you'll need to create a build job in Jenkins. This will define the steps involved in building your project, such as checking out the code from version control, compiling the code, running tests, and packaging the application.
Schedule Builds: You can configure your build job to run automatically at a specific time or when certain conditions are met. You can also trigger builds manually from the web interface.
Monitor Builds: Jenkins provides a variety of tools for monitoring builds, such as build history, build console output, and build artifacts. You can use these tools to keep track of the status of your builds and to diagnose problems when they occur.
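A minimal sketch of the install step, running the WAR file directly rather than inside Tomcat (assuming Java is already installed and port 8080 is free; the download URL reflects the Jenkins project's stable WAR distribution):

# download the Jenkins WAR and start it on port 8080
wget https://get.jenkins.io/war-stable/latest/jenkins.war
java -jar jenkins.war --httpPort=8080

# the initial admin password for the setup wizard is stored here
cat ~/.jenkins/secrets/initialAdminPassword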
Build slaves
As you can see in the diagram provided above, on the left is the remote source code repository. The Jenkins server accesses the master environment on the left side, and the master environment can push down to multiple other Jenkins Slave environments to distribute the workload.
That lets you run multiple builds, tests, and production environments across the entire architecture. Jenkins Slaves can be running different build versions of the code for different operating systems, and the server Master controls how each of the builds operates.
Supported on a master-slave architecture, Jenkins comprises many slaves working for a master. This architecture - the Jenkins Distributed Build - can run identical test cases in different environments. Results are collected and combined on the master node for monitoring.
The standard Jenkins installation includes the Jenkins master, and in this setup, the master will be managing all our build system's tasks. If we're working on a number of projects, we can run numerous jobs on each one. Some projects require the use of specific nodes, which necessitates the use of slave nodes.
The Jenkins master is in charge of scheduling jobs, assigning slave nodes, and sending builds to slave nodes for execution. It will also keep track of the slave node state (offline or online), retrieve build results from slave nodes, and display them on the terminal output. In most installations, multiple slave nodes will be assigned to the task of building jobs.
Before we get started, let's double-check that we have all of the prerequisites in place for adding a slave node:
On the next screen, we enter the "Node Name" (slaveNode1), select "Permanent Agent", then click "OK".
After clicking "OK", we'll be taken to a screen with a new form where we need to fill out the slave node's information. We're considering the slave node to be running on a Linux operating system, hence the launch method is set to "Launch agents via SSH".
In the same way, we'll add relevant details, such as the name, description, and a number of executors.
We'll save our work by pressing the "Save" button. The "Labels" field with the name "slaveNode1" will help us to set up jobs on this slave node:
4. Building the Project on Slave Nodes
Now that our master and slave nodes are ready, we'll discuss the steps for building the project on the slave node.
For this, we start by clicking "New Item" in the top left corner of the dashboard.
Next, we need to enter the name of our project in the "Enter an item name" field, select "Pipeline project", and then click the "OK" button.
On the next screen, we'll enter a "Description" (optional) and navigate to the "Pipeline" section. Make sure the "Definition" field has the Pipeline script option selected.
After this, we copy and paste the following Pipeline script into the "Script" field:
node('slaveNode1') {
    stage('Build') {
        sh '''echo build steps'''
    }
    stage('Test') {
        sh '''echo test steps'''
    }
}
Next, we click on the "Save" button. This will redirect to the Pipeline view page.
On the left pane, we click the "Build Now" button to execute our Pipeline. After Pipeline execution is completed, we'll see the Pipeline view:
We can verify the history of the executed build under the Build History by clicking the build number. As shown above, when we click on the build number and select "Console Output", we can see that the pipeline ran on our slaveNode1 machine.
To run software on the host in Jenkins, you need to have the necessary dependencies and tools installed on the host machine. The exact software you'll need will depend on the specific requirements of your project and build process. Some common tools and software used in Jenkins include:
Java: Jenkins is written in Java and requires Java to be installed on the host machine.
Git: If your project uses Git as the version control system, you'll need to have Git installed on the host machine.
Build Tools: Depending on the programming language and build process of your project, you may need to install build tools such as Maven, Gradle, or Ant.
Testing Tools: To run tests as part of your build process, you'll need to install any necessary testing tools, such as JUnit, TestNG, or Selenium.
Database Systems: If your project requires access to a database, you'll need to have the necessary database software installed on the host machine, such as MySQL, PostgreSQL, or Oracle.
Continuous Integration Plugins: To extend the functionality of Jenkins, you may need to install plugins that provide additional tools and features for continuous integration, such as the Jenkins GitHub plugin, Jenkins Pipeline plugin, or Jenkins Slack plugin.
To install these tools and software on the host machine, you can use a package manager such as apt or yum, or you can download and install the necessary software manually. You can also use a containerization tool such as Docker to run Jenkins and the necessary software in isolated containers, which can simplify the installation process and make it easier to manage the dependencies and tools needed for your build process.
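For example, on a Debian/Ubuntu host the common tools above could be installed roughly like this (package names are examples and vary by distribution):

# install Java, Git and Maven for a typical Java build host
sudo apt-get update
sudo apt-get install -y openjdk-17-jdk git maven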
Trigger
1. Trigger builds remotely: If you want to trigger your project build from anywhere at any time, then you should select the Trigger builds remotely option from the build triggers.
You'll need to provide an authorization token in the form of a string so that only those who know it are able to remotely trigger this project's build. This provides a predefined URL to invoke the trigger remotely.
//Example: http://e330c73d.ngrok.io/job/test/build?token=12345
Whenever you hit this URL from anywhere, your project build will start.
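That trigger URL can be invoked from any HTTP client, for instance with curl (using the illustrative host, job name, and token from the example above):

# remotely trigger the "test" job using its authorization token
curl "http://e330c73d.ngrok.io/job/test/build?token=12345"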
2. Build after other projects are built: If your project depends on another project's build, then you should select the Build after other projects are built option from the build triggers. In this, you must specify the project (job) names in the Projects to watch field and select one of the following options:
After that, it starts watching the specified projects in the Projects to watch section.
Whenever the build of a specified project completes (either stable, unstable, or failed, according to your selected option), this project's build is invoked.
3. Build periodically:
If you want to schedule your project build periodically, then you should select the Build periodically option from the build triggers.
You must specify the periodical duration of the project build in the scheduler field.
This field follows the syntax of cron (with minor differences). Specifically, each line consists of 5 fields separated by TAB or white space.
After the project build has been scheduled successfully, the scheduler will invoke the build periodically according to your specified duration.
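A few illustrative schedule expressions in Jenkins' cron-like syntax (lines starting with # are comments, and H spreads the start time to avoid load spikes):

# fields: MINUTE HOUR DAY-OF-MONTH MONTH DAY-OF-WEEK
# roughly every fifteen minutes
H/15 * * * *
# once a day, some time during the 02:00 hour
H 2 * * *
# weekday evenings (Mon-Fri), during the 22:00 hour
H 22 * * 1-5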
4. GitHub webhook trigger for GITScm polling:
A webhook is an HTTP callback: an HTTP POST that occurs when something happens, i.e. a simple event notification via HTTP POST.
GitHub webhooks in Jenkins are used to trigger the build whenever a developer commits something to the branch.
Let's see how to add a webhook in GitHub and then add this webhook in Jenkins.
Go to your project repository.
Go to "Settings" in the right corner.
Click on "Webhooks."
Click "Add webhook."
Write the Payload URL as http://e330c73d.ngrok.io/github-webhook/
This is a public URL where the Jenkins server is running; here https://e330c73d.ngrok.io/ is the address and port where my Jenkins is running.
If you are running Jenkins on localhost, then writing https://localhost:8080/github-webhook/ will not work, because webhooks can only work with a public IP. So if you want to expose your localhost:8080 publicly, you can use some tunnelling tools.
In this example, we used the ngrok tool to expose my local address to the public. To know more about how to add a webhook in a Jenkins pipeline, visit: https://blog.knoldus.com/opsinit-adding-a-github-webhook-in-jenkins-pipeline/
5. Poll SCM:
Poll SCM periodically polls the SCM to check whether changes were made (i.e. new commits) and builds the project if new commits were pushed since the last build.
You must schedule the polling duration in the scheduler field, as explained above in the Build periodically section.
Once scheduled successfully, the scheduler polls the SCM according to the duration specified in the scheduler field and builds the project if new commits were pushed since the last build.
Job chaining
Job chaining in Jenkins refers to the process of linking multiple build jobs together in a sequence. When one job completes, the next job in the sequence is automatically triggered. This allows you to create a pipeline of builds that are dependent on each other, so you can automate the entire build process.
There are several ways to chain jobs in Jenkins:
Build Trigger: You can use the build trigger in Jenkins to start one job after another. This is done by configuring the upstream job to trigger the downstream job when it completes.
Jenkinsfile: If you are using Jenkins Pipeline, you can write a Jenkinsfile to define the steps in your build pipeline. The Jenkinsfile can contain multiple stages, each of which represents a separate build job in the pipeline.
JobDSL plugin: The JobDSL plugin allows you to programmatically create and manage Jenkins jobs. You can use this plugin to create a series of jobs that are linked together and run in sequence.
Multi-Job plugin: The Multi-Job plugin allows you to create a single job that runs multiple build steps, each of which can be a separate build job. This plugin is useful if you have a build pipeline that requires multiple build jobs to be run in parallel.
By chaining jobs in Jenkins, you can automate the entire build process and ensure that each step is completed before the next step is started. This can help to improve the efficiency and reliability of your build process, and allow you to quickly and easily make changes to your build pipeline.
Build pipelines
A build pipeline in DevOps is a set of automated processes that compile, build, and test software, and prepare it for deployment. A build pipeline represents the end-to-end flow of code changes from development to production. The steps involved in a typical build pipeline include:
Code Commit: Developers commit code changes to a version control system such as Git.
Build and Compile: The code is built and compiled, and any necessary dependencies are resolved.
Unit Testing: Automated unit tests are run to validate the code changes.
Integration Testing: Automated integration tests are run to validate that the code integrates correctly with other parts of the system.
Staging: The code is deployed to a staging environment for further testing and validation.
Release: If the code passes all tests, it is deployed to the production environment.
A build pipeline can be managed using a continuous integration tool such as Jenkins, TravisCI, or CircleCI. These tools automate the build process, allowing you to quickly and easily make changes to the pipeline, and ensuring that the pipeline is consistent and reliable.
In DevOps, the build pipeline is a critical component of the continuous delivery process, and is used to ensure that code changes are tested, validated, and deployed to production as quickly and efficiently as possible. By automating the build pipeline, you can reduce the time and effort required to deploy code changes, and improve the speed and quality of your software delivery process.
Build servers
When you're developing and deploying software, one of the first things to figure out is how to take your code and deploy your working application to a production environment where people can interact with your software.
Most development teams understand the importance of version control to coordinate code commits, and build servers to compile and package their software, but Continuous Integration (CI) is a big topic.
Why are build servers important?
Build servers have 3 main purposes:
Compiling committed code from your repository many times a day
Running automatic tests to validate code
Creating deployable packages and handing them off to a deployment tool, like Octopus Deploy
Without a build server you're slowed down by complicated, manual processes and the needless time constraints they introduce. For example, without a build server:
Your team will likely need to commit code before a daily deadline or during change windows
After that deadline passes, no one can commit again until someone manually creates and tests a build
If there are problems with the code, the deadlines and manual processes further delay the fixes
Without a build server, the team battles unnecessary hurdles that automation removes. A build server will repeat these tasks for you throughout the day, and without those human-caused delays.
But CI doesn't just mean less time spent on manual tasks or the death of arbitrary deadlines, either. By automatically taking these steps many times a day, you fix problems sooner and your results become more predictable. Build servers ultimately help you deploy through your pipeline with more confidence.
Requirements gathering: Determine the requirements for the server, such as hardware specifications, operating system, and software components needed.
Server provisioning: Choose a method for provisioning the server, such as physical installation, virtualization, or cloud computing.
Operating System installation: Install the chosen operating system on the server.
Software configuration: Install and configure the necessary software components, such as web servers, databases, and middleware.
Network configuration: Set up network connectivity, such as IP addresses, hostnames, and firewall rules.
Security configuration: Configure security measures, such as user authentication, access control, and encryption.
Monitoring and maintenance: Implement monitoring and maintenance processes, such as logging, backup, and disaster recovery.
Deployment: Deploy the application to the server and test it to ensure it is functioning as expected.
Throughout the process, it is important to automate as much as possible using tools such as Ansible, Chef, or Puppet to ensure consistency and efficiency in building servers.
Infrastructure as code
Infrastructure as code (IaC) uses DevOps methodology and versioning with a descriptive model to define and deploy infrastructure, such as networks, virtual machines, load balancers, and connection topologies. Just as the same source code always generates the same binary, an IaC model generates the same environment every time it deploys.
IaC is a key DevOps practice and a component of continuous delivery. With IaC, DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly and reliably at scale.
IaC evolved to solve the problem of environment drift in release pipelines. Without IaC, teams must maintain deployment environment settings individually. Over time, each environment becomes a "snowflake," a unique configuration that can't be reproduced automatically. Inconsistency among environments can cause deployment issues. Infrastructure administration and maintenance involve manual processes that are error prone and hard to track.
IaC avoids manual configuration and enforces consistency by representing desired environment states via well-documented code in formats such as JSON. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. Release pipelines execute the environment descriptions and version configuration models to configure target environments. To make changes, the team edits the source, not the target.
Idempotence, the ability of a given operation to always produce the same result, is an important IaC principle. A deployment command always sets the target environment into the same configuration, regardless of the environment's starting state. Idempotency is achieved by either automatically configuring the existing target, or by discarding the existing target and recreating a fresh environment.
IaC can be achieved by using tools such as Terraform, CloudFormation, or Ansible to define infrastructure components in a file that can be versioned, tested, and deployed in a consistent and automated manner.
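A typical command-line workflow with one of these tools, for example Terraform, might look like this sketch (it assumes the Terraform configuration files already exist in the current directory):

# download providers and initialise the working directory
terraform init

# preview the changes the code would make, then apply them
terraform plan
terraform apply

# the same versioned code can later tear the environment down again
terraform destroy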
Speed: IaC enables quick and efficient provisioning and deployment of infrastructure.
Consistency: By using code to define and manage infrastructure, it is easier to ensure consistency across multiple environments.
Repeatability: IaC allows for easy replication of infrastructure components in different environments, such as development, testing, and production.
Scalability: IaC makes it easier to scale infrastructure as needed by simply modifying the code.
Version control: Infrastructure components can be versioned, allowing for rollback to previous versions if necessary.
Overall, IaC is a key component of modern DevOps practices, enabling organizations to manage their infrastructure in a more efficient, reliable, and scalable way.
Building by dependency order in DevOps is the process of ensuring that the components of a system are built and deployed in the correct sequence, based on their dependencies. This is necessary to ensure that the system functions as intended and that components are deployed in the right order so that they can interact correctly with each other. The steps involved in building by dependency order in DevOps include:
Define dependencies: Identify all the components of the system and the dependencies between them. This can be represented in a diagram or as a list.
Determine the build order: Based on the dependencies, determine the correct order in which components should be built and deployed.
Automate the build process: Use tools such as Jenkins, TravisCI, or CircleCI to automate the build and deployment process. This allows for consistency and repeatability in the build process.
Monitor progress: Monitor the progress of the build and deployment process to ensure that components are deployed in the correct order and that the system is functioning as expected.
Test and validate: Test the system after deployment to ensure that all components are functioning as intended and that dependencies are resolved correctly.
Rollback: If necessary, have a rollback plan in place to revert to a previous version of the system if the build or deployment process fails.
In conclusion, building by dependency order in DevOps is a critical step in ensuring the success of a system deployment, as it ensures that components are deployed in the correct order and that dependencies are resolved correctly. This results in a more stable, reliable, and consistent system.
Build phases
In DevOps, there are several phases in the build process, including:
Planning: Define the project requirements, identify the dependencies, and create a build plan.
Code development: Write the code and implement features, fixing bugs along the way.
Continuous Integration (CI): Automatically build and test the code as it is committed to a version control system.
Continuous Delivery (CD): Automatically deploy code changes to a testing environment, where they can be tested and validated.
Deployment: Deploy the code changes to a production environment, after they have passed testing in a pre-production environment.
Monitoring: Continuously monitor the system to ensure that it is functioning as expected, and to detect and resolve any issues that may arise.
Maintenance: Continuously maintain and update the system, fixing bugs, adding new features, and ensuring its stability.
These phases help to ensure that the build process is efficient, reliable, and consistent, and that code changes are validated and deployed in a controlled manner. Automation is a key aspect of DevOps, and it helps to make these phases more efficient and less prone to human error.
In continuous integration (CI), this is where we build the application for the first time. The build stage is the first stretch of a CI/CD pipeline, and it automates steps like downloading dependencies, installing tools, and compiling.
Besides building code, build automation includes using tools to check that the code is safe and follows best practices. The build stage usually ends in the artifact generation step, where we create a production-ready package. Once this is done, the testing stages can begin.
The build stage starts from the code commit and runs from the beginning of the pipeline up to the test stage. Testing is covered in depth later; here we focus on build automation.
Build automation verifies that the application, at a given code commit, can qualify for further testing. We can divide it into four parts:
Compilation: the first step builds the application.
Linting: checks the code for programmatic and stylistic errors.
Code analysis: using automated source-checking tools, we control the code's quality.
Artifact generation: the last step packages the application for release or deployment.
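A hedged sketch of these four parts as a single build script in Python; the commands shown (mvn, checkstyle, spotbugs, tar) are placeholders assumed to be installed and would be replaced by whatever tools the project actually uses.

import subprocess, sys

STEPS = [
    ("compile", ["mvn", "-q", "package"]),                              # build the application (placeholder)
    ("lint", ["checkstyle", "-c", "rules.xml", "src"]),                 # stylistic/programmatic checks (placeholder)
    ("analyze", ["spotbugs", "target/app.jar"]),                        # automated code analysis (placeholder)
    ("package", ["tar", "czf", "artifact.tar.gz", "target/app.jar"]),   # artifact generation
]

for name, cmd in STEPS:
    print(f"--- {name} ---")
    result = subprocess.run(cmd)
    if result.returncode != 0:        # fail fast: a broken step means no artifact is produced
        sys.exit(f"build failed at step: {name}")

print("artifact.tar.gz is ready for the test stages")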
Alternative build servers
There are several alternative build servers in DevOps, including:
Jenkins - an open-source, Java-based automation server that supports various plugins and integrations.
Travis CI - a cloud-based, open-source CI/CD platform that integrates with GitHub.
CircleCI - a cloud-based continuous integration and delivery platform that supports multiple languages and integrates with several platforms.
GitLab CI/CD - an integrated CI/CD solution within GitLab that allows for complete project and pipeline management.
Bitbucket Pipelines - a CI/CD solution within Bitbucket that allows for pipeline creation and management within the code repository.
AWS CodeBuild - a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.
Azure Pipelines - a CI/CD solution within Microsoft Azure that supports multiple platforms and programming languages.
Collating quality measures
In DevOps, collating quality measures is an important part of the continuous improvement process. The following are some common quality measures used in DevOps to evaluate the quality of software systems:
Continuous Integration (CI) metrics - metrics that track the success rate of automated builds and tests, such as build duration and test pass rate.
Continuous Deployment (CD) metrics - metrics that track the success rate of deployments, such as deployment frequency and time to deployment.
Code review metrics - metrics that track the effectiveness of code reviews, such as review completion time and code review feedback.
Performance metrics - measures of system performance in production, such as response time and resource utilization.
User experience metrics - measures of how users interact with the system, such as click-through rate and error rate.
Security metrics - measures of the security of the system, such as the number of security vulnerabilities and the frequency of security updates.
Incident response metrics - metrics that track the effectiveness of incident response, such as mean time to resolution (MTTR) and incident frequency.
By regularly collating these quality measures, DevOps teams can identify areas for improvement, track progress over time, and make informed decisions about the quality of their systems.
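For illustration, two of these measures (deployment frequency and MTTR) can be collated from raw records with a few lines of Python; the sample data below is invented purely to show the calculation.

from datetime import datetime, timedelta

# Invented sample records for one week of operations.
deployments = [datetime(2023, 1, d) for d in (2, 3, 3, 5, 6)]
incidents = [
    {"opened": datetime(2023, 1, 3, 10, 0), "resolved": datetime(2023, 1, 3, 11, 30)},
    {"opened": datetime(2023, 1, 6, 9, 0),  "resolved": datetime(2023, 1, 6, 13, 0)},
]

# Deployment frequency: deployments per day over the observed window.
window_days = 7
deploy_frequency = len(deployments) / window_days

# MTTR: mean time from incident opened to incident resolved.
mttr = sum(((i["resolved"] - i["opened"]) for i in incidents), timedelta()) / len(incidents)

print(f"deployment frequency: {deploy_frequency:.2f} per day")
print(f"MTTR: {mttr}")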
Unit 5 Testing Tools and automation
As we know, software testing is a process of analyzing an application's functionality against the customer's requirements.
If we want to ensure that our software is bug-free and stable, we must perform the various types of software testing, because testing is the only method that makes our application bug-free.
Software testing is mainly divided into two parts, which are as follows:
Manual Testing
Automation Testing
White Box Testing: In white-box testing, the developer will inspect every line of code before handing it over to the testing team or the concerned test engineers.
Since the code is visible to the developers throughout testing, this process is known as WBT (White Box Testing).
In other words, we can say that the developer will execute the complete white-box testing for the particular software and then send the application to the testing team.
The purpose of implementing white box testing is to emphasize the flow of inputs and outputs through the software and to enhance the security of an application.
White box testing is also known as open box testing, glass box testing, structural testing, clear box testing, and transparent box testing.
Black Box Testing: Another type of manual testing is black-box testing. In this testing, the test engineer will analyze the software against the requirements, identify any defects or bugs, and send it back to the development team.
Then, the developers will fix those defects, do one round of white box testing, and send it back to the testing team.
Here, fixing the bugs means the defect is resolved and the particular feature is working according to the given requirement.
The main objective of implementing black box testing is to verify the business needs or the customer's requirements.
In other words, we can say that black box testing is a process of checking the functionality of an application as per the customer requirements. The source code is not visible in this testing; that's why it is known as black-box testing.
Functional Testing: Checking all the components systematically against the requirement specifications is known as functional testing. Functional testing is also known as component testing.
In functional testing, all the components are tested by giving input values, defining the output, and validating the actual output against the expected value.
Functional testing is a part of black-box testing as it emphasizes application requirements rather than actual code. The test engineer has to test only the program, not the internals of the system.
Types of Functional Testing
Just as other types of testing are divided into several parts, functional testing is also classified into various categories. The diverse types of functional testing include the following:
a. Unit Testing
b. Integration Testing
c. System Testing
a. Unit Testing: Unit testing is the first level of functional testing used to test any software. Testing each module of an application independently, or testing all of a module's functionality, is called unit testing.
The primary objective of executing unit testing is to confirm that the unit components perform as expected. Here, a unit is defined as a single testable function of a software or an application, and it is verified throughout the specified application development phase.
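A minimal unit-test sketch using Python's built-in unittest framework; the add function and its expected behaviour are invented here only to show the shape of a unit test.

import unittest

def add(a, b):
    # The unit under test: a single, independently testable function.
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -4), -5)

if __name__ == "__main__":
    unittest.main()   # running the file executes both test cases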
b. Integration Testing: Once we have successfully completed unit testing, we move on to integration testing. It is the second level of functional testing, where we test the data flow between dependent modules or the interface between two features; this is called integration testing.
The purpose of executing integration testing is to test the accuracy of the communication between modules.
If these modules are working fine, then we can add one more module and test again, and we can continue with the same process to get better results.
In other words, incrementally adding modules and testing the data flow between them is known as incremental integration testing.
Incremental integration testing can be further classified into two parts, which are as follows:
Top-down Incremental Integration Testing
Bottom-up Incremental Integration Testing
Let's see a brief introduction of these types of integration testing:
1. Top-down Incremental Integration Testing: In this approach, we add the modules step by step, or incrementally, and test the data flow between them. We have to ensure that the modules we are adding are children of the earlier ones.
2. Bottom-up Incremental Integration Testing: In the bottom-up approach, we add the modules incrementally and check the data flow between them. Here we ensure that the module we are adding is the parent of the earlier ones.
Non-Incremental Integration Testing / Big Bang Method: Whenever the data flow is complex and it is very difficult to classify a parent and a child, we go for the non-incremental integration approach. The non-incremental method is also known as the Big Bang method.
c. System Testing: Whenever we are done with the unit and integration testing, we can proceed with system testing.
In system testing, the test environment is parallel to the production environment. It is also known as end-to-end testing.
In this type of testing, we go through every attribute of the software and test whether the end feature works according to the business requirement, analyzing the software product as a complete system.
Non-functional Testing
The next part of black-box testing is non-functional testing. It provides detailed information on software product performance and the technologies used.
Non-functional testing helps us minimize the risk of production and the related costs of the software.
Non-functional testing is a combination of performance, load, stress, usability and compatibility testing.
Types of Non-functional Testing
Non-functional testing is categorized into different parts of testing, which we are going to discuss further:
Performance Testing
Usability Testing
Compatibility Testing
Performance Testing: In performance testing, the test engineer tests the working of an application by applying some load.
In this type of non-functional testing, the test engineer focuses on aspects such as response time, load, scalability, and stability of the software or application.
Classification of Performance Testing
Performance testing includes the following types of testing:
Load Testing
Stress Testing
Scalability Testing
Stability Testing
Load Testing: While executing performance testing, we apply some load on the particular application to check the application's performance; this is known as load testing. Here, the load could be less than or equal to the desired load. It helps us to detect the highest operating volume of the software and any bottlenecks.
Stress Testing: It is used to analyze the user-friendliness and robustness of the software beyond its common functional limits. Primarily, stress testing is used for critical software, but it can also be used for all types of software applications.
Scalability Testing: Analyzing the application's performance by increasing or reducing the load in particular proportions is known as scalability testing. In scalability testing, we can also check the ability of the system, processes, or database to meet an upward need. In this, the test cases are designed and implemented efficiently.
Stability Testing: Stability testing is a procedure where we evaluate the application's performance by applying the load for a precise period of time. It mainly checks the stability problems of the application and the efficiency of the developed product. In this type of testing, we can rapidly find the system's defects even in a stressful situation.
Usability Testing: Another type of non-functional testing is usability testing. In usability testing, we analyze the user-friendliness of an application and detect bugs in the software's end-user interface. Here, the term user-friendliness covers the following aspects of an application:
The application should be easy to understand, which means that all the features must be visible to end-users.
The application's look and feel should be good, meaning the application should be pleasant to look at and make the end-user feel comfortable using it.
Compatibility Testing: In compatibility testing, we check the functionality of an application in specific hardware and software environments. Only once the application is functionally stable do we go for compatibility testing.
Here, software means we can test the application on different operating systems and browsers, and hardware means we can test the application on devices of different sizes and configurations.
Automation Testing:
The most significant part of software testing is automation testing. It uses specific tools to automate manually designed test cases without any human interference.
Automation testing is the best way to enhance the efficiency, productivity, and coverage of software testing.
It is used to re-run test scenarios, which were first executed manually, quickly and repeatedly.
In other words, whenever we test an application by using tools, it is known as automation testing.
We go for automation testing when multiple releases or several regression cycles run on the application or software. We cannot write the test scripts or perform automation testing without understanding a programming language.
Automation of testing - Pros and cons
Pros: automated tests run quickly and repeatedly, give wider coverage, and are well suited to regression testing.
Cons: they require an upfront investment in tools and scripting skills, and frequently changing or exploratory scenarios are still better tested manually.
Some other types of Software Testing
In software testing, we also have some other types of testing that are not part of any of the above-discussed testing, but they are required while testing any software or application:
Smoke Testing
Sanity Testing
Regression Testing
User Acceptance Testing
Exploratory Testing
Adhoc Testing
Security Testing
Globalization Testing
Smoke Testing: In smoke testing, we test an application's basic and critical features before doing one round of deep and rigorous testing, that is, before checking all possible positive and negative values. Analyzing the workflow of the application's core and main functions is the main objective of performing smoke testing.
Sanity Testing: It is used to ensure that all the bugs have been fixed and that no new issues have appeared because of those changes. Sanity testing is unscripted, which means we do not document it. It checks the correctness of the newly added features and components.
Regression Testing: Regression testing is the most commonly used type of software testing. Here, the term regression implies that we have to re-test those parts of the application that were not changed but might have been affected by a change.
Regression testing is the most suitable testing for automation tools. Depending on the project type and the accessibility of resources, regression testing can be similar to retesting.
Whenever a bug is fixed by the developers, testing the other features of the application that might be affected by the bug fix is known as regression testing.
In other words, whenever there is a new release for a project, we can perform regression testing, because a new feature may affect the old features of the earlier releases.
User Acceptance Testing: User acceptance testing (UAT) is done by an individual team of domain experts/customers or by the client. Getting to know the application before accepting the final product is called user acceptance testing.
In user acceptance testing, we analyze the business scenarios and real-time scenarios in a distinct environment called the UAT environment. In this testing, we test the application before UAT sign-off for customer approval.
Exploratory Testing: Whenever the requirements are missing, early iteration is required, the testing team has experienced testers, we have a critical application, or a new test engineer has entered the team, we go for exploratory testing.
To execute exploratory testing, we first go through the application in all possible ways, make a test document, understand the flow of the application, and then test the application.
Adhoc Testing: Testing the application randomly, without following any test cases or documentation, as soon as the build is available is known as Adhoc testing.
It is also called Monkey testing and Gorilla testing. In Adhoc testing, we check the application in contradiction to the client's requirements; that's why it is also known as negative testing.
When an end-user uses the application casually, he or she may detect a bug; however, the specialized test engineer uses the software thoroughly, so he or she may not identify a similar defect.
Security Testing: It is an essential part of software testing, used to determine the weaknesses, risks, or threats in the software application.
The execution of security testing helps us to avoid nasty attacks from outsiders and to ensure our software applications' security.
In other words, security testing is mainly used to ensure that the data will remain safe throughout the software's working process.
Globalization Testing: Another type of software testing is globalization testing. Globalization testing is used to check whether the developed software supports multiple languages or not. Here, the word globalization means readying the application or software for various languages.
Globalization testing is used to make sure that the application will support multiple languages and multiple features.
In present scenarios, we can see enhancement in several technologies as applications are prepared to be used globally.
Conclusion
In this tutorial, we have discussed various types of software testing. But there is still a list of more than 100 categories of testing; however, each kind of testing is not used in all types of projects.
We have discussed the most commonly used types of software testing, like black-box testing, white box testing, functional testing, non-functional testing, regression testing, Adhoc testing, etc.
Also, there are alternate classifications or processes used in diverse organizations, but the general concept is similar everywhere.
These testing types, processes, and execution approaches keep changing when the project, requirements, and scope change.
Selenium
Introduction
Selenium is one of the most widely used open source Web UI (User Interface) automation testing suites. It was originally developed by Jason Huggins in 2004 as an internal tool at ThoughtWorks. Selenium supports automation across different browsers, platforms and programming languages.
Selenium can be easily deployed on platforms such as Windows, Linux, Solaris and Macintosh. Moreover, it supports mobile operating systems such as iOS, Windows Mobile and Android.
Selenium supports a variety of programming languages through the use of drivers specific to each language. Languages supported by Selenium include C#, Java, Perl, PHP, Python and Ruby.
Currently, Selenium WebDriver is most popular with Java and C#. Selenium test scripts can be coded in any of the supported programming languages and can be run directly in most modern web browsers. Browsers supported by Selenium include Internet Explorer, Mozilla Firefox, Google Chrome and Safari.
Selenium can be used to automate functional tests and can be integrated with automation test tools such as Maven, Jenkins, and Docker to achieve continuous testing. It can also be integrated with tools such as TestNG and JUnit for managing test cases and generating reports.
Selenium Features
Selenium is an open source and portable web testing framework.
Selenium IDE provides a playback and record feature for authoring tests without the need to learn a test scripting language.
It can be considered a leading testing platform which helps testers to record their actions and export them as a reusable script with a simple-to-understand and easy-to-use interface.
Selenium supports various operating systems, browsers and programming languages. Following is the list:
Programming Languages: C#, Java, Python, PHP, Ruby, Perl, and JavaScript
Operating Systems: Android, iOS, Windows, Linux, Mac, Solaris.
Browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Edge, Opera, Safari, etc.
It also supports parallel test execution, which reduces time and increases the efficiency of tests.
Selenium can be integrated with frameworks like Ant and Maven for source code compilation.
Selenium can also be integrated with testing frameworks like TestNG for application testing and generating reports.
Selenium requires fewer resources as compared to other automation test tools.
The WebDriver API has been incorporated into Selenium, which is one of the most important changes made to Selenium.
Selenium WebDriver does not require server installation; test scripts interact directly with the browser.
Selenium commands are categorized into different classes, which makes them easier to understand and implement.
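A short, hedged sketch of a Selenium WebDriver test using the Python bindings; it assumes the selenium package and a working Chrome/ChromeDriver setup, and the page and element used here are just the public example.com placeholder site.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a browser session (assumes ChromeDriver is available to Selenium).
driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    # Locate an element and make a simple functional assertion.
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example Domain" in heading.text, "unexpected page heading"
    print("page title:", driver.title)
finally:
    driver.quit()   # always close the browser, even if the assertion fails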
JavaScript testing
JavaScript testing is a crucial part of the software development process that helps ensure the quality and reliability of code. The following are the key components of JavaScript testing:
Test frameworks: A test framework provides a structure for writing and organizing tests. Some popular JavaScript test frameworks include Jest, Mocha, and Jasmine.
Assertion libraries: An assertion library provides a set of functions that allow developers to write assertions about the expected behavior of the code. For example, an assertion might check that a certain function returns the expected result.
Test suites: A test suite is a collection of related tests that are grouped together. The purpose of a test suite is to test a specific aspect of the code in isolation.
Test cases: A test case is a single test that verifies a specific aspect of the code. For example, a test case might check that a function behaves correctly when given a certain input.
Test runners: A test runner is a tool that runs the tests and provides feedback on the results. Test runners typically provide a report on which tests passed and which tests failed.
Continuous Integration (CI): CI is a software development practice where developers integrate code into a shared repository frequently. By using CI, developers can catch issues early and avoid integration problems.
The goal of JavaScript testing is to catch bugs and defects early in the development cycle, before they become bigger problems and impact the quality of the software. Testing also helps to ensure that the code behaves as expected, even when changes are made in the future.
There are different types of tests that can be performed in JavaScript, including unit tests, integration tests, and end-to-end tests. The choice of which tests to write depends on the specific requirements and goals of the project.
Backend Testing
Backend testing checks the server side or database of an application rather than its user interface. For example, while uploading the details of students to the database, the database will store all the details. When there is a need to display the details of the students, it simply fetches all the details and displays them. Here, the application shows only the result, not the process of how it fetches the details.
For implementing backend testing, the backend test engineer should also have some knowledge of the particular server-side or database language. Backend testing is also known as Database Testing.
Importance of Backend Testing: Backend testing is a must because if anything goes wrong or an error happens at the server side, the task will not proceed further, the output will differ, or it may cause problems such as data loss, deadlock, etc.
Types of Backend Testing
The following are the different types of backend testing:
1. Structural Testing
2. Functional Testing
3. Non-Functional Testing
Let's discuss each of these types of backend testing.
1. Structural Testing: Structural testing is the process of validating all the elements that are present inside the data repository and are primarily used for data storage. It involves checking the objects of front-end development against the database mapping objects.
Types of Structural Testing: The following are the different types of structural testing (a short sketch of schema and column checks follows these descriptions):
a. Schema Testing
b. Table and Column Testing
c. Key and Indexes Testing
d. Trigger Testing
e. Stored Procedures Testing
f. Database Server Validation Testing
a) Schema Testing: In schema testing, the tester checks for correctly mapped objects. This is also known as mapping testing.
It ensures that the objects of the front-end and the objects of the back-end are correctly matched or mapped. It mainly focuses on schema objects such as tables, views, indexes, clusters, etc. In this testing, the tester finds issues with mapped objects like tables, views, etc.
b) Table and Column Testing: This ensures that the table and column properties are correctly mapped.
It ensures that the table and column names are correctly mapped on both the front-end side and the server side.
It validates that the datatype of each column is correctly specified.
It ensures the correct naming of the column values of the database.
It detects unused tables and columns.
It validates whether users are able to give the correct input as per the requirement. For example, if we mention the wrong datatype for a column on the server side, different from the front-end, it will raise an error.
c) Key and Indexes Testing: This validates the keys and indexes of the columns.
It ensures that the mentioned key constraints are correctly provided; for example, the primary key for the column is correctly defined as per the given requirement.
It ensures the correct references of foreign keys to the parent table.
It checks the length and size of the indexes.
It ensures the creation of clustered and non-clustered indexes for the table as per the requirement.
It validates the naming conventions of the keys.
d) Trigger Testing: This ensures that the executed triggers fulfil the required conditions of the DML transactions.
It validates whether the triggers update the data correctly when we execute them.
It checks that the coding conventions were followed correctly during the coding phase of the triggers.
It ensures that the trigger functionality for update, delete, and insert works correctly.
e) Stored Procedures Testing: In this, the tester checks the correctness of the stored procedure results.
It checks whether the stored procedure contains valid conditions for looping and conditional statements as per the requirement.
It validates the exception and error handling in the stored procedure.
It detects unused stored procedures.
It validates the cursor operations.
It validates whether the TRIM operations are correctly applied or not.
It ensures that the required triggers are implicitly invoked by executing the stored procedures.
f) Database Server Validation Testing: This validates the database configuration details as per the requirements.
It validates that the transactions of the data are made as per the requirements.
It validates the user's authentication and authorization. For example, if the wrong user authentication is given, it will raise an error.
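As a small illustration of structural (schema and column) checks, the following Python sketch uses the built-in sqlite3 module against an invented students table; the expected column list is an assumption standing in for whatever the front-end mapping actually requires.

import sqlite3

conn = sqlite3.connect(":memory:")        # throwaway in-memory database for the example
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# Schema / column testing: compare the actual columns and datatypes
# with what the front end expects to be mapped.
expected = {"id": "INTEGER", "name": "TEXT", "email": "TEXT"}
actual = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(students)")}

assert actual == expected, f"schema mismatch: {actual}"
print("students table matches the expected mapping")
conn.close()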
2. Functional Testing: Functional testing is the process of validating that the transactions and operations made by the end-users meet the requirements.
Types of Functional Testing: The following are the different types of functional testing:
a. Black Box Testing
b. White Box Testing
Black Box Testing is the process of checking the functionalities of the integration of the database.
This testing is carried out at an early stage of development and hence is very helpful in reducing errors.
It consists of various techniques such as boundary analysis, equivalence partitioning, and cause-effect graphing.
These techniques are helpful in checking the functionality of the database.
The best example is the user login page. If the entered username and password are correct, it will allow the user in and redirect to the next page.
White Box Testing is the process of validating the internal structure of the database.
Here, the specified details are hidden from the user.
The database triggers, functions, views, queries, and cursors are checked in this testing.
It validates the database schema, database tables, etc.
Here the coding errors in the triggers can be easily found.
Errors in the queries can also be handled in white box testing, and hence internal errors are easily eliminated.
3. Non-Functional Testing: Non-functional testing is the process of performing load testing and stress testing, and checking that the minimum system requirements needed to meet the requirements are in place. It also detects risks and errors and optimizes the performance of the database.
a. Load Testing
b. Stress Testing
a) Load Testing:
Load testing involves testing the performance and scalability of the database.
It determines how the software behaves when it is being used by many users simultaneously.
It focuses on good load management.
For example, if the web application is accessed by multiple users at the same time and it does not create any traffic problems, then the load testing is successfully completed.
b) Stress Testing:
Stress testing is also known as endurance testing. Stress testing is a testing process that is performed to identify the breakpoint of the system.
In this testing, the application is loaded until the stage at which the system fails.
This point is known as the breakpoint of the database system.
It evaluates and analyzes the software after the system failure. In case of error detection, it will display the error messages.
For example, if users enter the wrong login information, then it will throw an error message.
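A hedged sketch of a very small load test in Python: it fires a number of concurrent requests at a hypothetical endpoint (http://localhost:8000/health is an assumption) and reports how many succeeded and how long they took; the third-party requests package is assumed to be installed.

import time
from concurrent.futures import ThreadPoolExecutor
import requests   # third-party HTTP client, assumed installed

URL = "http://localhost:8000/health"   # hypothetical endpoint under test
CONCURRENT_USERS = 20

def one_request(_):
    start = time.time()
    ok = False
    try:
        ok = requests.get(URL, timeout=5).status_code == 200
    except requests.RequestException:
        pass
    return ok, time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_request, range(CONCURRENT_USERS)))

successes = sum(1 for ok, _ in results if ok)
avg_time = sum(t for _, t in results) / len(results)
print(f"{successes}/{CONCURRENT_USERS} succeeded, average response {avg_time:.3f}s")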
Backend Testing Process
The following are some of the factors validated during backend testing:
Performance Check: It validates the performance of each individual test and the system behavior.
Sequence Testing: Backend testing validates that the tests are distributed according to priority.
Database Server Validations: This ensures that the data fed in for the tests is correct.
Functions Testing: In this, the test validates the consistency of the transactions of the database.
Keys and Indexes: In this, the test ensures that the correct constraints and the rules for constraints and indexes are followed properly.
Data Integrity Testing: It is a technique in which data is verified in the database to check whether it is accurate and functions as per the requirements.
Database Tables: It ensures that the created tables and the queries for the output provide the expected result.
Database Triggers: Backend testing validates the correctness of the functionality of triggers.
Stored Procedures: Backend testing validates that the functions, return statements, calls to other events, etc. are correctly implemented as per the requirements.
Schema: Backend testing validates that the data is organized in a correct way as per the business requirement and confirms the outcome.
Backend Testing Tools: Some commonly used backend testing tools include the following:
Empirix-TEST Suite:
It was acquired by Oracle from Empirix. It is a load testing tool.
It validates the scalability along with the functionality of the application under heavy load.
Acquisition of the Empirix-TEST suite may prove effective for delivering the application with improved quality.
SQLMap:
It is an open-source tool.
It is used for performing penetration testing and to automate the process of detection.
Powerful detection of errors leads to efficient testing and results in the expected behavior of the requirements.
phpMyAdmin:
This is a software tool, and it is written in PHP.
It is developed to handle databases, and we can execute test queries to ensure the correctness of the results as a whole and even for a separate table.
HammerDB:
It is an open-source tool for load testing.
It validates the activity replay functionality for the Oracle database.
It is based on industry standards such as the TPC-C and TPC-H benchmarks.
Test-driven development
Test-Driven Development (TDD) is a software development approach in which test cases are developed to specify and validate what the code will do. In simple terms, test cases for each functionality are created and tested first; if a test fails, new code is written in order to pass the test, keeping the code simple and bug-free.
Test-Driven Development starts with designing and developing tests for every small functionality of an application. The TDD approach instructs developers to write new code only if an automated test has failed. This avoids duplication of code.
The simple concept of TDD is to write and correct the failing tests before writing new code (before development). This helps to avoid duplication of code, as we write a small amount of code at a time in order to pass the tests. (Tests are nothing but requirement conditions that we need to test and fulfill.)
Test-Driven Development is a process of developing and running automated tests before the actual development of the application. Hence, TDD is sometimes also called Test First Development.
How to perform a TDD test: The following steps define how to perform TDD (a small worked sketch follows the list):
Add a test.
Run all tests and see if any new test fails.
Write some code.
Run tests and refactor code.
Repeat.
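A minimal TDD-style sketch in Python: the tests are written first against a not-yet-implemented is_even function (an invented example), they fail, and then just enough code is added to make them pass.

import unittest

# Step 3 of the cycle: the simplest code that makes the failing tests pass.
# (On the first run of the cycle this function did not exist yet, so the
#  tests below failed -- that failure is what drives writing the code.)
def is_even(n):
    return n % 2 == 0

class TestIsEven(unittest.TestCase):
    # Step 1: the tests are written first and express the requirement.
    def test_even_number(self):
        self.assertTrue(is_even(4))

    def test_odd_number(self):
        self.assertFalse(is_even(7))

if __name__ == "__main__":
    unittest.main()   # steps 2 and 4: run the tests, then refactor and repeat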
TDD vs. Traditional Testing
Below are the main differences between test-driven development and traditional testing:
The TDD approach is primarily a specification technique. It ensures that your source code is thoroughly tested at the confirmatory level.
With traditional testing, a successful test finds one or more defects. It is the same with TDD: when a test fails, you have made progress because you know that you need to resolve the problem.
TDD ensures that your system actually meets the requirements defined for it. It helps to build your confidence in your system.
In TDD, more focus is on the production code that verifies whether testing will work properly. In traditional testing, more focus is on test case design: whether the test will show the proper/improper execution of the application in order to fulfill the requirements.
In TDD, you achieve 100% coverage: every single line of code is tested, unlike in traditional testing.
The combination of both traditional testing and TDD leads to the importance of testing the system rather than perfection of the system.
In Agile Modeling (AM), you should "test with a purpose". You should know why you are testing something and to what level it needs to be tested.
Acceptance TDD (ATDD): With ATDD you write a single acceptance test. This test fulfills the requirement of the specification or satisfies the behavior of the system. After that, you write just enough production/functionality code to fulfill that acceptance test. The acceptance test focuses on the overall behavior of the system. ATDD is also known as Behavioral Driven Development (BDD).
Developer TDD: With Developer TDD you write a single developer test, i.e. a unit test, and then just enough production code to fulfill that test. The unit test focuses on every small functionality of the system. Developer TDD is simply called TDD.
The main goal of ATDD and TDD is to specify detailed, executable requirements for your solution on a just-in-time (JIT) basis. JIT means taking into consideration only those requirements that are needed in the system, which increases efficiency.
REPL-driven development
REPL-driven development (Read-Eval-Print Loop) is an interactive programming approach that allows developers to execute code snippets and see their results immediately. This enables developers to test their code quickly and iteratively, and helps them to understand the behavior of their code as they work.
In a REPL environment, developers can type in code snippets, and the environment will immediately evaluate the code and return the results. This allows developers to test small bits of code and quickly see the results, without having to create a full-fledged application.
REPL-driven development is commonly used in dynamic programming languages such as Python, JavaScript, and Ruby. Some popular REPL environments include the Python REPL, the Node.js REPL, and IRB (Interactive Ruby).
Increased efficiency: The immediate feedback provided by a REPL environment allows developers to test and modify their code quickly, without having to run a full-fledged application.
Improved understanding: By being able to see the results of code snippets immediately, developers can better understand how the code works and identify any issues early on.
Increased collaboration: REPL-driven development makes it easy for developers to share code snippets and collaborate on projects, as they can demonstrate the behavior of the code quickly and easily.
Overall, REPL-driven development is a useful tool for developers looking to improve their workflow and increase their understanding of their code. By providing an interactive environment for testing and exploring code, REPL-driven development can help developers to be more productive and efficient.
Deployment of the system: In DevOps, deployment systems are responsible for automating the release of software updates and applications from development to production. Some popular deployment systems include:
Jenkins: an open-source automation server that provides plugins to support building, deploying, and automating any project.
Ansible: an open-source platform that provides a simple way to automate software provisioning, configuration management, and application deployment.
Docker: a platform that enables developers to create, deploy, and run applications in containers.
Kubernetes: an open-source system for automating deployment, scaling, and management of containerized applications.
AWS CodeDeploy: a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, and on-premises servers.
Azure DevOps: a Microsoft product that provides an end-to-end DevOps solution for developing, delivering, and deploying applications on multiple platforms.
Virtualization stacks
In DevOps, virtualization refers to the creation of virtual machines, containers, or environments that allow multiple operating systems to run on a single physical machine. The following are some of the commonly used virtualization stacks in DevOps:
Docker: An open-source platform for automating the deployment, scaling, and management of containerized applications.
Kubernetes: An open-source platform for automating the deployment, scaling, and management of containerized applications, commonly used in conjunction with Docker.
VirtualBox: An open-source virtualization software that allows multiple operating systems to run on a single physical machine.
VMware: A commercial virtualization software that provides a comprehensive suite of tools for virtualization, cloud computing, and network and security management.
Hyper-V: Microsoft's hypervisor technology that enables virtualization on Windows-based systems.
These virtualization stacks play a crucial role in DevOps by allowing developers to build, test, and deploy applications in isolated, consistent environments, while reducing the costs and complexities associated with physical infrastructure.
Executing code on the client
Several methods are used to execute code on client devices:
Client-side scripting languages: JavaScript, HTML, and CSS are commonly used client-side scripting languages that run in a web browser and allow developers to create dynamic, interactive web pages.
Remote execution tools: Tools such as SSH, Telnet, or Remote Desktop Protocol (RDP) allow developers to remotely execute commands and scripts on client devices.
Configuration management tools: Tools such as Ansible, Puppet, or Chef use agent-based or agentless architectures to manage and configure client devices, allowing developers to execute code and scripts remotely.
Mobile apps: Mobile applications can also run code on client devices, allowing developers to create dynamic, interactive experiences for users.
These methods are used in DevOps to automate various tasks, such as application deployment, software updates, or system configuration, on client devices. By executing code on the client side, DevOps teams can improve the speed, reliability, and security of their software delivery process.
Puppet Master: The Puppet master handles all the configuration-related processes in the form of Puppet code. It is a Linux-based system on which the Puppet master software is installed; the Puppet master must run on Linux. It uses the Puppet agent to apply the configuration to nodes. This is also the place where SSL certificates are checked and signed.
Puppet Slave or Agent: Puppet agents are the real working systems, used by the client. The agent is installed on the client machine and is maintained and managed by the Puppet master. Agents have a puppet agent service running inside them. The agent machine can be configured on any operating system such as Windows, Linux, Solaris, or macOS.
Config Repository: The config repository is the storage area where all the server- and node-related configurations are stored, and we can pull these configurations as per requirements.
Facts: Facts are key-value data pairs. They contain information about the node or the master machine. They represent a Puppet client's state, such as operating system, network interface, IP address, uptime, and whether the client machine is virtual or not. These facts are used for determining the present state of any agent; changes on any target machine are made based on facts. Puppet's facts can be predefined or customized.
Catalog: The entire configuration and the manifest files that are written in Puppet are converted into a compiled format. This compiled format is known as a catalog, and then we can apply this catalog to the target machine.
The Puppet master-agent workflow performs the following functions:
First of all, an agent node sends facts to the master or server and requests a catalog.
The master or server compiles and returns the catalog for that node with the help of the information it has access to.
Then the agent applies the catalog to the node by checking every resource mentioned in the catalog. If it identifies resources that are not in their desired state, it makes the necessary adjustments to fix them. Or, in no-op mode, it determines which adjustments would be required to reconcile the catalog.
And finally, the agent sends a report back to the master.
Puppet Master-Slave Communication: The Puppet master and slave communicate via a secure encrypted channel using SSL (Secure Socket Layer).
Puppet Blocks
Puppet provides the flexibility to integrate reports with third-party tools using Puppet APIs.
The four types of Puppet building blocks are:
Resources
Classes
Manifests
Modules
Puppet Resources: Puppet resources are the building blocks of Puppet. Resources are the inbuilt functions that run at the back end to perform the required operations in Puppet.
Puppet Classes: A combination of different resources can be grouped together into a single unit called a class.
Puppet Manifests: A manifest is a file written in the Puppet DSL with the .pp extension (the .pp extension stands for Puppet program). Manifests contain definitions or declarations of Puppet classes and resources; a module keeps its manifests together in a manifests directory.
Puppet Modules: Modules are collections of files and directories such as manifests and class definitions. They are the reusable and sharable units in Puppet.
For example, the MySQL module installs and configures MySQL, the Jenkins module manages Jenkins, and so on.
Ansible
Ansible is a simple open source IT engine which automates application deployment, intra-service orchestration, cloud provisioning and many other IT tasks. Ansible is easy to deploy because it does not use any agents or custom security infrastructure.
Ansible uses playbooks to describe automation jobs, and playbooks use a very simple language, YAML (a human-readable data serialization language commonly used for configuration files, but usable in many applications where data is being stored), which is very easy for humans to understand, read and write. The advantage is that even IT infrastructure support staff can read and understand a playbook and debug it if needed.
Ansible is designed for multi-tier deployment. Ansible does not manage one system at a time; it models IT infrastructure by describing how all of your systems are interrelated. Ansible is completely agentless, which means Ansible works by connecting to your nodes through SSH (by default). If you want another connection method, such as Kerberos, Ansible gives you that option.
After connecting to your nodes, Ansible pushes small programs called "Ansible Modules". Ansible runs those modules on your nodes and removes them when finished. Ansible manages your inventory in simple text files (the hosts file). Ansible uses the hosts file, where one can group the hosts and control the actions on a specific group in the playbooks.
Sample Hosts File
This is the content of a hosts file -
#File name: hosts
#Description: Inventory file for your application. Defines machine type abc-node to deploy specific artifacts
# Defines machine type def-node to upload metadata.
[abc-node]
#server1 ansible_host = <target machine for DU deployment> ansible_user = <Ansible user> ansible_connection = ssh
server1 ansible_host = <your host name> ansible_user = <your unix user> ansible_connection = ssh
[def-node]
#server2 ansible_host = <target machine for artifact upload> ansible_user = <Ansible user> ansible_connection = ssh
server2 ansible_host = <host> ansible_user = <user> ansible_connection = ssh
Such information typically includes the exact versions and updates that have been applied to installed software packages and the locations and network addresses of hardware devices. For example, if you want to install a new version of WebLogic/WebSphere server on all of the machines present in your enterprise, it is not feasible for you to manually go and update each and every machine.
You can install WebLogic/WebSphere in one go on all of your machines with Ansible playbooks and an inventory written in the simplest way. All you have to do is list the IP addresses of your nodes in the inventory and write a playbook to install WebLogic/WebSphere. Run the playbook from your control machine and it will be installed on all your nodes.
Ansible Workflow
Ansible works by connecting to your nodes and pushing out small programs called Ansible modules to them. Ansible then executes these modules and removes them when finished. The library of modules can reside on any machine, and there are no daemons, servers, or databases required.
In the Ansible workflow, the Management Node is the controlling node that controls the entire execution of the playbook. The inventory file provides the list of hosts where the Ansible modules need to be run. The Management Node makes an SSH connection, executes the small modules on the host machines, and installs the software. Ansible removes the modules once they have done their job: it connects to the host machine, executes the instructions, and if the run is successful, removes the code that was copied to the host machine.
Terms and their explanation:
Ansible Server - The machine where Ansible is installed and from which all tasks and playbooks will be executed.
Module - A command or set of similar commands which is executed on the client side.
Task - A section which consists of a single procedure to be completed.
Role - A way of organizing tasks and related files to be later called in a playbook.
Fact - Information fetched from the client system into global variables with the gather_facts operation.
Inventory - A file containing the data regarding the Ansible client servers.
Play - The execution of a playbook.
Handler - A task which is called only if a notifier is present.
Notifier - The section attributed to a task which calls a handler if the output is changed.
Tag - A name set on a task that can be used later to run just that specific task or group of tasks.
Ansible Architecture
The Ansible orchestration engine interacts with a user who writes the Ansible playbook, executes the orchestration, and interacts with the services of a private or public cloud and a configuration management database. Its main components are as follows:
Inventory: The inventory is a list of nodes or hosts, with their IP addresses, databases, servers, etc., which need to be managed.
APIs: The Ansible APIs work as the transport for the public or private cloud services.
Modules: Ansible connects to the nodes and pushes out the Ansible module programs. Ansible executes the modules and removes them when finished. These modules can reside on any machine; no database or servers are required here. You can work with the text editor, terminal, or version control system of your choice to keep track of changes in the content.
Plugins: A plugin is a piece of code that extends the core functionality of Ansible. There are many useful plugins, and you can also write your own.
Playbooks: Playbooks consist of your written code; they are written in YAML format and describe the tasks to be executed through Ansible. You can launch tasks synchronously and asynchronously with playbooks.
Hosts: In the Ansible architecture, hosts are the node systems which are automated by Ansible; they can be any machine, such as RedHat, Linux, Windows, etc.
Networking: Ansible is used to automate different networks, and it uses a simple, secure, and powerful agentless automation framework for IT operations and development. It uses a data model, separated from the Ansible automation engine, that spans different hardware quite easily.
Cloud: A cloud is a network of remote servers on which you can store, manage, and process data. These servers are hosted on the internet, and the data is stored remotely rather than on a local server. You can launch resources and instances in the cloud, connect them to your servers, and operate your tasks remotely.
CMDB
A CMDB (Configuration Management Database) is a type of repository which acts as a data warehouse for the IT installations.
Puppet Components
Following are the key components of Puppet:
Manifests
Modules
Resources
Facter
MCollective
Catalogs
Classes
Nodes
Resources
Resources are the basic unit of system configuration modeling. These are the predefined functions that run at the backend to perform the necessary operations in Puppet. Each Puppet resource defines certain elements of the system, such as a particular service or package.
Facter
Facter collects facts, or important information, about the Puppet slave. Facts are key-value data pairs containing information about the node or the master machine. They represent a Puppet client's state, such as operating system, network interface, IP address, uptime, and whether the client machine is virtual or not. These facts are used for determining the present state of any agent; changes on any target machine are made based on facts. Puppet's facts can be predefined or customized.
MCollective
MCollective is a framework that enables parallel execution of several jobs on multiple slaves. This framework performs several functions, such as:
It is used to interact with clusters of Puppet slaves, whether they are in small groups or very large deployments.
To transmit demands, it uses a broadcast model: all slaves receive all requests at the same time, requests have filters attached, and only slaves matching the filter act on the requests.
It is used to call remote slaves with the help of simple command-line tools.
It is used to write custom reports about your infrastructure.
Catalogs
The entire configuration and the manifest files that are written in Puppet are converted into a compiled format. This compiled format is known as a catalog, and we can then apply this catalog to the target machine. All the required states of the slave's resources are described in the catalog.
Class
Like other programming languages, Puppet also supports classes to organize the code in a better way. A Puppet class is a collection of various resources that are grouped into a single unit.
Nodes
The nodes are the machines, i.e. all the clients and servers, where the Puppet slaves are installed and which are managed through Puppet.
Deployment tools
Chef
Chef is an open source technology developed by Opscode. Adam Jacob, co-founder of Opscode, is known as the founder of Chef. This technology uses Ruby encoding to develop basic building blocks like recipes and cookbooks. Chef is used in infrastructure automation and helps in reducing manual and repetitive tasks for infrastructure management. Chef has its own conventions for the different building blocks which are required to manage and automate infrastructure.
Why Chef?
Chef is a configuration management technology used to automate infrastructure provisioning. It is developed on the basis of the Ruby DSL language. It is used to streamline the task of configuring and managing a company's servers. It has the capability to integrate with any cloud technology. In DevOps, we use Chef to deploy and manage servers and applications in-house and on the cloud.
Features of Chef: Following are the most prominent features of Chef:
Chef uses the popular Ruby language to create a domain-specific language.
Chef does not make assumptions about the current status of a node. It uses its own mechanisms to get the current status of the machine.
Chef is ideal for deploying and managing the cloud server, storage, and software.
Advantages of Chef
Chef offers the following advantages:
Lower barrier for entry - As Chef uses the native Ruby language for configuration, a standard configuration language, it can be easily picked up by anyone having some development experience.
Excellent integration with cloud - Using the knife utility, it can be easily integrated with any of the cloud technologies. It is the best tool for an organization that wishes to distribute its infrastructure across a multi-cloud environment.
Disadvantages of Chef
Some of the major drawbacks of Chef are as follows:
One of the big disadvantages of Chef is the way cookbooks are controlled. They need constant attention so that the people who are working do not mess up other people's cookbooks.
Only Chef Solo is available.
In the current situation, it is only a good fit for the AWS cloud.
It is not very easy to learn if the person is not familiar with Ruby.
Documentation is still lacking.
Chef - Architecture
Chef works on a three-tier client-server model wherein the working units, such as cookbooks, are developed on the Chef workstation. From command line utilities such as knife, they are uploaded to the Chef server, and all the nodes which are present in the architecture are registered with the Chef server.
In order to get a working Chef infrastructure in place, we need to set up multiple things in sequence. The setup has the following components:
Chef Workstation
This is the location where all the configurations are developed. The Chef workstation is installed on the local machine.
Chef Server
This works as the centralized working unit of the Chef setup, where all the configuration files are uploaded post development. There are different kinds of Chef server; some are hosted Chef servers whereas some are built on-premise.
Chef Nodes
They are the actual machines which are going to be managed by the Chef server. All the nodes can have different kinds of setup as per the requirement. The Chef client is the key component of all the nodes, which helps in setting up the communication between the Chef server and the Chef node. The other component of a Chef node is Ohai, which helps in getting the current state of the node at a given point of time.
Salt Stack
Salt Stack is an open-source configuration management software and remote execution engine.Salt is a command-
line tool. While written in Python, SaltStack configuration management islanguage agnostic and simple. Salt
platform uses the push model for executing commands via theSSH protocol. The default configuration system is
YAML and Jinja templates. Salt is primarilycompeting with Puppet, Chef and Ansible.
Salt provides many features when compared to other competing tools. Some of these importantfeatures are listed
below.
Fault tolerance − Salt minions can connect to multiple masters at one time byconfiguring the master
configuration parameter as a YAML list of all the availablemasters. Any master can direct commands to the
Salt infrastructure.
Flexible − The entire management approach of Salt is very flexible. It can beimplemented to follow the
most popular systems management models such as Agent andServer, Agent-only, Server-only or all of the
above in the same environment.
Scalable Configuration Management − SaltStack is designed to handle ten thousandminions per master.
Parallel Execution model − Salt can enable commands to execute remote systems in a parallel manner.
Python API − Salt provides a simple programming interface and it was designed to bemodular and easily
extensible, to make it easy to mold to diverse applications.
Easy to Setup − Salt is easy to setup and provides a single remote execution architecturethat can manage
the diverse requirements of any number of servers.
Language Agnostic − Salt state configuration files, the templating engine and file types support any type of language.
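To make the Python API feature above concrete, the following is a minimal sketch, assuming the salt package is installed and the script runs on the Salt master with sufficient privileges; the "web*" target pattern is a hypothetical minion naming convention used only for illustration.

import salt.client

# Create a client that talks to the local Salt master.
local = salt.client.LocalClient()

# Ping every connected minion; returns a dict of {minion_id: True}.
print(local.cmd('*', 'test.ping'))

# Run a shell command on minions matching a target pattern
# ("web*" is a hypothetical naming convention used here for illustration).
print(local.cmd('web*', 'cmd.run', ['uptime']))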
Benefits of SaltStack
Being simple as well as a feature-rich system, Salt provides many benefits, which can be summarized as below −
Robust − Salt is a powerful and robust configuration management framework that works across tens of thousands of systems.
Authentication − Salt manages simple SSH key pairs for authentication.
Secure − Salt manages secure data using an encrypted protocol.
Fast − Salt uses a very fast, lightweight communication bus that provides the foundation for the remote execution engine.
Virtual Machine Automation − The Salt Virt Cloud Controller capability is used for automation.
Infrastructure as data, not code − Salt provides a simple deployment, model-driven configuration management and command execution framework.
Introduction to ZeroMQ
Salt is based on the ZeroMQ library, an embeddable networking library. It is a lightweight and fast messaging library. The basic implementation is in C/C++, and native implementations for several languages, including Java and .Net, are available. ZeroMQ provides broker-less, peer-to-peer message processing and allows you to design a complex communication system easily.
ZeroMQ comes with the following five basic patterns −
Synchronous Request/Response − Used for sending a request and receiving subsequent replies for each one sent (a minimal sketch appears at the end of this section).
Asynchronous Request/Response − The requestor initiates the conversation by sending a Request message and waits for a Response message. The provider waits for incoming Request messages and replies with Response messages.
Publish/Subscribe − Used for distributing data from a single process (e.g. a publisher) to multiple recipients (e.g. subscribers).
Push/Pull − Used for distributing data to connected nodes.
Exclusive Pair − Used for connecting two peers together, forming a pair.
ZeroMQ is a highly flexible networking tool for exchanging messages among clusters, cloud and other multi-system environments. ZeroMQ is the default transport library used in SaltStack.
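As an illustration of the synchronous Request/Response pattern described above, here is a minimal sketch using the pyzmq Python bindings (assumed installed); both sockets are kept in one process and the port number is arbitrary, purely to keep the example self-contained.

import zmq

ctx = zmq.Context()

# "Provider" side: a REP socket bound to a local endpoint, waiting for requests.
rep = ctx.socket(zmq.REP)
rep.bind("tcp://127.0.0.1:5555")

# "Requestor" side: a REQ socket connected to the same endpoint.
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5555")

req.send_string("ping")                      # requestor sends a request
print("provider got:", rep.recv_string())    # provider receives it
rep.send_string("pong")                      # provider replies
print("requestor got:", req.recv_string())   # requestor receives the reply

ctx.destroy()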
SaltStack – Architecture
The architecture of SaltStack is designed to work with any number of servers, from local network systems to deployments across different data centers. The architecture is a simple server/client model with the needed functionality built into a single set of daemons. The main components of the SaltStack architecture are described below.
SaltMaster − SaltMaster is the master daemon. A SaltMaster is used to send commands and configurations to the Salt minions. A single master can manage multiple minions.
SaltMinions − SaltMinion is the slave daemon. A Salt minion receives commands and configuration from the SaltMaster.
Execution − Execution modules and ad hoc commands are run from the command line against one or more minions, with support for real-time monitoring.
Formulas − Formulas are pre-written Salt States. They are as open-ended as Salt States themselves and can be used for tasks such as installing a package, configuring and starting a service, setting up users or permissions, and many other common tasks.
Grains − Grains is an interface that provides information specific to a minion. The information available through the grains interface is static. Grains are loaded when the Salt minion starts, so the information in grains does not change at runtime. Grains information could be, for example, the running kernel or the operating system. It is case insensitive. (See the sketch after this list for querying grains and pillar data.)
Pillar − A pillar is an interface that generates and stores highly sensitive data specific to a particular minion, such as cryptographic keys and passwords. It stores data as key/value pairs, and the data is managed in a similar way to the Salt State Tree.
Top File − Matches Salt states and pillar data to Salt minions.
Runners − Runners are modules located on the SaltMaster that perform tasks such as checking job status and connection status, reading data from external APIs, querying connected Salt minions, and more.
Returners − Returns data from Salt minions to another system.
Reactor − It is responsible for triggering reactions when events occur in your SaltStack environment.
SaltCloud − Salt Cloud provides a powerful interface to interact with cloud hosts.
SaltSSH − Run Salt commands over SSH on systems without using Salt minion.
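To illustrate how grains and pillar data can be queried programmatically, here is a minimal sketch using the same LocalClient interface as before, assuming the salt package is installed and the code runs on the master; the actual pillar contents depend entirely on your own pillar tree.

import salt.client

local = salt.client.LocalClient()

# Grains: static facts loaded when each minion starts (OS, kernel, etc.).
print(local.cmd('*', 'grains.item', ['os', 'kernel']))

# Pillar: sensitive, per-minion data managed on the master; the keys returned
# depend on how your pillar tree is defined.
print(local.cmd('*', 'pillar.items'))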
Docker
Docker is a container management service. The keywords of Docker are develop, ship and run anywhere. The whole idea of Docker is for developers to easily develop applications, ship them into containers, and then deploy them anywhere. The initial release of Docker was in March 2013 and since then, it has become the buzzword for modern-world development, especially for Agile-based projects.
Features of Docker
Docker has the ability to reduce the size of development by providing a smaller footprint of the operating system via containers.
With containers, it becomes easier for teams across different units, such as development, QA and Operations, to work seamlessly across applications.
You can deploy Docker containers anywhere, on any physical and virtual machines and even on the cloud.
Since Docker containers are pretty lightweight, they are very easily scalable.
Components of Docker
Docker has the following components −
Docker for Mac − It allows one to run Docker containers on the Mac OS.
Docker for Linux − It allows one to run Docker containers on the Linux OS.
Docker for Windows − It allows one to run Docker containers on the Windows OS.
Docker Engine − It is used for building Docker images and creating Docker containers.
Docker Hub − This is the registry which is used to host various Docker images.
Docker Compose − This is used to define applications using multiple Docker containers.
Docker architecture
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers. A short sketch of a client talking to the daemon follows.
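The snippet below is a minimal sketch of that client/daemon interaction, using the Docker SDK for Python (the docker package, assumed installed alongside a running Docker daemon); the hello-world image is used only as an example.

import docker

# from_env() picks up the daemon address from the environment (typically the
# local UNIX socket), just as the docker CLI does.
client = docker.from_env()

print(client.ping())     # True if the daemon answers over the API
print(client.version())  # version details returned by the daemon

# Ask the daemon to run a container and return its output.
output = client.containers.run("hello-world", remove=True)
print(output.decode())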
The Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
Docker Desktop
Docker Desktop is an easy-to-install application for your Mac, Windows or Linux environment that enables you to build and share containerized applications and microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper.
Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry. When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
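The same pull and push flow can also be driven through the Docker SDK for Python; the following is a minimal sketch, assuming the docker package is installed, and noting that the private registry address is purely hypothetical and that pushing assumes you are already logged in to that registry.

import docker

client = docker.from_env()

# Pull resolves the image from the configured registry (Docker Hub by default).
image = client.images.pull("alpine", tag="latest")
print(image.tags)

# Re-tag the image for a (hypothetical) private registry, then push it there.
image.tag("registry.example.com/demo/alpine", tag="latest")
for line in client.images.push("registry.example.com/demo/alpine",
                               tag="latest", stream=True, decode=True):
    print(line)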
Docker objects
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects.
This section is a brief overview of some of those objects.
Images
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run. You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
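As a sketch of that build flow, the snippet below uses the Docker SDK for Python to build an image from a Dockerfile assumed to exist in the current directory; the tag name myapp:latest is hypothetical.

import docker

client = docker.from_env()

# Each Dockerfile instruction becomes a layer; unchanged layers are reused
# from the build cache on subsequent builds.
image, build_logs = client.images.build(path=".", tag="myapp:latest")

# The build output is returned as a stream of dictionaries.
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

print(image.id)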
Containers
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
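The following is a minimal sketch of that container lifecycle with the Docker SDK for Python, assuming the docker package is installed; the image and command are illustrative only.

import docker

client = docker.from_env()

# Create and start a container from an image, detached so the call returns
# immediately with a Container object.
container = client.containers.run("alpine", "sleep 60", detach=True)
print(container.id, container.status)

# Stop and remove it; once removed, any state not written to a volume is gone.
container.stop()
container.remove()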
Introduction
Introduction, Agile development model, DevOps, and ITIL. DevOps process and Continuous Delivery, Release management, Scrum, Kanban, delivery pipeline, bottlenecks, examples.
PART A:
1) Explain briefly about SDLC.
2) What is the waterfall model?
3) What is the agile model?
4) Why DevOps?
5) What is DevOps?
6) What is ITIL?
7) What is continuous development?
8) What is continuous integration?
9) What is continuous testing?
10) What is continuous delivery?
11) What is continuous deployment?
12) What is Scrum?
13) What is Kanban?
PART B:
1) What is the difference between agile and DevOps?
2) What are the differences between the agile and waterfall models?
3) Explain the DevOps process flow in detail.
4) What is continuous delivery and how does it work?
5) Explain the components of the delivery pipeline.
DevOps Lifecycle for Business Agility, DevOps, and Continuous Testing. DevOps influence on Architecture: Introducing software architecture, The monolithic scenario, Architecture rules of thumb, The separation of concerns, Handling database migrations, Microservices, and the data tier, DevOps, architecture, and resilience.
PART A:
Introduction to project management: The need for source code control, The history of source code management, Roles and code, source code management system and migrations, Shared authentication, Hosted Git servers, Different Git server implementations, Docker intermission, Gerrit, The pull request model, GitLab.
PART A:
1) What is the need for source code control in DevOps?
2) What are the roles and codes in DevOps?
3) What are the benefits of source code management in DevOps?
4) What is shared authentication in DevOps?
5) What is the pull request model in DevOps?
PART B:
1) What is version control? Explain the types of version control systems and the benefits of version control systems.
2) What is Gerrit? Explain the architecture of Gerrit.
3) What is Docker intermission, and what are the differences between Docker and a virtual machine?
4) Explain Gerrit and its architecture.
Integrating the system: Build systems, Jenkins build server, Managing build dependencies, Jenkins plugins, and file system layout, The host server, Build slaves, Software on the host, Triggers, Job chaining and build pipelines, Build servers and infrastructure as code, Building by dependency order, Build phases, Alternative build servers, Collating quality measures.
PART A:
1) What is the Git plugin?
2) What is managing build dependencies in DevOps?
3) What are build pipelines?
4) What is job chaining?
5) Explain collating quality measures.
6) What are alternative build servers?
PART B:
Testing Tools and automation: Various types of testing, Automation of testing Pros and cons, Selenium - Introduction, Selenium features, JavaScript testing, Testing backend integration points, Test-driven development, REPL-driven development. Deployment of the system: Deployment systems, Virtualization stacks, code execution at the client, Puppet master and agents, Ansible, Deployment tools: Chef, Salt Stack and Docker.
PART A:
PART B:
UNIT-1
1) What is the main philosophy of Agile development?
a. To deliver working software frequently
b. To prioritize customer satisfaction
c. To respond to change over following a plan
d. All of the above
Answer: d. All of the above
10) Which ITIL process is concerned with the delivery of IT services to customers?
a. Incident Management
b. Service Delivery
c. Service Level Management
d. Capacity Management
12) Which ITIL process is concerned with the management of IT service continuity?
a. Incident Management
b. Service Delivery
c. Service Level Management
d. Continuity Management
Answer: d. Continuity Management
UNIT-2
1) What are the main stages of the DevOps lifecycle?
a) Development, testing, deployment
b) Plan, code, deploy
c) Continuous integration, continuous delivery, continuous deployment
d) Plan, build, test, release, deploy, operate, monitor
Answer: d
15) What are the main challenges of handling database migrations in DevOps?
a) Data loss and downtime
b) Incompatibility with different database systems
c) Lack of automation
d) All of the above
Answer: d
2) What are the main benefits of using source code management in DevOps?
a) Improved collaboration and coordination between developers
b) Increased visibility into code changes
c) Better organization of source code
d) All of the above
Answer: d
3) What are the main tools used for source code management in DevOps?
a) Git
b) Subversion
c) Mercurial
d) All of the above
Answer: a
5) How does using source code management impact collaboration between developers in DevOps?
a) Collaboration is not impacted by source code management
b) Collaboration is made more complex because of the need to manage code changes
c) Collaboration is simplified because code changes are tracked and can be easily reviewed
d) Collaboration is made easier because code changes are automatically compiled
Answer: c
12) What are the main challenges associated with shared authentication in DevOps?
a) Lack of control over authentication credentials
b) Increased risk of unauthorized access
c) Increased complexity of systems
d) All of the above
Answer: d
14) How does shared authentication impact collaboration between teams in DevOps?
a) Collaboration is not impacted by shared authentication
b) Collaboration is made more complex because of the need to coordinate shared authentication credentials
c) Collaboration is simplified because authentication is centralized
d) Collaboration is made easier because authentication is automatically performed.
16) What are the main benefits of using Git in software development?
a) Improved collaboration
b) Increased efficiency
c) Better ability to manage code changes
d) All of the above
Answer: d
18) How does Git handle conflicts between multiple code changes?
a) Git automatically merges changes
b) Git prompts the user to manually resolve conflicts
c) Git discards conflicting changes
d) Git stores conflicting changes as separate branches
Answer: b
21) What are the main benefits of using GitHub in software development?
a) Improved collaboration
b) Increased visibility of code changes
c) Better ability to manage code changes
d) All of the above
Answer: d
26) What are the main benefits of using Docker in software development?
a) Improved application portability
b) Increased efficiency in deploying applications
c) Better ability to manage dependencies
d) All of the above
Answer: d
31) What are the main benefits of using Gerrit in software development?
a) Improved collaboration
b) Increased visibility of code changes
c) Better ability to manage code changes
d) All of the above
Answer: d
UNIT-4
1) What is Jenkins?
a) A virtual machine software
b) A continuous integration and continuous delivery (CI/CD) tool
c) A configuration management tool
d) A software distribution platform
Answer: b
7)What are the main benefits of using Jenkins plugins in software development?
a) Improved efficiency in software delivery
b) Increased flexibility in customizing Jenkins
c) Better ability to integrate with other tools and systems
d) All of the above
Answer: d
20) How does orchestration in DevOps help with continuous delivery and continuous deployment?
a) By coordinating and automating the various steps and processes involved in software delivery
b) By manual review and approval of every step in the delivery process
c) By only building software, without coordinating and automating delivery processes
d) By only distributing software packages, without coordinating and automating delivery processes
Answer: a
UNIT-5
1) What is the main goal of testing in DevOps?
a) To ensure that software is of high quality and meets customer requirements
b) To increase development speed
c) To implement version control
d) To automate code review processes
Answer: a
2) What are the benefits of incorporating testing into the DevOps process?
a) Faster time-to-market for software releases
b) Improved software quality and reliability
c) Increased transparency in the development process
d) All of the above
Answer: d
12) What are some common tools used for JavaScript testing in DevOps?
a) Jest, Mocha, and Karma
b) Git, Jenkins, and Docker
c) Selenium, Appium, and Espresso
d) Oracle, MySQL, and PostgreSQL
Answer: a
15) How can JavaScript testing improve the speed and reliability of software delivery in DevOps?
a) By quickly identifying and resolving issues in JavaScript code, reducing the risk of causing problems in later
stages of the software delivery process
b) By slowing down the software delivery process
c) By having no impact on the software delivery process
d) By increasing the manual effort required for software delivery
Answer: a