Rapport PFE Marwa Yahya

The document includes a dedication and acknowledgments section expressing gratitude to family, friends, and mentors who supported the author, Marwa Yahya, throughout her project. It outlines the structure of the project, including context, goals, methodology, and analysis, as well as a detailed table of contents. The document emphasizes the importance of collaboration and support in achieving the project's objectives.


Dedication

I dedicate this work to those for whom all the words in the world cannot express the immense love
and deep gratitude I owe them for all the sacrifices they have never ceased to make for me
since my birth.

My dear father Mohamed Sghaier, my dear mother Khadija

I hope I have lived up to the hopes you have placed in me. May God protect you and preserve your
happiness and health. To those who supported me throughout my journey, each in their own way,

and who never stopped believing in me. To those I love dearly and whose presence inspires me

with serenity and tranquility of the soul. To all my friends for all the love, encouragement,
beautiful memories, and crazy moments. To all those I love and who love me.

Thank you for always being there for me!

Marwa Yahya

Acknowledgments

I am extremely grateful to the people who helped me with this project; without them it would
not have been possible. I will begin with Mr. Ahmed Neffati, my team leader at Beecoders, and,
of course, Mr. Anis Bejaoui, for their ongoing support and guidance. Their valuable advice,
encouragement, and constructive suggestions have been instrumental in the development of this thesis, along
with the interesting conversations we have had and their willingness to dedicate their time to this project,
which I deeply appreciate.
Next is Mrs. Yosra Abassi, my pedagogical supervisor. I cannot thank you enough for your guidance,
support, and assistance, along with your insightful remarks that led to numerous improvements.
Finally, I hope the jury members who reviewed my work find in this report the quality and clarity they
are looking for, because a great deal of time and effort went into it.
This project would not have been possible without my family and friends. In particular, my best friends
gave me an incredible amount of support and encouragement.

Contents

General Introduction 1

1 Context of the Project and Basic Notions 3


1.1 Hosting Company: Beecoders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.1 Location and Global Reach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Comprehensive Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.3 Industry Impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.4 Digital Twin Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.5 Full-Service Provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Project Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 Review of Existing Infrastructure Automation Tools . . . . . . . . . . . . . . . . . 6
1.3 Project Goals and Proposed Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Methodology and Technological Choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.1 Design Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.2 Development Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2 State of the Art 12


2.1 DevOps Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Benefits of DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.2 Principles of DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.3 DevOps Culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.4 Concept of Orchestration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 Ansible, the adopted solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.1 Definition of Ansible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.2 Ansible Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.3 How Ansible Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.4 Advantages of Ansible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.5 Comparative table for Ansible and two other tools . . . . . . . . . . . . . . . . . . 24


2.3 CI/CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.1 CI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.2 CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.3 CI/CD tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3 Analysis and Requirements Specifications 28


3.1 Requirements specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1.1 Functional requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1.2 Non Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2 Identification of Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3 Project Management with SCRUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.1 Identification of the SCRUM team for our project . . . . . . . . . . . . . . . . . . . 31
3.3.2 Backlog of product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.3 Planning sprints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4 Generic diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.1 Backend-Infrastructure Tech-Stack and Modules diagram . . . . . . . . . . . . . . . 34
3.4.2 General diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5 Working environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5.1 Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5.2 DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5.3 Version Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.5.4 Code Editing and Script Development . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5.5 Automation tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

4 Realization 42
4.1 First Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.2 Realization of First Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2 Second Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2.2 Realization of Second Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3 Third Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4.3.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.2 Realization of Third Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.4 Fourth Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4.2 Realization of Fourth Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

General Conclusion 60

Bibliography 62

List of Figures

1.1 Beecoders Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4


1.2 The actors in SCRUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 The life cycle of a sprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.1 The 7 principles of DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15


2.2 The DevOps Culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 The DevOps Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 The DevOps Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5 Virtualization vs Containerization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.6 Ansible Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.7 Ansible works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.8 CI/CD pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1 Backend-Infrastructure Tech-Stack and Modules diagram . . . . . . . . . . . . . . . . . . . 36


3.2 General diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3 Windows 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4 Ansible Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5 YAML Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.6 Docker Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.7 GitLab Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.8 Overleaf Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.9 Vs Code Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.10 Intellij Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.11 MySQL Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

4.1 Current version of Python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44


4.2 Get-pip.py Install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3 Pywinrm package Install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.4 Properties Install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.5 The current Venv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.6 Ansible exists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.7 Choose new version of Python3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.8 Upgrade Ansible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.9 Ansible Playbook with PowerShell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.10 The Playbook with VsCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.11 Inventory File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.12 Hosts File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.13 App-config Playbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.14 Pipeline dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.15 Running the playbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.16 Running the app-config playbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.17 Installing Docker interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.18 Verifying the Docker installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.19 Docker file for ecosystem project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.20 Docker Compose for mongodb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

List of abbreviations

• CD = Continuous Deployment

• CI = Continuous Integration

• VM = Virtual Machine

• Venv = Virtual Environment

• VsCode = Visual Studio Code

• YAML = YAML Ain’t Markup Language

General Introduction

In today’s fast-paced digital world, where organizations strive to deliver software and services at an
accelerated pace, the need for efficient and streamlined development and operations practices has become
paramount. DevOps, a portmanteau of "development" and "operations," is an approach that aims to bridge
the gap between these two traditionally siloed teams, enabling collaboration, automation, and continuous
delivery.
In this context, our project is aligned with addressing a significant challenge at Beecoders.

Despite being a comprehensive provider of digital transformation services, offering tailored digital strategies,
implementation concepts, educational seminars, 3D infrastructure capture, as-built analysis, and browser-based
3D models, Beecoders currently lacks the incorporation of DevOps principles. The absence of automated
processes could potentially lead to inefficiencies, longer development cycles, and challenges in adapting to
the demands of a fast-paced digital environment. Integrating DevOps practices would not only streamline
their operations but also contribute to a more seamless digital transition and improved project management.
Throughout this work, we have followed the DevOps approach, which is aimed at reducing the
deployment cycle and fostering agility among teams. This is notably manifested by the automation of
continuous integration and deployment that we have implemented.
This introductory chapter sets the stage for a comprehensive exploration of Beecoders’s

mission to revolutionize the digitalization landscape. Our commitment to reducing deployment cycles,
enhancing team agility, and automating critical processes aligns perfectly with the principles of modern
digital transformation. To trace the chronological progression of this work, the current report is organized as
follows:

- The first chapter, titled Context of the Project and Basic Notions, will present the hosting
organization and contextualize the project by describing its challenges and the proposed solution. We
also detail the methodology adopted during this project.

- The second chapter, titled State of the Art, will introduce the various fundamental concepts necessary
for the completion of our project. We will discuss the DevOps approach and the concepts of continuous
integration and continuous deployment, including their functioning and components, as well as the Ansible
architecture. We will also delve into some important concepts essential for understanding our work.

- The third chapter, titled Analysis and Specification of Requirements, will define our functional and


non-functional requirements for the solution. We will conclude by describing the general diagram,
Backend-Infrastructure Tech-Stack and Modules diagram, and our product backlog with its various
user stories.

- The fourth chapter, titled Realization, turns our project concept into a practical reality. We begin
by setting up the Ansible environment, crafting Ansible playbooks for automation, establishing a
robust pipeline for continuous integration and deployment, and meticulously structuring our playbooks
for efficiency and consistency. Furthermore, we deploy the projects using Docker, incorporating a
versatile containerization platform.

The conclusion of this report serves as a recapitulation, reaffirming the context of this present work
and reiterating the proposed approach. Furthermore, it serves as a gateway to new horizons and future
possibilities.

Chapter 1

Context of the Project and Basic Notions

Contents
1 Hosting Company: Beecoders . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.1 Location and Global Reach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.2 Comprehensive Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.3 Industry Impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.4 Digital Twin Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.5 Full-Service Provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Project Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.2 Review of Existing Infrastructure Automation Tools . . . . . . . . . . . . . . . . 6

3 Project Goals and Proposed Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

4 Methodology and Technological Choice . . . . . . . . . . . . . . . . . . . . . . . . . . 7

4.1 Design Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

4.2 Development Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8



Introduction

Companies and industries are always looking for new and creative ways to improve their operations in
this era of fast digital transformation. Our project officially begins with this chapter, which offers a thorough
examination of the organizational setup of Beecoders, our host firm, as well as the background of the

project. We’ll explore the major obstacles that made our project necessary and set out our ambitions to make
sure it succeeds.

1.1 Hosting Company: Beecoders

Beecoders is a company built on digital services dedicated to ushering in the future of building infrastructure

digitization. This company has firmly established itself as a leading authority in the field of digital

transformation, contributing to the digitization of building infrastructure. Flexibility is the key strength
of Beecoders, ensuring the development of client projects under the best conditions.

1.1.1 Location and Global Reach

Headquartered in Tunisia, Beecoders operates as a global entity. Its services, knowledge,
and expertise help clients find the most relevant solutions adapted to each project and ensure
the expected results, allowing them to harness the benefits of cutting-edge digitization solutions.

Figure 1.1: Beecoders Logo


1.1.2 Comprehensive Services

Beecoders offers a broad range of services that encompass the entire spectrum of digital

transformation. The company excels in developing customized digital strategies, training employees, and
executing large projects. One of their most impressive accomplishments is the development of browser-based
3D building structures. This innovative approach allows for effective management throughout the life of the structure
while simplifying operations, a testament to Beecoders’s commitment to guiding clients through the intricate

process of digitization.

1.1.3 Industry Impact

Beecoders has a significant influence on a wide range of projects and sectors. It gives

customers the ability to take charge of their endeavors in a variety of fields, such as project development,
architecture, building management, digital solutions, and technical planning, in addition to real estate and
construction. By offering cutting-edge and effective digital tools, Beecoders has cemented its status as
a transformative force across entire industries.

1.1.4 Digital Twin Technology

The digital twin technology that Beecoders offers is the foundation of its products. This innovative
technology, which is available via web browsers, gives clients an unmatched and continuously evolving perspective
of their building assets. The benefits are numerous and include comprehensive project records in addition
to long-term productivity increases. A key component of the current digital transition, this cutting-edge
technology makes information administration easier, promotes constructive stakeholder interaction, allows
for creative remote project organization, and saves a significant amount of money.

1.1.5 Full-Service Provider

As one of the top suppliers of digital innovation solutions, Beecoders offers

a wide range of services that cover every facet of the digital transformation process. Our clients have
everything they need to succeed, from developing customized digital strategies to establishing performance
benchmarks and teaching attendees the basics of the digital twin. For our clients, Beecoders goes above

and beyond by offering sophisticated 3D scanning of complex building systems, "as-built" evaluations to
record construction sites, and 3D models with state-of-the-art browser-based digital reproduction. This
all-encompassing approach, which is distinguished by an uncompromising dedication to automation to


accelerate deployment cycles and enhance team flexibility, is consistent with the core ideas of modern digital
transformation.

1.2 Project Overview

1.2.1 Problem Statement

One of the hardest challenges Beecoders faces is turning building infrastructure into a digital
enterprise. Conventional manual methods for managing building infrastructure have proven to be inflexible
at scale, error-prone, and time-consuming. When these processes do not work as intended, the results
are inconsistent, deviate from the blueprint, and it becomes harder to maintain a repeatable and reliable
production environment.
production environment. In today’s market, there is an urgent need for an automated solution that facilitates
the establishment of digital building environments, gathers vital data, and allows for seamless integration.

1.2.2 Review of Existing Infrastructure Automation Tools

An assessment of the state of infrastructure automation tools is necessary to address the issues raised
above. This thorough assessment includes a detailed examination of rival programs and a number of tools,
such as Puppet, Chef, and Terraform. Gaining a comprehensive understanding of these tools’ features,
benefits, and limitations is the main objective of this evaluation. By doing this, we will be better equipped to
select digital solutions that will meet the needs of modern infrastructure management and improve Beecoders's

capabilities.

1.3 Project Goals and Proposed Solution

Beecoders recommends using Ansible, one of the most widely used open-source tools for

automation, to address the problems described in the problem statement. This solution encompasses several key
components:

• Upgrade Ansible:
Ensure the utilization of the latest Ansible version, taking advantage of new features, bug fixes, and
security enhancements.

• Environment Setup:
Establish a development environment that includes all the required libraries and
dependencies for seamless operations.


• Information Gathering Playbooks:


Develop Ansible playbooks designed to retrieve critical information from remote desktops. This
information covers aspects such as system details, hardware configurations, software installations,
and network settings.

• Continuous Integration Pipeline:


Implement a Continuous Integration (CI) pipeline using popular CI/CD tools, including GitLab. This
automation ensures the streamlined execution of Ansible playbooks and facilitates the integration of
updates and changes into the system, thereby enhancing workflow efficiency.

• Testing and Validation:


Rigorously test and validate automated tasks through a battery of tests, including unit tests, integration
tests, and performance tests. These tests are conducted to ensure the accuracy and reliability of Ansible
playbooks and the CI pipeline.
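To make the information-gathering component more concrete, the sketch below shows what such a playbook might look like. This is a minimal illustration, not the playbook used in the project: the inventory group name `windows_desktops`, the report path, and the displayed fact keys are assumptions.

```yaml
# Illustrative playbook: group name, paths, and fact keys are assumptions.
- name: Gather system information from remote Windows desktops
  hosts: windows_desktops          # hypothetical inventory group, reached over WinRM (pywinrm)
  gather_facts: true               # collects OS, hardware, and network facts
  tasks:
    - name: Display a short summary of the collected facts
      ansible.builtin.debug:
        msg: "{{ ansible_facts['os_name'] | default('unknown OS') }}"

    - name: List installed software via PowerShell
      ansible.windows.win_shell: Get-Package | Select-Object Name, Version
      register: installed_software

    - name: Save the software list on the control node
      ansible.builtin.copy:
        content: "{{ installed_software.stdout }}"
        dest: "reports/{{ inventory_hostname }}-software.txt"
      delegate_to: localhost
```

Each task reports a changed/ok status per host, which is what makes runs repeatable and auditable.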
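Likewise, the continuous integration component could be expressed as a GitLab CI configuration along the following lines. The stage names, container image, and file paths are hypothetical and would need to be adapted to the actual repository layout.

```yaml
# Illustrative .gitlab-ci.yml: stage names, image tag, and paths are assumptions.
stages:
  - lint
  - deploy

lint-playbooks:
  stage: lint
  image: python:3.11-slim
  script:
    - pip install ansible ansible-lint
    - ansible-lint playbooks/        # fail early on playbook style or syntax errors

run-playbooks:
  stage: deploy
  image: python:3.11-slim
  script:
    - pip install ansible pywinrm    # pywinrm is needed to reach Windows hosts
    - ansible-playbook -i inventory/hosts.ini playbooks/app-config.yml
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # deploy only from the main branch
```

Splitting lint and deploy into separate stages means a broken playbook never reaches the target hosts.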

1.4 Methodology and Technological Choice

Several methodologies are popular when it comes to software development processes. While we have
defined the objectives in terms of functionality and constraints for the solution, the choice of the methodology
to use during software development must meet the following criteria:

• Agility and Rapid Iterations.

• Collaboration and Communication.

• Automation and Continuous Integration.

In this situation, we have chosen to apply the Scrum methodology. In the following sections, we will present
this methodology, the different stakeholders involved in our project, and its life cycle.

1.4.1 Design Methodology

The object-oriented method is the chosen approach for our system, as it is essential in software
development. For a better presentation of our project’s architecture, we have decided to go with the widely
used Unified Modeling Language (UML) as it offers numerous advantages, such as:

• Readability and reusability,


• Flexibility

• Facilitating understanding of complex abstract representations

1.4.2 Development Process

Characteristics of the Scrum Methodology:


Scrum is rapidly gaining popularity in the field of software development as it provides a
framework for tackling complex problems with great ease. By using Scrum, teams can develop products
creatively, leading to the highest possible value they can deliver to organizations.
Therefore, Scrum is chosen for this project because it:

• Enables early identification and easy resolution of issues.

• Is cost-effective with continuous integration and iterative releases.

• Explicitly identifies the end date of the development process.

Identification of the Scrum Team:


The Scrum Team is composed of a Product Owner, a Development Team, and a Scrum
Master, as illustrated in Figure 1.2.

• Product Owner: The Product Owner is responsible for the product vision and prioritizing requirements.
They define the project’s needs and ensure alignment with stakeholders.

• Scrum Master: The Scrum Master resolves issues and ensures adherence to Scrum principles throughout
the project. They facilitate the Scrum process, remove obstacles, and foster an environment where the
team can work at their maximum potential.

• Development Team: The Development Team is self-organizing and remains unchanged throughout
the duration of a sprint. They are responsible for delivering the product and collaboratively working
on the tasks within the sprint.[1]


Figure 1.2: The actors in SCRUM

Project Planning with SCRUM:


To effectively apply SCRUM, several meetings were conducted to facilitate communication
among the actors:

1. Sprint Planning:

• The Sprint Planning meeting is held at the beginning of each sprint.

• During this meeting, the project team selects a set of user stories (US) based on priority from the
"Product Backlog" to construct the "Sprint Backlog."

2. Daily/Weekly Meeting (Daily Scrum / Weekly Meeting):

• Throughout the sprint, the team members hold a brief daily meeting (approximately 15 minutes)
or a weekly meeting (around one hour).

• In this meeting, each team member explains their progress from the previous day or week and
any constraints they encountered. They also discuss the tasks they will work on during the day
or week of the meeting.

3. Sprint Review:

• The Sprint Review meeting takes place at the end of each sprint.

• During this meeting, the project team presents the completed functionalities to stakeholders.

• The Product Owner and end-users provide feedback on the delivered features.


4. Sprint Retrospective:

• The Sprint Retrospective meeting is held after the Sprint Review, usually immediately following
it.

• The Scrum Master leads this meeting, and its objective is to help the team identify areas for
improvement to better manage the upcoming sprints.

These meetings ensure effective collaboration, continuous improvement, and alignment with
the project goals throughout the development process. They are essential elements of the Scrum framework,
fostering transparency, adaptability, and value delivery at the end of each sprint.
Figure 1.3 illustrates the life cycle of each sprint in our solution:

Figure 1.3: The life cycle of a sprint

We adapted this method to our case by dividing the project into sprints lasting one to four
weeks. At the end of each sprint, a meeting was held to present the tasks accomplished during that specific
sprint and to set the tasks and objectives for the next iteration.


During the inspection, if any differences were identified, we would adapt the process accordingly.
In fact, we planned weekly meetings in parallel with these Sprint Reviews to discuss the actions taken during
the period, the challenges encountered, and the planned actions for the next period and its prospects. This
allowed us to set clear objectives for each increment and to be flexible in adapting them. It also enabled us
to modify or supplement the list of features to be implemented for future sprints as needed.[1]
The Artifacts:

• The Product Backlog: It includes the list of features requested by the client in the form of user stories.
This list is organized in order of importance and can be updated by the Product Owner. The Product
Backlog for our project is presented in the requirements specification chapter.

• The Sprint Backlog: It is a part of the Product Backlog that contains what needs to be delivered by
the end of the sprint. It serves as a reference during daily stand-up meetings.

• The Burndown Chart: It is a graphical representation of the evolution of the amount of work during
each sprint.

• The Scrum Board: It is a board divided into at least three sections: tasks to be done, tasks in progress,
and tasks completed. It helps us track the real-time progress of tasks and user stories to be completed.

Conclusion

This introductory chapter was primarily dedicated to presenting the host company. We discussed the
context and a review of the current situation, as well as the proposed solution, which forms the subject of our
project. We concluded by introducing the methodology used for the design and development of the tasks,
namely SCRUM.
To maintain a logical progression in this report, the next chapter will focus on the state of the art.

Chapter 2

State of the Art

Contents
1 DevOps Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

1.1 Benefits of DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.2 Principles of DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.3 DevOps Culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

1.4 Concept of Orchestration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2 Ansible, the adopted solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.1 Definition of Ansible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.2 Ansible Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.3 How Ansible Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.4 Advantages of Ansible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.5 Comparative table for Ansible and two other tools . . . . . . . . . . . . . . . . . 24

3 CI/CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.1 CI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.2 CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.3 CI/CD tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26



Introduction

To develop our project successfully, a state-of-the-art study is required. This chapter details the
main theoretical concepts that must be understood for the project's realization.

2.1 DevOps Approach

DevOps is a cultural and technical movement that has revolutionized the way software is developed,
deployed, and operated. It emerged as a response to the challenges faced by organizations in delivering
software faster, more frequently, and with higher quality. Traditionally, software development and IT operations
were separate entities, often working in isolation, leading to inefficiencies, delays, and communication gaps.
The fundamental idea behind DevOps is to create a collaborative and integrated environment where
development, operations, and other teams work together seamlessly throughout the software development
lifecycle. By fostering a culture of shared ownership and responsibility, DevOps promotes a sense of
collective accountability for the success of the entire software ecosystem.
One of the key drivers of DevOps is automation. DevOps practitioners leverage automation to
eliminate manual, error-prone tasks and streamline repetitive processes. Automation ensures consistent and
reliable software delivery, reducing the risk of deployment failures and downtime. Continuous Integration
(CI) and Continuous Deployment (CD) pipelines are integral to DevOps, enabling developers to integrate
code changes frequently, run automated tests, and deploy to production environments automatically.
Feedback is another critical aspect of DevOps. Teams actively seek feedback from customers, users,
and stakeholders to identify areas of improvement. By incorporating feedback loops, DevOps teams can
quickly respond to changing requirements and iterate on software features to meet evolving customer needs.
In addition to technical practices, DevOps places a strong emphasis on fostering a culture of continuous
learning and experimentation. Embracing failure as a learning opportunity encourages teams to take risks
and innovate while maintaining a blame-free environment.
The customer is at the heart of DevOps. The focus on customer needs, satisfaction, and delivering
value drives decision-making throughout the software development process. DevOps encourages teams to
align their efforts with business objectives, ensuring that software initiatives directly contribute to the
organization's goals.
Overall, DevOps has become a pivotal force in modern software development and IT operations. Its
principles and practices enable organizations to be more agile, responsive, and competitive in the fast-paced


world of technology. By adopting DevOps, teams can achieve faster delivery cycles, increased efficiency,
improved software quality, and ultimately, higher customer satisfaction.[2]

2.1.1 Benefits of DevOps

Organizations implementing the DevOps methodology can expect the following benefits:

• Full Concentration on the Clients: DevOps places a strong focus on meeting customer needs and
delivering value. By aligning development and operations teams with customer requirements, organizations
can ensure that software development efforts are customer-centric and result in products that truly
address user needs.

• Quicker Time-to-Resolution: DevOps emphasizes rapid identification and resolution of issues. Continuous
monitoring and automated testing enable teams to detect and address problems early, minimizing
downtime and ensuring a smooth user experience.

• Accelerated Delivery Time: DevOps practices such as continuous integration and continuous deployment
enable organizations to release software updates and features more frequently. This rapid delivery
allows companies to iterate quickly, respond to user feedback, and stay ahead in the market.

• Team Collaboration for Faster Product Shipments: DevOps encourages collaboration and open
communication between development, operations, and other teams. By breaking down silos and
working together, teams can streamline the software delivery process, resulting in faster and smoother
product shipments.

• Quicker Deployment: Automation is a key aspect of DevOps, enabling faster and more reliable deployment
of software. Automated deployment processes reduce manual errors and ensure consistent and repeatable
releases, leading to quicker deployment times.[3]

2.1.2 Principles of DevOps

The success of a DevOps mindset lies in learning the best DevOps practices and principles.
Figure 2.1 presents the seven principles of DevOps that IT teams follow:


Figure 2.1: The 7 principles of DevOps

• Customer Focus: Place the customer at the center of all decisions and actions. Understand their
needs, gather feedback, and continuously strive to deliver value to the customer through efficient and
high-quality software delivery.

• Complete Ownership: Encourage teams to take complete ownership of their work. Empower them
to make decisions and be accountable for the entire software development and deployment process.

• Systems Thinking: View the software delivery process as a whole system rather than isolated components.
Consider the impact of changes on the entire system and optimize the flow to achieve the best overall
outcomes.

• Continuous Improvement: Foster a culture of continuous improvement and learning. Regularly review
processes, tools, and workflows to identify areas for enhancement and iterate on them.

• Automation: Automate repetitive tasks and processes to reduce manual errors, save time, and increase
the efficiency of software development and operations.

• Communication and Collaboration: Promote effective communication and collaboration among teams.
Break down silos between development, operations, and other stakeholders to enable seamless cooperation.

• Focus on Results: Keep the focus on achieving results and delivering value to the customer. Set clear
goals, measure performance, and continuously align efforts with the desired outcomes.[4]


2.1.3 DevOps Culture

The following Figure 2.2 illustrates the three axes of DevOps culture: the collaboration between Dev
and Ops, the processes, and the use of tools.

Figure 2.2: The DevOps Culture

2.1.3.1 The Culture of collaboration

This is the very essence of DevOps: teams are no longer separated into silos by specialization
(one team of developers, one team of Ops, one team of testers, and so on). On the contrary, people
are brought together in multidisciplinary teams that share the same objective: to deliver added value
to the product as quickly as possible.

2.1.3.2 DevOps Processes

To expect rapid deployment, these teams must follow development processes from agile methodologies
with iterative phases that allow for better functionality quality and rapid feedback. These processes should
not only be integrated into the development workflow with continuous integration but also into the deployment
workflow with continuous delivery and deployment. The DevOps process is divided into several phases:


• The planning and prioritization of functionalities

• Development

• Continuous integration and delivery

• Continuous deployment

• Continuous monitoring

These phases are carried out cyclically and iteratively throughout the life of the project. The following figure
2.3 illustrates the DevOps process flow.

Figure 2.3: The DevOps Process

Because DevOps aims to significantly increase customer satisfaction, teams naturally restart these
steps with each new feature of the software or application. That is why DevOps is always drawn as an
endless loop.

2.1.3.3 DevOps Tools

The choice of tools and products used by teams is very important in DevOps. Indeed, when teams
were separated into Dev and Ops, each team used its own specific tools (deployment tools for developers and
infrastructure tools for Ops), which further widened the communication gap.


Figure 2.4 illustrates some of the DevOps tools.

Figure 2.4: The DevOps Tools

• Ansible: A popular open-source automation tool that automates application deployment, configuration
management, and infrastructure provisioning. Ansible uses a declarative language and SSH (Secure
Shell Protocol) to execute tasks across multiple systems.

• GitLab CI/CD: A built-in CI/CD solution provided by GitLab. It allows CI pipelines to be defined
in a declarative configuration file (.gitlab-ci.yml) and offers powerful features, including parallel
execution, caching, and an integrated container registry.

• Bash script: A file containing a sequence of commands written in the Bash scripting language.
When executed, these commands are processed by the Bash shell line by line. Bash
scripts allow the automation of tasks, such as navigating directories, creating folders, and running
processes via the command line. They are useful for streamlining repetitive or complex operations on
Unix-like operating systems.

• Docker: A platform designed to help developers build, share, and run containerized applications,
handling much of the tedious environment setup so that developers can focus on the code.
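As a small illustration of the kind of repetitive operation a Bash script can streamline, the following sketch archives a directory of log files under a timestamped name; all paths are illustrative defaults, not taken from the project:

```shell
#!/usr/bin/env bash
# Sketch: archive a directory of log files under a timestamped name.
# The paths below are illustrative defaults, not project paths.
set -euo pipefail

src_dir="${1:-/tmp/demo_logs}"
out_dir="${2:-/tmp/demo_archives}"

mkdir -p "$src_dir" "$out_dir"
echo "sample log line" > "$src_dir/app.log"   # seed a file so the demo is self-contained

stamp="$(date +%Y%m%d_%H%M%S)"
tar -czf "$out_dir/logs_$stamp.tar.gz" -C "$src_dir" .
echo "archived logs to $out_dir/logs_$stamp.tar.gz"
```

Run line by line by the Bash shell, the script creates the folders, seeds a file, and produces the archive in one repeatable step.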

In conclusion, after evaluating these DevOps tools, we selected Ansible and GitLab
CI/CD as the optimal solution for our project. Their advanced features in automation, configuration management,
and infrastructure provisioning align seamlessly with our project's specific requirements.

2.1.3.4 Concept of Containerization

Containerization is a type of virtualization at the application level, which makes it possible to create
several isolated user-space instances on a single kernel. These instances are called containers. Containers
provide a standard way to bundle an application's code, runtime, system tools, system libraries, and configurations
into a single instance. Containers share a kernel (operating system) installed on the hardware.
Benefits of containers:

• Lightness: Containers take up less space on the server than virtual machines and take only seconds to
start.

• Elasticity: Containers are very elastic, and there is no need to allocate a fixed amount of resources.
When the demand on a container decreases, the additional resources are freed up for use
by other containers.

• Density: Density refers to the number of objects that a single physical server can run at a time.

• Performance: When resource pressure is high, application performance is much better with containers
than with hypervisors.

• Maintenance efficiency: With a single OS kernel, OS-level updates or patches only need to be done
once for the changes to take effect in all containers.


When to use containers:

Almost any application that needs to be changed and redeployed quickly and frequently is a
great fit for containerization. Applications using a microservices architecture are also a natural choice.
A key difference between containerization and virtualization is the size of the vessel: each container
image may be only a few megabytes, making it easier to share, migrate, and move, as shown in
Figure 2.5. [5]

Figure 2.5: Virtualization vs Containerization
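The bundling described above can be sketched as a minimal Dockerfile; the base image and file names are illustrative assumptions, not the project's actual setup:

```dockerfile
# Sketch: bundle code, runtime, dependencies, and configuration into one image.
# Base image and file names are illustrative.
FROM python:3.11-slim

WORKDIR /app

# Dependencies are baked into the image alongside the runtime...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...together with the application code and its configuration.
COPY app.py config.yml ./

CMD ["python", "app.py"]
```

Every container started from this image carries the same code, runtime, libraries, and configuration, while sharing the host kernel.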

2.1.4 Concept of Orchestration

Orchestration is a crucial aspect of managing complex IT infrastructures and applications. It involves
coordinating and automating various tasks, processes, and resources to ensure smooth and efficient operations.
In the context of DevOps and IT automation, orchestration plays a vital role in coordinating the deployment
and management of infrastructure and applications.
Orchestration aims to:

• Create application services, whether front-end or back-end, across several containers.

• Plan their execution in a cluster.

• Guarantee their integrity.

• Ensure their monitoring with Kubernetes.


Kubernetes, Docker Swarm, and LXC are some of the popular open-source container orchestration
tools. Ansible, as a powerful automation tool, also excels at orchestration: it can define and execute
orchestrated workflows that encompass multiple tasks across diverse systems and environments. We will
therefore use it for our solution.

2.2 Ansible, the adopted solution

2.2.1 Definition of Ansible

Ansible is an open-source DevOps tool that can help businesses with configuration management,
deployment, provisioning, and more. It is straightforward to deploy and leverages SSH to communicate between
servers. It describes automation jobs in playbooks, which use a very simple language,
YAML.
Ansible provides reliability, consistency, and scalability to your IT infrastructure. You can automate configurations
of databases, storage, networks, and firewalls using Ansible. It makes sure that all the necessary packages
and all other software are consistent on the server to run the application. It also holds all the historical data
of your application, so if at any time you want to roll back to the previous version, or you want to upgrade it,
you can easily do that.[6]
Ansible relies on the following features:

• Bash scripting: A swift and robust solution for automating tasks and executing commands on
Unix-like systems, offering efficiency in today's programming landscape.

• SSH: A very simple, secure network authentication protocol that can work without passwords;
your responsibility is simply to copy the key to the client.

• Push architecture: The control node pushes the necessary configurations to the clients. All you have
to do is write down those configurations (a playbook) and push them all at once to the nodes, which
makes it possible to push changes to thousands of servers in minutes.

• Setup: Only minimal requirements and configuration are needed to get it working.
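As a concrete sketch of such a pushed configuration, a minimal playbook might look like this; the host group and package names are illustrative assumptions, not the project's actual playbooks:

```yaml
# Sketch of an Ansible playbook: install and start nginx on a host group.
# The group "webservers" and the nginx package are illustrative choices.
- name: Configure web servers
  hosts: webservers
  become: true              # escalate privileges on the managed nodes
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

A command such as `ansible-playbook -i inventory site.yml` would push these tasks over SSH to every host in the group at once.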

2.2.2 Ansible Architecture

The beauty of Ansible is that it is not designed only for single-tier deployment; it handles multi-tier
systems and infrastructures. It is often described as agentless, which means it works by connecting to nodes
through the default login mechanism (SSH). The Ansible architecture is shown in Figure 2.6.


Figure 2.6: Ansible Architecture

• Public/Private Cloud: Typically a Linux server. It can also act as a repository for all IT installations
and configurations.

• User: The person who creates the Ansible playbook; they have a direct connection with the Ansible
automation engine.

• Host: The host machines to which the Ansible server connects and pushes playbooks through SSH.

• Ansible automation engine: The engine through which users run playbooks that get deployed
on the hosts. It contains multiple components:

(a) Inventory: A list of the IP addresses or names of all the hosts.

(b) Modules: The pieces of code that get executed when you run a playbook. A playbook
contains plays, a play contains different tasks, and a task includes modules.

When you run a playbook, it is the modules that get executed on your hosts, and these modules contain
actions. So, when you run a playbook, those actions take place on your host machines.

• Playbooks: Playbooks define your workflow, because the tasks you write in a playbook are executed
in the same order you wrote them.

• Plugins: They handle caching, logging, and other functions that augment Ansible's core.

• Connection plugins: The architecture offers connection plugins, eliminating the mandatory use of
SSH for connecting to host machines. With Ansible's Docker container connection plugin, configuring
Docker containers becomes seamless.
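For instance, the inventory component above is, at its simplest, a grouped list of host addresses; the group names, hosts, and variables below are illustrative:

```ini
# Sketch of an Ansible INI inventory (hosts, groups, and variables illustrative).
[webservers]
web1.example.com
192.0.2.10

[databases]
# Per-host variables can be set inline:
db1.example.com ansible_user=admin

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```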

2.2.3 How Ansible Works

Figure 2.7 illustrates how Ansible works.

Figure 2.7: Ansible works

Ansible works by connecting to nodes and pushing out small programs called Ansible modules.
Ansible executes these modules (over SSH by default) and then removes them when finished.
The Ansible management node is the controlling node, which controls the entire execution of the
playbook. It is the node from which you run the installation, and the inventory file provides the list
of hosts where the modules need to be run. The management node makes an SSH connection, executes
the modules on the host machines, and installs the product. It removes the modules once they have
run.[7]
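This push, execute, and remove cycle can be seen with an ad-hoc command, which pushes a single module instead of a full playbook; the inventory file and group name here are illustrative, and the command assumes Ansible is installed and the hosts are reachable:

```shell
# Ad-hoc run of the "ping" connectivity-check module against a host group.
# Ansible copies the module to each host over SSH, runs it, then removes it.
ansible webservers -i inventory.ini -m ansible.builtin.ping
```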

2.2.4 Advantages of Ansible

• Easy and Understandable: Ansible is very simple to understand and has a very simple syntax based
on a human-readable data-serialization language (YAML).

• Powerful and Versatile: It is a very powerful and versatile tool that enables real orchestration and
manages the entire application or configuration management environment.

• Efficient: It can be customized to your needs; for example, modules can be invoked from a playbook
wherever the applications are deployed.

• Application Deployment: It is easy for teams to manage the entire lifecycle from development to deployment.

• Secured: Security is key to maintaining the Ansible infrastructure, as all applications require
protection from security breaches.

2.2.5 Comparative table for Ansible and two other tools

Table 2.1 is an expanded comparative table for Ansible and two other tools, highlighting their advantages
and disadvantages in the context of configuration management and automation:

Automation
  - Ansible: Offers powerful automation capabilities with a simple and easy-to-understand YAML-based
    language, making it accessible to both beginners and experienced users.
  - Puppet: Known for its strong automation capabilities, providing a rich set of resources and a
    declarative language (Puppet DSL) for defining configurations.
  - Chef: Provides robust automation features with a flexible and scalable approach, using its
    domain-specific language (DSL) to define resources and configurations.

Configuration Management
  - Ansible: Follows a declarative approach, making it straightforward to define and maintain
    configurations, with no need for agent installation on target systems.
  - Puppet: Follows a declarative approach, allowing users to specify desired states, and supports a
    large number of platforms and environments.
  - Chef: Follows a declarative approach with its domain-specific language, enabling users to manage
    system configurations and ensure consistency.

Learning Curve
  - Ansible: Relatively low learning curve due to its simple YAML syntax, making it easy for new users
    to get started quickly.
  - Puppet: The learning curve can be steeper, especially for those new to configuration management,
    due to its own DSL and terminology.
  - Chef: The learning curve can be steep, as its DSL and terminology may require more time and effort
    for new users to become proficient.

Scalability
  - Ansible: Suitable for small to large environments, and its agentless architecture facilitates easy
    scalability.
  - Puppet: Suitable for small to medium environments, but managing a large-scale infrastructure may
    require additional planning and resources.
  - Chef: Suitable for small to large environments, but as the infrastructure grows, careful design and
    architecture considerations become important.

Windows Support
  - Ansible: Strong support for Windows environments, allowing seamless configuration management for
    Windows-based systems.
  - Puppet: Solid support for Windows systems, enabling effective configuration management in mixed
    environments.
  - Chef: Good support for Windows systems, making it suitable for environments with Windows-based
    infrastructure.

Cloud Support
  - Ansible: Comprehensive cloud integrations, supporting major cloud providers and enabling
    infrastructure-as-code practices.
  - Puppet: Good cloud support, with modules and integrations available for major cloud platforms.
  - Chef: Strong cloud support, with native integrations for various cloud providers and services.

Agentless
  - Ansible: Agentless; it does not require any software to be installed on the target systems, reducing
    deployment complexity and overhead.
  - Puppet: Requires agents (Puppet agents) to be installed on target nodes, which can add complexity
    and resource consumption.
  - Chef: Requires agents (Chef clients) to be installed on target nodes, which may introduce additional
    overhead and configuration challenges.

Table 2.1: Comparative Table

2.3 CI/CD

CI/CD is a method to deliver applications to customers frequently by introducing automation into
the application development stages. [8] It is the combined practice of continuous integration (CI) and (more
often) continuous delivery or (less often) continuous deployment (CD). The main concepts attributed to
CI/CD are continuous integration, continuous delivery, and continuous deployment. CI/CD is a solution to
the problems that the integration of new code can cause for development and operations teams (aka "integration
hell"). Specifically, CI/CD introduces automation and continuous monitoring throughout the application
lifecycle, from the integration and testing phases to delivery and deployment. Taken together, these connected
practices are often referred to as the "CI/CD pipeline" and are supported by development and operations
teams working together in an agile manner with a DevOps or site reliability engineering (SRE) approach.
The CI/CD pipeline cycle is illustrated in Figure 2.8.


Figure 2.8: CI/CD pipeline

2.3.1 CI

The CI in CI/CD stands for continuous integration. Continuous integration means that developers
frequently merge their code changes into a shared repository. It is an automated process that allows multiple
developers to contribute to software components of the same project without integration conflicts. CI involves
automated testing each time a software change is integrated into the repository.

2.3.2 CD

CD can be synonymous with continuous delivery or continuous deployment. In both cases, the idea is
to take the integrated code and make it capable of deploying to a QA or production environment. Continuous
delivery checks the code automatically but requires human intervention to manually and strategically trigger
the deployment of changes. Continuous deployment takes the process a step further by configuring the
deployment to be automated. Human intervention is not required.
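In GitLab CI/CD syntax, this difference comes down to a single keyword; the job below is a sketch with illustrative names, not the project's real pipeline:

```yaml
# Sketch of a .gitlab-ci.yml deploy job (stage and script are illustrative).
deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production
  when: manual    # continuous delivery: a human triggers the deployment
# Removing "when: manual" lets the job run automatically after a successful
# pipeline, which corresponds to continuous deployment.
```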

2.3.3 CI/CD tools

Continuous integration and continuous deployment (CI/CD) tools allow the automated build and
deployment of source code changes. Concretely, CI/CD tools enable application modernization by reducing
the time it takes to build new functions. There are many CI/CD tools. One of the most used platforms
is Jenkins, an open-source tool. Other popular options include GitLab CI, Bamboo, TeamCity, Concourse,
CircleCI, and Travis CI, some of which are commercial products with a free tier. Cloud providers, Google,
Azure, and AWS in particular, also offer their own tools for continuous integration and deployment. However,
for our solution, we will use GitLab CI/CD for automation, seamlessly integrated with Ansible playbooks.
This integration is
achieved through GitLab’s CI/CD configuration file (.gitlab-ci.yml) located in each repository, allowing for
efficient and streamlined automation of our deployment processes. Moreover, GitLab's comprehensive
integration with Ansible empowers us to automate various workflows directly within GitLab. The range of
events that can trigger workflows is extensive; some examples include:

• Creation or modification of a Merge Request.

• Push to a specific branch.

• Updates to the status of a GitLab issue.

• Manual trigger.

• Any modification of a ticket on a project board in GitLab.

By combining Ansible’s powerful automation capabilities with GitLab’s seamless CI/CD integration, we can
ensure streamlined deployment processes and efficient management of our infrastructure and applications.
This collaborative approach facilitates smoother collaboration between development and operations teams,
resulting in more reliable and rapid software delivery.
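A pipeline along these lines could be sketched as follows; the image names, paths, and branch rule are assumptions for illustration, not our actual configuration:

```yaml
# Sketch of a .gitlab-ci.yml that lints and runs Ansible playbooks.
# Image names, paths, and the branch rule are illustrative.
stages:
  - lint
  - deploy

lint_playbooks:
  stage: lint
  image: python:3.11-slim
  script:
    - pip install ansible-lint
    - ansible-lint playbooks/

run_playbooks:
  stage: deploy
  image: python:3.11-slim
  script:
    - pip install ansible
    - ansible-playbook -i inventory/hosts.ini playbooks/site.yml
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # a push to a specific branch triggers it
```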

Conclusion

In this chapter, we began by discussing the DevOps approach, containerization, and orchestration.
Next, we introduced Ansible as our adopted solution. We then presented and explained various
automation tools and CI/CD practices. With a clear understanding of the solution to be adopted, we now
move on to the next chapter, where we detail the designed solution by specifying the functional and
non-functional requirements.

Chapter 3

ANALYSIS AND REQUIREMENTS SPECIFICATIONS

Contents
1 Requirements specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

1.1 Functional requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

1.2 Non Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

2 Identification of Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3 Project Management with SCRUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.1 Identification of the SCRUM team for our project . . . . . . . . . . . . . . . . . . 31

3.2 Backlog of product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.3 Planning sprints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

4 Generic diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

4.1 Backend-Infrastructure Tech-Stack and Modules diagram . . . . . . . . . . . . . . 34

4.2 General diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

5 Working environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

5.1 Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

5.2 DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

5.3 Version Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

5.4 Code Editing and Script Development . . . . . . . . . . . . . . . . . . . . . . . . 40

5.5 Automation tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41



Introduction

To carry out the analysis and specification of requirements, we will begin by defining the functional
and non-functional needs of our solution. Next, we will identify the users. Finally, we will
establish user stories to provide an overall vision of our solution's behavior.

3.1 Requirements specification

In this part, we focus on the analysis of our requirements by identifying the functional and non-functional
requirements of our project.

3.1.1 Functional requirements

Our solution must provide a set of functionalities that satisfy the user’s needs. Therefore, it is essential to
express them clearly before presenting our solution.
The solution should enable the:

• Virtual Environment (venv) Creation:

- Implement a mechanism to set up Python virtual environments.

- Ensure isolation of dependencies between different projects.

- Document the steps required to create, activate, and deactivate the virtual environments.

• Upgrade Ansible:

- The project should upgrade the existing Ansible installation to the latest stable version.

- Ensure compatibility with the target systems and platforms.

- Verify that the upgraded Ansible works with the existing inventory and configurations.

• Playbook Creation:

- Develop playbooks for various common use cases, such as package installation, configuration
management, and service deployment.

- Ensure playbooks follow best practices and are modular and reusable.

- Test the playbooks against different target environments to validate their functionality.


• GitLab CI/CD Integration:

- Integrate the project with GitLab CI/CD pipeline for continuous integration and delivery.

- Implement automated testing for Ansible playbooks and Docker setup to ensure reliable deployments.

- Configure CI/CD jobs to trigger playbook execution on successful code commits.
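The virtual-environment requirement above can be sketched with standard Python tooling; the directory name and the Ansible version pin are illustrative assumptions:

```shell
# Create an isolated Python environment for the project (directory name illustrative).
python3 -m venv .venv

# Activate it in the current POSIX shell session.
. .venv/bin/activate

# Tools installed now stay isolated from the system Python; the Ansible
# version pin is an assumption and would normally be installed here:
# pip install "ansible-core>=2.15"

# Deactivate when done; the system Python remains untouched throughout.
deactivate
```

Because each project gets its own `.venv` directory, dependency upgrades in one project cannot break another.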

3.1.2 Non Functional Requirements

In this section, we identify the main non-functional requirements; they characterize the restrictions and
constraints that apply to our solution:

• Performance:

- The Ansible upgrade process should not cause any significant downtime to the existing infrastructure.

- Playbooks execution should be efficient and not cause noticeable delays.

- Virtual environments should have minimal overhead in terms of memory and storage.

- CI/CD pipeline execution time should be optimized to reduce the waiting time for developers.

• Usability:

- Playbooks should be well-documented and easy to understand by other team members.

- Virtual environment management should be straightforward and intuitive.

• Reliability:

- The upgraded Ansible should be thoroughly tested to ensure stability and reliability.

- Playbooks must be tested in different scenarios to ensure they work as expected.

- The CI/CD pipeline should be resilient to handle failures gracefully and recover without manual
intervention when possible.

• Security: The solution must be highly secure; in particular, the credentials used by Ansible and the
CI/CD pipeline must be protected.


3.2 Identification of Users

The actors are the elements that will translate our backlog into the finished product. Our solution has three
main actors:

(1) Developer/Tester: This is a user who can modify the source code for a given objective (adding a
feature, fixing an error, etc.) from a code repository server. They can also initiate build tasks to ensure
continuous integration and continuous delivery of the project. Additionally, they are responsible for
continuous monitoring of the code based on unit tests and regression tests.

(2) Project Manager/Integrator: This is a user who inherits the responsibilities of the developer and
validates automated scripts for production deployment. They are in charge of setting up the CI/CD
pipeline and monitoring the platform’s status.

(3) Administrator: This is a user who handles the setup and configuration of the platform and ensures the
monitoring and proper functioning of the cluster.

3.3 Project Management with SCRUM

In this section, we identify the SCRUM team members for our project, and then we present the product
backlog and the sprint planning.

3.3.1 Identification of the SCRUM team for our project

In the previous chapter, we introduced the general SCRUM roles. In this section, we present the three
SCRUM roles for our project, which are organized as follows:

• Product Owner: Mr. Ahmed Neffati is the Product Owner at Beecoders.

• SCRUM Master: Mr. is our Scrum Master.

• SCRUM Team: the team consists of 4 people.

3.3.2 Backlog of product

The following backlog details a series of tasks and steps. These tasks encompass the creation
of a virtual environment, upgrading Ansible, developing Ansible playbooks for information collection using
PowerShell and SSH, integrating with GitLab pipelines, and deploying on Docker, as shown in Table 3.1.


ID 1, Functionality: Art Study (Sprint 0)
  1.1 Study the existing solution and its issues
  1.2 Develop expertise in concepts and work tools
  1.3 Specify the functional and non-functional requirements of our project

ID 2, Functionality: Environment Setup and Ansible Upgrade (Release 1)
  2.1 Create a Venv for the project
  2.2 Upgrade Ansible to the required version within the virtual environment
  2.3 Ensure compatibility of Ansible with Python in the virtual environment

ID 3, Functionality: Ansible Playbook Development (Release 2)
  3.1 Install VS Code on your local machine
  3.2 Set up the necessary extensions for Ansible development
  3.3 Define the scope and specifics of the information you need to gather from systems
  3.4 Design Ansible playbooks to perform information gathering using PowerShell and SSH
  3.5 Implement tasks to collect relevant system data, log files and configuration details, and ensure modularity and reusability in playbook design
  3.6 Create Ansible playbooks to gather system information using VS Code
  3.7 Test the playbooks locally to validate their functionality

ID 4, Functionality: GitLab Integration, Pipelines Testing and Execution (Release 3)
  4.1 Create a new GitLab repository for the project
  4.2 Develop a .gitlab-ci.yml file to define GitLab CI/CD pipeline stages
  4.3 Test the GitLab pipeline to ensure smooth execution of playbook stages
  4.4 Verify that the information-gathering playbooks execute successfully within the pipeline

ID 5, Functionality: Docker Deployment of Ktor projects (Release 4)
  5.1 Research and download the latest version of Docker Desktop and follow the installation instructions for Windows
  5.2 Verify the Docker installation by running a simple container
  5.3 Identify Ktor project dependencies and configurations and create individual Dockerfiles for each Ktor project
  5.4 Test each Dockerfile by building and running a container
  5.5 Create a docker-compose.yml file and define services for each Ktor project
  5.6 Set up a service for the database (if needed) and configure network settings to ensure projects are on the same main network
  5.7 Verify that all services start without errors by running the command docker-compose up -d, and test connectivity between Ktor projects and the database
  5.8 Document the structure and usage of Dockerfiles and provide instructions for running projects using docker-compose

Table 3.1: Backlog of product


3.3.3 Planning sprints

Based on our product backlog, we divide our project into 4 releases. Table 3.2 illustrates the planning for
each sprint.

Sprint User Story ID Estimation (Week)

0 1.1 ,1.2 , 1.3 3

1 2.1 , 2.2 , 2.3 5

2 3.1 , 3.2 , 3.3 , 3.4 , 3.5 , 3.6 , 3.7 5

3 4.1 ,4.2 , 4.3 , 4.4 3

4 5.1 , 5.2 , 5.3 , 5.4 ,5.5 , 5.6 ,5.7 ,5.8 4

Table 3.2: Sprint Estimation

3.4 Generic diagrams

3.4.1 Backend-Infrastructure Tech-Stack and Modules diagram

At the backend infrastructure level, our technology stack and modules are structured into different layers to
support the application-level functionalities. These layers, shown in Figure 3.1, play a crucial role in our system:
Application Level: This is where our actual applications reside, built on the lower layers. The key
components in this layer include:

• EcoSystem: An overarching management system wrapped around our viewers. When a user is logged
into the EcoSystem, they are automatically logged into every viewer. This enhances user convenience
and simplifies access.

• DigaTwin: This application offers a set of features and functionalities for our users.

• DexViewer: Another application split into modules to facilitate various functions. The primary goal
here is to select all required modules with a configuration file.

Dex Modules: These modules are integral to our application. They implement DcxCore-Interfaces
and can be implemented by Dcx Viewer. For instance, there could be a module designed for a 3D viewer,
enhancing the capabilities of our applications.

Library Level: This is the foundational layer on which everything else is built. It includes:


• Core: This core library serves as the foundation for all modules and applications. It defines interfaces
that are implemented by the application level. It can utilize the Navvis Connector and is responsible
for a wide range of functions.

• Navvis Connector: The Navvis Connector module is instrumental in enabling us to utilize the Navvis
API. It plays a pivotal role by including functionalities related to user and group management, as well
as permission management. It acts as an intermediate layer that connects our system to the Navvis
platform, enhancing data accessibility and functionality.

Base Technology: This layer encompasses the technology stack that we utilize, and it’s important to
note that none of the components in this layer are self-written. Our technology stack comprises:

• Kotlin: This is our programming language of choice, known for its conciseness and expressiveness.

• Ktor: Ktor serves as our server framework, facilitating the development of robust and high-performance
server-side applications.

• Morphia: Morphia is the database connector that helps in connecting our applications to the database
efficiently.

• MongoDB: MongoDB is our database of choice, providing a scalable, NoSQL storage solution that
supports our data management and retrieval needs.

In summary, our system’s backend infrastructure is well-organized into layers, each serving a specific
purpose to ensure the smooth operation of our applications and modules. The technology stack and modules,
along with their interactions, enable us to offer robust digital solutions to our users.


Figure 3.1: Backend-Infrastructure Tech-Stack and Modules diagram


3.4.2 General diagram

This type of diagram provides an overview of the services, their relationships, and the structure of an entire
application. It showcases the major services and how they connect with each other, as shown in Figure 3.2.

Figure 3.2: General diagram

3.5 Working environment

In this section, we will describe the tools used in this project.

3.5.1 Operating System

• Windows 10

Windows 10 is a widely used operating system developed by Microsoft. It is part of the Windows NT family
and succeeded Windows 8.

Figure 3.3: Windows 10


3.5.2 DevOps

• Ansible

Ansible is an open-source IT automation platform from Red Hat. It enables organizations to automate
many IT processes usually performed manually, including provisioning, configuration management, application
deployment, and orchestration.[9]

Figure 3.4: Ansible Logo

• YAML

YAML is a human-readable data serialization format. It is frequently employed for configuring files and
exchanging data between languages with varying data structures[10].

In our project, we utilize YAML as it serves as the favored language for defining playbooks. These playbooks
constitute collections of automation tasks intended for execution on remote systems. Additionally, the
configuration of GitLab CI/CD pipelines is accomplished through YAML files (gitlab-ci.yml) housed within
our repository.

Figure 3.5: YAML Logo
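As a brief illustration of the format, a YAML document is built from key/value mappings, nested lists, and scalar values. The fragment below shows this structure; the keys are illustrative only and are not taken from our actual files:

```yaml
# A YAML mapping with scalars, a nested list, and a nested mapping.
project: infrastructure-automation
version: 1.0
stages:
  - build
  - test
  - deploy
settings:
  retries: 3
  verbose: true
```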


• Docker

Docker is a platform and toolset for developing, shipping, and running applications in containers.
Containers are lightweight, portable, and self-sufficient units that encapsulate everything needed to run
an application, including the code, runtime, libraries, and system tools. Docker provides a consistent and
reproducible environment across different machines, making it easier to deploy and scale applications.

Figure 3.6: Docker Logo

3.5.3 Version Control

• GitLab

GitLab, the central repository and orchestrator of CI/CD pipelines, facilitates seamless integration of new
code changes and effective project management.[11]

Figure 3.7: GitLab Logo


• OverLeaf

Overleaf is a free online platform for editing text in LaTeX without the need for any application downloads.[12]
Overleaf is the software used to create this report.

Figure 3.8: Overleaf Logo

3.5.4 Code Editing and Script Development

• Visual Studio Code

Visual Studio Code is a free source code editor developed by Microsoft for Windows, Linux, and macOS.[13]
VS Code also provides advanced features such as syntax highlighting, auto-completion, and real-time error
checking for YAML, the language predominantly used for writing Ansible playbooks.
This ensures accurate syntax and formatting, reducing errors.

Figure 3.9: Vs Code Logo

• IntelliJ

IntelliJ IDEA is an integrated development environment (IDE) designed for Java development but also
supports various other programming languages through plugins. It is developed by JetBrains and is widely
used by developers for building Java applications, as well as for web development, mobile app development,
and other programming tasks.


Figure 3.10: Intellij Logo

3.5.5 Automation tool

• Bash

A Bash script is a file containing a series of commands written in the Bash (Bourne Again SHell)
scripting language. It allows users to automate tasks, execute sequences of commands, and perform various
operations in a Unix-like environment. Bash scripts enhance efficiency and reproducibility by encapsulating
a set of instructions that can be executed sequentially or conditionally.

Figure 3.11: Bash Logo
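As an illustration, a minimal Bash script of the kind used in this project could check that the tools an automation task relies on are installed before running it. The tool names below are illustrative, not the project's actual prerequisites:

```shell
#!/usr/bin/env bash
# Check that required tools are available before running automation tasks.
required_tools="python3 grep"

for tool in $required_tools; do
    # command -v prints the tool's path if it is on PATH, nothing otherwise.
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found: $tool"
    else
        echo "missing: $tool"
    fi
done
```

Such a guard script is typically run at the start of a pipeline job so that a missing dependency is reported explicitly instead of causing an obscure failure later.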

Conclusion

In this chapter, we started by defining the functional and non-functional requirements of our application,
gaining a comprehensive understanding of our project's scope. We then created the product backlog, which
includes various user stories, and presented generic diagrams illustrating the expected structure and functionalities
of our solution. We also provided insights into our working environment, highlighting the essential tools,
software, and technologies necessary for our project's development. Furthermore, we introduced the architectural
model to facilitate comprehension of interactions among the application's key components. Moving forward,
the next chapter will delve into implementation details and the sprints planned for each version.

Chapter 4

REALIZATION

Contents
1 First Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

1.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

1.2 Realization of First Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

2 Second Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

2.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

2.2 Realization of Second Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3 Third Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3.2 Realization of Third Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

4 Fourth Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

4.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

4.2 Realization of Fourth Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Chapter 4. Realization

Introduction

In this chapter, we delve into the practical implementation and realization of the infrastructure
automation project with Beecoders. The journey is divided into four releases, each representing

a significant milestone in the project’s development. This chapter explores the details, challenges, and
achievements of each release, providing a comprehensive overview of the project’s realization.

4.1 First Release

4.1.1 Sprint planning

The first release is built from three Sprints, with the first Sprint focused on creating a Venv, as shown
in Table 4.1:

ID Story | User Story | Sprint
2.1 | Create a Venv for the project | Sprint 1

Table 4.1: Backlog of sprint 1

The second covers the upgrade of Ansible, as shown in Table 4.2.

ID Story | User Story | Sprint
2.2 | Upgrade Ansible to the required version within the virtual environment | Sprint 2

Table 4.2: Backlog of sprint 2

The third ensures compatibility with the tools, as shown in Table 4.3.

ID Story | User Story | Sprint
2.3 | Ensure compatibility of Ansible with Python in the virtual environment | Sprint 3

Table 4.3: Backlog of sprint 3


4.1.2 Realization of First Release

Version of Python
Figure 4.1 shows a snapshot of the current Python environment with the installed version.

Figure 4.1: Current version of Python

Pip Install
Featured in this screenshot is a pivotal command-line operation:wget https://bootstrap.pypa.io/get-pip.py.This
command serves as the gateway to ushering in a new era of Python within the virtual environment (venv).
By fetching the ’get-pip.py’ script from the specified URL, we secure the foundation for seamless package
management, paving the way for efficient development and innovation.As schown in 4.2

Figure 4.2: Get-pip.py Install

Pywinrm Package Install

In the screenshot illustrated by figure 4.3, the command installs the pywinrm package, which enables
Ansible to communicate with Windows hosts over WinRM.


Figure 4.3: Pywinrm package Install

Properties Install
Figure 4.4 shows the command apt-get install software-properties-common -y in action. Installing
the software-properties-common package with the -y flag prepares the system for the creation of the
virtual environment (venv).

Figure 4.4: Properties Install

Current Venv
In this screenshot (Figure 4.5), the command cd python-venv is executed, navigating into the
python-venv directory that hosts our virtual environment (venv).

Figure 4.5: The current Venv


Current Ansible
The screenshot in Figure 4.6 shows the existing Ansible version, 2.9.

Figure 4.6: Existing Ansible version

New version of Python

In this screenshot, we run the command sudo update-alternatives --config python3. This
command lets us select the new Python 3.10 version as the default python3 interpreter. As shown in
figure 4.7.

Figure 4.7: Choosing the new version of Python3

New Ansible Upgrade

Figure 4.8 shows the upgrade of Ansible to version 4.2. The command python3 -m pip install
ansible==4.2, run inside the virtual environment (venv), installs the upgraded Ansible release.


Figure 4.8: Upgrade Ansible
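The steps of this first release can be sketched as a short sequence of shell commands. The directory name and the Ansible version pin mirror the screenshots above; adjust them to your own setup, and note that the pip installation step requires network access:

```shell
# Create the virtual environment (directory name as in Figure 4.5).
python3 -m venv python-venv

# Activate it; subsequent python3/pip calls now use the venv's interpreter.
. python-venv/bin/activate

# Confirm which Python version is active inside the venv.
python3 --version

# Install the pinned Ansible release inside the venv (requires network access).
python3 -m pip install "ansible==4.2.0"

# Verify that the upgrade took effect.
ansible --version
```

Because the venv isolates these packages, the system-wide Ansible 2.9 remains untouched, which keeps the upgrade reversible.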

4.2 Second Release

4.2.1 Sprint planning

The second release is built from three Sprints, with the first Sprint focused on setting up Ansible in
VS Code and designing a playbook, as shown in Table 4.4.

ID Story | User Story | Sprint
3.1 | Install VS Code on your local machine | Sprint 1
3.2 | Set up the necessary extensions for Ansible development | Sprint 1
3.3 | Design Ansible playbooks to perform information gathering using PowerShell and SSH | Sprint 1

Table 4.4: Backlog of sprint 1

The second covers the creation of playbooks to gather information using VS Code, as shown in Table 4.5.


ID Story | User Story | Sprint
3.4 | Implement tasks to collect relevant system data, log files and configuration details, and ensure modularity and reusability in playbook design | Sprint 2
3.5 | Create Ansible playbooks to gather system information using VS Code | Sprint 2

Table 4.5: Backlog of sprint 2

The third ensures the validation of playbook functionality, as shown in Table 4.6.

ID Story | User Story | Sprint
3.6 | Test the playbooks locally to validate their functionality | Sprint 3

Table 4.6: Backlog of sprint 3

4.2.2 Realization of Second Release

Ansible Playbook in PowerShell

In figure 4.9, we craft an Ansible playbook that runs PowerShell commands, combining Ansible's
orchestration capabilities with PowerShell's access to Windows systems.


Figure 4.9: Ansible Playbook with PowerShell

Ansible Playbook with VS Code

In this screenshot, we create an Ansible playbook within VS Code, designed to gather comprehensive
information from desktops. The playbook serves as a tool for remote data collection, supporting informed
decision-making and efficient system management in our project. As shown in figure 4.10.


Figure 4.10: The Playbook with VsCode
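A minimal sketch of such an information-gathering playbook is shown below. The host group name, module choices, and queried fields are illustrative assumptions, not the project's actual playbook:

```yaml
# Sketch: gather basic system information from Windows desktops over WinRM.
- name: Gather desktop information
  hosts: windows_desktops        # illustrative group name from the inventory
  gather_facts: yes              # collect standard Ansible facts first
  tasks:
    - name: Query OS details with PowerShell
      ansible.windows.win_shell: |
        Get-ComputerInfo | Select-Object OsName, OsVersion
      register: os_info

    - name: Display the collected information
      ansible.builtin.debug:
        var: os_info.stdout_lines
```

Registering each task's output and printing it with the debug module keeps the collected data visible in the run log, which is also what makes the results inspectable later from the CI/CD pipeline.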

Inventory
Figure 4.11 shows the inventory file within VS Code. The inventory file is a cornerstone of Ansible
playbook orchestration: it lets us define and organize target hosts, group them logically, assign variables,
and scale dynamically, enabling precise, flexible, and secure execution of tasks across diverse systems.

Figure 4.11: Inventory File
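For illustration, an inventory of the kind described here might look as follows. The group names, host names, and addresses are hypothetical:

```ini
# Hosts grouped by role; variables can be attached per group or per host.
[windows_desktops]
desktop-01 ansible_host=192.168.1.10
desktop-02 ansible_host=192.168.1.11

[windows_desktops:vars]
ansible_connection=winrm
ansible_winrm_transport=ntlm

[linux_servers]
server-01 ansible_host=192.168.1.20 ansible_connection=ssh
```

Grouping hosts this way lets one playbook target Windows desktops over WinRM and Linux servers over SSH without duplicating tasks.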

Hosts File


This screenshot shows the IP-address.yml file, a critical piece of the puzzle. It holds host-specific
variables: agent-number, dsc-version, dsc-job-id, and feeder-version. These attributes guide the
playbook's interactions with each host, ensuring precision and relevance in every action. As shown in
figure 4.12.

Figure 4.12: Hosts File

App-config Playbook

The app-config.yml playbook shown in figure 4.13 automates the deployment of configuration
settings, ranging from database connections to environment variables, to ensure correct application
behavior. Its sequential tasks and dynamic adaptation keep application configurations consistent across
environments.


Figure 4.13: App-config Playbook

4.3 Third Release

4.3.1 Sprint planning

The third release is built from two Sprints; the first covers GitLab integration, as shown in Table 4.7.

ID Story | User Story | Sprint
4.1 | Create a new GitLab repository for the project | Sprint 1
4.2 | Develop a .gitlab-ci.yml file to define GitLab CI/CD pipeline stages | Sprint 1

Table 4.7: Backlog of sprint 1

The second covers pipeline testing and execution, as shown in Table 4.8.

ID Story | User Story | Sprint
4.3 | Test the GitLab pipeline to ensure smooth execution of playbook stages | Sprint 2
4.4 | Verify that the information-gathering playbooks execute successfully within the pipeline | Sprint 2

Table 4.8: Backlog of sprint 2


4.3.2 Realization of Third Release

Pipeline dashboard

This screenshot shows the playbook execution dashboard within the pipeline environment. This
interface is the control center for orchestrating playbook runs: from it, pipelines are launched, tasks
unfold, and results appear. As shown in figure 4.14.

Figure 4.14: Pipeline dashboard
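The pipeline behind this dashboard is defined by the .gitlab-ci.yml file in the repository. A minimal sketch of such a file is shown below; the stage names, container image, and playbook paths are illustrative assumptions, not the project's actual configuration:

```yaml
# Sketch: two-stage pipeline that checks the playbooks, then runs them.
stages:
  - lint
  - run

lint-playbooks:
  stage: lint
  image: python:3.10
  script:
    - pip install "ansible==4.2.0"
    - ansible-playbook --syntax-check gather-info.yml

run-playbooks:
  stage: run
  image: python:3.10
  script:
    - pip install "ansible==4.2.0"
    - ansible-playbook -i inventory gather-info.yml
```

Keeping a cheap syntax-check stage before the run stage means a malformed playbook fails the pipeline early, before any host is contacted.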

Run The Playbook

In figure 4.15, the Jobs interface shows our playbook being executed, with tasks running in a defined
sequence. This view reflects the consistency and efficiency that automated deployment brings.


Figure 4.15: Run The playbooks

Run The App-config

This screenshot shows the app-config playbook running in the Jobs interface. The playbook applies
configuration settings task by task, and the Jobs view provides a real-time picture of the automation in
motion. As shown in figure 4.16.

Figure 4.16: Run The app-config playbooks


4.4 Fourth Release

4.4.1 Sprint planning

The fourth release starts by researching and downloading the latest version of Docker Desktop, following
the installation instructions for Windows, and validating the installation by running a simple container,
as shown in Table 4.9.

ID Story | User Story | Sprint
5.1 | Research and download the latest version of Docker Desktop and follow the installation instructions for Windows | Sprint 1
5.2 | Verify Docker installation by running a simple container | Sprint 1

Table 4.9: Backlog of sprint 1

In the second sprint, we begin by identifying the essential dependencies and configurations of the
Ktor projects. We then craft a distinct Dockerfile for each Ktor project and test each one by building
and running a container.

ID Story | User Story | Sprint
5.3 | Identify Ktor project dependencies and configurations and create individual Dockerfiles for each Ktor project | Sprint 2
5.4 | Test each Dockerfile by building and running a container | Sprint 2

Table 4.10: Backlog of sprint 2

Third, we compose a docker-compose.yml file to orchestrate the deployment of the Ktor projects.
We define a service for each individual Ktor project and, if required, a dedicated service for the
database, and we configure the network settings so that all projects communicate within the same
overarching network.


ID Story | User Story | Sprint
5.5 | Create a docker-compose.yml file and define services for each Ktor project | Sprint 3

Table 4.11: Backlog of sprint 3

Fourth, we execute the command docker-compose up -d to start the services in detached mode,
confirm that all services start without errors, and run tests to verify connectivity between the Ktor
projects and the associated database.

ID Story | User Story | Sprint
5.6 | Set up a service for the database (if needed) and configure network settings to ensure projects are on the same main network | Sprint 4
5.7 | Verify that all services start without errors by running the command docker-compose up -d and test connectivity between Ktor projects and the database | Sprint 4

Table 4.12: Backlog of sprint 4

Fifth, we create documentation outlining the structure and usage of the Dockerfiles, including
dependencies and configurations, and provide detailed instructions for running the projects with
docker-compose, giving users a clear guide for execution and deployment.

ID Story | User Story | Sprint
5.8 | Document the structure and usage of Dockerfiles and provide instructions for running projects using docker-compose | Sprint 5

Table 4.13: Backlog of sprint 5

4.4.2 Realization of Fourth Release

Installing Docker
Figure 4.17 shows the Docker installation process, following the on-screen instructions, which may
include accepting license agreements and choosing installation options.


Figure 4.17: Installing Docker interface

Verify Docker
Figure 4.18 shows two verification commands. docker --version retrieves and displays the installed
Docker version and build information in the command line, and Get-Service Docker in PowerShell
fetches details about the Docker service, including its status and startup type, which is useful for
managing the Docker daemon's lifecycle.

Figure 4.18: Verifying the Docker installation

Dockerfile
The Dockerfile shown in figure 4.19 orchestrates the build process for a Java-based application using Gradle.
It separates the build and runtime stages, keeping the final image small for deployment. The resulting image
includes the compiled application JAR, the necessary environment files, and configuration properties, ready to
run as a Java application in a Docker container.


Figure 4.19: Docker file for ecosystem project
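A multi-stage Dockerfile of the kind described can be sketched as follows. The base images, JAR name, and copied files are illustrative assumptions (the actual file is the one in Figure 4.19), and the sketch assumes the Gradle Shadow plugin produces a fat JAR:

```dockerfile
# Build stage: compile the Ktor application with Gradle.
FROM gradle:jdk11 AS build
COPY --chown=gradle:gradle . /home/gradle/project
WORKDIR /home/gradle/project
RUN gradle shadowJar --no-daemon

# Runtime stage: keep only the JAR and its configuration.
FROM openjdk:11-jre-slim
WORKDIR /app
COPY --from=build /home/gradle/project/build/libs/app-all.jar app.jar
COPY application.properties .
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Because only the second stage ships, the Gradle toolchain and source code never enter the final image, which is what keeps it small.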

Database
The Docker Compose file in figure 4.20 sets up a MongoDB service, exposes it on port 27017,
connects it to a custom network named dcx-network, and persists data in the /var/lib/mongodb
directory on the host machine. The MongoDB container is named mongo and uses version 5.0.8 of the
MongoDB image. The --bind_ip option allows connections from any IP address, and the container
restarts automatically unless explicitly stopped.

Figure 4.20: Docker Compose for mongodb
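Based on the description above, the Compose service can be sketched as follows. The values mirror those stated in the text; treat this as a reconstruction, not the exact file from Figure 4.20:

```yaml
services:
  mongo:
    image: mongo:5.0.8
    container_name: mongo
    command: ["mongod", "--bind_ip", "0.0.0.0"]   # accept connections from any IP
    ports:
      - "27017:27017"
    volumes:
      - /var/lib/mongodb:/data/db   # persist data on the host machine
    networks:
      - dcx-network
    restart: unless-stopped

networks:
  dcx-network:
```

Placing the database on the same dcx-network as the Ktor services lets them reach it by the service name mongo instead of a hard-coded IP address.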


4.5 Conclusion

In this chapter, we walked through the realization of the project, illustrated by screenshots and structured
into sprints: from creating virtual environments, to orchestrating Ansible playbooks, to installing Docker
and deploying our Ktor projects with it. As we close this chapter, these achievements reflect our
commitment to efficient, precise, and collaborative development.

General Conclusion

Beecoders faces a formidable challenge of digitizing infrastructure building, acknowledging

the limitations of traditional manual methods. The inefficiencies, error-proneness, and time-consuming
nature of manual processes demand a paradigm shift. Recognizing this imperative, our adoption of the
DevOps approach has emerged as a robust solution to tackle these challenges head-on.
The DevOps methodology, centered on reducing deployment cycles and fostering agility among
teams, has played a pivotal role in reshaping Beecoders' approach to digitalization. Through the
automation of continuous integration and deployment, we have not only streamlined processes but also
addressed inconsistencies, laying the groundwork for scalable and reproducible digital building environments.
This report has delved into a comprehensive exploration of our mission, highlighting the paramount
importance of reducing deployment cycles, enhancing team agility, and automating critical processes. The
incorporation of DevOps principles underscores our commitment to staying at the forefront of modern
digital transformation. As we navigate the swiftly evolving landscape, Beecoders is helping to revolutionize
the management of building infrastructure in the digital age. Through innovation, collaboration,
and unwavering dedication to efficiency, our journey sets the stage for a future where digitalization
seamlessly integrates with infrastructure management, creating a more agile and reliable environment.
Our project’s chosen approach consisted of several releases, each expanding on the one before it. The
process started with setting up the development environment and continued with writing Ansible playbooks
for tasks related to infrastructure management. The implementation of a Continuous Integration (CI) pipeline
marked a crucial milestone, streamlining the workflow and ensuring automated testing and integration. The
final release fine-tuned the infrastructure automation solution, making it production-ready. The project team
encountered various challenges, from configuration management to issue resolution, but each challenge was
met with determination and innovation. The implementation of the Scrum project management methodology
was one of the project’s most noteworthy features. This method placed a strong emphasis on precise
deadlines, economical iterations, and early issue identification. The Product Owner, Scrum Master, and
Development Team collaborated as the Scrum Team to guarantee efficient project planning and execution.
Transparency and progress monitoring were greatly aided by key artifacts like the Scrum Board, Burndown
Chart, Product Backlog, and Sprint Backlog.


As a perspective is to transform infrastructure building into a seamless, automated, and agile process.
Recognizing the limitations of manual methods, the company embraces the DevOps approach to reduce
deployment cycles, enhance team agility, and automate critical processes. This strategic shift not only
addresses current challenges but positions Beecoders as a leader in modern digital transformation.

The integration of DevOps principles signifies a commitment to efficiency and innovation, setting the stage
for a future where digitalization seamlessly converges with infrastructure management for a more agile and
reliable environment.

Bibliography

[1] Development process, [Access 10-02-2023]. [Online]. Available: https://bubbleplan.net/blog/comprendre-scrum-methodologie-agile-gestion-projet-reussie.

[2] Devops approach, [Access 15-12-2022]. [Online]. Available: https://www.bluesoft-group.com/les-outils-devops-devenus-incontournables/.

[3] Benefits of devops, [Access 15-12-2022]. [Online]. Available: https://www.1min30.com/gestion-de-projet/les-avantages-devops-1287497989.

[4] Principles of devops, [Access 15-12-2022]. [Online]. Available: https://www.atlassian.com/devops/what-is-devops.

[5] Concept of containerization, [Access 15-01-2023]. [Online]. Available: https://www.liquidweb.com/kb/virtualization-vs-containerization/.

[6] Ansible, the adopted solution, [Access 01-12-2022]. [Online]. Available: https://www.ansible.com/.

[7] How ansible works, [Access 20-12-2022]. [Online]. Available: https://www.educba.com/ansible-roles/.

[8] Ci/cd, [Access 15-01-2023]. [Online]. Available: https://www.padok.fr/blog/outils-devops.

[9] Ansible, [Access 30-01-2023]. [Online]. Available: https://docs.ansible.com/.

[10] Definition of yaml, [Access 15-02-2023]. [Online]. Available: https://www.freecodecamp.org/news/what-is-yaml-the-yml-file-format/.

[11] Gitlab definition, [Access 15-02-2023]. [Online]. Available: https://about.gitlab.com/support/definitions/.

[12] Definition of overleaf, [Access 01-06-2023]. [Online]. Available: https://www.overleaf.com/learn.

[13] Vscode, [Access 15-02-2023]. [Online]. Available: https://code.visualstudio.com/docs.

[15] Characteristics of the scrum methodology, [Access 31-03-2023]. [Online]. Available: https://bubbleplan.net/blog/comprendre-scrum-methodologie-agile-gestion-projet-reussie/.

[16] Devops tools, [Access 15-12-2022]. [Online]. Available: https://www.codica.com/blog/devops-security-tools/.

