PFE Report - Marwa Yahya
I dedicate this work to those for whom all the words in the world cannot express the immense love
and deep gratitude I feel for all the sacrifices they have never ceased to make for me since my birth.
I hope I have lived up to the hopes you have placed in me. May God protect you and preserve your
happiness and health. To those who supported me throughout my journey, each in their own way,
and who never stopped believing in me. To those I love dearly and whose presence inspires me
with serenity and tranquility of the soul. To all my friends for all the love, encouragement,
beautiful memories, and crazy moments. To all those I love and who love me.
Marwa Yahya
Acknowledgments
I am extremely grateful to the people who helped me complete this project; without them it would
not have been possible. I will begin with Mr. Ahmed Neffati, my team leader at Beecoders, and,
of course, Mr. Anis Bejaoui, for their ongoing support and guidance. Their valuable advice,
encouragement, and constructive suggestions have been instrumental in the development of this thesis,
along with the interesting conversations we had and their willingness to dedicate their time to this
project, which I deeply appreciate.
Next is Mrs. Yosra Abassi, my pedagogical supervisor. I can’t thank you enough for your guidance,
support, and assistance along with your insightful remarks that led to numerous improvements.
Finally, I hope the jury members who reviewed my work find the quality and clarity they’re looking
for in this report because a lot of time and effort went into it.
Finally, this project would not have been possible without my family and friends, especially my best friends.
Contents
General Introduction 1
2.3 CI/CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.1 CI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.2 CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.3 CI/CD tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4 Realization 42
4.1 First Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.2 Realization of First Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2 Second Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2.2 Realization of Second Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3 Third Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.2 Realization of Third Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.4 Fourth Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4.1 Sprint planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4.2 Realization of Fourth Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
General Conclusion 60
Bibliography 62
List of Figures
4.6 Existing Ansible version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.7 Choose new version of Python 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.8 Upgrade Ansible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.9 Ansible playbook with PowerShell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.10 The playbook in VS Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.11 Inventory file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.12 Hosts file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.13 App-config playbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.14 Pipeline dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.15 Run the playbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.16 Run the app-config playbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.17 Docker installation interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.18 Verify Docker installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.19 Dockerfile for the ecosystem project . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.20 Docker Compose for MongoDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
List of abbreviations
• CD = Continuous Delivery / Continuous Deployment
• CI = Continuous Integration
• VM = Virtual Machine
General Introduction
In today’s fast-paced digital world, where organizations strive to deliver software and services at an
accelerated pace, the need for efficient and streamlined development and operations practices has become
paramount. DevOps, a portmanteau of "development" and "operations," is an approach that aims to bridge
the gap between these two traditionally siloed teams, enabling collaboration, automation, and continuous
delivery.
In this context, our project addresses a significant challenge at Beecoders.
Despite being a comprehensive provider of digital transformation services, offering tailored digital strategies,
implementation concepts, educational seminars, 3D infrastructure capture, as-built analysis, and browser-based
3D models, Beecoders currently lacks the incorporation of DevOps principles. The absence of automated
processes could potentially lead to inefficiencies, longer development cycles, and challenges in adapting to
the demands of a fast-paced digital environment. Integrating DevOps practices would not only streamline
its operations but also contribute to a smoother digital transition and improved project management.
Throughout this work, we have followed the DevOps approach, which is aimed at reducing the
deployment cycle and fostering agility among teams. This is notably manifested by the automation of
continuous integration and deployment that we have implemented.
This introductory chapter sets the stage for a comprehensive exploration of Beecoders’s
mission to revolutionize the digitalization landscape. Our commitment to reducing deployment cycles,
enhancing team agility, and automating critical processes aligns perfectly with the principles of modern
digital transformation. To trace the chronological progression of this work, the current report is organized as
follows:
- The first chapter, titled Context of the Project and Basic Notions, presents the hosting
organization and contextualizes the project by describing its challenges and the proposed solution. We
also detail the methodology adopted during this project.
- The second chapter, titled State of the Art, introduces the various fundamental concepts necessary
for the completion of our project. We discuss the DevOps approach and the concepts of continuous
integration and continuous deployment, including their functioning and components, as well as the Ansible
architecture. We also delve into some important concepts essential for understanding our work.
- The third chapter, titled Analysis and Specification of Requirements, defines our functional and
non-functional requirements for the solution. We will conclude by describing the general diagram,
Backend-Infrastructure Tech-Stack and Modules diagram, and our product backlog with its various
user stories.
- The fourth chapter, titled Realization, turns our project concept into a practical reality. We begin
by setting up the Ansible environment, crafting Ansible playbooks for automation, establishing a
robust pipeline for continuous integration and deployment, and meticulously structuring our playbooks
for efficiency and consistency. Furthermore, we deploy the projects using Docker, a
versatile containerization platform.
The conclusion of this report serves as a recapitulation, reaffirming the context of this present work
and reiterating the proposed approach. Furthermore, it serves as a gateway to new horizons and future
possibilities.
Chapter 1
Context of the Project and Basic Notions
Contents
1 Hosting Company: Beecoders . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Project Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Introduction
Companies and industries are always looking for new and creative ways to improve their operations in
this era of fast digital transformation. Our project officially begins with this chapter, which offers a thorough
examination of the organizational setup of Beecoders, our host firm, as well as the background of the
project. We’ll explore the major obstacles that made our project necessary and set out our ambitions to make
sure it succeeds.
Beecoders is a digital services company dedicated to ushering in the future of building infrastructure
digitization. It has firmly established itself as a leading authority in the field of digital
transformation, contributing to the digitization of building infrastructure. Flexibility is the key strength
of Beecoders, ensuring that client projects are developed under the best conditions. Its flexibility
and expertise help clients find the most relevant solutions, adapted to each project and ensuring
the expected results, allowing them to harness the benefits of cutting-edge digitization solutions.
Beecoders offers a broad range of services that encompass the entire spectrum of digital
transformation. The company excels in developing customized digital strategies, training employees, and
executing large projects. One of its most impressive accomplishments is the development of browser-based
3D building structures. This innovative approach allows for effective management over the life of a structure
while simplifying operations, a testament to Beecoders' commitment to guiding clients through the intricate
process of digitization.
Beecoders has a significant influence on a wide range of projects and sectors. It gives
customers the ability to take charge of their endeavors in a variety of fields, such as project development,
architecture, building management, digital solutions, and technical planning, in addition to real estate and
construction. By offering cutting-edge and effective digital tools, Beecoders has cemented its status as
a leading authority in the field.
The digital twin technology that Beecoders offers is the foundation of its products. This innovative
technology, which is available via web browsers, gives clients an unmatched and always changing perspective
of their building assets. The benefits are numerous and include comprehensive project records in addition
to long-term productivity increases. A key component of the current digital transition, this cutting-edge
technology makes information administration easier, promotes constructive stakeholder interaction, allows
for creative remote project organization, and saves a significant amount of money.
As one of the top suppliers of digital innovation solutions, Beecoders offers
a wide range of services that cover every facet of the digital transformation process. Our clients have
everything they need to succeed, from developing customized digital strategies to establishing performance
benchmarks and teaching attendees the basics of the digital twin. For our clients, Beecoders goes above
and beyond by offering sophisticated 3D scanning of complex building systems, "as-built" evaluations to
record construction sites, and 3D models with state-of-the-art browser-based digital reproduction. This
all-encompassing approach, which is distinguished by an uncompromising dedication to automation to
accelerate deployment cycles and enhance team flexibility, is consistent with the core ideas of modern digital
transformation.
One of the hardest challenges Beecoders faces is turning infrastructure building into a digital
enterprise. Conventional manual methods for managing building infrastructure have proven to be inflexible
at scale, error-prone, and time-consuming. When these processes do not work as intended, the results
are inconsistent and deviate from the blueprint, and it becomes harder to maintain a repeatable, stable
production environment. In today's market, there is an urgent need for an automated solution that facilitates
the establishment of digital building environments, gathers vital data, and allows for seamless integration.
An assessment of the state of infrastructure automation tools is necessary to address the issues raised
above. This thorough assessment includes a detailed examination of rival programs and a number of tools,
such as Puppet, Chef, and Terraform. The main objective of this evaluation is to gain a comprehensive
understanding of these tools' features, benefits, and limitations. By doing so, we will be better equipped to
select digital solutions that meet the needs of modern infrastructure management and improve Beecoders'
capabilities.
Beecoders recommends using Ansible, one of the most widely used open-source tools for
automation, to address the problems described in the issue statement. This solution encompasses several key
components:
• Upgrade Ansible:
Ensure the utilization of the latest Ansible version, taking advantage of new features, bug fixes, and
security enhancements.
• Environment Setup:
Establish a development environment that includes all the required libraries and
dependencies for seamless operations.
Several methodologies are popular when it comes to software development processes. Having defined
the objectives in terms of functionality and constraints for the solution, we needed a development
methodology suited to those objectives.
In this situation, we have chosen to apply the Scrum methodology. In the following sections, we present
this methodology, the different stakeholders involved in our project, and its life cycle.
The object-oriented method is the chosen approach for our system, as it is essential in software
development. For a better presentation of our project’s architecture, we have decided to go with the widely
used Unified Modeling Language (UML) as it offers numerous advantages, such as:
• Flexibility
• Product Owner: The Product Owner is responsible for the product vision and prioritizing requirements.
They define the project’s needs and ensure alignment with stakeholders.
• Scrum Master: The Scrum Master resolves issues and ensures adherence to Scrum principles throughout
the project. They facilitate the Scrum process, remove obstacles, and foster an environment where the
team can work at their maximum potential.
• Development Team: The Development Team is self-organizing and remains unchanged throughout
the duration of a sprint. They are responsible for delivering the product and collaboratively working
on the tasks within the sprint.[1]
1. Sprint Planning:
• During this meeting, the project team selects a set of user stories (US) based on priority from the
"Product Backlog" to construct the "Sprint Backlog."
2. Daily Scrum:
• Throughout the sprint, the team members hold a brief daily meeting (approximately 15 minutes)
or a weekly meeting (around one hour).
• In this meeting, each team member explains their progress since the previous day or week and
any constraints they encountered. They also discuss the tasks they will work on during the day
or week of the meeting.
3. Sprint Review:
• The Sprint Review meeting takes place at the end of each sprint.
• During this meeting, the project team presents the completed functionalities to stakeholders.
• The Product Owner and end-users provide feedback on the delivered features.
4. Sprint Retrospective:
• The Sprint Retrospective meeting is held after the Sprint Review, usually immediately following
it.
• The Scrum Master leads this meeting, and its objective is to help the team identify areas for
improvement to better manage the upcoming sprints.
These meetings ensure effective collaboration, continuous improvement, and alignment with
the project goals throughout the development process. They are essential elements of the Scrum framework,
fostering transparency, adaptability, and value delivery at the end of each sprint.
Figure 1.3 illustrates the life cycle of each sprint in our solution:
We adapted this method to our case by dividing the project into sprints lasting one to four
weeks. At the end of each sprint, a meeting was held to present the tasks accomplished during that specific
sprint and to set the tasks and objectives for the next iteration.
During the inspection, if any differences were identified, we would adapt the process accordingly.
In fact, we planned weekly meetings in parallel with these Sprint Reviews to discuss the actions taken during
the period, the challenges encountered, and the planned actions for the next period and its prospects. This
allowed us to set clear objectives for each increment and to be flexible in adapting them. It also enabled us
to modify or supplement the list of features to be implemented for future sprints as needed.[1]
The Artifacts:
• The Product Backlog: It includes the list of features requested by the client in the form of user stories.
This list is organized in order of importance and can be updated by the Product Owner. The Product
Backlog for our project is presented in the requirements specification chapter.
• The Sprint Backlog: It is a part of the Product Backlog that contains what needs to be delivered by
the end of the sprint. It serves as a reference during daily stand-up meetings.
• The Burndown Chart: It is a graphical representation of the evolution of the amount of work during
each sprint.
• The Scrum Board: It is a board divided into at least three sections: tasks to be done, tasks in progress,
and tasks completed. It helps us track the real-time progress of tasks and user stories to be completed.
Conclusion
This introductory chapter was primarily dedicated to presenting the host company. We discussed the
context and a review of the current situation, as well as the proposed solution, which forms the subject of our
project. We concluded by introducing the methodology used for the design and development of the tasks,
namely SCRUM.
To maintain a logical progression in this report, the next chapter will focus on the state of the art.
Chapter 2
State of the Art
Contents
1 DevOps Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 CI/CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1 CI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Introduction
To successfully develop our project, a state-of-the-art study is required. This chapter details the
main theoretical concepts needed to understand the project's realization.
DevOps is a cultural and technical movement that has revolutionized the way software is developed,
deployed, and operated. It emerged as a response to the challenges faced by organizations in delivering
software faster, more frequently, and with higher quality. Traditionally, software development and IT operations
were separate entities, often working in isolation, leading to inefficiencies, delays, and communication gaps.
The fundamental idea behind DevOps is to create a collaborative and integrated environment where
development, operations, and other teams work together seamlessly throughout the software development
lifecycle. By fostering a culture of shared ownership and responsibility, DevOps promotes a sense of
collective accountability for the success of the entire software ecosystem.
One of the key drivers of DevOps is automation. DevOps practitioners leverage automation to
eliminate manual, error-prone tasks and streamline repetitive processes. Automation ensures consistent and
reliable software delivery, reducing the risk of deployment failures and downtime. Continuous Integration
(CI) and Continuous Deployment (CD) pipelines are integral to DevOps, enabling developers to integrate
code changes frequently, run automated tests, and deploy to production environments automatically.
Feedback is another critical aspect of DevOps. Teams actively seek feedback from customers, users,
and stakeholders to identify areas of improvement. By incorporating feedback loops, DevOps teams can
quickly respond to changing requirements and iterate on software features to meet evolving customer needs.
In addition to technical practices, DevOps places a strong emphasis on fostering a culture of continuous
learning and experimentation. Embracing failure as a learning opportunity encourages teams to take risks
and innovate while maintaining a blame-free environment.
The customer is at the heart of DevOps. The focus on customer needs, satisfaction, and delivering
value drives decision-making throughout the software development process. DevOps encourages teams to
align their efforts with business objectives, ensuring that software initiatives directly contribute to the organization’s
goals.
Overall, DevOps has become a pivotal force in modern software development and IT operations. Its
principles and practices enable organizations to be more agile, responsive, and competitive in the fast-paced
world of technology. By adopting DevOps, teams can achieve faster delivery cycles, increased efficiency,
improved software quality, and ultimately, higher customer satisfaction.[2]
Organizations implementing the DevOps methodology can expect the following benefits:
• Full Concentration on the Clients: DevOps places a strong focus on meeting customer needs and
delivering value. By aligning development and operations teams with customer requirements, organizations
can ensure that software development efforts are customer-centric and result in products that truly
address user needs.
• Collaborative Teams for Faster Product Shipments: DevOps encourages collaboration and open
communication between development, operations, and other teams. By breaking down silos and
working together, teams can streamline the software delivery process, resulting in faster and smoother
product shipments.
• Quicker Deployment: Automation is a key aspect of DevOps, enabling faster and more reliable deployment
of software. Automated deployment processes reduce manual errors and ensure consistent and repeatable
releases, leading to quicker deployment times.[3]
The success of a DevOps mindset lies in learning the best DevOps practices and principles.
Let's explore the 7 principles of DevOps that every IT team follows, shown in the following figure 2.1:
• Customer Focus: Place the customer at the center of all decisions and actions. Understand their
needs, gather feedback, and continuously strive to deliver value to the customer through efficient and
high-quality software delivery.
• Complete Ownership: Encourage teams to take complete ownership of their work. Empower them
to make decisions and be accountable for the entire software development and deployment process.
• Systems Thinking: View the software delivery process as a whole system rather than isolated components.
Consider the impact of changes on the entire system and optimize the flow to achieve the best overall
outcomes.
• Automation: Automate repetitive tasks and processes to reduce manual errors, save time, and increase
the efficiency of software development and operations.
• Focus on Results: Keep the focus on achieving results and delivering value to the customer. Set clear
goals, measure performance, and continuously align efforts with the desired outcomes.[4]
The following figure 2.2 illustrates the three axes of DevOps culture: the collaboration between Dev
and Ops, the processes, and the use of tools.
This is the very essence of DevOps: teams are no longer separated into silos by specialization
(one team of developers, one team of Ops, one team of testers, and so on) but, on the contrary, are
brought together in multidisciplinary teams that share the same objective: to deliver added value
to the product as quickly as possible.
To achieve rapid deployment, these teams must follow development processes from agile methodologies
with iterative phases that allow for better functionality quality and rapid feedback. These processes should
not only be integrated into the development workflow with continuous integration but also into the deployment
workflow with continuous delivery and deployment. The DevOps process is divided into several phases:
• Development
• Continuous deployment
• Continuous monitoring
These phases are carried out cyclically and iteratively throughout the life of the project. The following figure
2.3 illustrates the DevOps process flow.
As DevOps aims to significantly increase customer satisfaction, teams naturally restart these steps
with each new feature of the software or application. That is why the DevOps process is always drawn
as an endless loop.
The choice of tools and products used by teams is very important in DevOps. Indeed, when teams
were separated into Dev and Ops, each team used their specific tools—deployment tools for developers and
infrastructure tools for Ops which further widened communication gaps.
• Ansible: It is a popular open-source automation tool that automates app deployment, configuration
management, and infrastructure provisioning. Ansible uses a declarative language and SSH (Secure
Shell Protocol) to execute tasks across multiple systems.
• GitLab CI/CD: This tool is a built-in CI/CD solution provided by GitLab. It allows for defining CI
pipelines using a declarative configuration file (.gitlab-ci.yml). It also offers powerful features,
including parallel execution, caching, and an integrated container registry.
• Bash script: A Bash script is a file containing a sequence of commands written in the Bash programming
language. When executed, these commands are processed by the Bash shell line by line. Bash
scripts allow for the automation of tasks, such as navigating directories, creating folders, and running
processes via the command line. They are useful for streamlining repetitive or complex operations on
Unix-like operating systems.
In conclusion, after evaluating the six tools used in DevOps, we have selected Ansible and GitLab
CI/CD as the optimal solution for our project. Their advanced features in automation, configuration management,
and infrastructure provisioning align seamlessly with our project’s specific requirements.
Containerization is a type of virtualization at the application level, which makes it possible to create
several instances of user space isolated on a single kernel. These instances are called containers. Containers
provide a standard way to bundle an application’s code, runtime, system tools, system libraries, and configurations
into a single instance. Containers share a kernel (operating system) installed on the hardware.
Benefits of containers:
• Lightness: Containers take up less space on the server than virtual machines and take only seconds to
start.
• Elasticity: Containers are very elastic, and there is no need to allocate a fixed amount of resources.
When the demand on a container decreases, the additional resources are freed up for use
by other containers.
• Density: Density refers to the number of objects that a single physical server can run at a time.
• Performance: When resource pressure is high, application performance is much better with containers
than with hypervisors.
• Maintenance efficiency: With a single OS kernel, OS-level updates or patches only need to be done
once for the changes to take effect in all containers.
• Create application services across several containers, whether front-end or back-end.
Kubernetes, Docker Swarm, and LXC are some of the popular open-source container orchestration
tools. Ansible, as a powerful automation tool, also excels in orchestration: it provides the capability to define
and execute orchestrated workflows that encompass multiple tasks across diverse systems and environments,
so we will use it for our solution.
Ansible is an open-source DevOps tool that can help the business in configuration management,
deployment, provisioning, etc. It is straightforward to deploy; it leverages SSH to communicate between
servers. It uses playbooks to describe automation jobs, and playbooks are written in a very simple
language, YAML.
Ansible provides reliability, consistency, and scalability to your IT infrastructure. You can automate configurations
of databases, storage, networks, and firewalls using Ansible. It makes sure that all the necessary packages
and all other software are consistent on the server to run the application. It also holds all the historical data
of your application, so if at any time you want to roll back to the previous version, or you want to upgrade it,
you can easily do that.[6]
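To make the playbook format concrete, the following is a minimal sketch of a playbook; the host group, package, and service names are illustrative and not taken from the project:

    ---
    - name: Install and start a web server    # a play targets a group of hosts
      hosts: webservers                       # illustrative group defined in the inventory
      become: true                            # escalate privileges for package tasks
      tasks:
        - name: Install the nginx package
          ansible.builtin.apt:
            name: nginx
            state: present
        - name: Ensure the nginx service is running
          ansible.builtin.service:
            name: nginx
            state: started

Running such a file with ansible-playbook executes each task, in order, on every host of the targeted group.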
Ansible builds on the following features:
• Bash scripting: a swift and robust solution for automating tasks and executing commands on
Unix-like systems, offering efficiency in today's programming landscape.
• SSH: a very simple, secure, passwordless network authentication protocol; your only responsibility
is to copy the key to the client.
• Push architecture: the control node pushes the necessary configurations to the clients. All you have
to do is write those configurations down (as a playbook) and push them all at once to the nodes.
This shows how powerful it can be to push changes to thousands of servers in minutes.
The beauty of Ansible is that it is not designed only for single-tier deployment; it is built for
multi-tier systems and infrastructures. It is often described as agentless, which means it works by
connecting to nodes through a default login (SSH). The Ansible architecture is shown in figure 2.6.
• Public/Private Cloud: the Linux server. It can also act as a repository for all IT installations
and configurations.
• User: the person who creates the Ansible playbook and has a direct connection with the Ansible
automation engine.
• Host: the architecture includes a set of host machines to which the Ansible server connects and
pushes playbooks through SSH.
• Ansible automation engine: the engine through which users can directly run a playbook that gets
deployed on the hosts. The Ansible automation engine has multiple components:
(b) Modules: modules are the pieces of code that get executed when you run a playbook. A playbook
contains plays, a play contains different tasks, and a task includes modules.
When you run a playbook, it is the modules that get executed on your hosts, and these modules contain
actions; when you run a playbook, those actions take place on your host machines.
• Playbooks: playbooks define your workflow, because the tasks you write in a playbook are
executed in the same order that you have written them.
• Plugins: plugins handle caching, logging, and other auxiliary functions that augment Ansible's core.
• Connection plugins: the architecture offers connection plugins, eliminating the mandatory use of
SSH for connecting to host machines. With Ansible's Docker container connection plugin, configuring
Docker containers becomes seamless.
Ansible works by connecting to nodes and pushing out small programs called Ansible modules.
Ansible executes these modules over SSH by default and removes them when finished.
The Ansible management node is the controlling node, which controls the entire execution of the
playbook. It is the node from which you run the installation, and the inventory file provides the list
of the hosts where the modules need to be run. The management node makes an SSH connection, executes
the modules on the host machines, and installs the product. It removes the modules once they are
installed.[7]
• Easy and Understandable: Ansible is very simple and easy to understand, with a very simple syntax
based on a human-readable data serialization language.
• Powerful and Versatile: It is a very powerful and versatile tool that enables real orchestration and
manages the entire application or configuration management environment.
• Efficient: It is very efficient in the sense that it can be customized according to your needs; for
example, modules can be called from a playbook wherever the applications are deployed.
• Application Deployment: It makes it easy for teams to manage the entire application lifecycle from
development to deployment.
• Secure: Ansible is agentless and relies on SSH, which helps keep the managed infrastructure and
applications free from security breaches.
Table 2.1 is an expanded comparative table for Ansible and two other tools, highlighting their advantages
and disadvantages in the context of configuration management and automation:
2.3 CI/CD
2.3.1 CI
The CI in CI/CD stands for continuous integration. Continuous integration means that developers
frequently merge their code changes into a shared repository. It is an automated process that allows multiple
developers to contribute to software components of the same project without integration conflicts. CI involves
automated testing each time a software change is integrated into the repository.
2.3.2 CD
CD can be synonymous with continuous delivery or continuous deployment. In both cases, the idea is
to take the integrated code and make it capable of deploying to a QA or production environment. Continuous
delivery checks the code automatically but requires human intervention to manually and strategically trigger
the deployment of changes. Continuous deployment takes the process a step further by configuring the
deployment to be automated. Human intervention is not required.
Continuous integration and continuous deployment tools or CI/CD allow the automated build and
deployment of source code changes. Concretely, CI/CD tools enable application modernization by reducing
the time it takes to build new functions. There are many CI/CD tools. One of the most used platforms
is Jenkins, an open-source tool. There are also paid solutions with a free tier like GitLab CI, Bamboo,
TeamCity, Concourse, CircleCI or Travis CI. Cloud providers, Google, Azure, and AWS in particular, also
offer their own tools for continuous integration and deployment. However, for our solution, we will utilize
GitLab CI/CD for automation, which is seamlessly integrated with Ansible playbooks. This integration is
achieved through GitLab’s CI/CD configuration file (.gitlab-ci.yml) located in each repository, allowing for
efficient and streamlined automation of our deployment processes. Moreover, GitLab's comprehensive
integration with Ansible empowers us to automate various workflows directly within GitLab. The range of
events that can trigger workflows is extensive; examples include:
• Manual trigger.
By combining Ansible’s powerful automation capabilities with GitLab’s seamless CI/CD integration, we can
ensure streamlined deployment processes and efficient management of our infrastructure and applications.
This collaborative approach facilitates smoother collaboration between development and operations teams,
resulting in more reliable and rapid software delivery.
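As a hedged illustration of this integration, a minimal .gitlab-ci.yml could look as follows; the stage names, runner image, and playbook paths are assumptions, not the project's actual configuration:

    stages:
      - lint
      - deploy

    lint-playbooks:
      stage: lint
      image: python:3.10                  # assumed runner image
      before_script:
        - pip install ansible ansible-lint
      script:
        - ansible-lint playbooks/         # fail the pipeline on playbook issues

    deploy-playbooks:
      stage: deploy
      image: python:3.10
      before_script:
        - pip install ansible
      script:
        - ansible-playbook -i inventory playbooks/site.yml
      when: manual                        # matches the manual trigger listed above

Splitting linting from deployment keeps fast feedback on every push while leaving the actual rollout under human control.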
Conclusion
In this chapter, we began by discussing the DevOps approach, containerization, and orchestration.
Next, we introduced Ansible as our adopted solution. We then presented and explained various
automation and CI/CD tools. With a clear understanding of the solution to be adopted, we will now
move on to the next chapter, where we will detail the designed solution by specifying the functional and
non-functional requirements.
Chapter 3
Analysis and Specification of Requirements
Contents
1 Requirements specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2 Identification of Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4 Generic diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5 Working environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.2 DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Introduction
To analyze and specify the requirements, we begin by defining the functional and non-functional
needs of our solution. Next, we identify the users. Finally, we establish user stories to provide an
overall vision of our solution's behavior.
In this part, we focus on the analysis of our requirements by identifying the functional and
non-functional requirements for our project.
Our solution must provide a set of functionalities that satisfy the user’s needs. Therefore, it is essential to
express them clearly before presenting our solution.
The solution should enable the following:
• Environment Setup:
- Document the steps required to create, activate, and deactivate the virtual environments.
• Upgrade Ansible:
- The project should upgrade the existing Ansible installation to the latest stable version.
- Verify that the upgraded Ansible works with the existing inventory and configurations.
• Playbook Creation:
- Develop playbooks for various common use cases, such as package installation, configuration
management, and service deployment.
- Ensure playbooks follow best practices and are modular and reusable.
- Test the playbooks against different target environments to validate their functionality.
- Integrate the project with GitLab CI/CD pipeline for continuous integration and delivery.
- Implement automated testing for Ansible playbooks and Docker setup to ensure reliable deployments.
In this section, we identify the main non-functional requirements; they characterize the restrictions and
constraints that weigh on our solution:
• Performance:
- The Ansible upgrade process should not cause any significant downtime to the existing infrastructure.
- Virtual environments should have minimal overhead in terms of memory and storage.
- CI/CD pipeline execution time should be optimized to reduce the waiting time for developers.
• Usability:
• Reliability:
- The upgraded Ansible should be thoroughly tested to ensure stability and reliability.
- The CI/CD pipeline should be resilient to handle failures gracefully and recover without manual
intervention when possible.
The actors are the elements that will translate our backlog into the finished product. Our solution has three
main actors:
(1) Developer/Tester: This is a user who can modify the source code for a given objective (adding a
feature, fixing an error, etc.) from a code repository server. They can also initiate build tasks to ensure
continuous integration and continuous delivery of the project. Additionally, they are responsible for
continuous monitoring of the code based on unit tests and regression tests.
(2) Project Manager/Integrator: This is a user who inherits the responsibilities of the developer and
validates automated scripts for production deployment. They are in charge of setting up the CI/CD
pipeline and monitoring the platform’s status.
(3) Administrator: This is a user who handles the setup and configuration of the platform and ensures the
monitoring and proper functioning of the cluster.
In this section, we identify the SCRUM team members for our project, and then we present the product
backlog and the sprint planning.
In the previous chapter, we introduced the general SCRUM roles. In this section, we present the three
SCRUM roles for our project, which are organized as follows:
The following backlog is outlined, detailing a series of tasks and steps. These tasks encompass the creation
of a virtual environment, upgrading Ansible, developing Ansible playbooks for information collection using
PowerShell and SSH, integrating with GitLab pipelines, and deploying on Docker, as shown in Table 3.1.
ID Story | Functionality | User Story | Sprint
1.1 | | Study the existing solution and its issues | Release 1
2.2 | Environment Setup and Ansible Upgrade | Upgrade Ansible to the required version within the virtual environment. | Release 1
5.1 | Research and download the latest version of Docker Desktop and follow the installation instructions for Windows.
5.2 | Verify Docker installation by running a simple container.
Based on our product backlog, we divide our project into 4 releases. Table 3.2 illustrates the planning for
each sprint.
At the backend infrastructure level, our technology stack and modules are structured into different layers to
support the application-level functionalities. These layers play a crucial role in our system, as shown in
figure 3.1.
Application Level: This is where our actual applications reside, built on the lower layers. The key
components in this layer include:
• EcoSystem: An overarching management system wrapped around our viewers. When a user is logged
into the EcoSystem, they are automatically logged into every viewer. This enhances user convenience
and simplifies access.
• DigaTwin: This application offers a set of features and functionalities for our users.
• DexViewer: Another application split into modules to facilitate various functions. The primary goal
here is to select all required modules with a configuration file.
Dex Modules: These modules are integral to our application. They implement DcxCore interfaces
and can be used by the Dcx Viewer. For instance, there could be a module designed for a 3D viewer,
enhancing the capabilities of our applications.
Library Level: This is the foundational layer on which everything else is built. It includes:
• Core: This core library serves as the foundation for all modules and applications. It defines interfaces
that are implemented by the application level. It can utilize the Navvis Connector and is responsible
for a wide range of functions.
• Navvis Connector: The Navvis Connector module is instrumental in enabling us to utilize the Navvis
API. It plays a pivotal role by including functionalities related to user and group management, as well
as permission management. It acts as an intermediate layer that connects our system to the Navvis
platform, enhancing data accessibility and functionality.
Base Technology: This layer encompasses the technology stack that we utilize, and it’s important to
note that none of the components in this layer are self-written. Our technology stack comprises:
• Kotlin: This is our programming language of choice, known for its conciseness and expressiveness.
• Ktor: Ktor serves as our server framework, facilitating the development of robust and high-performance
server-side applications.
• Morphia: Morphia is the database connector that helps in connecting our applications to the database
efficiently.
• MongoDB: MongoDB is our database of choice, providing a scalable, NoSQL storage solution that
supports our data management and retrieval needs.
In summary, our system’s backend infrastructure is well-organized into layers, each serving a specific
purpose to ensure the smooth operation of our applications and modules. The technology stack and modules,
along with their interactions, enable us to offer robust digital solutions to our users.
This type of diagram provides an overview of the Services, their relationships, and the structure of an entire
application. It showcases the major Services, and how they connect with each other as shown in figure 3.2
• Windows 10
Windows 10 is a widely used operating system developed by Microsoft. It is part of the Windows NT family
and succeeded Windows 8.
3.5.2 DevOps
• Ansible
Ansible is an open-source IT automation platform from Red Hat. It enables organizations to automate
many IT processes usually performed manually, including provisioning, configuration management, application
deployment, and orchestration.[9]
• YAML
YAML is a human-readable data serialization format. It is frequently employed for configuring files and
exchanging data between languages with varying data structures[10].
In our project, we utilize YAML as it serves as the favored language for defining playbooks. These playbooks
constitute collections of automation tasks intended for execution on remote systems. Additionally, the
configuration of GitLab CI/CD pipelines is accomplished through YAML files (gitlab-ci.yml) housed within
our repository.
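As a short illustration of the format, the following YAML fragment (with invented keys and values) shows how indentation expresses structure:

    application: dcx-viewer      # a mapping of key/value pairs (illustrative)
    replicas: 2
    modules:                     # a nested list
      - viewer-3d
      - user-management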
• Docker
Docker is a platform and toolset for developing, shipping, and running applications in containers.
Containers are lightweight, portable, and self-sufficient units that encapsulate everything needed to run
an application, including the code, runtime, libraries, and system tools. Docker provides a consistent and
reproducible environment across different machines, making it easier to deploy and scale applications.
• GitLab
GitLab, the central repository and orchestrator of CI/CD pipelines, facilitates seamless integration of new
code changes and effective project management.[11]
• OverLeaf
Overleaf is a free online platform for editing text in LaTeX without the need for any application downloads.[12]
Overleaf is the software used to create this report.
• Visual Studio Code
Visual Studio Code is a free source code editor developed by Microsoft for Windows, Linux, and macOS.[13]
VS Code also provides advanced features like syntax highlighting, auto-completion, and real-time error
checking for YAML, the language predominantly used for writing Ansible playbooks. This ensures accurate
syntax and formatting, reducing errors.
• IntelliJ
IntelliJ IDEA is an integrated development environment (IDE) designed for Java development but also
supports various other programming languages through plugins. It is developed by JetBrains and is widely
used by developers for building Java applications, as well as for web development, mobile app development,
and other programming tasks.
• Bash
A Bash script is a file containing a series of commands written in the Bash (Bourne Again SHell)
scripting language. It allows users to automate tasks, execute sequences of commands, and perform various
operations in a Unix-like environment. Bash scripts enhance efficiency and reproducibility by encapsulating
a set of instructions that can be executed sequentially or conditionally.
Conclusion
In this chapter, we started by defining the functional and non-functional requirements of our application,
gaining a comprehensive understanding of our project’s scope. We then created the product backlog, which
includes various user stories, and presented a general use case diagram illustrating the expected functionalities
of our solution. We also provided insights into our working environment, highlighting the essential tools,
software, and technologies necessary for our project’s development. Furthermore, we introduced the architectural
model to facilitate comprehension of interactions among the application’s key components. Moving forward,
the next chapter will delve into implementation details and the sprints planned for each version.
Chapter 4
Realization
Contents
1 First Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2 Second Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3 Third Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4 Fourth Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Introduction
In this chapter, we delve into the practical implementation and realization of the infrastructure
automation project with Beecoders. The journey is divided into four releases, each representing
a significant milestone in the project’s development. This chapter explores the details, challenges, and
achievements of each release, providing a comprehensive overview of the project’s realization.
The first release was carried out over three Sprints, with the first Sprint focused on creating a virtual
environment (venv), as shown in Table 4.1:
ID Story | User Story | Sprint
The second covers the upgrade of Ansible, as shown in Table 4.2:
ID Story | User Story | Sprint
2.2 | Upgrade Ansible to the required version within the virtual environment | Sprint 2
The third ensures Ansible's compatibility with Python, as shown in Table 4.3:
ID Story | User Story | Sprint
2.3 | Ensure compatibility of Ansible with Python in the virtual environment. | Sprint 3
Version of Python
Figure 4.1 is a snapshot of the current Python environment, showing the installed version.
Pip Install
Featured in this screenshot is a pivotal command-line operation: wget https://bootstrap.pypa.io/get-pip.py.
By fetching the get-pip.py script from the specified URL, we secure the foundation for seamless package
management within the virtual environment (venv), as shown in figure 4.2.
Properties Install
Highlighted in figure 4.4 is the installation of the software-properties-common package with the -y flag.
Installing this package lays the groundwork for a streamlined virtual environment (venv) creation process
and a meticulous system setup for Python development.
Current Venv
In this screenshot (figure 4.5), the command cd python-venv is executed. By navigating into the
python-venv directory, we enter the virtual environment (venv) prepared in the previous steps.
Existing Ansible Version
The screenshot in figure 4.6 shows the existing Ansible version, 2.9.
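The upgrade itself (figure 4.8) can be sketched as a small playbook run against the control machine; the use of the pip module and the venv path are assumptions based on the steps above, not the project's actual files:

    ---
    - name: Upgrade Ansible inside the virtual environment
      hosts: localhost
      connection: local
      tasks:
        - name: Install the latest Ansible release via pip
          ansible.builtin.pip:
            name: ansible
            state: latest
            virtualenv: ~/python-venv    # the venv created earlier (assumed path)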
The second release was carried out over two Sprints, with the first Sprint focused on setting up Ansible in
VS Code and creating a playbook, as shown in Table 4.4:
ID Story | User Story | Sprint
The second covers the creation of the playbook to gather information using VS Code.
ID Story | User Story | Sprint
3.4 | Implement tasks to collect relevant system data, log files, and configuration details, and ensure modularity and reusability in playbook design | Sprint 2
3.5 | Create Ansible playbooks to gather system information using VS Code. | Sprint 2
The third ensures the validation of playbook functionality, as shown in Table 4.6:
ID Story | User Story | Sprint
Inventory
Featured in figure 4.11 is the inventory file within VS Code. The inventory file is a cornerstone of
Ansible playbook orchestration, enabling us to define and organize target hosts, group them logically,
assign variables, and scale dynamically. It supports precise, flexible, and secure execution of tasks
across diverse systems.
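As a hedged sketch, an inventory in Ansible's YAML format could group target hosts like this; the group name, addresses, and variables are invented for illustration:

    all:
      children:
        app_servers:              # illustrative group of managed hosts
          hosts:
            192.0.2.10:           # documentation-range addresses, for illustration
            192.0.2.11:
          vars:
            ansible_user: deploy  # SSH user used by the control node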
Hosts File
In this screenshot, we see the significance of the IP-address.yml file, a critical piece of the puzzle.
Within its contents lie vital details: agent-number, dsc-version, dsc-job-id, and feeder-version.
These attributes guide the playbook's interactions, ensuring precision and relevance in every action,
as shown in figure 4.12.
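Based on the attributes listed above, such a host variables file might look like the following sketch; all values are placeholders, not project data:

    # IP-address.yml -- values below are placeholders
    agent-number: 42
    dsc-version: "1.4.2"
    dsc-job-id: "job-0001"
    feeder-version: "2.0.1"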
App-config Playbook
Figure 4.13 shows the app-config.yml playbook, which fine-tunes application behavior. This playbook
automates the deployment of configuration settings, ranging from database connections to environment
variables, ensuring optimal application performance through its sequential tasks and dynamic adaptation.
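A minimal sketch of what such a playbook might contain is shown below; the inventory group, template path, destination, and connection string are assumptions for illustration:

    ---
    - name: Deploy application configuration        # sketch of app-config.yml
      hosts: app_servers                            # assumed inventory group
      tasks:
        - name: Render the application properties from a template
          ansible.builtin.template:
            src: templates/application.properties.j2
            dest: /opt/app/application.properties
        - name: Export an environment variable for the service
          ansible.builtin.lineinfile:
            path: /etc/environment
            line: "DB_URL=mongodb://mongo:27017"    # illustrative connection string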
The third release was carried out over two Sprints; the first covers GitLab integration, as shown in
Table 4.7:
ID Story | User Story | Sprint
4.2 | Develop a .gitlab-ci.yml file to define GitLab CI/CD pipeline stages | Sprint 1
ID Story | User Story | Sprint
4.3 | Test the GitLab pipeline to ensure smooth execution of playbook stages | Sprint 2
4.4 | Verify that the information gathering playbooks execute successfully within the pipeline | Sprint 2
Pipeline Dashboard
Highlighted in this screenshot is the playbook execution dashboard within the pipeline environment.
This interface serves as the nerve center for orchestrating playbook runs: with each click,
configurations come alive, tasks unfold, and results materialize, as shown in figure 4.14.
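As a hedged sketch, the pipeline job behind this dashboard could invoke the playbooks as follows; the job name, playbook file names, and inventory path are assumptions:

    run-playbooks:
      stage: deploy
      before_script:
        - pip install ansible                             # make Ansible available on the runner
      script:
        - ansible-playbook -i inventory gather-info.yml   # assumed playbook file names
        - ansible-playbook -i inventory app-config.yml
      when: manual                                        # runs are triggered from the dashboard

Figures 4.15 and 4.16 show the resulting runs of the information-gathering playbooks and the app-config playbook within the pipeline.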
The fourth release conducts thorough research on and downloads the latest version of Docker Desktop,
adheres to the installation instructions tailored for Windows users, and validates the Docker installation
by executing a simple container, as shown in Table 4.9.
ID Story | User Story | Sprint
5.1 | Research and download the latest version of Docker Desktop and follow the installation instructions for Windows. | Sprint 1
In the second sprint, we begin by identifying the essential dependencies and configurations for the
Ktor projects. Subsequently, we craft distinct Dockerfiles for each Ktor project and ensure the robustness
of each Dockerfile by rigorously testing it through building and running containers.
ID Story | User Story | Sprint
Third, we compose a docker-compose.yml file to orchestrate the deployment of the Ktor projects. We
clearly define services for each individual Ktor project and, if required, establish a dedicated service for
the database, taking care to configure network settings so that all projects communicate within the same
overarching network.
ID Story | User Story | Sprint
5.5 | Create a docker-compose.yml file and define services for each Ktor project. | Sprint 3
Fourth, we execute the command docker-compose up -d to initiate the deployment of services in
detached mode, confirm that all services start without encountering any errors, and then conduct
comprehensive tests to verify connectivity between the Ktor projects and the associated database.
ID Story | User Story | Sprint
5.6 | Set up a service for the database (if needed) and configure network settings to ensure projects are on the same main network. | Sprint 4
5.7 | Verify that all services start without errors by running the command docker-compose up -d and test connectivity between Ktor projects and the database. | Sprint 4
Fifth, we create comprehensive documentation outlining the structure and usage of the Dockerfiles for
the projects, clearly articulating the steps involved, including dependencies and configurations. Additionally,
we furnish detailed instructions for running the projects with docker-compose, ensuring a user-friendly
guide for seamless execution and deployment.
ID Story | User Story | Sprint
5.8 | Document the structure and usage of Dockerfiles and provide instructions for running projects using docker-compose. | Sprint 5
Installing Docker
Figure 4.17 shows the Docker installation process, following the on-screen instructions, which may
include accepting license agreements and choosing installation options.
Verify Docker
Figure 4.18 shows two verification commands: docker --version retrieves and displays the installed
Docker version and build information in the command-line interface, and Get-Service Docker in
PowerShell fetches details about the Docker service, including its status and startup type, which is
useful for managing the Docker daemon's lifecycle.
Dockerfile
Figure 4.19 shows the Dockerfile, which orchestrates the build process for a Java-based application using
Gradle. It separates the build and runtime stages, optimizing the final image size for deployment. The
resulting image includes the compiled application JAR, the necessary environment files, and configuration
properties, ready to run as a Java application in a Docker container.
Database
Figure 4.20 shows the Docker Compose file, which sets up a MongoDB service, exposes it on port 27017,
connects it to a custom network named dcx-network, and persists data in the /var/lib/mongodb
directory on the host machine. The MongoDB container is named mongo and uses version 5.0.8 of the
MongoDB image. The --bind_ip option allows connections from any IP address, and the container
restarts automatically unless explicitly stopped.
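A compose file matching this description could look like the following sketch, reconstructed from the description above rather than copied from the project:

    version: "3.8"                       # assumed compose file version
    services:
      mongo:
        image: mongo:5.0.8               # MongoDB 5.0.8, as described
        container_name: mongo
        command: ["mongod", "--bind_ip", "0.0.0.0"]   # accept connections from any IP
        ports:
          - "27017:27017"
        volumes:
          - /var/lib/mongodb:/data/db    # persist data on the host machine
        restart: unless-stopped
        networks:
          - dcx-network
    networks:
      dcx-network:
        driver: bridge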
4.5 Conclusion
In this chapter, we embarked on a journey captured through screenshots and structured sprints: from
creating virtual environments to orchestrating Ansible playbooks, and from installing Docker to deploying
our Ktor projects. As we close this chapter, these achievements stand as a testament to our commitment
to efficient, precise, and collaborative development.
General Conclusion
The digital transformation of building infrastructure has exposed the limitations of traditional
manual methods. The inefficiencies, error-proneness, and time-consuming
nature of manual processes demand a paradigm shift. Recognizing this imperative, our adoption of the
DevOps approach has emerged as a robust solution to tackle these challenges head-on.
The DevOps methodology, centered on reducing deployment cycles and fostering agility among
teams, has played a pivotal role in reshaping Beecoders' approach to digitalization. Through the
automation of continuous integration and deployment, we have not only streamlined processes but also
addressed inconsistencies, laying the groundwork for scalable and reproducible digital building environments.
This report has delved into a comprehensive exploration of our mission, highlighting the paramount
importance of reducing deployment cycles, enhancing team agility, and automating critical processes. The
incorporation of DevOps principles underscores our commitment to staying at the forefront of modern
digital transformation. As we navigate the swiftly evolving landscape, Beecoders is helping to revolutionize
the management of building infrastructure in the digital age. Through innovation, collaboration,
and unwavering dedication to efficiency, our journey sets the stage for a future where digitalization
seamlessly integrates with infrastructure management, creating a more agile and reliable environment.
Our project’s chosen approach consisted of several releases, each expanding on the one before it. The
process started with setting up the development environment and continued with writing Ansible playbooks
for tasks related to infrastructure management. The implementation of a Continuous Integration (CI) pipeline
marked a crucial milestone, streamlining the workflow and ensuring automated testing and integration. The
final release fine-tuned the infrastructure automation solution, making it production-ready. The project team
encountered various challenges, from configuration management to issue resolution, but each challenge was
met with determination and innovation. The implementation of the Scrum project management methodology
was one of the project’s most noteworthy features. This method placed a strong emphasis on precise
deadlines, economical iterations, and early issue identification. The Product Owner, Scrum Master, and
Development Team collaborated as the Scrum Team to guarantee efficient project planning and execution.
Transparency and progress monitoring were greatly aided by key artifacts like the Scrum Board, Burndown
Chart, Product Backlog, and Sprint Backlog.
A future perspective is to transform infrastructure building into a seamless, automated, and agile process.
Recognizing the limitations of manual methods, the company embraces the DevOps approach to reduce
deployment cycles, enhance team agility, and automate critical processes. This strategic shift not only
addresses current challenges but positions Beecoders as a leader in modern digital transformation.
The integration of DevOps principles signifies a commitment to efficiency and innovation, setting the stage
for a future where digitalization seamlessly converges with infrastructure management for a more agile and
reliable environment.
Bibliography
[3] Benefits of DevOps, [Accessed 15-12-2022]. [Online]. Available: https://www.1min30.com/gestion-de-projet/les-avantages-devops-1287497989.
[4] Principles of DevOps, [Accessed 15-12-2022]. [Online]. Available: https://www.atlassian.com/devops/what-is-devops.
[12] Definition of Overleaf, [Accessed 01-06-2023]. [Online]. Available: https://www.overleaf.com/learn.
[13] VSCode, [Accessed 15-02-2023]. [Online]. Available: https://code.visualstudio.com/docs.
[15] Characteristics of the Scrum methodology, [Accessed 31-03-2023]. [Online]. Available: https://bubbleplan.net/blog/comprendre-scrum-methodologie-agile-gestion-projet-reussie/.