
Unit-2

Introducing software architecture


DevOps Model:
The DevOps model goes through several phases governed by cross-discipline teams. Those phases are
as follows:
Planning, Identifying, and Tracking
Using modern project management tools and agile practices, teams track ideas and workflows visually.
This gives all important stakeholders a clear pathway to prioritization and better results. With better
oversight, project managers can ensure teams are on the right track and aware of potential obstacles
and pitfalls. All applicable teams can work together more effectively to solve any problems in the
development process.
Development Phase
Version control systems help developers code continuously, ensuring each patch merges cleanly
with the master branch. Each completed feature triggers the developer to submit a merge request that, if
approved, allows the changes to replace the existing code. Development is ongoing.
Testing Phase
After a build is completed in development, it is sent to QA testing. Catching bugs is important to the
user experience, so in DevOps bug testing happens early and often. Practices like continuous integration
allow developers to use automated builds and tests as a cornerstone of continuous development.
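For a concrete picture of what "automated builds and tests" can mean, here is a minimal sketch of a test file that a CI server could run on every commit; the apply_discount function and its business rule are hypothetical, for illustration only:

# test_pricing.py -- a unit test suite a CI server could run automatically
# on every commit. The apply_discount function is hypothetical.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()

A CI server simply runs this suite on every push; a failing test blocks the merge, which is how bugs are caught early and often.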
Deployment Phase
In the deployment phase, most businesses strive to achieve continuous delivery. This means every
change that passes the pipeline is ready to release, with the final push to production triggered
manually. After bugs have been detected and resolved, and the user experience has been refined, a
release team performs this manual deployment. By contrast, continuous deployment is a DevOps
approach that automates deployment as soon as QA testing has been completed.

Management Phase
During the post-deployment management phase, organizations monitor and maintain the
DevOps architecture in place. This is achieved by collecting and interpreting data from users,
and by ensuring security, availability, and more.

What is monolithic architecture?


A monolithic architecture is the traditional unified model for the design of a software
program.
Monolithic, in this context, means "composed all in one piece." According to the Cambridge
dictionary, the adjective monolithic also means both "too large" and "unable to be changed."

Benefits of monolithic architecture
There are benefits to monolithic architectures, which is why many applications are still
created using this development paradigm. For one, monolithic programs may have better
throughput than modular applications. They may also be easier to test and debug because,
with fewer elements, there are fewer testing variables and scenarios that come into play. At
the beginning of the software development lifecycle, it is usually easier to go with the
monolithic architecture since development can be simpler during the early stages. A single
codebase also simplifies logging, configuration management, application performance
monitoring and other development concerns. Deployment can also be easier by copying the
packaged application to a server. Finally, multiple copies of the application can be placed
behind a load balancer to scale it horizontally. That said, the monolithic approach is usually
better for simple, lightweight applications. For more complex applications with frequent
expected code changes or evolving scalability requirements, this approach is not suitable.
Drawbacks of monolithic architecture
Generally, monolithic architectures suffer from drawbacks that can delay application
development and deployment. These drawbacks become especially significant when the
product's complexity increases or when the development team grows in size. The code base
of a monolithic application can be difficult to understand because it may be extensive,
which can make it hard for new developers to modify the code to meet changing business
or technical requirements. As requirements evolve or become more complex, it becomes
difficult to correctly implement changes without hampering the quality of the code and
affecting the overall operation of the application. Following each update to a monolithic
application, developers must compile the entire codebase and redeploy the full application
rather than just the part that was updated. This makes continuous or regular deployments
difficult, which then affects the application's and team's agility. The application's size can also
increase startup time and add to delays. In some cases, different parts of the application may
have conflicting resource requirements. This makes it harder to find the resources required to
scale the application.
Architecture Rules of Thumb
1. There is always a bottleneck.
Even in a serverless system or one you think will “infinitely” scale, pressure will always be
created elsewhere. For example, if your API scales, does your database also scale? If your
database scales, does your email system? In modern cloud systems, there are so many
components that scalability is not always the goal. Throttling systems are sometimes the best
choice.
2. Your data model is linked to the scalability of your application.
If your table design is garbage, your queries will be cumbersome, so accessing data will be
slow. When designing a database (NoSQL or SQL), carefully consider your access pattern
and what data you will have to filter. For example, with DynamoDB, you need to consider
which key you will use to retrieve data. If that field is not set as the partition key or sort key,
you will be forced to use a scan rather than a faster query.
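As a hedged illustration of this rule (the Orders table, its customer_id partition key, and the status attribute are hypothetical), a sketch using boto3, the AWS SDK for Python, shows the difference between a key-based query and a full-table scan:

# Sketch: querying DynamoDB by key vs. scanning. Assumes a hypothetical
# "Orders" table whose partition key is "customer_id".
import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource("dynamodb").Table("Orders")

# Fast: a query uses the partition key, so DynamoDB reads only
# the matching items.
matching = table.query(
    KeyConditionExpression=Key("customer_id").eq("C-1001")
)["Items"]

# Slow: filtering on a non-key attribute forces a scan, which reads
# every item in the table and filters afterwards.
late_orders = table.scan(
    FilterExpression=Attr("status").eq("LATE")
)["Items"]

On a large table, the query touches only one partition, while the scan reads and filters every item, which is why the access pattern must be designed into the key schema up front.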
3. Scalability is mainly linked with cost. When you get to a large scale, consider systems
where this relationship does not track linearly.
If, like many, you have systems on RDS and ECS, these will scale nicely. But the downside is
that as you scale, you will pay directly for that increased capacity. It’s common for these
workloads to cost $50,000 per month at scale. The solution is to migrate these workloads to
serverless systems proactively.
4. Favour systems that require little tuning to make fast. The days of configuring your
own servers are over. AWS, GCP and Azure all provide fantastic systems that don’t need
expert knowledge to achieve outstanding performance.
5. Use infrastructure as code. Terraform makes it easy to build repeatable and version-
controlled infrastructure. It creates an ethos of collaboration and reduces errors by defining
infrastructure in code rather than relying on someone not to miss a critical checkbox.
6. Use a PaaS if you’re at less than 100k MAUs. With Heroku, Fly and Render, there is no
need to spend hours configuring AWS and messing around with your application build
process. Platform-as-a-service should be leveraged to deploy quickly and focus on the
product.
7. Outsource systems outside of the market you are in. Don’t roll your own CMS or
Auth, even if it costs you tonnes.
If you go to the pricing page of many third-party systems, for enterprise-scale, the cost is
insane - think $10,000 a month for an authentication system! “I could make that in a week,”
you think. That may be true, but it doesn’t consider the long-term maintenance and the time
you cannot spend on your core product. Where possible, buy off the shelf.
8. You have three levers: quality, cost, and time. You have to balance them accordingly.
You have, at best, 100 "points" to distribute among the three. Of course, you always want to
maintain quality, so the other levers to pull are time and cost.

The Separation of Concerns


Separation of concerns is a software architecture design pattern/principle for separating an
application into distinct sections, so that each section addresses a separate concern. At its essence,
separation of concerns is about order. The overall goal of separation of concerns is to
establish a well-organized system where each part fulfils a meaningful and intuitive role
while maximizing its ability to adapt to change.

How is separation of concerns achieved?
Separation of concerns in software architecture is achieved by the establishment of
boundaries. A boundary is any logical or physical constraint which delineates a given set of
responsibilities. Some examples of boundaries would include the use of methods, objects,
components, and services to define core behaviour within an application; projects, solutions,
and folder hierarchies for source organization; application layers and tiers for processing
organization.
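As a minimal sketch of such boundaries (the user-report domain is hypothetical), each function below owns exactly one concern (data access, business rules, or presentation) and could live in its own module, layer, or service:

# Sketch: three boundaries, each with a single concern.
# The "users" data and report format are hypothetical.

# Data access layer: only concern is fetching raw data.
def fetch_users(db: dict) -> list[dict]:
    return db["users"]

# Business logic layer: only concern is applying a rule.
def active_users(users: list[dict]) -> list[dict]:
    return [u for u in users if u["active"]]

# Presentation layer: only concern is formatting output.
def render_report(users: list[dict]) -> str:
    return "\n".join(u["name"] for u in users)

db = {"users": [{"name": "Ada", "active": True},
                {"name": "Bob", "active": False}]}
print(render_report(active_users(fetch_users(db))))

Because each function addresses one responsibility, a change to the report format touches only the presentation boundary, leaving the data access and business rules untouched.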
Separation of concerns – advantages
Separation of Concerns implemented in software architecture would have several advantages:
1. Lack of duplication and singularity of purpose of the individual components render the
overall system easier to maintain.
2. The system becomes more stable as a byproduct of the increased maintainability.
3. The strategies required to ensure that each component only concerns itself with a single set
of cohesive responsibilities often result in natural extensibility points.
4. The decoupling which results from requiring components to focus on a single purpose
leads to components which are more easily reused in other systems, or different contexts
within the same system.
5. The increase in maintainability and extensibility can have a major impact on the
marketability and adoption rate of the system.
There are several flavors of separation of concerns: horizontal separation, vertical
separation, data separation, and aspect separation. In this article, we will restrict ourselves
to horizontal and aspect separation of concerns.

Handling database migrations


Introduction
Database schemas define the structure and interrelations of data managed by relational
databases. While it is important to develop a well-thought-out schema at the beginning of
your projects, evolving requirements make changes to your initial schema difficult or
impossible to avoid. And since the schema manages the shape and boundaries of your data,
changes must be carefully applied to match the expectations of the applications that use it and
avoid losing data currently held by the database system.
What are database migrations?
Database migrations, also known as schema migrations, database schema migrations, or
simply migrations, are controlled sets of changes developed to modify the structure of the
objects within a relational database. Migrations help transition database schemas from their
current state to a new desired state, whether that involves adding tables and columns,
removing elements, splitting fields, or changing types and constraints.

Migrations manage incremental, often reversible, changes to data structures in a
programmatic way. The goals of database migration software are to make database changes
repeatable, shareable, and testable without loss of data. Generally, migration software
produces artifacts that describe the exact set of operations required to transform a database
from a known state to the new state. These can be checked into and managed by normal
version control software to track changes and share among team members. While preventing
data loss is generally one of the goals of migration software, changes that drop or
destructively modify structures that currently house data can result in deletion. To cope with
this, migration is often a supervised process involving inspecting the resulting change scripts
and making any modifications necessary to preserve important information.
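As a sketch of such an artifact (the users table and email column are hypothetical; SQLite is used only to keep the example self-contained), a single migration can carry both a forward and a reverse operation:

# Sketch: one reversible migration artifact (hypothetical schema).
import sqlite3

def upgrade(conn: sqlite3.Connection) -> None:
    conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

def downgrade(conn: sqlite3.Connection) -> None:
    # DROP COLUMN requires SQLite 3.35 or newer.
    conn.execute("ALTER TABLE users DROP COLUMN email")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
upgrade(conn)    # move the schema forward...
downgrade(conn)  # ...and, because the change is reversible, back again

A file like this can be checked into version control alongside the application code that depends on the new column, which is exactly what makes the change repeatable, shareable, and reviewable.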

What are the advantages of migration tools?


Migrations are helpful because they allow database schemas to evolve as requirements
change. They help developers plan, validate, and safely apply schema changes to their
environments. These compartmentalized changes are defined on a granular level and describe
the transformations that must take place to move between various "versions" of the database.
In general, migration systems create artifacts or files that can be shared, applied to multiple
database systems, and stored in version control. This helps construct a history of
modifications to the database that can be closely tied to accompanying code changes in the
client applications. The database schema and the application's assumptions about that
structure can evolve in tandem. Another benefit is that the generation of the list of operations
can be separated from their execution, allowing (and sometimes requiring) developers to
manually tweak the process. Each change can be audited, tested, and modified to ensure that
the correct results are obtained while still relying on automation for the majority of the
process.
State based migration
State based migration software creates artifacts that describe how to recreate the desired
database state from scratch. The files that it produces can be applied to an empty relational
database system to bring it fully up to date. After the artifacts describing the desired state are
created, the actual migration involves comparing the generated files against the current state
of the database. This process allows the software to analyze the difference between the two
states and generate a new file or files to bring the current database schema in line with the
schema described by the files. These change operations are then applied to the database to
reach the goal state.
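A toy sketch of that comparison step (hypothetical users table; SQLite for self-containment): the desired state is declared up front, the live schema is inspected, and the difference becomes the change script:

# Sketch: state-based diffing. Desired columns are declared up front;
# the live schema is read back and the missing columns become ALTERs.
import sqlite3

DESIRED = {"id": "INTEGER", "name": "TEXT", "email": "TEXT"}  # goal state

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")    # current state

current = {row[1] for row in conn.execute("PRAGMA table_info(users)")}
changes = [f"ALTER TABLE users ADD COLUMN {col} {typ}"
           for col, typ in DESIRED.items() if col not in current]

for stmt in changes:       # in practice a human reviews these first
    conn.execute(stmt)
print(changes)             # ['ALTER TABLE users ADD COLUMN email TEXT']

Note that a real tool must handle renames, drops, and type changes, which is precisely where the human review described above becomes essential.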

What to keep in mind with state-based migrations


Like almost all migrations, state-based migration files must be carefully examined by
knowledgeable developers to oversee the process. Both the files describing the desired final
state and the files that outline the operations to bring the current database into compliance
must be reviewed to ensure that the transformations will not lead to data loss. For example, if
the generated operations attempt to rename a table by deleting the current one and recreating
it with its new name, a knowledgeable human must recognize this and intervene to prevent
data loss. State based migrations can feel rather clumsy if there are frequent major changes to
the database schema that require this type of manual intervention. Because of this overhead,
this technique is often better suited for scenarios where the schema is well-thought out ahead
of time with fundamental changes occurring infrequently. However, state based migrations do
have the advantage of producing files that fully describe the database state in a single context.
This can help new developers onboard more quickly and works well with workflows in
version control systems since conflicting changes introduced by code branches can be
resolved easily.
Change based migrations
The major alternative to state-based migrations is a change based migration system. Change
based migrations also produce files that alter the existing structures in a database to arrive at
the desired state. Rather than discovering the differences between the desired database state
and the current one, this approach builds off of a known database state to define the
operations to bring it into the new state. Successive migration files are produced to modify
the database further, creating a series of change files that can reproduce the final database
state when applied consecutively.
Because change based migrations work by outlining the operations required from a known
database state to the desired one, an unbroken chain of migration files is necessary from the
initial starting point. This system requires an initial state, which may be an empty database
system or files describing the starting structure; the files describing the operations that take
the schema through each transformation; and a defined order in which the migration files must
be applied.
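A minimal sketch of a change-based runner (the migrations themselves are hypothetical; SQLite keeps it self-contained): each change is numbered, applied in order, and recorded so it is never applied twice:

# Sketch: applying an ordered chain of migrations exactly once each.
import sqlite3

MIGRATIONS = [  # hypothetical, ordered, append-only
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS schema_migrations (version INTEGER PRIMARY KEY)"
)

applied = {v for (v,) in conn.execute("SELECT version FROM schema_migrations")}
for version, statement in MIGRATIONS:
    if version not in applied:
        conn.execute(statement)
        conn.execute("INSERT INTO schema_migrations (version) VALUES (?)",
                     (version,))
conn.commit()

The schema_migrations table is what makes the chain safe to re-run: the runner always knows which state the database is in and applies only the changes that come after it.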
Microservices
Microservices, often referred to as microservices architecture, is an architectural approach
that involves dividing large applications into smaller, functional units capable of functioning
and communicating independently. This approach arose in response to the limitations of
monolithic architecture. Because monoliths are large containers holding all the software
components of an application, they are severely limited: inflexible, unreliable, and slow to
develop. With microservices, each unit is independently deployable but can communicate
with the others when necessary. Developers can now achieve the scalability, simplicity, and
flexibility needed to create highly sophisticated software.

How does microservices architecture work?
In a microservices architecture, an application is built as a collection of small, independent
components, each running its own process as a service. Each service is built around a single
business capability and communicates with the others through well-defined interfaces, using
lightweight APIs.

The key benefits of microservices architecture


Microservices architecture presents developers and engineers with a number of benefits that
monoliths cannot provide. Here are a few of the most notable.

1. Less development effort

Smaller development teams can work in parallel on different components to update existing
functionalities. This makes it significantly easier to identify hot services, scale them
independently of the rest of the application, and improve the application as a whole.

2. Improved scalability
Individual services are launched independently and can be developed in different languages
or technologies; because services interact only through well-defined interfaces, DevOps
teams can choose the most efficient tech stack for each service without fearing whether they
will work well together. These small services also run on relatively less infrastructure than
monolithic applications, since only the selected components are scaled, precisely to their
requirements.
3. Independent deployment
Each microservice constituting an application needs to be a full stack. This enables
microservices to be deployed independently at any point. Since microservices are granular in
nature, development teams can work on one microservice, fix errors, then redeploy it without
redeploying the entire application. Microservice architecture is agile, so adding or changing a
line of code, or adding or eliminating features, does not require a massive coordinated effort.
This helps streamline business operations through improved resilience and fault isolation.
4. Error isolation
In monolithic applications, the failure of even a small component of the overall application
can make it inaccessible. In some cases, locating the error can also be tedious. With
microservices, isolating the problem-causing component is easy since the entire application is
divided into standalone, fully functional software units. If errors occur, other non-related
units will still continue to function.
5. Integration with various tech stacks
With microservices, developers have the freedom to pick the tech stack best suited for one
particular microservice and its functions. Instead of opting for one standardized tech stack
encompassing all of an application’s functions, they have complete control over their options.

Microservices vs monolithic architecture
With monolithic architectures, all processes are tightly coupled and run as a single service.
This means that if one process of the application experiences a spike in demand, the entire
architecture must be scaled. Adding or improving a monolithic application’s features becomes
more complex as the code base grows. This complexity limits experimentation and makes it
difficult to implement new ideas. Monolithic architectures add risk for application availability
because many dependent and tightly coupled processes increase the impact of a single
process failure. With a microservices architecture, an application is built as independent
components that run each application process as a service. These services communicate via a
well-defined interface using lightweight APIs. Services are built for business capabilities and
each service performs a single function. Because they are independently run, each service can
be updated, deployed, and scaled to meet demand for specific functions of an application.
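To make the contrast concrete, here is a hedged sketch of a single-capability service (the inventory domain and its endpoint are hypothetical), using only the Python standard library; other services would interact with it solely through its small HTTP API:

# Sketch: a one-function "inventory" microservice (hypothetical domain).
# It owns its own data and exposes it only through a lightweight API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"sku-1": 12, "sku-2": 0}   # data private to this service

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "in_stock": STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Another service would call e.g. GET http://localhost:8000/sku-1
    HTTPServer(("", 8000), InventoryHandler).serve_forever()

Because no other service can reach the STOCK data except through this interface, the service can be rewritten, redeployed, or scaled on its own without touching the rest of the application.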
Data tier
The data tier in DevOps refers to the layer of the application architecture that is responsible
for storing, retrieving, and processing data. The data tier is typically composed of databases,
data warehouses, and data processing systems that manage large amounts of structured and
unstructured data.
In DevOps, the data tier is considered an important aspect of the overall application
architecture
and is typically managed as part of the DevOps process. This includes:
1. Data management and migration: Ensuring that data is properly managed and migrated as
part of the software delivery pipeline.
2. Data backup and recovery: Implementing data backup and recovery strategies to ensure
that data can be recovered in case of failures or disruptions (a minimal sketch appears at the
end of this section).
3. Data security: Implementing data security measures to protect sensitive information and
comply with regulations.
4. Data performance optimization: Optimizing data performance to ensure that applications
and services perform well, even with large amounts of data.
5. Data integration: Integrating data from multiple sources to provide a unified view of data
and support business decisions.
By integrating data management into the DevOps process, teams can ensure that data is
properly managed and protected, and that data-driven applications and services perform well
and deliver value to customers.
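As the sketch promised under point 2 above (file names are hypothetical; SQLite is used for self-containment), even the Python standard library can take an online backup of a live database:

# Sketch: an online backup of a live SQLite database (file names hypothetical).
import sqlite3

live = sqlite3.connect("app.db")            # the data tier's live database
backup = sqlite3.connect("app-backup.db")   # the recovery copy
live.backup(backup)                         # safe to run while "live" is in use
backup.close()
live.close()

In a DevOps pipeline, a step like this would run on a schedule, and the restore path would be tested regularly, since an unverified backup is not a recovery strategy.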

DevOps architecture and resilience

1) Build
Without DevOps, the cost of resource consumption was evaluated based on predefined
individual usage, with fixed hardware allocations. With DevOps, cloud usage and the sharing
of resources come into the picture, and the build is driven by the user's needs, which provides
a mechanism to control the usage of resources and capacity.
2) Code
Good practices such as using Git ensure that code is written for the business, help track
changes, provide notification of the reason behind a difference between the actual and the
expected output, and, if necessary, allow reverting to the code as originally developed. Code
can be appropriately organized in files, folders, etc., and can be reused.
3) Test
The application will be ready for production after testing. Manual testing consumes more
time, both in testing and in moving the code to production. Testing can be automated, which
decreases testing time and therefore the time to deploy code to production, since automating
the running of test scripts removes many manual steps.
4) Plan
DevOps uses the Agile methodology to plan development. With the operations and
development teams in sync, it is easier to organize work and plan accordingly to increase
productivity.
5) Monitor
Continuous monitoring is used to identify any risk of failure. It also helps in tracking the
system accurately so that the health of the application can be checked. Monitoring becomes
easier with services whose log data can be watched through third-party tools such as Splunk.
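As a small sketch of the kind of log data such tools ingest (the service name and event fields are hypothetical), an application can emit structured, machine-parsable log lines:

# Sketch: emitting structured log lines that a log aggregator
# (Splunk, etc.) can ingest. Field names are hypothetical.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout-service")

def log_event(event: str, **fields) -> None:
    log.info(json.dumps({"ts": time.time(), "event": event, **fields}))

log_event("request_served", path="/orders", status=200, latency_ms=42)
log_event("request_failed", path="/orders", status=500, latency_ms=1870)

Because each line is a self-describing JSON object, the aggregator can index, search, and alert on fields like status and latency_ms without custom parsing rules.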

6) Deploy
Many systems support schedulers for automated deployment. A cloud management platform
enables users to capture accurate insights, view optimization scenarios, and analyze trends
through the deployment of dashboards.
7) Operate
DevOps changes the traditional approach of developing and testing separately. The teams
operate collaboratively, with both teams actively participating throughout the service
lifecycle. The operations team interacts with developers, and together they come up with a
monitoring plan that serves the IT and business requirements.

8) Release
Deployment to an environment can be automated, but when the deployment is made to the
production environment, it is often done by manual triggering. Many release management
processes deliberately keep the production deployment manual to lessen the impact on
customers.

DevOps resilience
DevOps resilience refers to the ability of a DevOps system to withstand and recover from
failures and disruptions. This means ensuring that the systems and processes used in DevOps
are robust, scalable, and able to adapt to changing conditions. Some of the key components of
DevOps resilience include:
1. Infrastructure automation: Automating infrastructure deployment, scaling, and
management helps to ensure that systems are deployed consistently and are easier to manage
in case of failures or disruptions.
2. Monitoring and logging: Monitoring systems, applications, and infrastructure in real time
and collecting logs can help detect and diagnose issues quickly, reducing downtime (a small
sketch appears at the end of this section).
3. Disaster recovery: Having a well-designed disaster recovery plan and regularly testing it
can help ensure that systems can quickly recover from disruptions.
4. Continuous testing: Continuously testing systems and applications can help identify and
fix issues before they become critical.
5. High availability: Designing systems for high availability helps to ensure that systems
remain up and running even in the event of failures or disruptions.

By focusing on these components, DevOps teams can create a resilient and adaptive DevOps
system that is able to deliver high-quality applications and services, even in the face of
failures and disruptions.
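As the sketch promised in point 2 above (the URL, timeout, and retry counts are hypothetical), a health check with retries and exponential backoff illustrates detecting failures quickly while tolerating transient ones:

# Sketch: a health check with retry and backoff (URL/thresholds hypothetical).
import time
import urllib.error
import urllib.request

def is_healthy(url: str, attempts: int = 3, backoff_s: float = 1.0) -> bool:
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, TimeoutError):
            pass                       # transient failure: wait and retry
        time.sleep(backoff_s * (2 ** attempt))
    return False                       # escalate: alert someone / fail over

if __name__ == "__main__":
    print(is_healthy("http://localhost:8000/health"))

Retrying with backoff absorbs brief disruptions without raising an alarm, while a persistent failure still surfaces quickly enough to trigger recovery, which is the balance a resilient DevOps system aims for.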

