devops-unit 2
Management Phase
During the post-deployment management phase, organizations monitor and maintain the DevOps architecture they have put in place. This is achieved by reading and interpreting data from users, and by ensuring security, availability, and more.
Benefits of monolithic architecture
There are benefits to monolithic architectures, which is why many applications are still
created using this development paradigm. For one, monolithic programs may have better
throughput than modular applications. They may also be easier to test and debug because,
with fewer elements, there are fewer testing variables and scenarios that come into play. At
the beginning of the software development lifecycle, it is usually easier to go with the
monolithic architecture since development can be simpler during the early stages. A single
codebase also simplifies logging, configuration management, application performance
monitoring and other development concerns. Deployment can also be easier by copying the
packaged application to a server. Finally, multiple copies of the application can be placed
behind a load balancer to scale it horizontally. That said, the monolithic approach is usually
better for simple, lightweight applications. For more complex applications with frequent
expected code changes or evolving scalability requirements, this approach is not suitable.
Drawbacks of monolithic architecture
Generally, monolithic architectures suffer from drawbacks that can delay application
development and deployment. These drawbacks become especially significant when the
product's complexity increases or when the development team grows in size. The code base of a monolithic application can be difficult to understand because it may be extensive, which can make it hard for new developers to modify the code to meet changing business or technical requirements. As requirements evolve or become more complex, it becomes
difficult to correctly implement changes without hampering the quality of the code and
affecting the overall operation of the application. Following each update to a monolithic
application, developers must compile the entire codebase and redeploy the full application
rather than just the part that was updated. This makes continuous or regular deployments
difficult, which then affects the application's and team's agility. The application's size can also
increase startup time and add to delays. In some cases, different parts of the application may
have conflicting resource requirements. This makes it harder to find the resources required to
scale the application.
Architecture Rules of Thumb
1. There is always a bottleneck.
Even in a serverless system or one you think will “infinitely” scale, pressure will always be
created elsewhere. For example, if your API scales, does your database also scale? If your
database scales, does your email system? In modern cloud systems, there are so many
components that scalability is not always the goal. Throttling systems are sometimes the best
choice.
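To make the throttling idea concrete, here is a minimal token-bucket sketch in Python; the class and its parameters are illustrative, not from any particular library:

import time

class TokenBucket:
    # Allows `rate` requests per second on average, with bursts up to `capacity`.
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should reject, queue, or back off

bucket = TokenBucket(rate=5, capacity=10)
if not bucket.allow():
    print("throttled")  # e.g. return HTTP 429 instead of overloading downstream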
2. Your data model is linked to the scalability of your application.
If your table design is garbage, your queries will be cumbersome, so accessing data will be
slow. When designing a database (NoSQL or SQL), carefully consider your access pattern
and what data you will have to filter. For example, with DynamoDB, you need to consider
which key you will use to retrieve data. If that field is not set as the partition (primary) key or the sort key, you will be forced to use a scan rather than a faster query.
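A small boto3 sketch of the difference; the table name and attribute names here are hypothetical:

import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource("dynamodb").Table("orders")  # hypothetical table

# Fast: customer_id is the partition key, so DynamoDB can use a query.
fast = table.query(KeyConditionExpression=Key("customer_id").eq("c-123"))

# Slow: status is not a key, so DynamoDB must scan every item and filter.
slow = table.scan(FilterExpression=Attr("status").eq("SHIPPED"))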
3. Scalability is mainly linked with cost. When you get to a large scale, consider systems
where this relationship does not track linearly.
If, like many, you have systems on RDS and ECS, these will scale nicely. But the downside is
that as you scale, you will pay directly for that increased capacity. It’s common for these
workloads to cost $50,000 per month at scale. The solution is to migrate these workloads to
serverless systems proactively.
4. Favour systems that require little tuning to make fast. The days of configuring your
own servers are over. AWS, GCP and Azure all provide fantastic systems that don’t need
expert knowledge to achieve outstanding performance.
5. Use infrastructure as code. Terraform makes it easy to build repeatable and version-controlled infrastructure. It creates an ethos of collaboration and reduces errors by defining infrastructure in code rather than “missing” a critical checkbox in a console.
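Terraform definitions are written in its own HCL; to keep this unit's examples in Python, here is a minimal sketch of the same idea using Pulumi, an IaC tool with a Python SDK (the resource name is illustrative):

import pulumi
import pulumi_aws as aws

# The bucket is defined in version-controlled code rather than clicked
# together in a console; re-running the deployment converges the real
# infrastructure to this definition.
logs_bucket = aws.s3.Bucket("app-logs")

pulumi.export("bucket_name", logs_bucket.id)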
6. Use a PaaS if you’re at less than 100k MAUs. With Heroku, Fly and Render, there is no
need to spend hours configuring AWS and messing around with your application build
process. Platform-as-a-service should be leveraged to deploy quickly and focus on the
product.
7. Outsource systems outside of the market you are in. Don’t roll your own CMS or auth, even if buying one costs you tonnes.
If you go to the pricing page of many third-party systems, for enterprise-scale, the cost is
insane - think $10,000 a month for an authentication system! “I could make that in a week,”
you think. That may be true, but it doesn’t consider the long-term maintenance and the time
you cannot spend on your core product. Where possible, buy off the shelf.
8. You have three levers: quality, cost and time. You have to balance them accordingly.
You have, at best, 100 “points” to distribute between the three. Of course, you always want to
maintain quality, so the other levers to pull are time and cost.
How is separation of concerns achieved?
Separation of concerns in software architecture is achieved by the establishment of
boundaries. A boundary is any logical or physical constraint which delineates a given set of
responsibilities. Some examples of boundaries would include the use of methods, objects,
components, and services to define core behaviour within an application; projects, solutions,
and folder hierarchies for source organization; application layers and tiers for processing
organization.
Separation of concerns – advantages
Separation of Concerns implemented in software architecture would have several advantages:
1. Lack of duplication and singularity of purpose of the individual components render the
overall system easier to maintain.
2. The system becomes more stable as a byproduct of the increased maintainability.
3. The strategies required to ensure that each component only concerns itself with a single set
of cohesive responsibilities often result in natural extensibility points.
4. The decoupling which results from requiring components to focus on a single purpose
leads to components which are more easily reused in other systems, or different contexts
within the same system.
5. The increase in maintainability and extensibility can have a major impact on the marketability and adoption rate of the system.
There are several flavors of separation of concerns: Horizontal Separation, Vertical Separation, Data Separation and Aspect Separation. In this article, we will restrict ourselves to Horizontal and Aspect separation of concerns.
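As a minimal sketch of such a boundary in Python, the data-access concern below is kept apart from the business-logic concern; all names are illustrative:

class UserRepository:
    # Data-access concern: only knows how to fetch and store users.
    def __init__(self):
        self._users = {}  # stand-in for a real database

    def get(self, user_id):
        return self._users.get(user_id)

    def save(self, user_id, record):
        self._users[user_id] = record

class UserService:
    # Business-logic concern: enforces rules, knows nothing about storage.
    def __init__(self, repo):
        self.repo = repo  # the boundary between the two concerns

    def register(self, user_id, email):
        if self.repo.get(user_id) is not None:
            raise ValueError("user already exists")
        self.repo.save(user_id, {"email": email})

service = UserService(UserRepository())
service.register("u1", "user@example.com")

Because UserService depends only on the repository's interface, the storage implementation can be swapped (say, for a real database) without touching the business logic.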
Schema migration
A schema migration moves a database from its current state to a new desired state, whether that involves adding tables and columns, removing elements, splitting fields, or changing types and constraints. Declarative migration tools compare the current and desired states and generate a new file or files to bring the current database schema in line with the schema described by the files. These change operations are then applied to the database to reach the goal state.
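A minimal sketch of how such change operations can be applied, using Python's built-in sqlite3 module and a version table; the migrations themselves are illustrative:

import sqlite3

# Ordered change operations that move the schema toward the desired state.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
]

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    # Apply only the operations the database has not yet seen.
    for version, statement in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(statement)
        conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

migrate(sqlite3.connect(":memory:"))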
How does microservices architecture work?
A microservices architecture structures an application as a collection of small, independently deployable services, each responsible for a single business function. This approach brings several benefits:
1. Independent development
Smaller development teams can work in parallel on different components to update existing functionality. This makes it significantly easier to identify hot services, scale them independently of the rest of the application, and improve the application as a whole.
2. Improved scalability
Each microservice is launched and deployed independently and may be developed in a different language or technology; because services interact only through well-defined interfaces, teams can choose the most efficient tech stack for each service without fearing whether the stacks will work well together. These small services also run on relatively less infrastructure than monolithic applications, since only the selected components that need more capacity are scaled.
3. Independent deployment
Each microservice constituting an application is a complete, self-contained unit. This enables microservices to be deployed independently at any point. Since microservices are granular in nature, development teams can work on one microservice, fix errors, then redeploy it without redeploying the entire application. Microservice architecture is agile: adding or changing a line of code, or adding or removing a feature, does not require coordinating a change across the entire program. This agility, together with fault isolation, helps streamline business processes and improves resilience.
4. Error isolation
In monolithic applications, the failure of even a small component of the overall application
can make it inaccessible. In some cases, determining the error could also be tedious. With
microservices, isolating the problem-causing component is easy since the entire application is
divided into standalone, fully functional software units. If errors occur, other non-related
units will still continue to function.
5. Integration with various tech stacks
With microservices, developers have the freedom to pick the tech stack best suited to one particular microservice and its functions. Instead of opting for one standardized tech stack encompassing all of an application’s functions, they have complete control over their options (see the sketch after this list).
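As a sketch of such a standalone, single-function unit, here is a tiny service written with only Python's standard library; the port, route, and payload are illustrative. A service like this can be built, redeployed, and scaled without touching any other service:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PriceHandler(BaseHTTPRequestHandler):
    # This microservice does exactly one thing: quote a price.
    def do_GET(self):
        if self.path == "/price":
            body = json.dumps({"sku": "demo", "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8081), PriceHandler).serve_forever()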
Microservices vs monolithic architecture
With monolithic architectures, all processes are tightly coupled and run as a single service.
This means that if one process of the application experiences a spike in demand, the entire
architecture must be scaled. Adding or improving a monolithic application’s features becomes
more complex as the code base grows. This complexity limits experimentation and makes it
difficult to implement new ideas. Monolithic architectures add risk for application availability
because many dependent and tightly coupled processes increase the impact of a single
process failure. With a microservices architecture, an application is built as independent
components that run each application process as a service. These services communicate via a
well-defined interface using lightweight APIs. Services are built for business capabilities and
each service performs a single function. Because they are independently run, each service can
be updated, deployed, and scaled to meet demand for specific functions of an application.
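Continuing the hypothetical price service sketched above, another service consumes it only through its HTTP interface; the URL would normally come from configuration or service discovery:

import json
import urllib.request

# A checkout service calling the price service over its lightweight API.
with urllib.request.urlopen("http://localhost:8081/price") as resp:
    quote = json.loads(resp.read())

# The caller depends only on the interface, not on the service's code or stack.
total = 2 * quote["price"]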
Data tier
The data tier in DevOps refers to the layer of the application architecture that is responsible
for storing, retrieving, and processing data. The data tier is typically composed of databases,
data warehouses, and data processing systems that manage large amounts of structured and
unstructured data.
In DevOps, the data tier is considered an important aspect of the overall application
architecture
and is typically managed as part of the DevOps process. This includes:
1. Data management and migration: Ensuring that data is properly managed and migrated as
part of the software delivery pipeline.
2. Data backup and recovery: Implementing data backup and recovery strategies to ensure
that data can be recovered in case of failures or disruptions.
3. Data security: Implementing data security measures to protect sensitive information and
comply with regulations.
4. Data performance optimization: Optimizing data performance to ensure that applications
and services perform well, even with large amounts of data.
5. Data integration: Integrating data from multiple sources to provide a unified view of data
and support business decisions.
By integrating data management into the DevOps process, teams can ensure that data is
properly managed and protected, and that data-driven applications and services perform well
and deliver value to customers.
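As one concrete example of point 2 above (backup and recovery) automated inside a pipeline, here is a hedged boto3 sketch that snapshots an RDS instance; the instance identifier is hypothetical:

import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# Take a point-in-time snapshot of a (hypothetical) production database
# as an automated step in the delivery pipeline.
stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-app-db",
    DBSnapshotIdentifier=f"prod-app-db-{stamp}",
)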
DevOps architecture and resilience
1) Build
Without DevOps, the cost of resource consumption was evaluated based on predefined individual usage with fixed hardware allocations. With DevOps, the cloud and the sharing of resources come into the picture, and the build scales with the user's need, which acts as a mechanism to control the usage of resources and capacity.
2) Code
Good practices such as version control with Git ensure that the code serves the business need, help track changes, give notice of the reason behind a difference between actual and expected output, and allow reverting to the originally developed code if necessary. The code can be appropriately arranged in files, folders, etc., and can be reused.
3) Test
The application is ready for production once it has been tested. Manual testing consumes more time both in testing itself and in moving the code onward. Testing can be automated, which decreases testing time and therefore the time to deploy the code to production, since automating the running of the scripts removes many manual steps. For example, a check that once ran by hand can become a test that runs on every commit, as sketched below.
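A minimal pytest-style sketch of such an automated test; the discount function standing in for real business logic is illustrative:

# test_pricing.py -- runs automatically in the pipeline via `pytest`.

def apply_discount(price, percent):
    # Illustrative function under test.
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_full_discount_is_free():
    assert apply_discount(10.0, 100) == 0.0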
4) Plan
DevOps uses the Agile methodology to plan development. With the operations and development teams in sync, work can be organized and planned accordingly to increase productivity.
5) Monitor
Continuous monitoring is used to identify any risk of failure. It also helps in tracking the system accurately so that the health of the application can be checked. Monitoring becomes easier with services whose log data can be watched through third-party tools such as Splunk.
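A minimal sketch of such a check in Python: it polls a hypothetical health endpoint and emits one JSON log line per probe, which a log shipper could forward to a tool like Splunk:

import json
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(message)s")

def check_health(url):
    started = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = resp.status
    except Exception as exc:  # timeouts, connection errors, HTTP errors
        status = str(exc)
    latency_ms = round((time.monotonic() - started) * 1000)
    logging.info(json.dumps({"url": url, "status": status, "latency_ms": latency_ms}))

check_health("http://localhost:8081/price")  # hypothetical endpoint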
6) Deploy
Many systems support schedulers for automated deployment. A cloud management platform enables users to capture accurate insights, view optimization scenarios, and analyse trends through deployed dashboards.
7) Operate
DevOps changes the traditional approach of developing and testing separately. The teams operate in a collaborative way, with both teams actively participating throughout the service lifecycle. The operations team interacts with developers, and together they come up with a monitoring plan that serves the IT and business requirements.
8) Release
Deployment to an environment can be automated, but deployment to the production environment is commonly triggered manually. Many release-management processes deliberately keep the production deployment as a manual step to lessen the impact on customers.
DevOps resilience
DevOps resilience refers to the ability of a DevOps system to withstand and recover from
failures and disruptions. This means ensuring that the systems and processes used in DevOps
are robust, scalable, and able to adapt to changing conditions. Some of the key components of
DevOps resilience include:
1. Infrastructure automation: Automating infrastructure deployment, scaling, and
management helps to ensure that systems are deployed consistently and are easier to manage
in case of failures or disruptions.
2. Monitoring and logging: Monitoring systems, applications, and infrastructure in real-time
and collecting logs can help detect and diagnose issues quickly, reducing downtime.
3. Disaster recovery: Having a well-designed disaster recovery plan and regularly testing it
can help ensure that systems can quickly recover from disruptions.
4. Continuous testing: Continuously testing systems and applications can help identify and
fix issues before they become critical.
5. High availability: Designing systems for high availability helps to ensure that systems
remain up and running even in the event of failures or disruptions.
By focusing on these components, DevOps teams can create a resilient and adaptive DevOps
system that is able to deliver high-quality applications and services, even in the face of
failures and disruptions.
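Resilience also shows up at the code level. A common building block is retrying a fragile call with exponential backoff, sketched below; the attempt counts, delays, and the wrapped call are illustrative:

import time

def with_retries(operation, attempts=4, base_delay=0.5):
    # Run `operation`, retrying with exponential backoff on failure.
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Usage: wrap any fragile call, e.g. a network request.
# result = with_retries(lambda: fetch_quote())  # fetch_quote is hypothetical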