Kubernetes and The Enterprise
Welcome Letter
Peter Connelly, Senior Editor at DZone

DZONE RESEARCH

Leaders in Tech
Dipti Borkar, Co-Founder and Cloud Expert, Offers Key Advice to Kubernetes Users
Lindsay Smith, Publications Manager at DZone

"kubecthell"
Daniel Stori, Software Architect at TOTVS

ADDITIONAL RESOURCES
The importance of containerization and the ability to control environments from development to production may only be overshadowed by the benefits that a container orchestration platform provides. As the expectations of modern users become harder to meet and application complexity grows, scaling an application easily, efficiently, and securely is a need rather than a "nice-to-have." Given this, the rate of Kubernetes adoption since it was first open-sourced in 2014 comes as no surprise.

Even as big names (Amazon, Microsoft, Red Hat, etc.) release similar container orchestration platforms and services, the rate of Kubernetes adoption and its larger ecosystem of tooling continues to grow, making it obvious that Kubernetes is here to stay.

Though some still have yet to adopt Kubernetes, many organizations are no longer worried about the struggles that accompany early adoption. Instead, focus has shifted to more mature concerns surrounding security, governance, and larger resource optimization.

With all of this in mind, we chose to expand on 2019's "Kubernetes in the Enterprise" Trend Report to give our readers insight into the issues other organizations are facing, the strategies they're using to overcome them, and the tooling they're adopting as they mature in their use of Kubernetes and move to a more cloud-native architecture.

In addition to the aforementioned concerns, this report focuses on Kubernetes in the context of microservices and managed cloud services, Kubernetes' continued aid to better CI/CD pipelines, and what adoption and maintenance of K8s looks like for both large and small-scale applications and organizations.

We thank everyone who contributed to the report — survey respondents, authors, editors. And to you, our readers, we hope you can derive actionable insights from this work to strengthen your professional and personal understanding of Kubernetes in the larger context of industry.

Sincerely,
As part of the Editorial Team, Peter’s job is to work with DZone contributors throughout every part of
the writing process. Whether it’s helping brainstorm potential topics, providing authors with feedback
on their writing, promoting their content, or connecting them with new and interesting opportunities,
Peter’s goal is to be a resource for the people who make DZone the community it is.
DZone Publications
Meet the DZone Publications team! Publishing Refcards and Trend Reports year-round, this team can often be found editing contributor pieces, working with Sponsors, and coordinating with designers. Part of their everyday includes working across teams, specifically DZone's Client Success and Editorial teams, to deliver high-quality content to the DZone community.

DZone Mission Statement
At DZone, we foster a collaborative environment that empowers developers and tech professionals to share knowledge, build skills, and solve problems through content, code, and community. We thoughtfully — and with intention — challenge the status quo and value diverse perspectives so that, as one, we can inspire positive change through technology.
Lindsay is a Publications Manager at DZone. Reviewing contributor drafts, working with sponsors,
and interviewing key players for "Leaders in Tech," Lindsay and team oversee the entire Trend
Report process end-to-end, delivering insightful content and findings to DZone’s developer
audience. In her free time, Lindsay enjoys reading, biking, and walking her dog, Scout.
As a Publications Manager, Melissa co-leads the publication lifecycle for Trend Reports — from
coordinating project logistics like schedules and workflow processes to conducting editorial
reviews with DZone contributors and authors. She often supports Sponsors during the pre- and
post-publication stages with her fellow Client Success teammates. Outside of work, Melissa passes
the days tending to houseplants, reading, woodworking, and adoring her newly adopted cats,
Bean and Whitney.
With twenty-five years of experience as a leader and visionary in building enterprise-level online
communities, Blake plays an integral role in DZone Publications, from sourcing authors to surveying
the DZone audience and promoting each publication to our extensive developer community, DZone
Core. When he’s not hosting virtual events or working with members of DZone Core, Blake enjoys
attending film festivals, covering new cinema, and walking his miniature schnauzers, Giallo and Neo.
John Esposito works as technical architect at 6st Technologies, teaches undergrads whenever they
will listen, and moonlights as research analyst at DZone.com. He wrote his first C in junior high and
is finally starting to understand JavaScript NaN%. When he isn’t annoyed at code written by his
past self, John hangs out with his wife and cats Gilgamesh and Behemoth, who look and act like
their names.
In October 2020, DZone surveyed software developers, architects, and other IT professionals in order to understand how
containers are deployed and orchestrated using Kubernetes and other modern sub-VM-level tools.
Major research targets included:
1. The state of resource isolation, application containerization, and in particular, the use of Kubernetes
2. The mind of the user of Kubernetes and other containerization technologies
Methods:
We created a survey and distributed it to a global audience of software professionals. Question formats included multiple
choice, free response, and ranking. Survey links were distributed via email to an opt-in subscriber list, popups on DZone.com,
and short articles soliciting survey responses posted in a web portal focusing on Kubernetes-related topics. The survey was
opened on October 1st and closed on November 1st. The survey recorded 522 total responses.
In this report, we review some of our key research findings. Many secondary findings of interest are not included here; those
additional findings will be published piecemeal on DZone.com.
Research Target One: The State of Resource Isolation and Container Orchestration
Motivations:
1. Software development and runtime ecosystems are now complex and tangled enough that OS-level resource
management is often insufficient to avoid conflicts in build and runtime environments.
2. Further, as more applications run on the web where state management is not built into the application protocol,
application state becomes increasingly difficult to manage through explicit application-level code but
easier to automate at a lower level.
3. Again, as software architectures increasingly take advantage of the "metal-indifference" of cloud computing, while
depending on multi-platform runtimes and complex graphs of dependencies, a dimension for horizontal scaling that
allows for more granular control over runtime environment than VM-level (as would be needed if OS-level WORA
runtimes were not used) becomes increasingly attractive.
4. As Agile development methodologies encourage a microservice architecture with less-permeable system boundaries
and strongly opaque internals, maintenance of a single OS-level environment that serves many services’ heterogeneous
needs becomes increasingly difficult, sometimes impossible.
5. Finally, as container use increases, the need for high-level container orchestration also increases.
For this research target, we did not generate any a priori hypotheses — with one exception: the intersection of microservice
and Kubernetes use. The purpose was mainly to provide empirical data and analytical commentary.
We asked which methods respondents use for resource isolation on Linux, in both development and production environments:

Table 1: Linux Resource Isolation Methods
Table 2: Linux Resource Isolation Methods by Environment Type
Observations:
1. LXC is the most common method used for Linux resource isolation. This is perhaps because LXC is the only built-in, full-power container solution available across Linux distributions. For many applications, chroot offers too little (filesystem-only) resource isolation, while more sophisticated solutions like LXD introduce too much complexity.
• Significant caveat: Because earlier (pre-libcontainer) versions of Docker were built on LXC, it is possible that (a)
some respondents are using LXC because they are using an old version of Docker, (b) some respondents said they
were using LXC because their mental model counts Docker as using LXC (even though Docker has used libcontainer
since v0.9 in 2014), and/or (c) some users are running Docker using LXC rather than libcontainer as driver. A hint
that this might be the case is that (a) some “other” respondents specified that they use Linux capabilities (e.g.,
cgroups) that are available in Docker, and (b) other “other” respondents noted that they only use kubectl and do
not think about what is happening at a lower level. In future surveys, we will add an explicit “apart from Docker or
Rocket” qualifier to the question.
2. Resource isolation without higher-level tools (like Docker) is more common in development than in production
environments, across all methods.
• This may be because (a) production environments are likely to run into more complex resource-management
scenarios that would benefit from dedicated container-management tools and (b) development environments
are more likely to change more rapidly than production environments, which means that fine-grained resource
isolation changes (e.g., for testing a new runtime library version) are less likely to benefit from complex container
predefinitions (like a dockerfile).
• Since low-level detail is available only on a per-user, not per-scenario basis, this does not mean that, in any given
scenario, sub-VM-level resource isolation is not likely to be used in production and not in development — a situation
that in fact seems likely to be extremely common. Future research may address development and production
distinctions at a per-application or per-scenario level.
3. The simple chroot command shows the greatest gap between development and production usage (57.6% in
development vs. 42.4% in production).
• Our guess is that this is because chroot is easy to understand, simple to use, and relatively coarse-grained. In fact, it is
possible to use chroot effectively without understanding anything specific about Linux containerization: Knowledge
of file-level access controls is sufficient.
Reasoning: This survey focuses particularly on Kubernetes, and full Windows support for Kubernetes is relatively new. Based
on anecdotal evidence, it seems that adoption of Kubernetes is still orders of magnitude more common on Linux than on
Windows. And based on prior knowledge of our survey population (which is dominated by developers rather than sysadmins),
we knew that respondents were more likely to have deep experience with Linux administration than Windows, and were more
likely to depend on sysadmin specialists when running on Windows servers. In future surveys, we may expand the survey’s
focus to include Windows and other non-Linux operating systems.
So we asked whether respondents' organizations use application containers.

Results (n=513): Yes: 90%; No: 8%; I don't know: 2%.

[Chart: percentage of organizations using containers, 2017–2020]

Observations:
1. Growth of container usage over the past three years has been roughly linear. Although Docker (the tool that made
containers easier for developers to use) has been available since 2013, container usage did not apparently explode
until 2017-2018.
2. The current level of adoption (90.4%) is extremely high. Growth between 2019 and 2020 was already slightly slower than between 2018 and 2019, and growth between 2020 and 2021 will necessarily be slower still, since adoption cannot exceed 100% saturation.
3. Note: The target survey list was built by similar methods in all four years, but no special effort was made to ensure population continuity (e.g., respondent-level identification) over time.
• Given the topic of the survey as advertised, it seems likely that response bias would favor those who use application
containers. The percentage of container users among survey respondents is, therefore, likely to be higher than the
percentage of container users in a general population of software professionals.
• Since the advertised topics were the same across the four surveys whose results are included above, the direction
of the trend line in the chart above should not be altered by this population bias (although, of course, its slope
might be).
So we asked:
What tools/platforms are you using to manage containers in development and production?
Table 3: Tools Used for Container Management
Table 4: Tools Used for Container Management by Environment
Observations:
2. The development vs. production usage difference is nearly identical for Docker and Rocket.
• This might seem a little surprising: prima facie, we might suspect a larger development vs. production difference
for Docker, since Rocket’s special value-adds are less important in lower-workload, lower-security, fewer-user
environments, as we might guess development environments would be. But in practice, the difference appears
negligible.
3. Most “other” responses were orchestration tools, including Kubernetes. In future surveys, we will reword the question to
specify container-level rather than higher-level tools explicitly.
KUBERNETES USAGE
Given that more containers are being used, and given further that ephemeral, stateless jobs run in microservices require rapid
and complex spin-up/down for a set of containers, we wanted to know how people are orchestrating containers now.
Results: Yes: 77%; No: 18%; I don't know: 5%.

[Chart: organizational Kubernetes usage trend, 2019–2020]
Observations:
1. Kubernetes usage at the organizational level is very high (77%), up significantly from 2019 (73.4%).
2. 83.3% (n=370) of respondents who use Docker also use Kubernetes; only 15.1% (n=67) of Docker users do not
use Kubernetes.
From this, we might wonder whether Docker’s composability is more important than its sheer portability since the benefits of
portability are present without running in a Kubernetes cluster. If this is the case, then we might further guess that usage of
Kubernetes might be higher among users of Rocket containers since the Rocket container runtime was originally optimized for
composability and security.
This turns out not to be the case, though only mildly: 79.3% (n=69) of Rocket users also use Kubernetes. The difference is small enough that no conclusion can be drawn, but the guess that composability dominates receives no additional support from container runtime vs. Kubernetes usage differences.
Hypothesis: Organizations that run microservices are more likely to run Kubernetes.

Reasoning: Containers are good environments for microservices; microservice architecture is "ignorant" of resource-management problems at the higher system level, so use of microservices should exert pressure toward a robust container orchestration solution.
And we segmented the results by answer to the question (later in the survey):
[Chart: organizational Kubernetes usage (yes/no/I don't know), segmented by whether the organization runs microservices]
Observations:
1. The hypothesis was strongly verified. A large majority (83.4%) of respondents whose organizations run microservices also
run Kubernetes clusters, while a small majority (54.7%) of respondents whose organizations do not run microservices also
run Kubernetes clusters.
Research Target Two: The Mind of the Kubernetes User and Other Containerization
Technologies
Motivation:
1. Many low-level resource isolation technologies are decades old, and many ideas behind resource isolation strategies are
as old as time-shared mainframes. Since the low-level isolation barrier types themselves are not new, the way people use
them holds the bulk of interest for anyone interested in modern application architectures.
2. Higher-level technologies built to handle increased low-level containerization, such as Kubernetes, implement distributed
design patterns that were formerly more interesting to specialists in distributed infrastructure and less interesting to
application developers. As variable workloads increasingly require, and cloud services increasingly allow, more fine-grained control over runtime environments, application developers are encountering distributed computing problems at increasingly lower levels.
3. Different mental models are required to understand and build for modern, less-stateful systems than for the single-
server monolithic systems that many of today’s professional developers grew up with. Knowing how other developers
understand Kubernetes clusters may help developers break out of mental models less suited to current problems.
So we asked:
“Containerization” can mean many things. Please rank the following aspects of containerization in order of importance.
(1=most important, 7=least important)
• The precise definability of containerized resources depends on effective resource isolation, of course, so this
robustness indirectly follows from resource isolation. But four other answer options were about resource isolation
directly. This suggests that containerization’s effect on application performance — a high-level desideratum — is
more important in the minds of software professionals than the way containers achieve these performance increases.
3. Accordingly, the ranked order changes slightly when we consider only sysadmins, SREs, and DevOps leads (n=73). From most important to least important: high availability, horizontal elasticity, process isolation, magical/effortless deployment, filesystem isolation, memory isolation, network stack isolation, granular resource control.
• Since these types of software professionals are evaluated on uptime and related metrics, it is in their rational self-
interest to consider high availability the most important aspect of containerization. If a process is inadequately
isolated, on the other hand, and various processes cacophonously step on one another’s toes, the blame may fall by
default on application code. We would expect this distinction to fall apart in the case of SREs, but we did not receive
enough SRE responses to draw any such conclusions from the survey data.
• Both groups scored “process isolation” highest, but those who have personally worked with Kubernetes scored “high
availability” second, while those who haven’t personally worked with Kubernetes scored “filesystem isolation” second.
• Our current guess is that “filesystem isolation” would come in second for those without personal Kubernetes
experience because the concept “container” most properly denotes “process isolation” (the top-scored choice in both
groups), and “filesystem isolation” represents the simplest kind of resource isolation.
• This guess is somewhat supported when we segment responses into “senior” (those with >5 years of experience as a
software professional) and “junior” (those with <= 5 years of experience as a software professional); for senior software
professionals, “horizontal elasticity” ranks second to “process isolation,” while for junior software professionals,
“filesystem isolation” ranks second.
• This is consistent with the hypothesis that “filesystem isolation” is the simplest case of resource isolation, on the
assumption that junior professionals are more likely to model a system in a way that is closer to the theoretical
“definitions” of its components.
So we asked what Kubernetes has improved at respondents' organizations.

Results (n=522):

[Chart: aspects of software development and operations improved by Kubernetes; "Other": 3.2% (n=15)]

Observations:
1. The top three things Kubernetes improved (deployment in general, autoscaling, and CI/CD) appear orthogonal.
• Deployment and CI/CD are about how software moves to production, while autoscaling is about how the software
runs under variable load. This orthogonality suggests that Kubernetes is delivering on its promise insofar as its benefits are not dramatically focused on either the "dev" or the "ops" side.
2. A large majority of respondents (70.1%, n=369) noted that Kubernetes has improved some aspect of software architecture
or design: building microservices (53.6%), application modularity (44.3%), architectural refactoring (36.1%), or overall system
design (33.5%).
• These are formal improvements in the software itself, not simply improvements in runtime performance or
deployment pipeline. From a software architect’s perspective, this finding is quite significant. Further research
might explore exactly how Kubernetes helped improve each of these aspects of software design — in particular,
how much benefit came from Kubernetes’ orchestration capabilities vs. the containerization itself, as facilitated by
Kubernetes usage.
3. A slightly larger majority (73.9%, n=386) reported that Kubernetes has improved some aspect related to runtime
operations: autoscaling, security, reliability, or cost.
• This is to be expected, since this kind of benefit is directly related to container orchestration; it therefore seems less significant than the comparable percentage of respondents who credit Kubernetes with benefitting software design or architecture itself.
Hypothesis Two:
Hypothesis: People who have run public-facing “pet” servers are less likely to be satisfied with the state of infrastructure
abstraction in 2020.
Reasoning:
1. Abstractions like Kubernetes add a lot of complexity between application code and fundamental system architecture.
2. VMs are a reasonably opaque abstraction, but lightweight containers are less so, and powerful container orchestration
tools even less so.
3. Significant portions of distributed system design that might in the past have been implemented in application code can
now be left to container management and container orchestration layers.
4. But this makes it harder to develop mechanical sympathy with the Von Neumann structure underneath.
5. People who have run their own individual “pet” servers for nontrivial applications are more likely to care about mechanical
sympathy with the operating system layer — therefore are less satisfied with the “herd” concept that a separate container
orchestration layer encodes — than people who have not.
So we asked: Please select the option that best describes your attitude toward infrastructure abstraction in 2020.
Separately, we asked:
Have you ever personally maintained a single-node, public-facing server (http or otherwise)?
[Chart: attitudes toward infrastructure abstraction in 2020, segmented by pet-server experience. Answer options: "Infrastructure abstraction in 2020 is excessive and getting out of hand"; "We're finally getting close to pure, non-leaky infrastructure abstraction"; "The cost in complexity of modern infrastructure abstraction is worth the benefits of infinite scaling and continuous delivery"; "No opinion"; "Other"]
Observations:
1. Respondents who have run a "pet" server are more likely to hold the most negative attitude toward modern infrastructure abstraction.
• The difference between "have run pet server" and "have not run pet server" responses was greatest for the most negative attitude ("infrastructure abstraction in 2020 is excessive and getting out of hand"): 18% (n=70) vs. 10.4% (n=11).
• Compare the differences between the pet/non-pet segments answering the more design-focused optimistic "we're finally getting close to pure, non-leaky infrastructure abstraction" (21.9%, n=85 vs. 16%, n=17) and the more tradeoff-focused optimistic "the cost in complexity of modern infrastructure abstraction is worth the benefits of infinite scaling and continuous delivery" (49.4%, n=192 vs. 46.2%, n=49).
• That the "have run pet server" responses skewed more toward the negative than the positive options is taken as evidence for our hypothesis. But the small n within some of the segments, especially the "have not run a pet server" segments, weakens the inference.
2. Respondents who have run a “pet” server are more opinionated about the state of infrastructure abstraction.
• Significantly fewer respondents who have run a pet server (9.8%, n=38) have no opinion about the state of
infrastructure abstraction in 2020 than those who have not (26.4%, n=28). This was not an a priori hypothesis, but it is
consistent with our picture: Those who have not run pet servers are more likely to treat the sub-application layer as
“magical” or “satisfyingly opaque” than those who have worried about interrupts, shared memory, and other OS-level
resource management problems. (Consider developers whose first application ran on Heroku — judging from boot
camps and introductory tutorials, not an insignificant number.)
3. Interestingly, this picture of differing opinions of modern infrastructure abstraction does not map onto seniority.
• Senior (>5 years professional IT experience) respondents were insignificantly more likely to respond with the
“excessive” answer (15.2%, n=59) vs. junior respondents (14.1%, n=11), and were much more likely to respond with the
“cost in complexity is worth it” option (51.5%, n=200) vs. junior respondents (37.2%, n=29).
• We might have imagined that “old school” people would distrust Kubernetes-level tools more than younger people,
but this is not the case.
Future Research
As usual, we asked more questions than we’ve published here and learned more from survey responses than we’ve been able
to analyze yet. Additional areas covered in our Kubernetes survey include:
• The presence or absence of microservices on Kubernetes clusters — for those organizations that run both.
• The use of stateful workloads where state is maintained within the cluster (rather than, say, in an external DBMS).
• The use of distributed design patterns (circuit breaker, leader election, sidecar, etc.).
Further analyses will be conducted over the coming months and results published on DZone.com.
Several of our research areas would benefit from follow-up analyses at future dates and/or in additional detail. These include:
• Application architecture
• System design
Leaders in Tech
Dipti Borkar, Co-Founder and Cloud Expert, Offers Key
Advice to Kubernetes Users
⊲ Embrace containers for (almost) ALL applications. Adopt now or risk getting left behind. While containers are more commonly seen
across web applications and microservices, more and more distributed systems are now utilizing containers, resulting in efficient,
faster deployments.
⊲ Understand your workloads. The first step to being successful is understanding your workloads. Think about the kind of resource usage required, and then work down from that; some are CPU-intensive, some more memory-intensive. Understanding the application is the first, most important step.
⊲ Adopt the right tools and cloud services. Offloading Kubernetes to a third party opens a wide array of possibilities for your team. “If
we get to the point where 90% of the users are using Kubernetes, and it’s just there and you don’t even know it, that’s what success
looks like.”
According to our survey, the current level of container adoption (90%) is very high. What is your reaction to this
statistic? And what is your advice to teams who’ve NOT adopted containers?
Containers have come a long way. From test and dev to production, many different applications now widely use containers.
There are areas where containers have not been adopted as much and a lot of those relate to data. For web applications and
microservices in operational databases, containers make a very, very good fit. But for some of the more distributed systems, it
is just now starting to become more adopted because these are persistent systems, and it’s taken some time for containers to
come up to speed as it relates with these persistent applications where they have to process data. And that’s an area where we
will see more growth in the future.
In terms of specifically the 10% [who’ve not adopted containers], if they are in the microservices web application space, such
as operational databases, they are getting left behind. Somebody is out-innovating them because they’re basically getting
there faster and being able to get into development, test, and deployment faster with containers. And so it’s time to move
on in terms of data workloads. It is the innovators and the early adopters that are already using containers and increasingly
Kubernetes. And that adoption will continue. I don’t have a number for you but it’s probably less than 50% — and that’s where
the future adoption of containers and Kubernetes will go.
Resource isolation is something that you worry less about when you're in dev or test, but in terms of production workloads, it will be something that you have to think more about. More specifically, you have multiple containers running on a single instance, and because containers are an abstraction on top of the operating system, they provide less isolation than VMs. To prevent overutilization, you have to be thoughtful about how those are getting used. For web applications and microservices, it tends to be a little bit easier than for data workloads.
For example, with Spark or Presto and some of these other distributed systems, what we see is people might just run
one container per instance and they’re actually just using containers, not so much to pack the instances but to simplify
deployment.
The first step to being successful is understanding your workloads. What kind of resource usage is required? And then working
down from that; some are CPU-intensive; some are more memory-intensive. Understanding the application is probably step
one, and then step two is trying to find the right tool on top of it to simplify the resource isolation and make sure that you have
an orchestration layer where you're not manually doing this. It's very hard, when you just have the data layer, to support some kind of resource isolation across multiple containers. That's why you need the orchestration engines, and at the moment, obviously, Kubernetes is the orchestration engine that is most widely adopted on top of containers.
A large majority of respondents whose organizations run microservices also run Kubernetes clusters, while a
small majority of respondents whose organizations do not run microservices also run Kubernetes clusters. What
does this tell you about microservices adoption? And what do you think is most important for developers to
consider regarding adoption?
Microservices and Kubernetes — they go well together because they’re fairly stateless applications. And so it’s easier to
deploy microservices with Kubernetes with the operating system underneath it. For customers or users that are not heavy
on microservices, there might be other applications that they’re using; data applications are one of them. Except distributed
systems are hard to deploy and orchestrate, and Kubernetes is a good way of doing that.
Microservices and any stateless application — those are the easy ones to get going, and that’s kind of why we see such a high
percentage. But for these other applications and distributed systems, the engines themselves had to change before they could run natively on Kubernetes. And so the industry has gone through a few changes where, with the disaggregation of
storage and compute, it is becoming a lot easier for these data processing engines to now use Kubernetes because they can be
stateless. So we will see that the adoption of Kubernetes in this space increases now that the application itself is more aligned
and more native to Kubernetes’ needs.
With that said, it is an architectural shift to use microservices; it’s an evolution of the stack. And having these APIs essentially
connected gives users a flexibility and an advantage from a speed perspective and an interoperability perspective. And so that
is the advantage you get with microservices.
Based on our survey, the top three benefits of using Kubernetes were: deployment in general, autoscaling, and
CI/CD. And security was the lowest ranked benefit of Kubernetes. What does this tell you? And what’s your advice
to Kubernetes users in terms of security?
Security is important across multiple layers, but a lot of focus right now is on data security, and that is actually one level above
Kubernetes.
So if you ask a data platform team about their concerns, it would be pretty high on their list, but from an operating system and infrastructure perspective, it is a pretty protected layer. Now, you still have to use signed containers; you want to scan your containers for vulnerabilities, disallow privileges for most users, and things like that. And so only the automation really accesses that Kubernetes layer.
But where most of the security comes in is in the layer on top of that, as it relates with data. Thinking about authentication,
how do I know that I’ve been authenticated to enter a system, or that I am authorized to access the data? These are not
Where do you see Kubernetes in the next 6-12 months? What is your most critical advice to developers so that
they can fit in with that path and stay ahead of the trends?
Kubernetes has matured significantly over the last few years, and as a center point where it’s almost a must-have for your
stack, Kubernetes is getting to the point where you are probably spending more time on deployment and orchestrating your
environment. And you could essentially offload that to Kubernetes and do more with your time, getting rid of inefficiencies.
And that’s what Kubernetes is about.
However, it is still hard if you’re running it on your own. Right? It is a distributed system; it’s a cluster that you are managing. The
ecosystem of Kubernetes is getting fairly complicated. There is monitoring for it. There’s security for it. There are hundreds of
integrations. And so if your team is doing this all on their own, you almost need a set of Kubernetes experts because otherwise,
it’s hard for a full-stack developer to go all the way from top down, or a data platform developer to deal with the data tier and
Kubernetes as well. And the way out of that is using cloud services, which have simplified Kubernetes even further.
With that, when we happen to use Kubernetes, you won’t even know it. And that’s where it needs to be. And so if we get to the
point where 90% of the users are using Kubernetes, and it’s just there and you don’t even know it, that’s what success looks like.
And we’ll probably get there I would say three to five years from now, but there’s a lot more workloads that are now getting
more to run on Kubernetes, and Kubernetes is becoming more mature; we’re at a tipping point where that adoption will start
going very fast.
aquasec.com

CASE STUDY

INDUSTRY: Internet Service Provider

PRODUCTS USED: Aqua Enterprise, a cloud native security platform

PRIMARY OUTCOME: Having the Aqua Vulnerability Scanner built into their CI/CD pipeline and Enforcers for runtime protection ensures that Kakaku.com can meet their security goals. Even if issues arise later, such as malware activity, Aqua detects and blocks it. This provides for more reliable remediation and increased efficiency.

CHALLENGE
Kakaku's IT management team identified microservices, container workload environments, and Kubernetes on-prem to speed its development process and deploy applications faster — without sacrificing security. In addition, they needed to guarantee complete security in its containerized environment. For Kakaku.com, it was essential to find an end-to-end solution for their complete production environment.

SOLUTION
Kakaku.com's search led them to the Aqua cloud native security platform, which featured seamless security from development through deployment. Aqua's range of security attributes ensured that Kakaku.com's containers could run safely on Kubernetes. Kakaku.com also relies on Aqua for threat detection and blocking, visualization, and meeting compliance requirements. In fact, Aqua now secures all Kakaku.com system environments, including Linux and Windows containers, cloud and on-premises deployments, orchestration tools, and multi-tenancy.

"Using Aqua makes it possible not only to perform a reliable scan before release but also to prevent abuses after." — Kazuki Hashimoto, Kakaku.com 1st Infrastructure Service Team

RESULTS
Kakaku.com now automates its CI/CD to scan images using Aqua during the build phase to make sure there are no potential vulnerabilities — reducing or even eliminating human error. Aqua's automated security features provide for more reliable remediation and increased efficiency. With security throughout the development lifecycle, Aqua empowers Kakaku.com to:
Originally, Kubernetes was developed by Google to empower the management of its own container infrastructure. The idea
was quickly taken up by other large companies. But due to the initial focus on environments with usually several thousands of
servers in data centers distributed worldwide, the learning curve for newcomers was very steep. Many questions typically asked
by smaller organizations often went unanswered in the beginning.
The handover of Kubernetes to the Cloud Native Computing Foundation (CNCF), founded under the Linux Foundation in 2015, led to participation in the development of Kubernetes by different companies and projects. The CNCF Cloud Native Interactive Landscape illustrates just how expansive participation has become. As a result, there are
hundreds of projects, tools, and concepts that simplify the use of Kubernetes and have significantly flattened the once very
steep learning curve over the last year and a half.
Kubernetes is not a fixed piece of software but is more of a kind of framework combining various aspects and functionality to
operate a cloud environment. Thus, many functions such as storage or monitoring are not part of the kernel itself but can be
added as services. And here, of course, large enterprises have distinct requirements compared to small- and medium-sized
companies. But fortunately, the Kubernetes toolchain has evolved, and today, non-enterprise-level companies have access to
tools and services to manage their own Kubernetes cluster.
In general, the most important elements to run a self-managed Kubernetes cluster are:
• Installation
• Deployment
• Storage
• Monitoring
Below are some of the recent developments in running a self-managed Kubernetes cluster.
INSTALLATION
Kubeadm
The core tool to install a Kubernetes cluster is kubeadm, which has evolved over the last few years to make
creating a minimum viable Kubernetes cluster that conforms to best practices easier. In fact, kubeadm
allows you to set up a cluster in minutes. Most of the Linux distributions are supported and sensible
default values allow a secure and fast installation procedure.
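To give a sense of how little configuration a basic setup needs, here is a minimal sketch of a kubeadm configuration file; the version and pod subnet values are illustrative assumptions, not recommendations:

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0     # illustrative; pin to your target release
networking:
  podSubnet: 10.244.0.0/16     # must match the pod network add-on you choose
# Bootstrap the control plane with:
#   kubeadm init --config=kubeadm-config.yaml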
K3s
An alternative tool to install a small Kubernetes cluster, K3s removes unnecessary features and uses
lightweight components, which significantly reduces the size of an environment and simplifies
installation.
DEPLOYMENT
The deployment of applications and microservices in Kubernetes is done via YAML files with expressive, powerful configuration and settings. The high complexity of these YAML files is one reason for Kubernetes' steep learning curve. Until recently, simplifying the deployment of standard applications was only possible with a Helm Chart. While Helm is a suitable solution for standard deployments, it is not easy to learn for those who are new to Kubernetes.
Kustomize
Kustomize is an easy-to-learn alternative for Kubernetes deployments, allowing you to compose and
customize collections of resources from the local file system and external sources like a git repository.
Initially, Kustomize was developed as a separate tool, but as of March 2019 it is part of the Kubernetes
standard installation. Kustomize enables small- and medium-sized companies to roll out their product
and cloud services for a large number of customers in different configurations. Many open-source projects
use Kustomize to provide more flexibility to the community.
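As an illustration, a minimal kustomization.yaml might look like the sketch below; the referenced resource files and label values are hypothetical:

# kustomization.yaml
resources:
  - deployment.yaml            # base manifests composed from the local file system
  - service.yaml
namePrefix: customer-a-        # distinguish one customer's rollout from another's
commonLabels:
  app: dzone-app
patchesStrategicMerge:
  - replica-count.yaml         # per-customer override applied on top of the base

Running kubectl apply -k . renders and applies the customized resources in one step.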
STORAGE
Kubernetes provides effective functionality for running stateless services in a cloud environment out of the box. But most
business applications cannot operate as stateless services. Databases and indexes are usually an integral part of a business
application. Kubernetes strongly abstracts the management of storage and does not offer one single solution. As a result, there
are a few options for operating stateful containers in Kubernetes today.
While larger organizations can usually rely on an existing extensive database cluster solution, smaller organizations tend to look
for an easy-to-use solution to store data from scratch. Because storage is all about resilience, distribution, and performance,
choosing the right solution is usually not that straightforward. In recent years, extensive development has happened in this
area, and today there are various tools that allow smaller organizations to set up a reliable storage solution quickly and easily.
Longhorn
Longhorn delivers simplified, 100% open-source cloud-native persistent block storage without the
overhead cost of open core or proprietary alternatives, making integration into a Kubernetes cluster
straightforward. Longhorn independently manages existing storage on a worker node and makes it
available as distributed volumes for containers within the cluster. Each volume is automatically distributed
across multiple nodes to increase resiliency. Longhorn includes a UI dashboard that allows you to monitor the nodes and volumes in a graphical interface and also to administer and back up all data.
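Once Longhorn is installed, consuming it follows the standard PersistentVolumeClaim pattern; a minimal sketch, with a hypothetical claim name and size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dzone-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # StorageClass registered by the Longhorn installation
  resources:
    requests:
      storage: 10Gi            # Longhorn provisions and replicates the volume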
Ceph
Ceph, another distributed block storage solution, can be used within Kubernetes. For its first several
years, Ceph installation was not that easy. Since version 15 (Octopus), Ceph provides a completely new
installation tool called cephadm, which is based on Docker. Separate tools or libraries no longer need
to be installed on the host, making it easy for smaller organizations to set up Ceph within a Kubernetes
cluster. Ceph also includes a UI Dashboard that allows you to monitor the nodes and manage a distributed
storage environment in a graphical interface.
MONITORING
Kubernetes provides several ways to collect and monitor cluster metrics like CPU, memory, or network usage of cluster nodes
or single pods. Additional metrics for the cluster topology, and even application-specific metrics, are available through the
Kubernetes Metrics API.
MONITORING SOLUTIONS
K9s
K9s is a simple-to-use command-line tool for monitoring the status of a cluster and its running pods,
as well as displaying cluster metrics collected by the Kubernetes Metrics API. The :pulse view provides
insights into a running Kubernetes cluster without the need to install additional tools or services.
Using such a monitoring stack has become easier for smaller organizations over the last couple of years as the Kubernetes community expands and resources become more readily accessible.
Prometheus Operator
The new Prometheus Operator project provides a promising way for the Kubernetes-native deployment and management
of Prometheus and related monitoring components. The project serves to simplify and automate the configuration of a
Prometheus-based monitoring stack for Kubernetes clusters. This also includes a Grafana service that provides many out-of-the-box Grafana dashboards with no additional installation effort, reducing installation time from days to minutes.
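With the Operator in place, scrape targets are declared as Kubernetes resources rather than hand-edited Prometheus config files. A minimal sketch of a ServiceMonitor, assuming a Service labeled app: dzone-app that exposes a named metrics port:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: dzone-app
  labels:
    release: prometheus        # must match the selector your Prometheus instance watches
spec:
  selector:
    matchLabels:
      app: dzone-app           # Services to scrape
  endpoints:
    - port: metrics            # named port on the Service exposing /metrics
      interval: 30s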
Conclusion
In the last two years, a lot of development has happened across the Kubernetes ecosystem, made possible not least by the broad and engaged community fostered by the CNCF. For small- and medium-sized organizations, this
means that the initially steep learning curve has flattened out significantly and operating their own Kubernetes cluster has
become much easier today.
Ralph Soika is project lead in the open-source project Imixs-Cloud and co-founder of Imixs GmbH.
For more than 15 years, he has supported small- and medium-sized companies in the design and
development of modern software solutions and service environments.
Your ideas are going to change the world. Nothing can stand in the way of you turning code and
ideas into an impactful, finished application. Not even the complexity that accompanies cloud native
app development.
CloudBees helps you leverage the power of Kubernetes for end-to-end application development.
We bring order to cloud native chaos by uniting the silos of information and automation, and
helping you scale CI/CD and DevOps across your entire enterprise software portfolio. We’ll
manage the CI/CD automation for you, so you can continue building stuff that matters.
Learn More
CloudBees CI is built on top of Jenkins, an independent community project. Read more about Jenkins at: www.cloudbees.com/jenkins/about

CloudBees, Inc.
4 North Second Street | Suite 1270
San Jose, CA 95113, United States
www.cloudbees.com | info@cloudbees.com

© 2020 CloudBees, Inc. CloudBees is a registered trademark and CloudBees CI, CloudBees CD, CloudBees Cloud Native CI/CD, CloudBees Engineering Efficiency, CloudBees Feature Management, CloudBees Build Acceleration and CloudBees CodeShip are trademarks of CloudBees. Other products or brand names may be trademarks or registered trademarks of their respective holders.
SPONSOR OPINION
The advantages of the cloud are clear. Running enterprise applications fully in the cloud or in hybrid environments helps
reduce risk, cut costs, and increase innovation. At this point, the question isn't "Should we move to the cloud?" but "How do we ensure quality software delivery in new hybrid and cloud environments?" The answer is continuous integration and continuous delivery with Kubernetes.
Performing CI/CD in the cloud improves failover and reduces downtime, while Kubernetes is an enabler for your team to build
resilient, cloud-native applications quickly. Below are three top requirements for CI/CD solutions to build enterprise software in
the cloud.
As a CI/CD solution that runs natively on Kubernetes, CloudBees CI makes it incredibly easy to scale in support of enterprise
cloud strategies. It works across your entire DevOps toolchain — integrating with commonly used tools. It helps you automate
processes within a wide, varied ecosystem, which is critical for successfully modernizing legacy applications. CloudBees CI is
the enterprise-grade, Kubernetes-native solution that can help you unlock microservices, control cloud expenditure, and build
against a single source of truth.
Kubernetes is the most popular open-source orchestration tool for running and managing container-based workloads. With
an increase in the adoption of containers and microservices architecture, Kubernetes’ popularity is growing in the developer
community. When you have many containers running in production, you need a container orchestration solution to reduce
the complexity of deploying your applications at scale. Kubernetes resolves many of the challenges associated with running
containerized workloads, either on-premise or in the cloud.
Organizations are widely adopting Kubernetes as they migrate their applications to a modern platform and implement containers for deployment purposes. This article will:
• Explain how monitoring solutions like Prometheus and Grafana can help you resolve common Kubernetes monitoring challenges.
Kubernetes ensures that the desired state and the actual state of the cluster are always in sync. As your services scale,
Kubernetes automatically monitors and maintains service health. It ensures that the system is self-healing and can
automatically recover from failures.
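This declarative model is easiest to see in a Deployment manifest. The sketch below (names are hypothetical) declares a desired state of three replicas; the controller replaces any pod that fails so that the actual state stays in sync:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dzone-app
spec:
  replicas: 3                  # desired state: three running pods
  selector:
    matchLabels:
      app: dzone-app
  template:
    metadata:
      labels:
        app: dzone-app
    spec:
      containers:
        - name: dzone-ctr
          image: dzone:1.0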
• kube-apiserver (API Server) – the control plane’s front end that exposes the Kubernetes API and provides an interface for
communication. It is the custodian of the entire cluster, performing all administrative tasks, and handles all internal and
external requests.
• kube-scheduler (scheduler) – responsible for workload distribution among the worker nodes based on the cluster
resource utilization. It allocates pods to the available nodes.
• kube-controller-manager (controller) – ensures that the Kubernetes cluster is maintained, and the current state is
equivalent to the desired state.
• etcd – the distributed key-value store to store the current state of the Kubernetes cluster, along with the configuration
details. It can be considered the single source of truth in the cluster.
• kubelet – the main Kubernetes agent that runs on each worker node. It is responsible for ensuring that the containers on
each pod are running. Kubelet also communicates the node health back to the master.
• kube-proxy – the central networking component of Kubernetes that ensures communication between containers, pods,
and nodes is intact.
• Container runtime – the software responsible for running containers inside each pod. There are several container
runtime options available like Docker, containerd, rkt.
• Pods – defined as the smallest deployable unit in Kubernetes. A Pod is a group of one or more co-located containers that
run in a shared context. Containers inside the pods can communicate with each other and share the pod environment.
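Autoscaling, for example, is declared through a HorizontalPodAutoscaler (HPA) resource. A minimal sketch, assuming a Deployment named dzone-app:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: dzone-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dzone-app            # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60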
This will create an HPA resource that increases or decreases the number of pods between 1 and 5 to maintain a target CPU utilization of 60%.
Compared to a traditional monolithic environment, microservices running in a dynamic container environment require a
mature strategy for observability. You need the ability to review logs, metrics, and traces to perform root cause analysis. The
number of components to monitor in a Kubernetes cluster is significantly high; hence, you will need to rethink your monitoring
strategies. It is critical to identify and have a good understanding of the key metrics to monitor in your cluster.
Below is a configuration file showing resource limits and requests for a container:
apiVersion: v1
kind: Pod
metadata:
  name: dzone-report
  namespace: dzone-ns
spec:
  containers:
    - name: dzone-ctr
      image: dzone
      resources:
        limits:
          memory: 4Gi
          cpu: 1000m
        requests:
          memory: 2Gi
          cpu: 500m
• Liveness Probes – designed to check if the application is in a good state. If not, Kubernetes will detect the offending
application pod and automatically restart it.
• Readiness Probes – designed to check if the application is ready to service requests. Kubernetes ensures that the readiness
probe passes before sending traffic to the application pod.
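Both probes are declared per container in the pod spec; a minimal sketch with hypothetical endpoints and ports:

apiVersion: v1
kind: Pod
metadata:
  name: dzone-report
spec:
  containers:
    - name: dzone-ctr
      image: dzone
      livenessProbe:           # the container is restarted if this check fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:          # traffic is withheld until this check passes
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10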
Prometheus is pull-based and identifies the services it needs to monitor via service discovery. It scrapes metrics from the client
applications at periodic intervals, then collects the monitoring data and stores it in a time-series database. You can then query
the required metrics using Prometheus’ powerful query language, PromQL, or view it in Grafana dashboards. You can also
configure your own alerting rules. Prometheus sends alerts to AlertManager, which aggregates alerts and sends notifications
via different systems like OpsGenie, PagerDuty, and Email.
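A minimal sketch of the corresponding Prometheus configuration, assuming in-cluster service discovery and a hypothetical AlertManager endpoint:

# prometheus.yml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod              # discover scrape targets from the Kubernetes API
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']   # hypothetical AlertManager service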
Grafana helps organizations to adopt a data-driven culture and make informed decisions based on metrics. The dashboard
below provides a cluster overview that allows you to monitor the Kubernetes resources and identify any workload bottlenecks.
Conclusion
Kubernetes is a rapidly developing platform that lets you focus on building your applications without worrying about the
underlying infrastructure. As organizations transition from a monolithic to microservices architecture, they can benefit from
Kubernetes’ declarative approach and orchestrate the availability of their containerized workloads.
Samir Behara is a Senior Architect with EBSCO Industries and builds software solutions using cutting
edge technologies. He is a Microsoft Data Platform MVP with over 15 years of IT experience. Samir is a
frequent speaker at technical conferences and is the Co-Chapter Lead of the Steel City SQL Server user
group.
Explore the D2iQ Kubernetes Platform: Training, Services & Support. Simplify your Kubernetes Journey.
CASE STUDY

Ziff Media prides itself on keeping up with modern technology trends to stay on the forefront of their industry. Kubernetes is the foundation of their infrastructure, which provides them the agility needed to manage multiple different brands with a variety of customer-facing web properties.

COMPANY: Ziff Media Group
COMPANY SIZE: 1,001-5,000 employees
INDUSTRY: Digital Portfolio in technology, culture, and shopping
PRODUCTS USED: D2iQ Konvoy
PRIMARY OUTCOME: Managing Kubernetes efficiently and independently with zero lock-in or downtime.

CHALLENGE
From an executive decision-making level, Ziff Media needed a Kubernetes platform that was open, reliable, and made it possible to use open source products that they could plug in and implement themselves. They also required an expert support team that could be there for quick responses in the event of an emergency, and not have to wait hours or days for each message to come back.

SOLUTION
Ziff Media chose D2iQ's Konvoy because everything is "pure open source." The foundation of the D2iQ Kubernetes Platform (DKP), D2iQ Konvoy is a comprehensive, enterprise-grade Kubernetes distribution built on pure open source with the add-ons needed for Day 2 production — selected, integrated, and tested at scale, for faster time to benefit.

"The biggest thing I enjoy about D2iQ Konvoy is that everything is pure open source. Whenever I want to scale out Prometheus, Grafana, or Elasticsearch, or change configurations or authentications, I can go directly to the website documentation and just do it — everything works out of the box." — Brett Stewart, Senior DevOps Engineer, Ziff Media Group

The other thing that sold Ziff Media on D2iQ was the level of support. "The speed, the competence, and the ability to meet us where we're at — on Slack. The support engineers are very fast at getting answers to us quickly, even if they don't immediately know the answer. The engagement and the knowledge on D2iQ's end has been very confidence-inspiring and that is not something we saw from other vendors in the space." — Chris Kite, Director of Technology

RESULTS
Within two months of implementing D2iQ Konvoy, Ziff Media Group was already in production. The openness and stability of D2iQ Konvoy has given the DevOps team the opportunity to get things done faster and more reliably.

"What sets D2iQ support apart from others is that they have a DevOps mindset and understand the impact that our issue is causing. Rather than adding a quick fix, they dig deep to find the long-term solution, which allows us to get production up and running as quickly as possible."
“kubecthell”
Daniel Stori, Software Architect at TOTVS
CHALLENGE
Hotjar had challenges with how the growing number of developers structured their work by using legacy systems, slowing remote productivity. Developers were using BitBucket for hosting source code and Jenkins for CI/CD; due to the constraints of some of the legacy applications, they had to develop and maintain large amounts of Jenkins-specific code to support pipelines. They were using Kubernetes as a platform for all their microservices and some of the build pipelines. Hotjar was looking for a tool that offers Kubernetes integration and a replacement for Jenkins CI/CD.

COMPANY: Hotjar
COMPANY SIZE: 100 employees
INDUSTRY: Technology
PRODUCTS USED: GitLab Silver
PRIMARY OUTCOME: Hotjar replaced Jenkins with GitLab for exceptional CI/CD, a robust Kubernetes integration, and improved source code management. GitLab's integrated platform helps to keep Hotjar up to date with cutting-edge software, provides end-to-end visibility, and supports their all-remote culture.

SOLUTION
Hotjar selected GitLab Silver; GitLab integrates natively with Kubernetes, which gives the development team peace of mind because they can trust that the tool will work automatically without constant maintenance. GitLab projects connect to their AWS EKS cluster, the tests run within the cluster using Kubernetes Operator, it reports back with coverage results, then artifacts are uploaded to AWS ECR/S3. Review environments spin up inside the EKS cluster during review. Every engineering team and some of the customer support team members are using GitLab.

RESULTS
Developers save time making use of standalone review environments instead of in-the-loop shared staging environments. With most people online synchronously, an MR is reviewed in minutes or hours, and so deployments are now between 2-15 per day with 50% deployment time saved. CI build time has decreased by 30% over the previous implementation in Jenkins. With Jenkins, the teams created a lot of custom code to do a lot of the work that they are now getting natively with GitLab. On the code management side, they used the cloud version of Bitbucket. Now, they use GitLab.com for all of the development work and to host the CI/CD runs.

"In terms of a Kubernetes-native product that supplies the whole life cycle, we actually didn't find that many competitors." — Vasco Pinho, Team Lead, SRE at Hotjar
Building a CI/CD process for an application can be a challenge, especially when you are dealing with Kubernetes and Docker. This article covers how Kubernetes can improve the CI/CD process, using the example of a .NET Core application whose deployment processes are all defined in a YAML pipeline. We will review a list of effective tools and frameworks, as well as walk through a detailed checklist of key actions to help make your Kubernetes cluster production-ready.
Before diving into the complexities of integrating Kubernetes into your DevOps processes, it’s important to understand the
standard architecture of a Kubernetes cluster and how it may impact customers’ solutions.
For example, Microsoft Azure provides Azure Kubernetes Services (AKS), which is a hosted service that allows you to set up your
cluster, run the application, and create or improve CI/CD processes in a short period of time.
In Figure 1 below, I've created a typical AKS cluster architecture with the most widely used components: a load balancer, nodes, and pods. I will use this architecture for the example in this article:
• Node – represents a compute unit, simply put, a virtual machine. It is used to host pods.
• Pod – a logical container (or deployable unit) that hosts an application instance wrapped in a Docker container. Kubernetes also allows you to use other container runtimes (e.g., containerd and CRI-O).
• Load balancer – a service that distributes traffic between nodes to avoid single-node overload.
Below are the key components of the control plane and nodes:

• kube-apiserver – Provides access to the control plane and allows other tools to communicate and perform operations with the Kubernetes cluster
• kube-controller-manager – A process responsible for control operations such as generating API tokens, reacting when pods or nodes go down, and managing load balancers
• etcd – A reliable key-value store that holds cluster metadata, configuration data, and application state data
• kube-scheduler – Assigns pods to nodes and is part of the control plane
Kubernetes improves CI/CD processes in several ways; it:
• Supports a declarative YAML format for configuration, which allows DevOps architects to smoothly integrate it into any pipeline (see the sketch after this list).
• Supports zero-downtime deployment models (e.g., the blue-green deployment pattern in AKS).
• Integrates with DevOps platforms (e.g., Azure DevOps has all integration components for AKS in place, as do Google Cloud and Bitbucket).
• Is supported by popular IaC platforms and tools (e.g., Terraform, AWS CloudFormation, Pulumi, Azure Resource Manager).
• Has a command-line interface (CLI) that allows you to manage the whole cluster.
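As a minimal sketch of the declarative approach from the first list item, the manifest below describes a desired state that a pipeline can apply on every run; the name demo-api and the image are hypothetical placeholders, not the article's sample project:

    # deployment.yaml – a minimal, hypothetical example of declarative configuration
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-api
    spec:
      replicas: 2                      # desired state; Kubernetes reconciles toward it
      selector:
        matchLabels:
          app: demo-api
      template:
        metadata:
          labels:
            app: demo-api
        spec:
          containers:
            - name: demo-api
              image: myregistry.azurecr.io/demo-api:1.0.0   # placeholder registry/tag
              ports:
                - containerPort: 80

A pipeline step can then run kubectl apply -f deployment.yaml on every build; because the file declares the desired state rather than imperative steps, repeated applies are idempotent.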
To demonstrate how Kubernetes can improve the process of deploying, managing, and scaling your application, I created an example based on a .NET Core application. As a CI/CD platform, Azure DevOps offers effective support for AKS and other Azure resources.
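As a minimal sketch (not the article's exact pipeline), the Azure Pipelines YAML below deploys Kubernetes manifests to an AKS cluster using the built-in KubernetesManifest task; the service connection name aks-connection and the manifest paths are assumptions for illustration:

    # azure-pipelines.yml – hypothetical deploy step for AKS
    trigger:
      - main

    pool:
      vmImage: ubuntu-latest

    steps:
      - task: KubernetesManifest@0          # built-in Azure DevOps task
        displayName: Deploy to AKS
        inputs:
          action: deploy
          kubernetesServiceConnection: aks-connection   # assumed service connection
          manifests: |
            manifests/deployment.yaml
            manifests/service.yaml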
TRAEFIK
Traefik is a platform that offers a set of components for the Kubernetes cluster. It contains a load balancer, reverse proxy,
monitoring, and service mesh, which are all compatible and have the same step-by-step set-up flow, allowing cloud architects
to simplify and speed up their CI/CD processes.
ISTIO
Istio is an open-source framework that allows you to add an essential toolset to your cluster all at once. It contains tools for traffic management, monitoring/logging components, security, and network policies. As a disadvantage, the installation can be complex and time-consuming; however, everything is well documented in its GitHub repository.
POPEYE
Popeye scans your cluster for potential issues with configuration, resources, and network holes and generates detailed reports
with all issues.
GOLDILOCKS
Goldilocks scans pods for resource limits and creates reports with recommended resources. As a small disadvantage, it requires
a vertical-pod-autoscaler. We will talk about it in the next section.
K9S
K9s provides a command-line interface (CLI) that allows you to easily manage, monitor, and even benchmark your cluster in
your favorite terminal software.
KURED
Kured (Kubernetes Reboot Daemon) is a component that safely reboots your nodes so that pending security updates take effect. It has an easy and fast YAML-based installation and supports different alert types and sources.
Below is a checklist of key actions to help make your cluster production-ready:
• Set up requests and limits for your containers and pods to avoid excessive resource usage and pod eviction issues (see the sketch after this list). I also recommend using resource quotas and limit ranges.
• Implement container lifecycle hooks, which allow you to react to container events and notice when something goes wrong.
• Set up the cluster autoscaler and the Horizontal Pod Autoscaler, enabling you to control the load of your cluster and increase or decrease the number of nodes and pods for better cluster availability. This also helps save money when the cluster has low usage.
• Set up a backup/restore strategy for your cluster data (e.g., you can use tools like Velero or Azure Site Recovery).
• Set up granular role-based access control (RBAC) policies to avoid all users having full access to the cluster.
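As a minimal sketch of the first and third checklist items, here is a container spec fragment with requests and limits, plus a Horizontal Pod Autoscaler targeting it; the demo-api name is a hypothetical placeholder carried over from the earlier sketch:

    # Fragment of a Deployment's pod template: requests reserve resources for
    # scheduling; limits cap usage so a runaway pod is throttled or evicted.
        containers:
          - name: demo-api
            image: myregistry.azurecr.io/demo-api:1.0.0   # placeholder image
            resources:
              requests:
                cpu: 250m
                memory: 256Mi
              limits:
                cpu: 500m
                memory: 512Mi
    ---
    # Horizontal Pod Autoscaler: scales the Deployment between 2 and 10 replicas
    # to hold average CPU utilization near 70%.
    apiVersion: autoscaling/v2       # use autoscaling/v2beta2 on older clusters
    kind: HorizontalPodAutoscaler
    metadata:
      name: demo-api
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: demo-api
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70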
Conclusion
In this article, I described a typical Kubernetes cluster architecture and its core components, provided a useful toolset that simplifies working with your cluster, and touched on autoscaling as an important option for highly loaded, highly available applications. The Kubernetes cluster checklist will help you prepare your cluster and application to run successfully in production. To accompany the example above, you can find a detailed description of how to set up the AKS cluster, including the Ingress setup, deployment YAML scripts, and application source code, here.
I am a software and cloud architect at Nordcloud GmbH who is passionate about building complex solutions and architecture that bring value to the business. I also work as a consultant and like to share my knowledge with other people through my technical blogs and the technology courses I create.
CASE STUDY

CHALLENGE
When Mojix, a leading software company, was developing the next generation of its retail edge platform, the company needed a way to manage thousands of in-store applications across the globe. Mojix was working on a security and supply chain software stack that could be deployed at thousands of retail locations for clients. These stacks acted much like micro datacenters, which meant Mojix needed a way to manage the stacks efficiently. Mojix had only recently moved from VMs to Kubernetes, which meant it needed a solution that would be relatively easy to onboard.

SOLUTION
Redapt developed a proof-of-concept solution built upon Google Cloud's Anthos due to its native Kubernetes support and its ability to manage micro datacenters at scale. Through the proof-of-concept results, Mojix gained insight into its Anthos deployment and integration capabilities, and the confidence to move forward with its ambitious edge-to-cloud solution.

COMPANY
Mojix

COMPANY SIZE
200+ employees

INDUSTRY
Retail

PRODUCTS USED
Redapt Anthos

"... by vertical cloud technology from the Google Cloud Platform with Anthos. We also rely on Intel's end-to-end hardware innovations, and our great relationship with Redapt as an edge service partner."
— Gustavo Rivera, Mojix Senior VP of Software Engineering
Demystifying Kubernetes
Deployment Strategies
Choosing the Right Deployment Approach for a Reliable
Infrastructure
According to the CNCF 2019 survey, container adoption has grown steadily over the last four years. Seventy-eight percent of survey respondents said they were using Kubernetes in production, a significant year-over-year increase in Kubernetes users.
While it is safe to infer that Kubernetes deployments will continue to increase due to rising popularity, it is also crucial for
organizations to choose the right deployment strategy for running resilient distributed systems.
Kubernetes offers several capabilities that make it well suited to running resilient distributed systems:
• Load balancing: Load balancing helps ensure your application is always stable. When a container gets too much traffic, Kubernetes can distribute the network traffic across instances, thereby keeping the deployment stable.
• Configuration management: You can securely store SSH keys, OAuth tokens, passwords, and other sensitive information in Kubernetes. You can also update app configurations without having to rebuild the container images and without revealing the secrets in your configuration (see the sketch after this list).
• Automated bin packing: Kubernetes allows you to manage resources better. For instance, you can create a cluster of nodes with predetermined CPU and memory capacity, and Kubernetes will fit your containers onto the nodes.
• Automated rollbacks and rollouts: Kubernetes allows you to define a desired state for all deployed containers and will move the actual state toward the desired state at a controlled rate.
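As a minimal sketch of the configuration-management point above, here is a hypothetical Secret consumed as environment variables, so the credential lives outside the container image; all names are placeholders:

    apiVersion: v1
    kind: Secret
    metadata:
      name: app-credentials          # hypothetical name
    type: Opaque
    stringData:                      # stringData avoids manual base64 encoding
      DB_PASSWORD: change-me
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-app
    spec:
      containers:
        - name: demo-app
          image: myapp:1.0.0         # placeholder image
          envFrom:
            - secretRef:
                name: app-credentials   # injected as env vars; no image rebuild needed

Updating the Secret changes the configuration without rebuilding or exposing the value in the manifest itself.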
CONTROL PLANE COMPONENTS
• etcd: This is a key-value store used as a backing store for cluster data.
NODE COMPONENTS
• kubelet: This ensures that containers are running in the pod as expected.
• kube-proxy: This is a network proxy that ensures network rules are maintained on the nodes.
• Container runtime: This is the software charged with the task of running containers.
ADD-ONS
• DNS: This is a must-have add-on for all clusters. It is the DNS server that serves the DNS records for all services.
• Web UI: A general-purpose dashboard that allows users to manage and troubleshoot apps.
• Container resource monitoring: This records time-series metrics in a central database and provides an interface for browsing the data.
• Cluster-level logging: This keeps track of container logs and provides an interface for searching and browsing the logs.
There are several deployment strategies that you can use depending on your goals. For instance, you may want to conduct a
beta test before rolling out the application to all users. This would mean rolling out the changes in specific test environments
first before making it available to the public. You need to choose the right strategy in order to ensure the reliability of your
infrastructure during an app update.
Without further ado, let’s look at some prominent deployment strategies for managing successful Kubernetes applications.
RECREATE
The recreate deployment strategy is the simplest form of a Kubernetes deployment that terminates all active instances and
then creates them afresh with new versions. Though this strategy remains a popular choice, it is often not recommended for
complex cluster and application architectures. The main advantage of a recreate deployment is that the app state gets entirely
renewed.
spec:
  replicas: 3
  strategy:
    type: Recreate
Use the recreate strategy (a fuller manifest sketch follows this list):
• If the app doesn't support old and new versions of the code running simultaneously
• If you must complete all data transformations before running the new code
• If you are using an RWO (ReadWriteOnce) volume, which cannot be shared amongst multiple replicas
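For context, here is a fuller, hypothetical manifest showing where the fragment above sits; with type: Recreate, Kubernetes terminates all old pods before creating pods for the new image, so expect brief downtime during updates:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-api                 # placeholder name
    spec:
      replicas: 3
      strategy:
        type: Recreate               # all old pods stop before new ones start
      selector:
        matchLabels:
          app: demo-api
      template:
        metadata:
          labels:
            app: demo-api
        spec:
          containers:
            - name: demo-api
              image: myapp:2.0.0     # placeholder image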
ROLLING
A rolling deployment gradually replaces the instances of an app with the new version. The phased replacement of the app’s
pods makes sure there is always a minimum number of available pods.
# Note: this rollingParams style comes from OpenShift DeploymentConfigs;
# native Kubernetes Deployments use strategy type RollingUpdate instead.
strategy:
  type: Rolling
  rollingParams:
    intervalSeconds: 2
    timeoutSeconds: 60
    maxSurge: "10%"
    maxUnavailable: "10%"
    pre: {}
    post: {}
Use a rolling deployment:
• When your app supports running old and new code concurrently
BLUE/GREEN
Blue/Green deployments make it possible to upgrade an app with zero downtime. With this strategy, two identical application
environments (Blue and Green) are run concurrently. However, at any time, only one of the environments is actually live while
the other one is idle. Any updates to the app are first applied to the idle version, and once all tests have been done and stability
confirmed, traffic is redirected from the live version to the idle version. This way, you can seamlessly switch from Blue to Green
without any downtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-1.1.0
spec:
  replicas: 3
  selector:
    matchLabels:
      name: myapp
      version: "1.1.0"
  template:
    metadata:
      labels:
        name: myapp
        version: "1.1.0"
    spec:
      containers:
        - name: myapp
          image: myapp:1.1.0
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    name: myapp
spec:
  selector:
    name: myapp
    version: "1.1.0"   # point this at the environment that should receive traffic
  ports:
    - name: http
      port: 80
      targetPort: 80
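A minimal sketch of the cut-over itself, assuming the Service above: once the idle (Green) environment, say version 1.2.0, passes its tests, updating the Service's selector redirects all traffic in a single step:

    # Patch applied to the Service (e.g., via kubectl apply):
    spec:
      selector:
        name: myapp
        version: "1.2.0"   # hypothetical new version; traffic shifts from 1.1.0 to 1.2.0

Rolling back is just as fast: point the selector back at the previous version label.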
Blue/Green deployments offer two key benefits:
• Continuous integration: Blue/Green deployments make it possible to push software live quickly and to continually update it with minimal risk to new releases.
• Testing in production: Some bugs can only be discovered by testing the app in production. The Blue/Green deployment strategy makes it possible to test the app in production without the risk of a bad user experience.
CANARY
Canary deployment is a method for conducting incremental rollouts by running a new version of the app alongside the last known stable version and comparing the two to determine whether the new deployment should be rejected or promoted.
This is typically done by gradually deploying the new version to a subset of live users and comparing their experience with that of the rest of the users, who are still on the old version. Canary deployment helps developers discover potential issues while the new version is only available to a small number of users; any errors can be fixed before the release is applied to all servers. A basic sketch using plain Kubernetes primitives follows the list below.
# Example canary step from a Codefresh-style pipeline
canaryDeploy:
  title: "CANARY ${{CF_SHORT_UPDATE}}"
  image: myapp/darwin:main
  environment:
    - WORKING_VOLUME=.
    - SERVICE_NAME=test-app
    - DEPLOYMENT_NAME=test-app
    - TRAFFIC_INCREMENT=10
    - NEW_VERSION=${{CF_SHORT_UPDATE}}
    - SLEEP_SECONDS=30
    - NAMESPACE=canary
    - KUBE_CONTEXT=TestCluster
• If the app doesn’t use any sticky session mechanism as some users might hit a canary server in one request and a
production server in another
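As a minimal, platform-agnostic sketch (distinct from the pipeline step above), a basic canary can be approximated in plain Kubernetes by running two Deployments behind one Service; the replica ratio sets the share of users on the new version. All names and versions here are hypothetical:

    # Stable version: 9 replicas
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-stable
    spec:
      replicas: 9
      selector:
        matchLabels: { app: myapp, track: stable }
      template:
        metadata:
          labels: { app: myapp, track: stable }
        spec:
          containers:
            - name: myapp
              image: myapp:1.0.0
    ---
    # Canary version: 1 replica (~10% of traffic)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-canary
    spec:
      replicas: 1
      selector:
        matchLabels: { app: myapp, track: canary }
      template:
        metadata:
          labels: { app: myapp, track: canary }
        spec:
          containers:
            - name: myapp
              image: myapp:1.1.0
    ---
    # The Service selects only on "app", so it balances across both tracks.
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 80

Promoting the canary then means scaling myapp-canary up and myapp-stable down, or updating the stable image once the comparison looks good.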
A/B TESTING
A/B testing is a deployment strategy where multiple variants of the app run in parallel and traffic is routed between them based on attributes such as HTTP headers and cookies; analytics are then used to pick the best variant based on user behavior. In some cases, new features can be made provisionally available to a select number of users just to test them out and see whether they will be accepted. A/B testing is not native to Kubernetes, so you might need to set up external components like Istio, Traefik, or Linkerd.
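As a minimal sketch of header-based routing with Istio (one of the components mentioned above), the hypothetical VirtualService below sends requests carrying a variant cookie to variant B and everyone else to variant A; the host and subset names are placeholders and assume a DestinationRule defining the subsets:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: myapp
    spec:
      hosts:
        - myapp.example.com              # placeholder host
      http:
        - match:
            - headers:
                cookie:
                  regex: ".*variant=b.*"   # hypothetical cookie flag
          route:
            - destination:
                host: myapp
                subset: variant-b        # defined in a DestinationRule (not shown)
        - route:
            - destination:
                host: myapp
                subset: variant-a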
SHADOW
Under the shadow deployment strategy, production traffic is copied to a non-production service for testing purposes. Shadowing is similar to the canary and Blue/Green deployment strategies, except that it has some distinct applications where the other strategies might not be ideal. For instance, shadowing traffic is perfect for testing critical apps (e.g., payment gateways) that may not have room for reverting changes.
apiVersion: darwin.io/v1
kind: Mapping
metadata:
  name: newservice
spec:
  prefix: /newservice/
  service: newservice.default
  shadow: true
Use a shadow deployment:
• When you want to measure how a service behaves with respect to your expected outcomes
• When you want to test a new version on real traffic with zero production impact
Key Takeaways
With this list of deployment strategies to choose from, it is essential to pick the one most suited to your requirements. If you are releasing to a staging/development environment, recreate is the preferred choice. For production, Blue/Green is ideal, while rolling and canary deployments are good options when you are unsure about the impact of a release. If your business needs to test an app with different groups of users, then you may want to try A/B testing. Lastly, you can use shadowing if you want to test your new app on real traffic without impacting production.
With these considerations in mind, opting for the right deployment strategy should get a lot easier.
Sudip is a TOGAF-certified solutions architect with more than 15 years of experience working for global majors such as CSC, Hewlett Packard Enterprise, and DXC Technology. Sudip is now a full-time freelance tech writer, covering a wide range of topics like cloud, DevOps, SaaS, cybersecurity, and ITSM. When not reading and writing, he can be found on a squash court or playing a game of chess.
"Astra is hands-down the best solution for Cassandra developer productivity. It eliminates all of the overhead involved in
setting up Cassandra. With Astra, developers can fully automate their CI/CD pipelines for Cassandra support. This means
they can concentrate on more important tasks."
#DataStax
CASE STUDY
CHALLENGE
FamilySearch is the largest genealogy organization in the world, routinely serving 125 million transactions per hour during peak usage from its more than 500,000 users spread out across the world. As the organization grew in popularity, they began struggling with their legacy database technology as it strained to meet their customers' experience expectations. Expecting traffic to grow up to 100x over the next three years, FamilySearch needed to migrate to a scalable database solution that was highly performant, highly available, and could support the organization's rapid growth with zero downtime.

COMPANY
FamilySearch

COMPANY SIZE
1,000+

INDUSTRY
Nonprofit

SOLUTION
• Rapid scalability, with seamless support for 125 million transactions per hour and a technological foundation that can grow alongside them into the future.
• New services like Record Hints, which helps users make new research discoveries and further improves the user experience.
Trusted by over 20,000 users in production-grade clusters.
www.fairwinds.com | +1 617-202-3659 | sales@fairwinds.com

CASE STUDY
CASE STUDY
CHALLENGE
When Zonar's infrastructure could no longer handle variable loads and increased volume, they strategically decided not to increase their large data center footprint. Instead, the company's engineering team evaluated the performance improvements and efficiencies a containerized infrastructure would bring. Zonar knew what Kubernetes offered with regard to scaling and delivery configuration, but Zonar lacked hands-on experience operating and maintaining the container orchestration system and building Kubernetes-based applications and services.

COMPANY SIZE
500 employees

INDUSTRY
Fleet Management

PRODUCTS USED
Fairwinds Managed Kubernetes Services

SOLUTION
Zonar partnered with Fairwinds to implement a brand-new architecture on Google Kubernetes Engine (GKE) that allowed new and migrated applications and services to receive data from the existing infrastructure. This new architecture enabled Zonar to migrate legacy applications and services where it made sense for the business while also implementing all-new applications ...

"Fairwinds has saved us time and money by providing expert cloud services guidance, consulting, and implementation. Every step of the way, they've trained our team and increased our knowledge base, allowing us to ..."
BOOKS

Kubernetes Patterns: Reusable Elements for Designing Cloud-Native Applications
By Bilgin Ibryam and Roland Huß
Due to the rise of microservices and cloud-native architectures, Kubernetes patterns and tools are more important than ever. Learn more about Kubernetes design elements for cloud-native applications, including foundational, behavioral, structural, configurational, and advanced patterns.

Kubernetes in Action: 1st Edition
By Marko Luksa
In this complete guide, learn more about developing and running applications in a Kubernetes environment. Not only does this book explore the Kubernetes platform, it also provides a detailed overview of technologies like Docker and how to get started setting up containers.

Kubernetes: Up & Running: Dive into the Future of Infrastructure
By Kelsey Hightower, Brendan Burns, and Joe Beda
This book dives into the Kubernetes cluster orchestrator and how its tools and APIs can be used to improve the development, delivery, and maintenance of distributed applications.

REFCARDS

Getting Started With Kubernetes
Containers weighing you down? Kubernetes can scale them. In order to run and maintain successful containerized applications, organization is key. This Refcard has all you need to know about Kubernetes, including key concepts, how to successfully build your first container, and more.

Advanced Kubernetes
Kubernetes is a distributed cluster technology that manages container-based systems in a declarative manner using an API. There are currently many learning resources to get started with the fundamentals of Kubernetes, but there is less information on how to manage Kubernetes infrastructure on an ongoing basis. This Refcard aims to deliver quick, accessible information for operators using any Kubernetes product.

PODCASTS

Kubernetes Podcast
Considering that Google produces it (and that Google also created Kubernetes in 2014), you might call this podcast a classic. Enjoy weekly interviews with prominent tech folks who work with K8s.

PodCTL | Enterprise Kubernetes
Produced by Red Hat OpenShift, this podcast covers everything related to enterprise Kubernetes and OpenShift, from in-depth discussions on Operators to conference recaps.

The Byte
Looking for more on cloud and containers? Tune into each episode of "The Byte" for to-the-point "byte-sized" material on cloud, containers, and more.

TREND REPORTS

Cloud Native
Questions around how to efficiently manage microservices, accelerate deployments, and make applications scalable are answered through cloud-native technology. Cloud-native is all about taking advantage of the cloud in every way possible. This results in faster, more efficient ways to run, develop, and deploy applications — every aspect of your application infrastructure has been adopted and implemented with the cloud in mind. In this report, we detail key findings from our original research and address how cloud native will add business value, the role of microservices in adopting cloud-native technology, and what business executives are saying about cloud-native.

Migrating to Microservices
DZone Trend Reports expand on the tech content that our readers say is most helpful, including thought leadership and in-depth, original DZone research. The Migrating to Microservices Trend Report features expert predictions on the next phase of microservices adoption in the enterprise, as well as insights into some challenges and opportunities presented by current usage patterns.

ZONES

Cloud
The Cloud Zone covers the host of providers and utilities that make cloud computing possible and push the limits (and savings) with which we can deploy, store, and host applications in a flexible, elastic manner. The Cloud Zone focuses on PaaS, infrastructures, containerization, security, scalability, and hosting servers.

Microservices
The Microservices Zone walks you through breaking down the monolith step-by-step and designing microservices architectures from scratch. It covers everything from scalability to patterns and anti-patterns and digs deeper than just containers to give you practical applications and business use cases.
Trusted by innovators
CASE STUDY
CHALLENGE
Bose has built a cloud platform that allows customers to connect all Bose-owned devices to play music at once, while also providing all updates and patches for these devices. Bose initially built this application on MySQL, but they wanted a database that would allow them to leverage a multi-region deployment in AWS. Additionally, Bose has customers all across the world and needed a database that could scale to different regions with low latencies.
A major challenge was to find a cost-effective, globally scalable data store that would be easy for microservice developers to work with.

COMPANY
Bose

COMPANY SIZE
9,000 people | $3.9B Market Cap

INDUSTRY
Electronics

PRODUCTS USED
CockroachDB and Kubernetes
Cloud Zone
Container technologies have exploded in popularity, leading to diverse use
cases and new and unexpected challenges. Developers are seeking best
practices for container performance monitoring, data security, and more.