
Kubernetes and

the Enterprise

BROUGHT TO YOU IN PARTNERSHIP WITH


Table of Contents

HIGHLIGHTS AND INTRODUCTION

Welcome Letter
Peter Connelly, Senior Editor at DZone

About DZone Publications

DZONE RESEARCH

Key Research Findings
John Esposito, PhD, Technical Architect at 6st Technologies

Leaders in Tech: Dipti Borkar, Co-Founder and Cloud Expert, Offers Key Advice to Kubernetes Users
Lindsay Smith, Publications Manager at DZone

FROM THE COMMUNITY

The Rise of Kubernetes in Small Companies
Ralph Soika, Developer at Imixs GmbH

Scaling Your Microservices Architecture in Kubernetes
Samir Behara, Senior Architect at EBSCO

"kubecthell"
Daniel Stori, Software Architect at TOTVS

Kubernetes and DevOps: Integrating AKS Into Your CI/CD Pipeline
Boris Zaikin, Software and Cloud Architect at Nordcloud GmbH

Demystifying Kubernetes Deployment Strategies: Choosing the Right Deployment Approach for a Reliable Infrastructure
Sudip SenGupta, TOGAF Certified Solution Architect | Freelance Tech Writer

ADDITIONAL RESOURCES

Diving Deeper Into Kubernetes



Welcome Letter
Peter Connelly, Senior Editor at DZone

The importance of containerization and the ability to control environments from development to production may only be overshadowed by the benefits that a container orchestration platform provides. As the expectations of modern users become harder to meet and application complexity grows, scaling an application easily, efficiently, and securely is a need rather than a "nice-to-have." Given this, the rate of Kubernetes adoption since it was first open-sourced in 2014 comes as no surprise.

Even as big names (Amazon, Microsoft, Red Hat, etc.) release similar container orchestration platforms and services, the rate of Kubernetes adoption and its larger ecosystem of tooling continues to grow, making it obvious that Kubernetes is here to stay.

Though some still have yet to adopt Kubernetes, many organizations are no longer worried about the struggles that accompany early adoption. Instead, focus has shifted to more mature concerns surrounding security, governance, and larger resource optimization.

With these concerns comes a need for considerable and varied expertise across organizations and development teams — from developers, to DevOps professionals and Site Reliability Engineers, to software architects, to VPs of Engineering and C-level leaders.

With all of this in mind, we chose to expand on 2019's "Kubernetes in the Enterprise" Trend Report to give our readers insight into the issues other organizations are facing, the strategies they're using to overcome them, and the tooling they're adopting as they mature in their use of Kubernetes and move to a more cloud-native architecture.

In addition to the aforementioned concerns, this report focuses on Kubernetes in the context of microservices and managed cloud services, Kubernetes' continued aid to better CI/CD pipelines, and what adoption and maintenance of K8s looks like for both large and small-scale applications and organizations.

We thank everyone who contributed to the report — survey respondents, authors, editors. And to you, our readers, we hope you can derive actionable insights from this work to strengthen your professional and personal understanding of Kubernetes in the larger context of industry.

Sincerely,

Peter Connelly, Senior Editor at DZone

As part of the Editorial Team, Peter’s job is to work with DZone contributors throughout every part of
the writing process. Whether it’s helping brainstorm potential topics, providing authors with feedback
on their writing, promoting their content, or connecting them with new and interesting opportunities,
Peter’s goal is to be a resource for the people who make DZone the community it is.



ABOUT

DZone Publications
Meet the DZone Publications team!
Publishing Refcards and Trend Reports DZone Mission Statement
year-round, this team can often be found At DZone, we foster a collaborative environment that empowers
editing contributor pieces, working developers and tech professionals to share knowledge, build skills,
with Sponsors, and coordinating with and solve problems through content, code, and community.
designers. Part of their everyday includes
working across teams, specifically DZone’s We thoughtfully — and with intention — challenge the status quo
Client Success and Editorial teams, to and value diverse perspectives so that, as one, we can inspire positive
deliver high-quality content to the DZone change through technology.
community.

Meet the Team

Lindsay Smith, Publications Manager at DZone


@DZone_LindsayS on DZone | @Smith_Lindsay11 on Twitter

Lindsay is a Publications Manager at DZone. Reviewing contributor drafts, working with sponsors,
and interviewing key players for “Leaders in Tech,” Lindsay and team oversees the entire Trend
Report process end-to-end, delivering insightful content and findings to DZone’s developer
audience. In her free time, Lindsay enjoys reading, biking, and walking her dog, Scout.

Melissa Habit, Publications Manager at DZone


@dzone-melissah on DZone | @melissahabit on LinkedIn

As a Publications Manager, Melissa co-leads the publication lifecycle for Trend Reports — from
coordinating project logistics like schedules and workflow processes to conducting editorial
reviews with DZone contributors and authors. She often supports Sponsors during the pre- and
post-publication stages with her fellow Client Success teammates. Outside of work, Melissa passes
the days tending to houseplants, reading, woodworking, and adoring her newly adopted cats,
Bean and Whitney.

Blake Ethridge, Community Manager at DZone


@FilmFest on Twitter | @blakeethridge on LinkedIn

With twenty-five years of experience as a leader and visionary in building enterprise-level online
communities, Blake plays an integral role in DZone Publications, from sourcing authors to surveying
the DZone audience and promoting each publication to our extensive developer community, DZone
Core. When he’s not hosting virtual events or working with members of DZone Core, Blake enjoys
attending film festivals, covering new cinema, and walking his miniature schnauzers, Giallo and Neo.

John Esposito, Technical Architect at 6st Technologies 


@subwayprophet on GitHub | @johnesposito on DZone

John Esposito works as technical architect at 6st Technologies, teaches undergrads whenever they
will listen, and moonlights as research analyst at DZone.com. He wrote his first C in junior high and
is finally starting to understand JavaScript NaN%. When he isn’t annoyed at code written by his
past self, John hangs out with his wife and cats Gilgamesh and Behemoth, who look and act like
their names.



ORIGINAL RESEARCH

2020 DZone Kubernetes Survey: Key Research Findings
By John Esposito, PhD, Technical Architect at 6st Technologies

In October 2020, DZone surveyed software developers, architects, and other IT professionals in order to understand how
containers are deployed and orchestrated using Kubernetes and other modern sub-VM-level tools.

Major research targets:

1. The state of resource isolation, application containerization, and in particular, the use of Kubernetes

2. The mind of the Kubernetes user

Methods:

We created a survey and distributed it to a global audience of software professionals. Question formats included multiple
choice, free response, and ranking. Survey links were distributed via email to an opt-in subscriber list, popups on DZone.com,
and short articles soliciting survey responses posted in a web portal focusing on Kubernetes-related topics. The survey was
opened on October 1st and closed on November 1st. The survey recorded 522 total responses.

In this report, we review some of our key research findings. Many secondary findings of interest are not included here; those
additional findings will be published piecemeal on DZone.com.

Research Target One: The State of Resource Isolation and Container Orchestration
Motivations:

1. Software development and runtime ecosystems are now complex and tangled enough that OS-level resource
management is often insufficient to avoid conflicts in build and runtime environments.

2. Further, as more applications run on the web where state management is not built into the application protocol,
application state management becomes increasingly difficult to manage through explicit application-level code but
easier to automate at a lower level.

3. Again, as software architectures increasingly take advantage of the “metal-indifference” of cloud computing, while
depending on multi-platform runtimes and complex graphs of dependencies, a dimension for horizontal scaling that
allows for more granular control over runtime environment than VM-level (as would be needed if OS-level WORA
runtimes were not used) becomes increasingly attractive.

4. As Agile development methodologies encourage a microservice architecture with less-permeable system boundaries
and strongly opaque internals, maintenance of a single OS-level environment that serves many services’ heterogeneous
needs becomes increasingly difficult, sometimes impossible.

5. Finally, as container use increases, the need for high-level container orchestration also increases.

For this research target, we did not generate any a priori hypotheses — with one exception: the intersection of microservice
and Kubernetes use. The purpose was mainly to provide empirical data and analytical commentary.



HOW RESOURCES ARE ISOLATED IN LINUX SYSTEMS
Modern container management developed organically from manually configurable resource isolation capabilities offered by
multi-user operating systems. We wanted to know how users were isolating resources on Linux systems without necessarily
using higher-level abstractions like Docker or rkt.

We asked:

What methods do you use to isolate resources in Linux systems?

Results across all responses (n=227):

Table 1: Linux Resource Isolation Methods

Method used     % of total respondents using (n=522)
LXC             38.1% (n=199)
chroot “jail”   31.2% (n=163)
LXD             22.4% (n=117)
LXCFS           18.8% (n=98)

Table 2: Linux Resource Isolation Methods by Environment Type

Method used     In development environments   In production environments
chroot “jail”   57.6%                         42.4%
LXC             53.3%                         46.7%
LXD             54.9%                         45.1%
LXCFS           51.8%                         48.2%

Observations:

1. LXC is the most common method used for Linux resource isolation. This is perhaps because LXC is the only built-in, full-power container solution available across Linux distributions. For many applications, chroot offers too little (filesystem-only) resource isolation, while more sophisticated solutions like LXD introduce too much complexity.

• Significant caveat: Because earlier (pre-libcontainer) versions of Docker were built on LXC, it is possible that (a)
some respondents are using LXC because they are using an old version of Docker, (b) some respondents said they
were using LXC because their mental model counts Docker as using LXC (even though Docker has used libcontainer
since v0.9 in 2014), and/or (c) some users are running Docker using LXC rather than libcontainer as driver. A hint
that this might be the case is that (a) some “other” respondents specified that they use Linux capabilities (e.g.,
cgroups) that are available in Docker, and (b) other “other” respondents noted that they only use kubectl and do
not think about what is happening at a lower level. In future surveys, we will add an explicit “apart from Docker or
Rocket” qualifier to the question.

2. Resource isolation without higher-level tools (like Docker) is more common in development than in production
environments, across all methods.

• This may be because (a) production environments are likely to run into more complex resource-management
scenarios that would benefit from dedicated container-management tools and (b) development environments
are more likely to change more rapidly than production environments, which means that fine-grained resource
isolation changes (e.g., for testing a new runtime library version) are less likely to benefit from complex container
predefinitions (like a Dockerfile).

• Since low-level detail is available only on a per-user, not per-scenario basis, this does not mean that, in any given
scenario, sub-VM-level resource isolation is not likely to be used in production and not in development — a situation
that in fact seems likely to be extremely common. Future research may address development and production
distinctions at a per-application or per-scenario level.

3. The simple chroot command shows the greatest gap between development and production usage (57.6% in
development vs. 42.4% in production).

• Our guess is that this is because chroot is easy to understand, simple to use, and relatively coarse-grained. In fact, it is
possible to use chroot effectively without understanding anything specific about Linux containerization: Knowledge
of file-level access controls is sufficient.
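
To make this concrete, here is a minimal sketch of a chroot “jail” (paths are illustrative; the library-copying step assumes GNU coreutils and varies by distribution):

$ mkdir -p /srv/jail/bin
$ cp /bin/bash /srv/jail/bin/
# Copy the shared libraries bash needs; ldd lists them:
$ ldd /bin/bash | awk '{print $(NF-1)}' | grep '^/' \
    | xargs -I{} cp --parents {} /srv/jail/
# Processes started this way see /srv/jail as "/" and cannot open
# files outside it: filesystem isolation only, while processes,
# memory, and the network stack remain shared with the host.
$ sudo chroot /srv/jail /bin/bash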



NOTE: We limited this question to methods of resource isolation available in Linux.

Reasoning: This survey focuses particularly on Kubernetes, and full Windows support for Kubernetes is relatively new. Based
on anecdotal evidence, it seems that adoption of Kubernetes is still orders of magnitude more common on Linux than on
Windows. And based on prior knowledge of our survey population (which is dominated by developers rather than sysadmins),
we knew that respondents were more likely to have deep experience with Linux administration than Windows, and were more
likely to depend on sysadmin specialists when running on Windows servers. In future surveys, we may expand the survey’s
focus to include Windows and other non-Linux operating systems.

HOW OFTEN APPLICATION CONTAINERS ARE USED


Application containers offer more granular and (in some cases) more portable resource isolation control than many individual
resource isolation techniques. We wanted to know how often application containers are used. Since we have some historical
data from surveys on the same survey population, we also wanted to see how container usage has changed over time.

So we asked:

Do you use application containers in either development or production environments?

Results (n=513):

Figure 1: PERCENT OF RESPONDENTS USING APPLICATION CONTAINERS IN EITHER DEVELOPMENT OR PRODUCTION ENVIRONMENTS (Yes: 90%; No: 8%; I don't know: 2%)

Compare results over the past four years:

Figure 2: PERCENT OF RESPONDENTS USING APPLICATION CONTAINERS, 2017–2020 (roughly linear growth, reaching 90.4% in 2020)



Observations:

1. Growth of container usage over the past three years has been roughly linear. Although Docker (the tool that made containers easier for developers to use) has been available since 2013, container usage apparently did not explode until 2017-2018.

2. The current level of adoption (90.4%) is extremely high. Growth between 2019 and 2020 is already slightly slower than
between 2018 and 2019 but will necessarily slow down between 2020 and 2021, even if container adoption reaches
100% saturation.

3. Note: The target survey list was built by similar methods in all survey years, but no special effort was made to ensure population continuity (e.g., respondent-level identification) over time.

• Given the topic of the survey as advertised, it seems likely that response bias would favor those who use application
containers. The percentage of container users among survey respondents is, therefore, likely to be higher than the
percentage of container users in a general population of software professionals.

• Since the advertised topics were the same across the four surveys whose results are included above, the direction
of the trend line in the chart above should not be altered by this population bias (although, of course, its slope
might be).

TOOLS FOR CONTAINER MANAGEMENT (CONTAINER-LEVEL)


As container usage and workload complexity grow, and manual resource isolation becomes less practicable, it becomes
increasingly important to understand which higher-level tools are being used for containerization. We wanted to know which
such tools are being used and, in particular, wanted to know about the two most popular: Docker and Rocket (rkt).

So we asked:

What tools/platforms are you using to manage containers in development and production?

Table 3: Tools Used for Container Management

Tool used      % of total respondents using (n=522)
Docker         87.2% (n=455)
Rocket (rkt)   17.4% (n=91)
Other          1.3% (n=7)

Table 4: Tools Used for Container Management by Environment

Tool used      In development environments   In production environments
Docker         53.7%                         46.3%
Rocket (rkt)   53%                           47%
Other          7%                            7%

Observations:

1. Docker dominates, as expected.

2. The difference between Docker and Rocket usage in both development and production is nearly identical.

• This might seem a little surprising: prima facie, we might suspect a larger development vs. production difference
for Docker, since Rocket’s special value-adds are less important in lower-workload, lower-security, fewer-user
environments, as we might guess development environments would be. But in practice, the difference appears
negligible.

3. Most “other” responses were orchestration tools, including Kubernetes. In future surveys, we will reword the question to
specify container-level rather than higher-level tools explicitly.

KUBERNETES USAGE
Given that more containers are being used, and given further that ephemeral, stateless jobs run in microservices require rapid
and complex spin-up/down for a set of containers, we wanted to know how people are orchestrating containers now.



So we asked:

Does your organization run any Kubernetes clusters?

Figure 3: DOES YOUR ORGANIZATION RUN ANY KUBERNETES CLUSTERS? (Yes: 77%; No: 18%; I don't know: 5%)

Compare year-over-year Kubernetes usage:

Figure 4: PERCENT OF RESPONDENTS WHOSE ORGANIZATIONS ARE USING KUBERNETES, YEAR OVER YEAR (2019: 73.4%; 2020: 77%)

Observations:

1. Kubernetes usage at the organizational level is very high (77%), up significantly from 2019 (73.4%).

2. 83.3% (n=370) of respondents who use Docker also use Kubernetes; only 15.1% (n=67) of Docker users do not
use Kubernetes.

From this, we might wonder whether Docker’s composability is more important than its sheer portability since the benefits of
portability are present without running in a Kubernetes cluster. If this is the case, then we might further guess that usage of
Kubernetes might be higher among users of Rocket containers since the Rocket container runtime was originally optimized for
composability and security.

This turns out to be mildly not the case: 79.3% (n=69) of Rocket users also use Kubernetes. The difference is small enough that
no conclusion can be drawn, but the guess that composability dominates does not have additional support from container
runtime vs. Kubernetes usage differences.



Hypothesis One
Hypothesis: Organizations using microservices are more likely to use Kubernetes.

Reasoning: Containers are good environments for microservices; microservice architecture is “ignorant” of resource-
management problems at the higher system level, so use of microservices should exert pressure toward a robust container
orchestration solution.

To test this hypothesis, we asked:

Does your organization run any Kubernetes clusters?

And we segmented the results by answer to the question (later in the survey):

Does your organization run any microservices?

Figure 5: DOES YOUR ORGANIZATION RUN ANY KUBERNETES CLUSTERS? (segmented by microservices use: 83.4% of organizations running microservices also run Kubernetes vs. 54.7% of those that do not)

Observations:

1. The hypothesis was strongly verified. A large majority (83.4%) of respondents whose organizations run microservices also
run Kubernetes clusters, while a small majority (54.7%) of respondents whose organizations do not run microservices also
run Kubernetes clusters.

Research Target Two: The Mind of the Kubernetes User and Other Containerization
Technologies
Motivation:

1. Many low-level resource isolation technologies are decades old, and many ideas behind resource isolation strategies are
as old as time-shared mainframes. Since the low-level isolation barrier types themselves are not new, the way people use
them holds the bulk of interest for anyone interested in modern application architectures.

2. Higher-level technologies built to handle increased low-level containerization, such as Kubernetes, implement distributed
design patterns that were formerly more interesting to specialists in distributed infrastructure and less interesting to
application developers. As variable workloads increasingly require, and cloud services increasingly allow, more fine-
grained control over runtime environments, application developers are encountering distributed computing problems at
increasingly lower levels.

3. Different mental models are required to understand and build for modern, less-stateful systems than for the single-
server monolithic systems that many of today’s professional developers grew up with. Knowing how other developers
understand Kubernetes clusters may help developers break out of mental models less suited to current problems.



For this research target, we generated some a priori hypotheses from theories that reflect our experience in software
development. Where our research was designed to test any hypothesis, the hypothesis is presented along with the theoretical
reasoning that engendered it. Extra commentary is offered where hypotheses were falsified.

HOW DEVELOPERS THINK ABOUT CONTAINERIZATION AND CONTAINER ORCHESTRATION


The analyses in this section were not driven by specific hypotheses, so the presentation format will resemble that in research
target one: results with commentary.

RELATIVE IMPORTANCE OF DIFFERENT ASPECTS OF CONTAINERIZATION


The concept “container” etymologically denotes anything that draws a boundary around a subsystem. But thanks to
widespread use of application containers in the cloud, especially Docker, “container” in modern usage now has much richer
connotations. We wanted to understand what “containerization” means to software professionals in 2020.

So we asked:

“Containerization” can mean many things. Please rank the following aspects of containerization in order of importance.
(1=most important, 7=least important)

Table 5: Important Aspects of Containerization (Ranked)

Aspect                            Score
Process isolation                 2026
High availability                 1872
Horizontal elasticity             1863
Magical, effortless deployment    1773
Memory isolation                  1736
Network stack isolation           1651
Filesystem isolation              1631
Granular resource control         1561

Observations:

1. The basic meaning “process isolation” was considered the most important aspect of containerization.

• The point gap between “process isolation” and the second-most important aspect (“high availability”) is the largest between any two successive aspects. This is to be expected: A broad but not inaccurate sense should be considered more correct than more narrow senses.

2. Rankings of narrower aspects of containerization are considerably more interesting.

• The second-highest scoring answer, “high availability,” has nothing directly to do with containerization as resource isolation. Rather, high availability is a kind of
robustness that containerization facilitates by making runtime environment reconstruction trivial in terms of both
design effort and VM-level resource overhead. When one “mini-server” (application container) breaks, a replacement
“mini-server” spins up quickly (because the container is leaner than a full VM) and reliably (because the environment
is thoroughly defined).

• The precise definability of containerized resources depends on effective resource isolation, of course, so this
robustness indirectly follows from resource isolation. But four other answer options were about resource isolation
directly. This suggests that containerization’s effect on application performance — a high-level desideratum — is
more important in the minds of software professionals than the way containers achieve these performance increases.

3. Accordingly, the ranked order changes slightly among sysadmins, SREs, or DevOps leads only (n=73) (from most
important to least important): high availability, horizontal elasticity, process isolation, magical/effortless deployment,
filesystem isolation, memory isolation, network stack isolation, granular resource control.

• Since these types of software professionals are evaluated on uptime and related metrics, it is in their rational self-
interest to consider high availability the most important aspect of containerization. If a process is inadequately
isolated, on the other hand, and various processes cacophonously step on one another’s toes, the blame may fall by
default on application code. We would expect this distinction to fall apart in the case of SREs, but we did not receive
enough SRE responses to draw any such conclusions from the survey data.



4. Ranked order also differs slightly between those respondents who have personally worked with Kubernetes (n=399) and
those who have not (n=103).

• Both groups scored “process isolation” highest, but those who have personally worked with Kubernetes scored “high
availability” second, while those who haven’t personally worked with Kubernetes scored “filesystem isolation” second.

• Our current guess is that “filesystem isolation” would come in second for those without personal Kubernetes
experience because the concept “container” most properly denotes “process isolation” (the top-scored choice in both
groups), and “filesystem isolation” represents the simplest kind of resource isolation.

• This guess is somewhat supported when we segment responses into “senior” (those with >5 years of experience as a
software professional) and “junior” (those with <= 5 years of experience as a software professional); for senior software
professionals, “horizontal elasticity” ranks second to “process isolation,” while for junior software professionals,
“filesystem isolation” ranks second.

• This is consistent with the hypothesis that “filesystem isolation” is the simplest case of resource isolation, on the
assumption that junior professionals are more likely to model a system in a way that is closer to the theoretical
“definitions” of its components.


WHAT KUBERNETES IMPROVES


Ideas at design time glisten like intricate diamonds; ideas at runtime eventually all seem ugly and bad. The difference between definition and runtime is especially evident in complex systems properly so-called, and probably in distributed systems (a special subset of complex systems) more than in complex systems that have greater hierarchy in their design. So you might think that, as a container orchestration tool, Kubernetes is very likely — even more than other complex systems — to be one nice thing in theory and one less nice thing in practice.

So we asked:

What has Kubernetes improved at your organization?

Results (n=522):

Table 6: What Kubernetes Improves

Things improved % of respondents n

Deployment in general 66% 313

Autoscaling 65% 308

CI/CD 63.7% 302

Building microservices 53.6% 254

Reliability 46% 217

Application modularity 44.3% 210

Architectural refactoring 36.1% 171

Overall system design 33.5% 159

Cost 28.9% 137

Security 24.9% 118

Other 3.2% 15



Observations:

1. The top three things Kubernetes improved (deployment in general, autoscaling, and CI/CD) appear orthogonal.

• Deployment and CI/CD are about how software moves to production, while autoscaling is about how the software
runs under variable load. This orthogonality suggests that Kubernetes is delivering on its promise insofar as its
benefits are not dramatically focused on either the “dev” or the “ops” side.

2. A large majority of respondents (70.1%, n=369) noted that Kubernetes has improved some aspect of software architecture
or design: building microservices (53.6%), application modularity (44.3%), architectural refactoring (36.1%), or overall system
design (33.5%).

• These are formal improvements in the software itself, not simply improvements in runtime performance or
deployment pipeline. From a software architect’s perspective, this finding is quite significant. Further research
might explore exactly how Kubernetes helped improve each of these aspects of software design — in particular,
how much benefit came from Kubernetes’ orchestration capabilities vs. the containerization itself, as facilitated by
Kubernetes usage.

3. A slightly larger majority (73.9%, n=386) reported that Kubernetes has improved some aspect related to runtime
operations: autoscaling, security, reliability, or cost.

• This is to be expected since this kind of benefit is directly related to container orchestration, and seems less
significant than the comparable percent of respondents who credit Kubernetes with benefitting software design or
architecture itself.

Hypothesis Two:
Hypothesis: People who have run public-facing “pet” servers are less likely to be satisfied with the state of infrastructure
abstraction in 2020.

Reasoning:

1. Abstractions like Kubernetes add a lot of complexity between application code and fundamental system architecture.

2. VMs are a reasonably opaque abstraction, but lightweight containers are less so, and powerful container orchestration
tools even less so.

3. Significant portions of distributed system design that might in the past have been implemented in application code can
now be left to container management and container orchestration layers.

4. But this makes it harder to develop mechanical sympathy with the Von Neumann structure underneath.

5. People who have run their own individual “pet” servers for nontrivial applications are more likely to care about mechanical
sympathy with the operating system layer — therefore are less satisfied with the “herd” concept that a separate container
orchestration layer encodes — than people who have not.

To test this hypothesis, we asked:

Please select the option that best describes your attitude toward infrastructure abstraction in 2020.

Separately, we asked:

Have you ever personally maintained a single-node, public-facing server (http or otherwise)?



Figure 6: ATTITUDES TOWARD THE STATE OF INFRASTRUCTURE ABSTRACTION IN 2020 (respondents who have vs. have not run a “pet” server; answer options: “Infrastructure abstraction in 2020 is excessive and getting out of hand,” “We're finally getting close to pure, non-leaky infrastructure abstraction,” “The cost in complexity of modern infrastructure abstraction is worth the benefits of infinite scaling and continuous delivery,” “No opinion,” and “Other”)

Observations:

1. The hypothesis was somewhat verified.

• The difference between “have run pet server” and “have not run pet server” responses was greatest for the most
negative attitude toward modern infrastructure abstraction (“infrastructure abstraction in 2020 is excessive and
getting out of hand”): 18% (n=70) vs. 10.4% (n=11).

• Compare the differences between the pet/non-pet segments answering the more design-focused optimistic “we’re
finally getting close to pure, non-leaky infrastructure abstraction” (21.9%, n=85 vs. 16%, n=17) and the more tradeoff-
focused optimistic “the cost in complexity of modern infrastructure abstraction is worth the benefits of infinite
scaling and continuous delivery” (49.4%, n=192 vs. 46.2%, n=49).

• The fact that the “have run pet servers” responses were relatively more negative than positive is taken as evidence for our hypothesis. But the small n within some of the segments, especially the “have not run a pet server” segments, weakens the inference.

2. Respondents who have run a “pet” server are more opinionated about the state of infrastructure abstraction.

• Significantly fewer respondents who have run a pet server (9.8%, n=38) have no opinion about the state of
infrastructure abstraction in 2020 than those who have not (26.4%, n=28). This was not an a priori hypothesis, but it is
consistent with our picture: Those who have not run pet servers are more likely to treat the sub-application layer as
“magical” or “satisfyingly opaque” than those who have worried about interrupts, shared memory, and other OS-level
resource management problems. (Consider developers whose first application ran on Heroku — judging from boot
camps and introductory tutorials, not an insignificant number.)

3. Interestingly, this picture of differing opinions of modern infrastructure abstraction does not map onto seniority.

• Senior (>5 years professional IT experience) respondents were insignificantly more likely to respond with the
“excessive” answer (15.2%, n=59) vs. junior respondents (14.1%, n=11), and were much more likely to respond with the
“cost in complexity is worth it” option (51.5%, n=200) vs. junior respondents (37.2%, n=29).

• We might have imagined that “old school” people would distrust Kubernetes-level tools more than younger people,
but this is not the case.

Future Research
As usual, we asked more questions than we’ve published here and learned more from survey responses than we’ve been able
to analyze yet. Additional areas covered in our Kubernetes survey include:

• Distribution of systems underlying Kubernetes clusters (bare metal vs. VMs).



• Degree of mixing of metal and virtual machines within the same Kubernetes cluster.

• The presence or absence of microservices on Kubernetes clusters — for those organizations that run both.

• The use of stateful workloads where state is maintained within the cluster (rather than, say, in an external DBMS).

• Pain points encountered while using Kubernetes.

• The use of distributed design patterns (circuit breaker, leader election, sidecar, etc.).

Further analyses will be conducted over the coming months and results published on DZone.com.

Several of our research areas would benefit from follow-up analyses at future dates and/or in additional detail. These include:

• Application architecture

• System design

• Release and/or delivery processes



ORIGINAL RESEARCH

Leaders in Tech
Dipti Borkar, Co-Founder and Cloud Expert, Offers Key
Advice to Kubernetes Users

Lindsay Smith, Publications Manager at DZone

Dipti Borkar
Co-founder and Chief Product Officer at Ahana; Ambassador for Women in Analytics; Chapter Leader for UPWARD Women | @dborkar

Kubernetes has matured significantly over the last few years. Speeding deployments, automating CI/CD pipelines, and getting rid of inefficiencies — that's what Kubernetes is all about.

According to our survey, nearly 90% of respondents are currently using containers in some manner. We decided to sit down with co-founder and cloud expert, Dipti Borkar, to talk about our key research findings and her advice to the Kubernetes community.

DIPTI’S ADVICE TO DEVELOPERS

⊲ Embrace containers for (almost) ALL applications. Adopt now or risk getting left behind. While containers are more commonly seen
across web applications and microservices, more and more distributed systems are now utilizing containers, resulting in efficient,
faster deployments.

⊲ Understand your workloads. The first step to being successful is understanding your workloads. Think about the kind of resource usage required, and then work down from that; some are CPU-intensive; some more memory-intensive. Understanding the application is the first, most important step.

⊲ Adopt the right tools and cloud services. Offloading Kubernetes to a third party opens a wide array of possibilities for your team. “If
we get to the point where 90% of the users are using Kubernetes, and it’s just there and you don’t even know it, that’s what success
looks like.”

According to our survey, the current level of container adoption (90%) is very high. What is your reaction to this
statistic? And what is your advice to teams who’ve NOT adopted containers? 

Containers have come a long way. From test and dev to production, many different applications now widely use containers.
There are areas where containers have not been adopted as much and a lot of those relate to data. For web applications and
microservices in operational databases, containers make a very, very good fit. But for some of the more distributed systems, it
is just now starting to become more adopted because these are persistent systems, and it’s taken some time for containers to
come up to speed as it relates with these persistent applications where they have to process data. And that’s an area where we
will see more growth in the future.

In terms of specifically the 10% [who’ve not adopted containers], if they are in the microservices web application space, such
as operational databases, they are getting left behind. Somebody is out-innovating them because they’re basically getting
there faster and being able to get into development, test, and deployment faster with containers. And so it’s time to move
on in terms of data workloads. It is the innovators and the early adopters that are already using containers and increasingly
Kubernetes. And that adoption will continue. I don’t have a number for you but it’s probably less than 50% — and that’s where
the future adoption of containers and Kubernetes will go.



Resource isolation without higher-level tools (like Docker) is more common in development than in production
environments. Any ideas as to why? And what’s your advice for resource isolation in order to be successful?

Resource isolation is something that you worry less about when you're in dev or test, but in terms of production workloads, it will be something that you have to think more about. More specifically, you have multiple containers running on a single instance, and because containers are an abstraction on top of the operating system, they provide weaker isolation than VMs. To prevent overutilization, you have to be thoughtful about how those are getting used. For web applications and microservices, it tends to be a little bit easier than for data workloads.

For example, with Spark or Presto and some of these other distributed systems, what we see is people might just run
one container per instance and they’re actually just using containers, not so much to pack the instances but to simplify
deployment.

The first step to being successful is understanding your workloads. What kind of resource usage is required? And then working
down from that; some are CPU-intensive; some are more memory-intensive. Understanding the application is probably step
one, and then step two is trying to find the right tool on top of it to simplify the resource isolation and make sure that you have
an orchestration layer where you’re not manually doing this. It’s very hard when you just have the quick data layer to support
some kind of resource isolation across multiple containers. That’s why you need the orchestration engines, and at the moment,
obviously, Kubernetes is the orchestration engine that is most widely adopted on top of containers.

A large majority of respondents whose organizations run microservices also run Kubernetes clusters, while a
small majority of respondents whose organizations do not run microservices also run Kubernetes clusters. What
does this tell you about microservices adoption? And what do you think is most important for developers to
consider regarding adoption?

Microservices and Kubernetes — they go well together because they’re fairly stateless applications. And so it’s easier to
deploy microservices with Kubernetes with the operating system underneath it. For customers or users that are not heavy
on microservices, there might be other applications that they’re using; data applications are one of them. Except distributed
systems are hard to deploy and orchestrate, and Kubernetes is a good way of doing that.

Microservices and any stateless application — those are the easy ones to get going, and that’s kind of why we see such a high
percentage. But for these other applications and distributed systems, the engines themselves had to change before they
could run natively on Kubernetes. And so the industry has gone through a few changes where, with the disaggregation of
storage and compute, it is becoming a lot easier for these data processing engines to now use Kubernetes because they can be
stateless. So we will see that the adoption of Kubernetes in this space increases now that the application itself is more aligned
and more native to Kubernetes’ needs.

With that said, it is an architectural shift to use microservices; it’s an evolution of the stack. And having these APIs essentially
connected gives users a flexibility and an advantage from a speed perspective and an interoperability perspective. And so that
is the advantage you get with microservices.

Based on our survey, the top three benefits of using Kubernetes were: deployment in general, autoscaling, and
CI/CD. And security was the lowest ranked benefit of Kubernetes. What does this tell you? And what’s your advice
to Kubernetes users in terms of security?

Security is important across multiple layers, but a lot of focus right now is on data security, and that is actually one level above
Kubernetes.

So if you ask a data platform team about their concerns, it would be pretty high on their list, but from an operating-system
and infrastructure perspective, it is a pretty protected layer. Now, you still have to go through signed containers; you want to
check as you scan your containers for vulnerabilities, disallow privileges for most users, and things like that. And so only the
automation really accesses that Kubernetes layer.

But where most of the security comes in is in the layer on top of that, as it relates with data. Thinking about authentication,
how do I know that I’ve been authenticated to enter a system, or that I am authorized to access the data? These are not



Kubernetes concerns, and that’s probably why the question ranks lower on the list as an advantage.

Where do you see Kubernetes in the next 6-12 months? What is your most critical advice to developers so that
they can fit in with that path and stay ahead of the trends?

Kubernetes has matured significantly over the last few years, and as a center point where it’s almost a must-have for your
stack, Kubernetes is getting to the point where you are probably spending more time on deployment and orchestrating your
environment. And you could essentially offload that to Kubernetes and do more with your time, getting rid of inefficiencies.
And that’s what Kubernetes is about.

However, it is still hard if you’re running it on your own. Right? It is a distributed system; it’s a cluster that you are managing. The
ecosystem of Kubernetes is getting fairly complicated. There is monitoring for it. There’s security for it. There are hundreds of
integrations. And so if your team is doing this all on their own, you almost need a set of Kubernetes experts because otherwise,
it’s hard for a full-stack developer to go all the way from top down, or a data platform developer to deal with the data tier and
Kubernetes as well. And the way out of that is using cloud services, which have simplified Kubernetes even further.

With that, when you happen to use Kubernetes, you won't even know it. And that's where it needs to be. And so if we get to the point where 90% of the users are using Kubernetes, and it's just there and you don't even know it, that's what success looks like. And we'll probably get there, I would say, three to five years from now, but there are a lot more workloads that are now able to run on Kubernetes, and Kubernetes is becoming more mature; we're at a tipping point where that adoption will start going very fast.



Kubernetes Security at Enterprise Scale
Aqua tames the complexity of securing applications on Kubernetes with a full-stack,
full-lifecycle solution, combining native K8s capabilities with policy-driven controls that enable
full visibility and separation of duties across DevOps, security and compliance teams.

Securing the CI/CD Pipeline


Vulnerability, secrets and malware scanning, and flexible
policies to control deployments into your cluster.

Kubernetes Security Posture Management


Best-in-class controls for checking your cluster against the
CIS benchmark, using OPA and Rego for admission
control, and penetration testing your cluster.

Runtime Visibility and Protection


Prioritize risk in your cluster, segment workloads by their
identity, detect and automatically respond to abnormal
pod activity, and track events for compliance.

Industry-Standard Open Source Tools


Aqua is the company behind the Trivy scanner, kube-bench, kube-hunter, and Starboard, so you can rely on our expertise and innovation.

Operating Kubernetes Clusters and Applications Safely
Liz Rice and Michael Hausenblas

This practical eBook walks you through Kubernetes security features—including when to use what—and shows you how to augment those features with container image best practices and secure network communication.

Get the eBook

aquasec.com
CASE STUDY

Case Study: ISP Kakaku.com


End-to-End Security for Kubernetes Deployments Using Aqua

Kakaku.com, with headquarters in Tokyo, Japan, offers internet services that enhance online access to information. Kakaku.com serves a diverse group of clients with product offerings supporting multiple markets. For example, they deliver services for the price comparison site "Kakaku.com," and improve usability for a restaurant discovery and reservation site, as well as providing information-retrieval services for a popular job search site. The company, which has over 20 years of service history, is currently supported by more than 20 subsidiaries, and its services support over 200 million unique visitors per month.

COMPANY: Kakaku.com
COMPANY SIZE: 500+ employees
INDUSTRY: Internet Service Provider
PRODUCTS USED: Aqua Enterprise, a cloud native security platform
PRIMARY OUTCOME: Having the Aqua Vulnerability Scanner built into their CI/CD pipeline and Enforcers for runtime protection ensures that Kakaku.com can meet their security goals. Even if issues arise later, such as malware activity, Aqua detects and blocks it. This provides for more reliable remediation and increased efficiency.

CHALLENGE
Kakaku's IT management team identified microservices, container workload environments, and Kubernetes on-prem to speed its development process and deploy applications faster — without sacrificing security. In addition, they needed to guarantee complete security in its containerized environment. For Kakaku.com, it was essential to find an end-to-end solution for their complete production environment.

SOLUTION
Kakaku.com's search led them to the Aqua cloud native security platform, which featured seamless security from development through deployment. Aqua's range of security attributes ensured that Kakaku.com's containers could run safely on Kubernetes. Kakaku.com also relies on Aqua for threat detection and blocking, visualization, and meeting compliance requirements. In fact, Aqua now secures all Kakaku.com system environments, including Linux and Windows containers, cloud and on-premises deployments, orchestration tools, and multi-tenancy.

"Using Aqua makes it possible not only to perform a reliable scan before release but also to prevent abuses after."
— Kazuki Hashimoto, Kakaku.com 1st Infrastructure Service Team

RESULTS
Kakaku.com now automates its CI/CD to scan images using Aqua during the build phase to make sure there are no potential vulnerabilities — reducing or even eliminating human error. Aqua's automated security features provide for more reliable remediation and increased efficiency. With security throughout the development lifecycle, Aqua empowers Kakaku.com to:

• Deploy security that is effective and easier to manage

• Eliminate human error by automating security tasks

• Improve operational efficiency 6x through security automation



CONTRIBUTOR INSIGHTS

The Rise of Kubernetes in Small Companies
Ralph Soika, Developer at Imixs GmbH

Originally, Kubernetes was developed by Google to empower the management of its own container infrastructure. The idea
was quickly taken up by other large companies. But due to the initial focus on environments with usually several thousands of
servers in data centers distributed worldwide, the learning curve for newcomers was very steep. Many questions typically asked
by smaller organizations often went unanswered in the beginning.

The handover of Kubernetes to the Cloud Native Computing Foundation (CNCF), founded under The Linux Foundation in 2015, opened Kubernetes development to participation by many different companies and projects. The CNCF Cloud Native Interactive Landscape illustrates just how expansive participation has become. As a result, there are hundreds of projects, tools, and concepts that simplify the use of Kubernetes and have significantly flattened the once very steep learning curve over the last year and a half.

Self-Managed Kubernetes Clusters


Now that Kubernetes is more accessible than ever before, the big question when it comes to smaller organizations is, how have
things changed so far — is it feasible, or even practical, to set up their own managed Kubernetes cluster?

Kubernetes is not a fixed piece of software but rather a framework combining various aspects and functionality to
operate a cloud environment. Thus, many functions such as storage or monitoring are not part of the kernel itself but can be
added as services. And here, of course, large enterprises have distinct requirements compared to small- and medium-sized
companies. But fortunately, the Kubernetes toolchain has evolved, and today, non-enterprise-level companies have access to
tools and services to manage their own Kubernetes cluster.

In general, the most important elements to run a self-managed Kubernetes cluster are:

• Installation

• Deployment

• Storage

• Monitoring

Below are some of the recent developments in running a self-managed Kubernetes cluster.

INSTALLATION
Kubeadm
The core tool to install a Kubernetes cluster is kubeadm, which has evolved over the last few years to make it easier to create a minimum viable Kubernetes cluster that conforms to best practices. In fact, kubeadm allows you to set up a cluster in minutes. Most Linux distributions are supported, and sensible default values allow a secure and fast installation procedure.
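
As a hedged sketch of the happy path (the pod CIDR and flags depend on your chosen network plugin; values here are illustrative):

# On the control-plane node:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the current user, as kubeadm's output suggests:
$ mkdir -p $HOME/.kube
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker node, run the join command that kubeadm init prints:
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>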

K3s
An alternative tool to install a small Kubernetes cluster, K3s removes unnecessary features and uses
lightweight components, which significantly reduces the size of an environment and simplifies
installation.
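
The install script route from the K3s documentation shows how small the footprint is; on a single node, this yields a working cluster with kubectl bundled:

$ curl -sfL https://get.k3s.io | sh -
$ sudo k3s kubectl get nodes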



Rancher
With Rancher comes another powerful open-source Kubernetes management platform that helps
manage existing clusters and allows you to set up new clusters in a distributed server environment. In
this way, Rancher can help manage a dynamically growing environment, as this is a typical scenario for
startups.
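
For evaluation, Rancher's quick start runs the management server as a single container; a hedged sketch (required flags vary by Rancher version, and newer releases need --privileged):

$ docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 --privileged \
    rancher/rancher:latest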

DEPLOYMENT
The deployment of applications and microservices in Kubernetes is done via YAML files with expressive, powerful configuration and settings. The high complexity of these YAML files is one reason for Kubernetes' steep learning curve. Until recently, simplifying the deployment of standard applications was only possible with a Helm Chart. While Helm is a suitable solution for standard deployments, it is not easy to learn for those who are new to Kubernetes.

Kustomize
Kustomize is an easy-to-learn alternative for Kubernetes deployments, allowing you to compose and
customize collections of resources from the local file system and external sources like a git repository.
Initially, Kustomize was developed as a separate tool, but as of March 2019 it is part of the Kubernetes
standard installation. Kustomize enables small- and medium-sized companies to roll out their product
and cloud services for a large number of customers in different configurations. Many open-source projects
use Kustomize to provide more flexibility to the community.
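
A minimal sketch of that per-customer pattern, with illustrative directory and resource names (a shared base plus one overlay):

# overlays/customer-a/kustomization.yaml
resources:
  - ../../base                 # shared Deployment/Service manifests
namePrefix: customer-a-        # prefix this customer's object names
patchesStrategicMerge:
  - replica-count.yaml         # per-customer tweaks

# Kustomize has been built into kubectl since v1.14:
$ kubectl apply -k overlays/customer-a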

STORAGE
Kubernetes provides effective functionality for running stateless services in a cloud environment out of the box. But most
business applications cannot operate as stateless services. Databases and indexes are usually an integral part of a business
application. Kubernetes strongly abstracts the management of storage and does not offer one single solution. As a result, there
are a few options for operating stateful containers in Kubernetes today.

While larger organizations can usually rely on an existing extensive database cluster solution, smaller organizations tend to look
for an easy-to-use solution to store data from scratch. Because storage is all about resilience, distribution, and performance,
choosing the right solution is usually not that straightforward. In recent years, extensive development has happened in this
area, and today there are various tools that allow smaller organizations to set up a reliable storage solution quickly and easily.

Longhorn
Longhorn delivers simplified, 100% open-source cloud-native persistent block storage without the
overhead cost of open core or proprietary alternatives, making integration into a Kubernetes cluster
straightforward. Longhorn independently manages existing storage on a worker node and makes it
available as distributed volumes for containers within the cluster. Each volume is automatically distributed
across multiple nodes to increase the resiliency. Longhorn includes a UI Dashboard that allows you to
monitor the nodes and volumes in a graphical interface and also to administer and back up all data.
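
Installation is a single manifest apply; the URL shape below follows the Longhorn documentation, with a placeholder where a pinned release tag belongs:

$ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/<version>/deploy/longhorn.yaml
$ kubectl -n longhorn-system get pods   # watch the components come up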

Ceph
Ceph, another distributed block storage solution, can be used within Kubernetes. For its first several
years, Ceph installation was not that easy. Since version 15 (Octopus), Ceph provides a completely new
installation tool called cephadm, which is based on Docker. Separate tools or libraries no longer need
to be installed on the host, making it easy for smaller organizations to set up Ceph within a Kubernetes
cluster. Ceph also includes a UI Dashboard that allows you to monitor the nodes and manage a distributed
storage environment in a graphical interface.
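
A hedged sketch of a cephadm bootstrap on the first host (cephadm pulls Ceph's own containers, so only a container runtime and the cephadm script are needed; the download path follows the Octopus documentation):

$ curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
$ chmod +x cephadm
$ sudo ./cephadm bootstrap --mon-ip <host-ip>   # prints the dashboard URL and credentials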

MONITORING
Kubernetes provides several ways to collect and monitor cluster metrics like CPU, memory, or network usage of cluster nodes
or single pods. Additional metrics for the cluster topology, and even application-specific metrics, are available through the
Kubernetes Metrics API.



EXPOSING METRICS
Metrics Server
The open-source project Metrics Server provides a scalable, efficient source for container resource metrics and can be added
easily into an existing cluster. You no longer need to install separate services like cAdvisor for the core metrics, as this service
has become part of kubelet. This makes it very easy to set up basic monitoring. All metrics exposed by the Kubernetes
Metrics Server are available through the command-line tool kubectl top and are also used by other Kubernetes add-ons (e.g.,
Horizontal Pod Autoscaler, Kubernetes Dashboard).

$ kubectl top nodes

NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master-1   381m         19%    1505Mi          40%
worker-1   794m         19%    4088Mi          26%
worker-2   2152m        53%    13926Mi         89%

MONITORING SOLUTIONS

K9s
K9s is a simple-to-use command-line tool for monitoring the status of a cluster and its running pods,
as well as displaying cluster metrics collected by the Kubernetes Metrics API. The :pulse view provides
insights into a running Kubernetes cluster without the need to install additional tools or services.
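
Both the resource views and the :pulse dashboard are reached straight from the terminal; a quick sketch of the workflow:

$ k9s            # launch the terminal UI against the current kube context
                 # once inside, type :pulse to open the cluster-health view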

Prometheus and Grafana


All data provided through the Kubernetes Metrics API is stored in memory, so collecting and monitoring it over time requires
an additional mechanism for data aggregation. Prometheus and Grafana have become the de facto standard for collecting and
visualizing metrics. The Kubernetes community provides ready-made dashboards that can be used to visualize and monitor
Kubernetes cluster metrics. Grafana provides various built-in alerting features that inform you if the cluster is running out of
memory or other resources exceed their limits.

Using this stack has become easier for smaller organizations over the last couple of years as the Kubernetes Community
expands and resources are more readily accessible.

Prometheus Operator
The Prometheus Operator project provides a promising way to deploy and manage Prometheus and related monitoring
components natively in Kubernetes. The project serves to simplify and automate the configuration of a Prometheus-based
monitoring stack for Kubernetes clusters. This also includes a Grafana service that provides many out-of-the-box Grafana
dashboards with no additional installation effort, reducing installation time from days to minutes.
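
With the operator installed, new scrape targets are declared through its custom resources rather than a central Prometheus configuration file. A minimal sketch of a ServiceMonitor, assuming a hypothetical Service labeled app: myapp that exposes a named metrics port:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp               # hypothetical name
spec:
  selector:
    matchLabels:
      app: myapp            # matches the labels on the target Service
  endpoints:
    - port: metrics         # named port on the Service to scrape
      interval: 30s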

Conclusion
In the last two years, a great deal of development has happened across the Kubernetes ecosystem. This was made possible not
least by the broad and engaged community fostered by the CNCF. For small- and medium-sized organizations, this means that
the initially steep learning curve has flattened significantly, and operating their own Kubernetes cluster has become much
easier today.

Ralph Soika, Developer at Imixs GmbH


@rsoika on DZone | Author of ralph.blog.imixs.com

Ralph Soika is project lead in the open-source project Imixs-Cloud and co-founder of Imixs GmbH.
For more than 15 years, he has supported small- and medium-sized companies in the design and
development of modern software solutions and service environments.



Build stuff that matters.

Kubernetes and Cloud Native Development Simplified.

Your ideas are going to change the world. Nothing can stand in the way of you turning code and
ideas into an impactful, finished application. Not even the complexity that accompanies cloud native
app development.

CloudBees helps you leverage the power of Kubernetes for end-to-end application development.
We bring order to cloud native chaos by uniting the silos of information and automation, and
helping you scale CI/CD and DevOps across your entire enterprise software portfolio. We’ll
manage the CI/CD automation for you, so you can continue building stuff that matters.

Using CloudBees CI with Kubernetes, you can:

Build and deploy microservice applications


Achieve granular control with automated agent scaling,
auto-configured pipelines with built-in best practices and
comprehensive management of multiple build servers across teams.

Embrace everything as code


Configuration as code for controllers and pipelines as code enable
the creation of configuration settings from a known good state in
GitHub. Changes can be applied across several instances.

Dynamically control cloud spend dedicated to build


Use hibernating controllers to optimize resource utilization and control
cloud spend so you only pay for the metered infrastructure you use.

Learn More

CloudBees CI is built on top of Jenkins, an independent community project. Read more about Jenkins at: www.cloudbees.com/jenkins/about

© 2020 CloudBees, Inc. CloudBees is a registered trademark and CloudBees CI, CloudBees CD, CloudBees Cloud Native CI/CD, CloudBees Engineering
Efficiency, CloudBees Feature Management, CloudBees Build Acceleration and CloudBees CodeShip are trademarks of CloudBees. Other products or
brand names may be trademarks or registered trademarks of their respective holders.

CloudBees, Inc. | 4 North Second Street, Suite 1270 | San Jose, CA 95113 | United States
www.cloudbees.com | info@cloudbees.com
SPONSOR OPINION

Three Tips for Building


Enterprise Software in the Cloud
By Ben Williams, Vice President of Product, CloudBees

The advantages of the cloud are clear. Running enterprise applications fully in the cloud or in hybrid environments helps
reduce risk, cut costs and increase innovation. At this point, the question isn’t “Should we move to the cloud?” it’s “How do we
ensure quality software delivery in new hybrid and cloud environments?” The answer is continuous integration and continuous
delivery with Kubernetes.

Performing CI/CD in the cloud improves failover and reduces downtime, while Kubernetes is an enabler for your team to build
resilient, cloud-native applications quickly. Below are three top requirements for CI/CD solutions to build enterprise software in
the cloud.

FLEXIBILITY TO BUILD AND DEPLOY MICROSERVICE APPLICATIONS


Faced with monolithic applications that are hard to update and scale, many companies turn to microservices. By rearchitecting
an application with microservices, you’re able to isolate services to run in containers governed by an orchestration engine
like Kubernetes. CI/CD automates the pipelines that carry your application services through development stages and into
deployment. Your CI/CD solution needs to have the flexibility and extensibility that provides freedom in how you approach the
refactoring of your applications and their build and deployment pipelines.

LEVERAGE KUBERNETES TO DYNAMICALLY CONTROL CLOUD RESOURCES


With Kubernetes, scaling up and down infrastructure across fluctuating demand cycles from your development teams
becomes easy. Your CI/CD solution should enable you to easily take advantage of this to reduce your infrastructure operating
costs. Ephemeral agents should be provided to consume infrastructure resources only when running workloads, and any long-
running services should support hibernation.

EMBRACE EVERYTHING AS CODE


When everything is configured as code — from your CI/CD infrastructure and configuration to your pipelines to your
applications and their own infrastructure — you create a single source of truth that realizes several key benefits. Everything is
reproducible, everything is auditable, everything is less error-prone, everything is faster. This approach provides fine-grained
control of your entire software environment and enables GitOps workflows that foster collaboration across traditionally
siloed development and operations teams.

As a CI/CD solution that runs natively on Kubernetes, CloudBees CI makes it incredibly easy to scale in support of enterprise
cloud strategies. It works across your entire DevOps toolchain — integrating with commonly used tools. It helps you automate
processes within a wide, varied ecosystem, which is critical for successfully modernizing legacy applications. CloudBees CI is
the enterprise-grade, Kubernetes-native solution that can help you unlock microservices, control cloud expenditure, and build
against a single source of truth.



CONTRIBUTOR INSIGHTS

Scaling Your Microservices


Architecture in Kubernetes
Samir Behara, Senior Architect at EBSCO

Kubernetes is the most popular open-source orchestration tool for running and managing container-based workloads. With
an increase in the adoption of containers and microservices architecture, Kubernetes’ popularity is growing in the developer
community. When you have many containers running in production, you need a container orchestration solution to reduce
the complexity of deploying your applications at scale. Kubernetes resolves many of the challenges associated with running
containerized workloads, either on-premise or in the cloud.

Organizations are widely adopting Kubernetes as they migrate their applications to a modern platform and implement
containers for deployment purposes. This article will:

• Look at the Kubernetes features and architecture.

• Showcase the scaling of your microservices architecture.

• Overview monitoring challenges with Kubernetes.

• Explain how monitoring solutions like Prometheus and Grafana can help you resolve them.

Kubernetes: The Big Picture


Using Kubernetes in your cloud-native ecosystem can lead to immense productivity gains for your development teams since
you don't have to spend time on infrastructure management. Running your containerized workloads on Kubernetes makes
them portable and gives you the freedom to deploy them either on-premise or in a cloud environment.

Kubernetes ensures that the desired state and the actual state of the cluster are always in sync. As your services scale,
Kubernetes automatically monitors and maintains service health. It ensures that the system is self-healing and can
automatically recover from failures.

Figure 1: Kubernetes key features

Overview of Kubernetes Architecture


At a high level, the Kubernetes architecture is straightforward and can be viewed as a master-worker design pattern. It is
composed of a master node and a set of worker nodes.



Figure 2: Kubernetes architecture

MASTER NODE COMPONENTS


The master node is a set of components that make up the control plane of Kubernetes. It is responsible for managing the
entire Kubernetes cluster and takes care of scheduling pods across the nodes.

The master node has four components:

• kube-apiserver (API Server) – the control plane’s front end that exposes the Kubernetes API and provides an interface for
communication. It is the custodian of the entire cluster, performing all administrative tasks, and handles all internal and
external requests.

• kube-scheduler (scheduler) – responsible for workload distribution among the worker nodes based on the cluster
resource utilization. It allocates pods to the available nodes.

• kube-controller-manager (controller) – ensures that the Kubernetes cluster is maintained, and the current state is
equivalent to the desired state.

• etcd – the distributed key-value store to store the current state of the Kubernetes cluster, along with the configuration
details. It can be considered the single source of truth in the cluster.

WORKER NODE COMPONENTS


The worker nodes are responsible for running the pods that are scheduled on them and then reporting back to the master
node. Worker nodes have the following components:

• kubelet – the main Kubernetes agent that runs on each worker node. It is responsible for ensuring that the containers on
each pod are running. Kubelet also communicates the node health back to the master.

• kube-proxy – the central networking component of Kubernetes that ensures communication between containers, pods,
and nodes is intact. 

• Container runtime – the software responsible for running containers inside each pod. There are several container
runtime options available, such as Docker, containerd, and rkt.

• Pods – defined as the smallest deployable unit in Kubernetes. A Pod is a group of one or more co-located containers that
run in a shared context. Containers inside the pods can communicate with each other and share the pod environment.

Scaling Your Kubernetes Cluster


You can leverage Kubernetes to monitor your workload and scale it up or down based on the CPU utilization or memory
consumption. You can use Horizontal Pod Autoscaler (HPA) to auto-scale your deployment based on a pod’s resource usage or
custom metrics. HPA can increase or decrease the number of pods to handle unpredictable production workloads and ensure
your cluster is rightsized. This automatic scaling is excellent for applications having spikes in load and usage.



CONFIGURE HORIZONTAL POD AUTOSCALER
The Horizontal Pod Autoscaler does not gather any usage metrics by itself. The recommendation is to use Metrics Server to
collect usage metrics. Heapster is deprecated, and moving to Metrics Server is the general direction for Kubernetes. Metrics
Server collects and sends the aggregated metrics from the running pods to the API server. The HPA controller is implemented
as a control loop: it checks the metrics periodically, calculates the number of replicas required to meet the target metric
value configured in the HPA resource, and then adjusts the replicas field of the target resource.

You can create a HorizontalPodAutoscaler using the kubectl autoscale command:

kubectl autoscale deployment dzone-k8s --min=1 --max=5 --cpu-percent=60

This will create an HPA resource that increases or decreases the number of pods between 1 and 5 to maintain a target CPU
utilization of 60%.
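
The same autoscaler can be written declaratively; a minimal sketch of an equivalent manifest using the autoscaling/v1 API, assuming the deployment from the command above:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: dzone-k8s
spec:
  scaleTargetRef:                      # the Deployment being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: dzone-k8s
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 60   # matches --cpu-percent=60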

Monitoring Challenges with Kubernetes


The Kubernetes environment contains several moving components that make it difficult to have complete observability into
your application stack and its underlying infrastructure. To ensure that your services are running as expected, you need the
ability to track the health, availability, and resource utilization of the master and worker node components. The container-based
environment is dynamic and poses monitoring and troubleshooting challenges.

Compared to a traditional monolithic environment, microservices running in a dynamic container environment require a
mature strategy for observability. You need the ability to review logs, metrics, and traces to perform root cause analysis. The
number of components to monitor in a Kubernetes cluster is significantly high; hence, you will need to rethink your monitoring
strategies. It is critical to identify and have a good understanding of the key metrics to monitor in your cluster.

Troubleshooting Techniques in Kubernetes


ACCESSING CONTAINER LOGS
You can use the kubectl command-line tool to interact with your Kubernetes cluster. For troubleshooting issues, you can access
the application logs running inside a container by using the command, kubectl logs <pod-name>.
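
A few common variations, sketched with the hypothetical pod and container names used later in this article:

kubectl logs dzone-report                # logs from the pod's single container
kubectl logs dzone-report -c dzone-ctr   # logs from a specific container in the pod
kubectl logs dzone-report --previous     # logs from the previous, crashed instance
kubectl logs -f dzone-report             # stream logs in real time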

VIEWING RESOURCE REQUESTS AND LIMITS


One of Kubernetes’ best practices is to set resource requests and limits for pods running in your cluster so that it does not run
out of memory or CPU resources. Requests and limits are measured on a per-container basis. Kubernetes uses requests and
limits to control the CPU and memory resources that are assigned to a container:

• Requests – specify the minimum resources a container is guaranteed to get.

• Limits – specify the maximum amount of resources a container can get.

Below is a configuration file showing resource limits and requests for a container:

apiVersion: v1
kind: Pod
metadata:
  name: dzone-report
  namespace: dzone-ns
spec:
  containers:
  - name: dzone-ctr
    image: dzone
    resources:
      limits:
        memory: 4Gi
        cpu: 1000m
      requests:
        memory: 2Gi
        cpu: 500m



Figure 3: Grafana Dashboard showing resource utilization

HEALTH CHECKS: READINESS PROBE AND LIVENESS PROBE


Kubernetes provides two types of automated health checks to increase the resiliency and availability of your applications (a minimal configuration sketch follows this list):

• Liveness Probes – designed to check if the application is in a good state. If not, Kubernetes will detect the offending
application pod and automatically restart it.

• Readiness Probes – designed to check if the application is ready to service requests. Kubernetes ensures that the readiness
probe passes before sending traffic to the application pod.
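
A minimal sketch of both probes on the container from the earlier example, assuming hypothetical /healthz and /ready HTTP endpoints on port 80:

livenessProbe:
  httpGet:
    path: /healthz           # restart the container if this check fails
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready             # only route traffic once this check passes
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10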

Cloud-Native Monitoring with Prometheus


Prometheus is the top open-source monitoring and alerting system for Kubernetes. You can use it as a time-series database
for storing metrics related to your applications and underlying infrastructure components. It is ideal for microservices
workloads running in containers. Cloud-native applications generate a lot of data. Prometheus is a great fit in these dynamic
environments with complex workloads.

Prometheus is pull-based and identifies the services it needs to monitor via service discovery. It scrapes metrics from the client
applications at periodic intervals, then collects the monitoring data and stores it in a time-series database. You can then query
the required metrics using Prometheus' powerful query language, PromQL, or view them in Grafana dashboards. You can also
configure your own alerting rules. Prometheus sends alerts to Alertmanager, which aggregates alerts and sends notifications
via different systems like OpsGenie, PagerDuty, and email.

Figure 4: Prometheus architecture



Metrics Visualization With Grafana
Grafana is an open-source visualization tool that lets you query, explore, visualize, and receive alerts on application metrics
no matter where they are stored. You can build dashboards to monitor your application and the underlying Kubernetes
infrastructure. Grafana supports many data sources out of the box (e.g., Elasticsearch, AWS CloudWatch, Azure Monitor,
SQL Server) and allows you to combine data from different sources into a single dashboard. Setting up dashboards is
straightforward, and you can leverage official open-source dashboards already built by the community.

Grafana helps organizations adopt a data-driven culture and make informed decisions based on metrics. The dashboard
below provides a cluster overview that allows you to monitor Kubernetes resources and identify any workload bottlenecks.

Figure 5: Grafana dashboard for Kubernetes cluster monitoring

Conclusion
Kubernetes is a rapidly developing platform that lets you focus on building your applications without worrying about the
underlying infrastructure. As organizations transition from a monolithic to microservices architecture, they can benefit from
Kubernetes’ declarative approach and orchestrate the availability of their containerized workloads.

Samir Behara, Senior Architect at EBSCO


LinkedIn: @samirbehara | Author of www.dotnetvibes.com | @samirbehara on DZone

Samir Behara is a Senior Architect with EBSCO Industries and builds software solutions using cutting-edge
technologies. He is a Microsoft Data Platform MVP with over 15 years of IT experience. Samir is a
frequent speaker at technical conferences and is the Co-Chapter Lead of the Steel City SQL Server user
group.



Technology | Training | Services & Support

The Leading Independent Platform for Enterprise Kubernetes

Explore the D2iQ Kubernetes Platform

D2iQ simplifies and automates the really difficult tasks needed for enterprise Kubernetes in production at scale.

Simplify your Kubernetes Journey
CASE STUDY

Case Study: Ziff Media Group

Ziff Media prides itself on keeping up with modern technology trends to stay on the
forefront of their industry. Kubernetes is the foundation of their infrastructure, which
provides them the agility needed to manage multiple different brands with a variety
of customer-facing web properties.
COMPANY: Ziff Media Group
COMPANY SIZE: 1,001-5,000 employees
INDUSTRY: Digital portfolio in technology, culture, and shopping
PRODUCTS USED: D2iQ Konvoy
PRIMARY OUTCOME: Managing Kubernetes efficiently and independently with zero lock-in or downtime

CHALLENGE
From an executive decision-making level, Ziff Media needed a Kubernetes platform that was open, reliable, and made it possible to use open-source products that they could plug in and implement themselves. They also required an expert support team that could be there for quick responses in the event of an emergency, and not have to wait hours or days for each message to come back.

SOLUTION
Ziff Media chose D2iQ's Konvoy because everything is "pure open source." The foundation of the D2iQ Kubernetes Platform (DKP), D2iQ Konvoy is a comprehensive, enterprise-grade Kubernetes distribution built on pure open source with the add-ons needed for Day 2 production — selected, integrated, and tested at scale, for faster time to benefit.

"The biggest thing I enjoy about D2iQ Konvoy is that everything is pure open source. Whenever I want to scale out Prometheus, Grafana, or Elasticsearch, or change configurations or authentications, I can go directly to the website documentation and just do it — everything works out of the box." — Brett Stewart, Senior DevOps Engineer, Ziff Media Group

The other thing that sold Ziff Media on D2iQ was the level of support. "The speed, the competence, and the ability to meet us where we're at — on Slack. The support engineers are very fast at getting answers to us quickly, even if they don't immediately know the answer. The engagement and the knowledge on D2iQ's end has been very confidence-inspiring and that is not something we saw from other vendors in the space." — Chris Kite, Director of Technology

RESULTS
Within two months of implementing D2iQ Konvoy, Ziff Media Group was already in production. The openness and stability of D2iQ Konvoy has given the DevOps team the opportunity to get things done faster and more reliably.

"What sets D2iQ support apart from others is that they have a DevOps mindset and understand the impact that our issue is causing. Rather than adding a quick fix, they dig deep to find the long-term solution, which allows us to get production up and running as quickly as possible."

If something breaks, time-to-resolution is low and the competence of D2iQ support engineers is high. "With D2iQ support, the initial response for all of our tickets has been around 15 minutes, which is 50% faster than it was before," says Stewart.



CONTRIBUTOR INSIGHTS

“kubecthell”
Daniel Stori, Software Architect at TOTVS



A complete DevOps platform,
delivered as a single application.

The best solution for Cloud Native development


Everything you need to build, test, deploy, and run your app at scale

PLAN » CREATE » VERIFY » PACKAGE » SECURE » RELEASE » CONFIGURE » MONITOR » PROTECT

GitLab works with or within Kubernetes in three distinct ways, which can be used independently or together:

» Deploy software from GitLab CI/CD pipelines to Kubernetes
» Use Kubernetes to manage runners attached to your GitLab instance
» Run the GitLab application and services on a Kubernetes cluster

Use GitLab to test and deploy your app at scale on Kubernetes

Try GitLab Free Now


CASE STUDY

Case Study: Hotjar

COMPANY: Hotjar
COMPANY SIZE: 100 employees
INDUSTRY: Technology
PRODUCTS USED: GitLab Silver
PRIMARY OUTCOME: Hotjar replaced Jenkins with GitLab for exceptional CI/CD, a robust Kubernetes integration, and improved source code management. GitLab's integrated platform keeps Hotjar up to date with cutting-edge software, provides end-to-end visibility, and supports its all-remote culture.

CHALLENGE
Hotjar had challenges with how the growing number of developers structured their work using legacy systems, which slowed remote productivity. Developers were using Bitbucket for hosting source code and Jenkins for CI/CD; due to the constraints of some of the legacy applications, they had to develop and maintain large amounts of Jenkins-specific code to support pipelines. They were using Kubernetes as a platform for all their microservices and some of the build pipelines. Hotjar was looking for a tool that offers Kubernetes integration and a replacement for Jenkins CI/CD.

SOLUTION
Hotjar selected GitLab Silver; GitLab integrates natively with Kubernetes, which gives the development team peace of mind because they can trust that the tool will work automatically without constant maintenance. GitLab projects connect to their AWS EKS cluster, the tests run within the cluster using the Kubernetes Operator, coverage results are reported back, and then artifacts are uploaded to AWS ECR/S3. Review environments spin up inside the EKS cluster during review. Every engineering team and some of the customer support team members are using GitLab.

RESULTS
Developers save time making use of standalone review environments instead of in-the-loop shared staging environments. With most people online synchronously, an MR is reviewed in minutes or hours, so deployments are now between 2-15 per day with 50% of deployment time saved. CI build time has decreased by 30% over the previous implementation in Jenkins. With Jenkins, the teams created a lot of custom code to do much of the work that they are now getting natively with GitLab. On the code management side, they used the cloud version of Bitbucket. Now, they use GitLab.com for all of the development work and to host the CI/CD runs.

"In terms of a Kubernetes-native product that supplies the whole life cycle, we actually didn't find that many competitors." — Vasco Pinho, Team Lead, SRE at Hotjar



CONTRIBUTOR INSIGHTS

Kubernetes and DevOps


Integrating AKS Into Your CI/CD Pipeline

Boris Zaikin, Software and Cloud Architect at Nordcloud GmbH

Building a CI/CD process for an application can be a challenge, especially when you are dealing with Kubernetes and
Docker. This article will cover how Kubernetes can improve the CI/CD process with an example of a .NET Core application that
includes all deployment processes in a YAML pipeline. We will review a list of effective tools and frameworks, as well as walk
through a detailed checklist that contains key actions to help make your Kubernetes cluster production-ready.

Before diving into the complexities of integrating Kubernetes into your DevOps processes, it’s important to understand the
standard architecture of a Kubernetes cluster and how it may impact customers’ solutions.

Understanding Kubernetes Clusters


Kubernetes is a comprehensive platform with a large, fast-growing ecosystem that allows companies to run and manage
scalable applications capable of operating under high load and providing high reliability. Cloud vendors such as Google,
Microsoft, and Amazon offer services to simplify building and deploying Kubernetes clusters.

For example, Microsoft Azure provides Azure Kubernetes Services (AKS), which is a hosted service that allows you to set up your
cluster, run the application, and create or improve CI/CD processes in a short period of time.

In Figure 1 below, I've created a typical AKS cluster architecture with the most widely used components: load balancer, nodes,
and pods. I will use this architecture for the example in the article:

Figure 1: A standard Kubernetes cluster in AKS

• Node – represents a compute unit, or simply put, a virtual machine. It is used to host pods.

• Pod – a logical container (or deployable unit) that hosts an application instance wrapped in a Docker container.
Kubernetes also allows you to use other container runtimes (e.g., containerd and CRI-O).

• Load balancer – a service that distributes traffic between nodes to avoid single-node overload.

CONTROL PLANE AND NODE COMPONENTS


There are several components that are worth mentioning. The control plane is the Kubernetes decision-maker, so to speak. It
decides how to schedule pods, scale clusters, detect and respond to cluster events, gather logs, and so on. Control plane
components are usually deployed as pods to one of the cluster nodes. That's why a cluster should have at least two nodes.



However, it is recommended to have a three-node cluster in order to ensure the highest availability.

The table below lists the key components of the control plane and nodes:

Component                  Description
kube-apiserver             Provides access to the control plane and allows other tools to communicate with and perform operations on the Kubernetes cluster
kube-controller-manager    A process responsible for single operations like generating API tokens, acting when pods or nodes go down, or managing load balancers
etcd                       A reliable key-value store for cluster metadata, configuration data, and application state data
kubelet                    An agent that ensures the container runs in the pod
kube-proxy                 Maintains network rules for each node
kube-scheduler             Assigns pods to nodes and is part of the control plane

CI/CD and Kubernetes for a .NET Core Application


Kubernetes contains the following properties, which can significantly improve the CI/CD process:

• Supports declarative YAML format to declare the configuration, which allows DevOps architects to smoothly integrate it
into any pipeline.

• Supports zero-downtime deployment models (e.g., the blue-green deployment pattern in AKS).

• Integrates with DevOps platforms (e.g., Azure DevOps has all integration components for AKS in place, as well as Google
Cloud and Bitbucket).

• Is supported by popular IaC platforms and tools (e.g., Terraform, AWS CloudFormation, Pulumi, Azure Resource Manager).

• Has a command-line interface (CLI) that allows you to manage the whole cluster.

To demonstrate how Kubernetes can improve the process of deploying, managing, and scaling your application, I created an
example based on a .NET Core application. As a CI/CD platform, Azure DevOps offers effective support for AKS and other Azure
Resources.

AN EXAMPLE CI/CD PROCESS WITH AZURE DEVOPS AND AKS


The following deployment process uses a declarative approach based on the YAML pipeline (a sketch of such a pipeline follows the list):

1. Fetch the code from the Git repository

2. Build the application

3. Restore NuGet packages

4. Run unit tests

5. Build unit test and code coverage reports

6. Push docker image to the container registry

7. Deploy to the AKS cluster
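
A minimal sketch of what these steps might look like in an Azure DevOps YAML pipeline. The task names are real Azure DevOps tasks, but the repository name, service connection names, and manifest path are illustrative assumptions, not the author's actual pipeline:

trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: DotNetCoreCLI@2              # restore NuGet packages
    inputs:
      command: restore
  - task: DotNetCoreCLI@2              # build the application
    inputs:
      command: build
  - task: DotNetCoreCLI@2              # run unit tests
    inputs:
      command: test
  - task: Docker@2                     # build and push the image to the registry
    inputs:
      command: buildAndPush
      repository: dzone-app            # assumed repository name
      containerRegistry: my-acr        # assumed service connection
      tags: $(Build.BuildId)
  - task: KubernetesManifest@0         # deploy to the AKS cluster
    inputs:
      action: deploy
      kubernetesServiceConnection: my-aks   # assumed service connection
      manifests: manifests/deployment.yaml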



Figure 2: A CI/CD pipeline workflow

Tools and Frameworks That Support Kubernetes


Kubernetes is a complex system with a steep learning curve, and maintaining it can be difficult. However, there are some tools
and frameworks that can simplify those efforts by allowing you to install a set of components into your cluster. Some of these
components also validate your cluster setup and deployment configuration.

TRAEFIK
Traefik is a platform that offers a set of components for the Kubernetes cluster. It contains a load balancer, reverse proxy,
monitoring, and service mesh, which are all compatible and have the same step-by-step set-up flow, allowing cloud architects
to simplify and speed up their CI/CD processes.

ISTIO
Istio is an open-source framework that allows you to set up an all-important toolset for your cluster at once. It contains tools for
traffic management, monitoring/logging components, security, and network policies. As a disadvantage, the installation can
be complex and requires a lot of time; however, everything is well documented in its GitHub repository.

POPEYE
Popeye scans your cluster for potential issues with configuration, resources, and network holes and generates detailed reports
with all issues.

GOLDILOCKS
Goldilocks scans pods for resource limits and creates reports with recommended resources. As a small disadvantage, it requires
the Vertical Pod Autoscaler. Autoscaling comes up again in the checklist below.

K9S
K9s provides a command-line interface (CLI) that allows you to easily manage, monitor, and even benchmark your cluster in
your favorite terminal software.

KURED
Kured (Kubernetes Reboot Daemon) is a component that safely reboots your nodes and installs security updates. It has an
easy and fast YAML-based installation and configuration, and it supports different alert types and sources.

Production-Ready Kubernetes Cluster Checklist


Before running your application in the production environment, you should prepare your Kubernetes cluster and CI/CD
process. Below are important actions to ensure you are production-ready:

• Set up requests and limits for your containers and pods to avoid excessive resource usage and pod eviction issues. I
recommend also using resource quotas and limit ranges.



• Implement liveness, readiness, and startup probes, allowing Kubernetes to detect and restart containers working
incorrectly.

• Implement container lifecycle hooks, which allow you to react to container events and notice when something goes
wrong.

• Set up cluster-autoscaler and the Horizontal Pod Autoscaler, enabling you to control the load of your cluster and increase
or decrease nodes and pods for better cluster availability. It also helps save money when the cluster has low usage.

• Set up a PodDisruptionBudget (PDB) to improve pod availability (a minimal example follows this list).

• Set up a backup/restore strategy for your cluster data (e.g., you can use tools like Velero or Azure Site Recovery).

• Set up granular role-based access control (RBAC) policies to avoid all users having full access to the cluster.
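
A minimal sketch of a PodDisruptionBudget, assuming a hypothetical app: myapp label on the pods it protects:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb            # hypothetical name
spec:
  minAvailable: 2            # keep at least two pods up during voluntary disruptions
  selector:
    matchLabels:
      app: myapp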

Conclusion
In this article, I described a typical Kubernetes cluster architecture and its core components, provided a useful toolset that
simplifies working with your cluster, and touched on autoscaling as an important option for highly loaded and available
applications. The Kubernetes cluster checklist will help you prepare your cluster and application to run successfully in
production. To accompany the example above, you can find a detailed description of how to set up the AKS cluster, including
Ingress setup, deployment YAML scripts, and application source code here.

Boris Zaikin, Software and Cloud Architect at Nordcloud


@borisza | medium.com/@boriszn

I am a software and cloud architect at Nordcloud GmbH who is passionate about building
complex solutions and architectures that bring value to the business. I also work as a consultant and like
to share my knowledge with other people through my technical blogs and the technology courses I create.



Accelerate the velocity of your business applications with a cost-effective Kubernetes solution

Redapt's Kubernetes Enablement Assessment simplifies your journey to containerized applications. Within a couple of days, you'll have an ideal migration plan and architecture to modernize your applications and increase the value of your business.

Learn More

Azure Kubernetes Service (AKS) | Google Kubernetes Engine

CASE STUDY

Case Study: Mojix


Anthos at the Edge

COMPANY: Mojix
COMPANY SIZE: 200+ employees
INDUSTRY: Retail
PRODUCTS USED: Redapt Anthos
PRIMARY OUTCOME: Mojix's vertical cloud technologies — fully integrated and managed by Redapt — afford retailers winning edge solutions built for continuous digital transformation.

CHALLENGE
When Mojix, a leading software company, was developing the next generation of its retail edge platform, the company needed a way to manage thousands of in-store applications across the globe. Mojix was working on a security and supply chain software stack that could be deployed at thousands of retail locations for clients. These stacks acted much like micro datacenters, which meant Mojix needed a way to manage the stacks efficiently. Mojix had only recently moved from VMs to Kubernetes, which meant it needed a solution that could be relatively easy to onboard.

SOLUTION
Redapt developed a proof-of-concept solution built upon Google Cloud's Anthos due to its native Kubernetes support and its ability to scale the management of clusters across thousands of micro datacenters. Mojix was able to build a custom tech stack with the confidence of knowing its platform was available to handle the future management and operational needs of its customers. Mojix adopted Anthos as a foundation of its edge solution product to help deliver Kubernetes across customer platforms.

RESULTS
With Redapt's help, Mojix was able to move forward with its edge solution confidently, knowing it would have the ability to manage its micro datacenters at scale. Through the proof-of-concept results, Mojix gained intel into its Anthos deployment and integration capabilities, and the confidence to move forward with its ambitious edge-to-cloud solution.

"A lot of our success, and our customer success, is rooted in our cloud-native solutions that are IoT-enabled and powered by vertical cloud technology from the Google Cloud Platform with Anthos. We also rely on Intel's end-to-end hardware innovations, and our great relationship with Redapt as an edge service partner." — Gustavo Rivera, Mojix Senior VP of Software Engineering


CONTRIBUTOR INSIGHTS

Demystifying Kubernetes
Deployment Strategies
Choosing the Right Deployment Approach for a Reliable
Infrastructure

Sudip Sengupta, TOGAF Certified Solution Architect | Freelance Tech Writer

As per the CNCF 2019 survey, there has been steady growth in container adoption over the last four years. Seventy-eight percent of
survey respondents claimed they were using Kubernetes in production, reflecting a significant year-over-year increase in
Kubernetes users.

While it is safe to infer that Kubernetes deployments will continue to increase due to rising popularity, it is also crucial for
organizations to choose the right deployment strategy for running resilient distributed systems.

Kubernetes and Its Role in Container Orchestration


With the adoption of Kubernetes for container orchestration, an organization can manage and scale containers across clusters,
while monitoring node health for optimum performance. This empowers developers to deploy and scale services faster without
investing in trivial efforts to maintain cluster health.

IMPORTANCE OF CONTAINER ORCHESTRATION


• Improved security: Container orchestration makes it possible to share resources securely. Web application security is
enhanced through isolation since each application process can be separated into individual containers.

• Load balancing: Load balancing will help to ensure your application is always stable. When a container gets too much
traffic, Kubernetes can distribute the network traffic, thereby ensuring a stable deployment.

• Configuration management: You can securely store SSH keys, OAuth tokens, passwords, and other sensitive information
in Kubernetes. You can also update app configurations without having to rebuild the container images or without
revealing the secrets in your configuration.

• Automated bin packing: Kubernetes allows you to manage resources better. For instance, you can create a cluster of
nodes with predetermined CPU and memory slots. Kubernetes will then fit your containers to the nodes.

• Automated rollbacks and rollouts: Kubernetes allows you to define a desired state for all deployed containers and
changes the actual state toward it at a controlled rate. If an update misbehaves, you can roll the containers back to a
previous, known-good state.

KUBERNETES PLATFORM OVERVIEW


Containerized applications in Kubernetes are run by at least one worker node. A worker node refers to a working machine and
is controlled by a master node. In addition, each node hosts multiple pods that contain components of the application workload.
The nodes and pods are collectively managed by a control plane.

Here are the noteworthy components of the Kubernetes platform:

CONTROL PLANE COMPONENTS


• kube-apiserver: This is the front end of the control plane that scales horizontally, i.e., scales through the deployment of
more instances.

• etcd: This is a key-value store used as a backing store for cluster data.



• kube-scheduler: This assigns nodes to newly created pods.

• kube-controller-manager: This runs controller processes.

• cloud-controller-manager: This runs all cloud-specific controllers.

NODE COMPONENTS
• kubelet: This ensures that containers are running in the pod as expected.

• kube-proxy: This is a network proxy that ensures network rules are maintained on the nodes.

• Container runtime: This is the software charged with the task of running containers.

ADD-ONS
• DNS: This is a must-have add-on for all clusters. It is the DNS server that serves the DNS records for all services.

• Web UI: A general-purpose dashboard that allows users to manage and troubleshoot apps.

• Container resource monitoring: This keeps track of time-series metrics in a central database and provides an interface for
browsing the data.

• Cluster-level logging: This keeps track of container logs and provides an interface for searching and browsing the logs.

Kubernetes Deployment Strategies


One of the advantages of cloud-native applications is the microservice approach, which allows multiple developers to make
changes simultaneously. While this is a good thing, frequent releases can easily affect the reliability of the application and
thereby give users a bad experience. DevOps teams must, therefore, come up with a mechanism for managing deployments
in a manner that minimizes risk to the application.

There are several deployment strategies that you can use depending on your goals. For instance, you may want to conduct a
beta test before rolling out the application to all users. This would mean rolling out the changes in specific test environments
first before making it available to the public. You need to choose the right strategy in order to ensure the reliability of your
infrastructure during an app update.

Without further ado, let’s look at some prominent deployment strategies for managing successful Kubernetes applications.

RECREATE
The recreate deployment strategy is the simplest form of a Kubernetes deployment that terminates all active instances and
then creates them afresh with new versions. Though this strategy remains a popular choice, it is often not recommended for
complex cluster and application architectures. The main advantage of a recreate deployment is that the app state gets entirely
renewed.

spec:
  replicas: 3
  strategy:
    type: Recreate

WHEN TO USE RECREATE DEPLOYMENTS


• If the app can withstand a short period of downtime

• If the app doesn’t support old and new versions of code running simultaneously

• If you must migrate all data transformations before running new code

• If you are using an RWO volume, which cannot be shared amongst multiple replicas

ROLLING
A rolling deployment gradually replaces the instances of an app with the new version. The phased replacement of the app’s
pods makes sure there is always a minimum number of available pods.



This deployment strategy only kills old pods after checking that enough new pods are available to meet the availability
threshold. In other words, you can safely roll out new updates without causing any downtime. Also, in case of an
error, the process can easily be aborted without affecting the running application.

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: "10%"
    maxUnavailable: "10%"

WHEN TO USE ROLLING DEPLOYMENTS


• When you don’t want any downtime during the update

• When your app supports the running of old and new code concurrently

BLUE/GREEN
Blue/Green deployments make it possible to upgrade an app with zero downtime. With this strategy, two identical application
environments (Blue and Green) are run concurrently. However, at any time, only one of the environments is actually live while
the other one is idle. Any updates to the app are first applied to the idle version, and once all tests have been done and stability
confirmed, traffic is redirected from the live version to the idle version. This way, you can seamlessly switch from Blue to Green
without any downtime.

Creating a Blue Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-1.1.0
spec:
  replicas: 3
  selector:
    matchLabels:
      name: myapp
      version: "1.1.0"
  template:
    metadata:
      labels:
        name: myapp
        version: "1.1.0"
    spec:
      containers:
        - name: myapp
          image: myapp:1.1.0
          ports:
            - name: http
              containerPort: 80

Routing Traffic With a Service

The second manifest is the Service that fronts both environments; its selector decides which version receives live traffic:

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    name: myapp
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    name: myapp
    version: "1.1.0"
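
Once the Green version (say, a hypothetical myapp:1.1.1 Deployment created the same way) has been verified, switching traffic is a matter of repointing the Service selector. A minimal sketch:

# flip live traffic from Blue (1.1.0) to Green (1.1.1) by patching the selector
kubectl patch service myapp -p '{"spec":{"selector":{"name":"myapp","version":"1.1.1"}}}'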

WHEN TO USE BLUE/GREEN DEPLOYMENTS


• Disaster recovery: If serious issues arise after deploying the Green version, a router can direct traffic to the Blue version
without any downtime.

• Continuous integration: Blue/Green deployments make it possible to push software live quickly and continually update
it with minimal risk to new releases.

• Testing in production: Some bugs can only be discovered by testing the app in production. This Blue/Green deployment
strategy makes it possible to test the app without the risk of bad user experience.

CANARY
Canary deployment is a method for conducting incremental rollouts by running the new version of the app alongside the last known
stable version and comparing the two to determine whether the new deployment should be rejected or promoted.

This is typically done by gradually deploying the new version to a subset of live users and comparing their experience with the
rest of the users who are using the old version. The steps of canary deployment are as follows:

1. Deploy one or more canary servers

2. Test and observe the deployment to see if it works as expected

3. Deploy the tested release to the rest of the servers

Canary deployment helps developers to discover potential issues when the new version is only available to a small number of
users. Any errors can, therefore, be fixed first before the release is applied to all servers.

canaryDeploy:
  title: "CANARY ${{CF_SHORT_UPDATE}}"
  image: myapp/darwin:main
  environment:
    - WORKING_VOLUME=.
    - SERVICE_NAME=test-app
    - DEPLOYMENT_NAME=test-app
    - TRAFFIC_INCREMENT=10
    - NEW_VERSION=${{CF_SHORT_UPDATE}}
    - SLEEP_SECONDS=30
    - NAMESPACE=canary
    - KUBE_CONTEXT=TestCluster

WHEN TO USE CANARY DEPLOYMENTS


• If multiple versions of the app are required to run concurrently and get live traffic

• If the app doesn’t use any sticky session mechanism as some users might hit a canary server in one request and a
production server in another

A/B TESTING
A/B testing is a deployment strategy where multiple variants of the app are run in parallel, traffic is split between them based on
attributes such as HTTP headers and cookies, and analytics are then used to pick the best variant based on user behavior. In some
cases, new features can be made provisionally available to a select number of users just to test them out and see whether the new
features will be accepted. A/B testing is not native to Kubernetes, so you might need to set up external components like Istio,
Traefik, and Linkerd.



route:
  - tags:
      version: v1.1.0
    weight: 80
  - tags:
      version: v2.1.0
    weight: 10

WHEN TO USE A/B TESTING


• When you need mature deployment options, e.g., if you want to see the execution time of all or some of the
microservices, or if you want to identify any bottlenecks

• If you want to achieve minimal response latency

• If your app needs high performance with no resource limitations

SHADOW
Under the shadow deployment strategy, production traffic is copied to a non-production service for testing purposes.
Shadowing is similar to the canary and Blue/Green deployment strategies, except it has some distinct applications where
the other strategies might not be ideal. For instance, shadowing traffic would be perfect for testing critical apps (e.g., payment
gateways) that may not have room for reverting changes.

apiVersion: darwin.io/v1
kind: Mapping
metadata:
  name: newservice
spec:
  prefix: /newservice/
  service: newservice.default
  shadow: true

WHEN TO USE TRAFFIC SHADOWING


• Shadowing has no production impact and would, therefore, be a great deployment strategy for testing persistent services

• When you want to measure how a service behaves with respect to your expected outcomes

• When you want to test a new version on real traffic with zero production impact

Key Takeaways
In essence, with this list of deployment strategies to choose from, it is essential to choose the one most suited to your
requirements. If you are looking to release to a staging/development environment, recreate would be the preferred choice. On
the contrary, Blue/Green would be ideal for production, while rolling and canary deployments would be good options when you
are unsure about the impact of the release. If your business needs to test an app with different users, then you may want to try
A/B testing. Lastly, you can use shadowing if you want to test your new app on real traffic without impacting production.

With these considerations in mind, opting for the right deployment strategy should get a lot easier.

Sudip Sengupta, TOGAF Certified Solution Architect | Freelance Tech Writer

Sudip is a TOGAF Certified Solutions Architect with more than 15 years of experience working for global
majors such as CSC, Hewlett Packard Enterprise, and DXC Technology. Sudip is now a full-time freelance
tech writer, covering a wide range of topics like cloud, DevOps, SaaS, cybersecurity, and ITSM. When not
reading and writing, he can be found on a squash court or playing a game of chess.



The future of scale-out, cloud-native data is Apache Cassandra™

Build cloud-native apps fast with DataStax Astra DBaaS, built on Kubernetes and Cassandra. Get started in minutes with 5 GB free.

Get Started

"Astra is hands-down the best solution for Cassandra developer productivity. It eliminates all of the overhead involved in setting up Cassandra. With Astra, developers can fully automate their CI/CD pipelines for Cassandra support. This means they can concentrate on more important tasks." — Robert Reeves, CTO, Datical

#DataStax
CASE STUDY

Case Study: FamilySearch


Connecting relatives across generations with DataStax Enterprise

COMPANY: FamilySearch
COMPANY SIZE: 1,000+
INDUSTRY: Nonprofit
PRIMARY OUTCOME: FamilySearch enables more people around the world to learn more about their family histories and connect with relatives they never knew existed.

CHALLENGE
FamilySearch is the largest genealogy organization in the world, routinely serving 125 million transactions per hour during peak usage from its more than 500,000 users spread out across the world. As the organization grew in popularity, they began struggling with their legacy database technology as it strained to meet their customers' experience expectations. Expecting traffic to grow up to 100x over the next three years, FamilySearch needed to migrate to a scalable database solution that was highly performant, highly available, and could support the organization's rapid growth with zero downtime.

SOLUTION
After comparing several relational and NoSQL databases, FamilySearch selected DataStax Enterprise (DSE), the NoSQL database built on open-source Apache Cassandra™, due to its distributed nature, scalability, and high performance. DSE's active-everywhere architecture delivers 100 percent availability with no downtime — even during traffic surges and cluster maintenance. FamilySearch also worked with DataStax to develop its own framework for creating new applications, using DSE in conjunction with DSE Search and OpsCenter to deliver powerful experiences to their users.

"FamilySearch helps our customers search for their ancestors and contribute to their family history. DataStax Enterprise provides the scalable data platform we need to expand our offering and continue providing a great experience for our customers." — Michael Nelson, Software Development Manager, FamilySearch

RESULTS
The decision to migrate to DataStax Enterprise proved to be a wise one. Within two weeks of going live with DSE, FamilySearch would have hit the capacity limit of their previous system, and DSE was able to accommodate that traffic easily. DSE delivered:

• An improved customer experience, with faster response times, high availability, and no database downtime.

• Rapid scalability, with seamless support for 125 million transactions per hour and a technological foundation that can grow alongside them into the future.

• New services like Record Hints, which helps users make new research discoveries and further improves the user experience.


The Kubernetes Enablement Company

Trusted by over 20,000 users in production-grade clusters.

Kubernetes Managed Service | Kubernetes Audit & Improve | Policy-Driven Configuration Management | Open Source Tooling

www.fairwinds.com | +1 617-202-3659 | sales@fairwinds.com
CASE STUDY

Case Study: Zonar Systems

Fairwinds Managed Kubernetes Helps Zonar Modernize and Save Time and Money

Zonar has pioneered smart fleet management solutions throughout the vocational, pupil, mass transit, and commercial trucking industries. Zonar's mission is to enhance the safety, performance, and success of their customers by transforming the delivery of innovative insights for commercial fleets around the world.

COMPANY: Zonar Systems
COMPANY SIZE: 500 employees
INDUSTRY: Fleet Management
PRODUCTS USED: Fairwinds Managed Kubernetes Services

CHALLENGE
When Zonar's infrastructure could no longer handle variable loads and increased volume, they strategically decided not to increase their large data center footprint. Instead, the company's engineering team evaluated the performance improvements and efficiencies a containerized infrastructure would bring. Zonar knew what Kubernetes offered with regard to scaling and delivery configuration, but lacked hands-on experience operating and maintaining the container orchestration system and building Kubernetes-based applications and services.

SOLUTION
Zonar partnered with Fairwinds to implement a brand-new architecture on Google Kubernetes Engine (GKE) that allowed new and migrated applications and services to receive data from the existing infrastructure. This new architecture enabled Zonar to migrate legacy applications and services where it made sense for the business while also implementing all-new applications and services to run on Kubernetes.

Fairwinds worked closely with the team to help them implement Kubernetes best practices, modernize development processes, and adopt services and practices that have greatly accelerated software delivery. Fairwinds collaborated with Zonar to select and use the appropriate CI/CD pipelines, modern monitoring and logging approaches, and Kubernetes-specific package management with Helm.

"Fairwinds has saved us time and money by providing expert cloud services guidance, consulting, and implementation. Every step of the way, they've trained our team and increased our knowledge base, allowing us to focus on building services instead of maintaining infrastructure. More than anything else, they have enabled a degree of automation that has shifted the culture of our engineering organization to one where teams can operate and own their entire service stack." — Arun Jacob, Senior Vice President, Software, Zonar Systems

RESULTS
Today, Zonar is running 130 applications and services in production. Now that teams are building cloud-native applications, they have adopted a model where Fairwinds is primarily running the infrastructure. Shifting operational responsibility from a central IT organization to the application and service teams is allowing for much faster issue resolution and improvement. Kubernetes has automated deployment, scaling, and failure remediation, and all changes to codebases are automatically integrated and validated. The speed at which changes can be made has gone from weeks to minutes.


ADDITIONAL RESOURCES

Diving Deeper Into


Kubernetes
BOOKS

Kubernetes Patterns: Reusable Elements for Designing Cloud-Native Applications
By Bilgin Ibryam and Roland Huß
Due to the rise of microservices and cloud-native architectures, Kubernetes patterns and tools are more important than ever. Learn more about Kubernetes design elements for cloud-native applications, including foundational, behavioral, structural, configurational, and advanced patterns.

Kubernetes in Action: 1st Edition
By Marko Luksa
In this complete guide, learn more about developing and running applications in a Kubernetes environment. Not only does this book explore the Kubernetes platform, it also provides a detailed overview of technologies like Docker and how to get started setting up containers.

Kubernetes: Up & Running: Dive into the Future of Infrastructure
By Kelsey Hightower, Brendan Burns, and Joe Beda
This book dives into the Kubernetes cluster orchestrator and how its tools and APIs can be used to improve the development, delivery, and maintenance of distributed applications.

REFCARDS

Getting Started With Kubernetes
Containers weighing you down? Kubernetes can scale them. In order to run and maintain successful containerized applications, organization is key. This Refcard has all you need to know about Kubernetes, including key concepts, how to successfully build your first container, and more.

Advanced Kubernetes
Kubernetes is a distributed cluster technology that manages container-based systems in a declarative manner using an API. There are currently many learning resources to get started with the fundamentals of Kubernetes, but there is less information on how to manage Kubernetes infrastructure on an ongoing basis. This Refcard aims to deliver quick, accessible information for operators using any Kubernetes product.

PODCASTS

Kubernetes Podcast
Considering that Google produces it (and that Google also created Kubernetes in 2014), you might call this podcast a classic. Enjoy weekly interviews with prominent tech folks who work with K8s.

PodCTL | Enterprise Kubernetes
Produced by Red Hat OpenShift, this podcast covers everything related to enterprise Kubernetes and OpenShift, from in-depth discussions on Operators to conference recaps.

The Byte
Looking for more on cloud and containers? Tune into each episode of "The Byte" for to-the-point "byte-sized" material on cloud, containers, and more.

TREND REPORTS

Cloud Native
Questions around how to efficiently manage microservices, accelerate deployments, and make applications scalable are answered through cloud-native technology. Cloud-native is all about taking advantage of the cloud in every way possible. This results in faster, more efficient ways to run, develop, and deploy applications — every aspect of your application infrastructure has been adopted and implemented with the cloud in mind. In this report, we detail key findings from our original research and address how cloud native will add business value, the role of microservices in adopting cloud-native technology, and what business executives are saying about cloud native.

Migrating to Microservices
DZone Trend Reports expand on the tech content that our readers say is most helpful, including thought leadership and in-depth, original DZone research. The Migrating to Microservices Trend Report features expert predictions on the next phase of microservices adoption in the enterprise, as well as insights into some challenges and opportunities presented by current usage patterns.

ZONES

Cloud
The Cloud Zone covers the host of providers and utilities that make cloud computing possible and push the limits (and savings) with which we can deploy, store, and host applications in a flexible, elastic manner. The Cloud Zone focuses on PaaS, infrastructures, containerization, security, scalability, and hosting servers.

Microservices
The Microservices Zone walks you through breaking down the monolith step by step and designing microservices architectures from scratch. It covers everything from scalability to patterns and anti-patterns and digs deeper than just containers to give you practical applications and business use cases.



K8s + CockroachDB = Effortless App Deployment
Run your application on the cloud-native database uniquely suited to Kubernetes.

• Scale elastically with distributed SQL: Say goodbye to sharding and time-consuming manual scaling.
• Survive anything with bullet-proof resilience: Rest easy knowing your application data is always on and always available.
• Build fast with PostgreSQL compatibility: CockroachDB works with your current applications and fits how you work today (see the connection sketch below).
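The PostgreSQL compatibility claim is concrete: CockroachDB speaks the PostgreSQL wire protocol, so standard Postgres clients and drivers generally connect unchanged. A minimal sketch, assuming a reachable cluster; the host, user, and database names below are placeholders, and 26257 is CockroachDB's default SQL port.

# Connect with the stock psql client, exactly as you would to Postgres
# (placeholder host and credentials; adjust sslmode for your cluster).
psql "postgresql://appuser@crdb.example.com:26257/orders?sslmode=require"

# Ordinary SQL works once connected, for example:
#   CREATE TABLE carts (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), total DECIMAL);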

Get started for free today


cockroachlabs.com/k8s

Trusted by innovators
CASE STUDY

Case Study: Bose
How Bose Built a Cloud-Connected Device Platform with Kubernetes and CockroachDB

Bose is a leading audio equipment design and development company, best known for its home and car audio systems and noise-canceling headphones.

COMPANY
Bose

COMPANY SIZE
9,000 people | $3.9B Market Cap

INDUSTRY
Electronics

PRODUCTS USED
CockroachDB and Kubernetes

CHALLENGE
Bose has built a cloud platform that allows customers to connect all Bose-owned devices to play music at once, while also providing all updates and patches for these devices. Bose initially built this application on MySQL, but they wanted a database that would allow them to leverage a multi-region deployment in AWS. Additionally, Bose has customers all across the world and needed a database that could scale to different regions with low latencies.

A major challenge was to find a cost-effective, globally scalable data store that would be easy for microservice developers to work with.

PRIMARY OUTCOME
Bose delivers a resilient, scalable, across-the-globe cloud platform to their customers that uses CockroachDB at its foundation.

SOLUTION
Bose turned to CockroachDB on Kubernetes, which addressed Bose's essential project goals. It allowed them to build their applications on microservices and provide ultra-low latencies for their geographically distributed users. With CockroachDB, Bose has the flexibility to run in many AWS regions in a cost-effective way. The SQL interface makes it easy for Bose developers to interact with the database and scale their high-transaction workloads. CockroachDB has also provided incredible scale, which will allow Bose to go global across many different data centers. (A minimal sketch of deploying CockroachDB on Kubernetes follows the results below.)

"CockroachDB provides incredible resiliency. We have literally been unable to kill this thing no matter what we have thrown at it. It has been incredibly scalable. And we have plans to go global across many different data centers. CockroachDB is really setting us up for success."
— Dylan O'Mahoney, Principal Cloud Architect

RESULTS
• An incredibly resilient infrastructure that has been nearly impossible to kill
• Always-on availability and zero RTO
• A cost-effective, scalable, global data store compatible with microservices
• A global database for scaling across many geographic regions
• Low latencies and optimal performance for distributed users
• High transaction loads in a database that scales easily
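For readers who want to try this pairing of CockroachDB and Kubernetes themselves, the sketch below uses Cockroach Labs' public Helm chart. It is a minimal, illustrative setup, not Bose's actual topology; the release name, namespace, and replica count are placeholders, and the SQL shell step assumes an insecure demo cluster rather than a TLS-secured production one.

# Install a small three-node CockroachDB cluster from the official chart.
helm repo add cockroachdb https://charts.cockroachdb.com/
helm repo update
helm install crdb cockroachdb/cockroachdb \
  --namespace databases --create-namespace \
  --set statefulset.replicas=3

# Open a SQL shell once the pods are ready (insecure demo mode assumed).
kubectl run crdb-client --rm -it --image=cockroachdb/cockroach -- \
  sql --insecure --host=crdb-cockroachdb-public.databases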



INTRODUCING THE
Cloud Zone
Container technologies have exploded in popularity, leading to diverse use
cases and new and unexpected challenges. Developers are seeking best
practices for container performance monitoring, data security, and more.

Keep a pulse on the industry with topics such as:

• Testing with containers
• Monitoring container performance
• Keeping containers simple
• Deploying containers in your organization

VISIT THE ZONE

TUTORIALS | CASE STUDIES | BEST PRACTICES | CODE SNIPPETS
