Multicloud Architecture Migration and Security
The Benefits and Challenges of Using Cloud Edge Solutions
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Multicloud Archi‐
tecture Migration and Security, the cover image, and related trade dress are trade‐
marks of O’Reilly Media, Inc.
The views expressed in this work are those of the authors, and do not represent the
publisher’s views. While the publisher and the authors have used good faith efforts
to ensure that the information and instructions contained in this work are accurate,
the publisher and the authors disclaim all responsibility for errors or omissions,
including without limitation responsibility for damages resulting from the use of or
reliance on this work. Use of the information and instructions contained in this
work is at your own risk. If any code samples or other technology this work contains
or describes is subject to open source licenses or the intellectual property rights of
others, it is your responsibility to ensure that your use thereof complies with such
licenses and/or rights.
This work is part of a collaboration between O’Reilly and Oracle Dyn. See our state‐
ment of editorial independence.
978-1-492-05039-1
Table of Contents
4. Multicloud Security Use Cases
    DNS Resiliency and Traffic Steering
    Bot Management
    API Protection
    Application-Layer DDoS Protection
    Network-Layer DDoS Protection
    Deep Internet Monitoring: Data Intelligence
    Combined Policy, Management, and Visibility
    The Edge Allows for Simpler Managed Services Offerings
    Conclusion
CHAPTER 1
Why Multicloud Architecture?
Amazon, Microsoft, or Google datacenter; or residing at a special‐
ized hosting provider. The security team is responsible for the sys‐
tem’s integrity regardless of where it resides.
These challenges are certainly daunting, but they are not insur‐
mountable. The goal of this book is to help managers and leaders
understand the challenges of migrating to and securing a multicloud
infrastructure. In this book, you learn about ways to manage and
orchestrate multicloud environments, including some “edge” tech‐
nologies that are designed to secure and protect your environment.
Case Study
In late February 2017, a series of cascading failures caused one of
the largest cloud providers to be unavailable for four hours. Four
hours might not seem like a very long time, but that was four hours
that Netflix, Spotify, Pinterest, and hundreds of other sites were not
There are other ways to enable this kind of load balancing. Some
organizations use advanced DNS services to replicate this kind of
architecture.
Figure 1-2 shows a similar active-passive multicloud architecture.
This solution has the same high-level architecture. The difference is
that the systems in the second cloud provider’s datacenter remain
dormant unless there is a failure of some sort at the first cloud pro‐
vider. That failure could be physical in nature, such as server crashes
or interruptions in network services. But it could also be a threshold
failure in which, for example, the site experiences a significant spike
in traffic because of an advertising promotion or it begins to trend
on social media.
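To make the idea concrete, the following is a minimal sketch of health-check-driven failover written in Python. The zone, addresses, and thresholds are hypothetical placeholders, and the repointing step assumes a name server that accepts standard RFC 2136 dynamic updates via the dnspython library; most managed DNS providers offer their own failover features or APIs instead, so treat this as an illustration of the logic rather than a production design.

```python
# health_failover.py -- a minimal sketch of active-passive failover driven by
# an external health check. Hostnames, addresses, and the RFC 2136-capable
# DNS server below are hypothetical placeholders.
import time
import requests                      # pip install requests
import dns.update                    # pip install dnspython
import dns.query

ZONE = "example.com."
RECORD = "www"
PRIMARY_IP = "203.0.113.10"          # cloud provider A (active)
STANDBY_IP = "198.51.100.20"         # cloud provider B (dormant)
DNS_SERVER = "192.0.2.53"            # authoritative server accepting dynamic updates
FAILURES_BEFORE_FAILOVER = 3

def primary_healthy() -> bool:
    """Return True if the active deployment answers a simple HTTP probe."""
    try:
        return requests.get(f"http://{PRIMARY_IP}/healthz", timeout=5).status_code == 200
    except requests.RequestException:
        return False

def point_record_at(ip: str) -> None:
    """Repoint the record at the given address via an RFC 2136 dynamic update."""
    update = dns.update.Update(ZONE)
    update.replace(RECORD, 60, "A", ip)   # short TTL so resolvers notice quickly
    dns.query.tcp(update, DNS_SERVER)

failures = 0
while True:
    failures = 0 if primary_healthy() else failures + 1
    if failures >= FAILURES_BEFORE_FAILOVER:
        point_record_at(STANDBY_IP)       # activate the passive provider
        break
    time.sleep(30)
```

In practice, the advanced DNS services mentioned above perform this health checking and repointing for you, typically with faster detection and with anycast-backed propagation.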
living document that must be constantly updated and shared with all
stakeholders, including your cloud providers, every time a change is
made. If you do not keep the documentation current and regularly
share it, one team could make a change that interferes with another
team’s change and crash the entire system.
Again, the benefit to doing this correctly is an unprecedented level
of resiliency and uptime that would be almost impossible to repli‐
cate using your own datacenters. Even if you could, the cost of doing
so would be daunting.
Development Agility
The last trade-off in considering the move to a multicloud architec‐
ture is development agility. A multicloud architecture requires the
ability to create code that can run on multiple cloud providers’ plat‐
forms, ideally with no changes to the code from one provider to the
next. Development standards within your organization help take
care of this: even if the code base is messy, consistent standards mean
every provider runs that same code base rather than a provider-specific fork.
Moving an application to a multicloud architecture might require
your team to revisit application code. The code will need to be
cleaned up and tested across your cloud providers to ensure that
pushing out updates won’t create problems that will make one of the
cloud instances unavailable. But it is more than just editing your
code; there are other tasks that you need to complete, such as mak‐
ing sure your provider has the libraries, language compilers/
runtime, and server versions available in their repositories for what‐
ever containers you are running. A good example is the Java
Development Kit (JDK): you might find that only an OpenJDK build
can be installed. Another example is that the available version of
Apache Tomcat might implement a significantly newer servlet
specification than your application was written for.
Another problem could be a legacy system in your stack that depends
on something a cloud provider no longer offers, such as an older,
unsupported relational database. This can add expenses for the
developer time needed to plan the upgrade and data migration, as
well as unexpected license costs. None of this is code cleanup, but
the time and cost must be factored into the move.
Development agility is about more than just clean code that can run
across multiple cloud environments. It also means adapting to the
reality of more frequent updates to the code base. It means updating
your code multiple times a month or week instead of a couple of
times a year. The development team also needs to be able to rapidly
write and test new security patches and deploy them quickly without
taking the application offline. Truthfully, this is already a develop‐
ment reality, and being in the cloud to do it is not a requirement for
this type of development process. Agile development is how a lot of
businesses operate, irrespective of where the application resides.
Given that the Agile development process is the new norm, what
changes for most organizations when migrating to a multicloud
infrastructure is neither the code nor the development cycle. It is the
DevOps process and the Continuous Integration (CI) tools that run
DevOps. You now have to package, deploy, and run automated tests
on multiple cloud providers. And maybe one app is on just one
cloud provider, but another application that needs the redundancy is
on multiple cloud providers. You are going to want your develop‐
ment cycle to automatically provision and deploy to the appropriate
place. And your local development environment likely isn’t a cloud
provider (although it could be).
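As a rough illustration of that idea, the sketch below shows a CI deployment step, written in Python, that pushes the same build artifact to every provider an application is configured for and then runs a smoke test. The application names, manifest layout, and deploy.sh helper are hypothetical placeholders rather than part of any particular CI tool.

```python
# deploy_matrix.py -- a minimal sketch of a CI step that deploys one build
# artifact to every cloud provider an application is configured for, then runs
# a smoke test. The manifest layout and deploy.sh helper are hypothetical.
import subprocess
import requests   # pip install requests

APPS = {
    # single-cloud application
    "reporting": {"providers": ["provider-a"],
                  "health_url": "https://reporting.example.com/healthz"},
    # redundant application deployed to two providers
    "storefront": {"providers": ["provider-a", "provider-b"],
                   "health_url": "https://www.example.com/healthz"},
}

def deploy_to(provider: str, app: str, image: str) -> None:
    # Placeholder: in a real pipeline this would call the provider's CLI or
    # your orchestration tool (e.g., kubectl with a per-provider context).
    subprocess.run(["./deploy.sh", provider, app, image], check=True)

def smoke_test(url: str) -> None:
    assert requests.get(url, timeout=10).status_code == 200, f"smoke test failed: {url}"

def release(app: str, image: str) -> None:
    config = APPS[app]
    for provider in config["providers"]:
        deploy_to(provider, app, image)   # same artifact, every target cloud
    smoke_test(config["health_url"])      # verify the application is reachable

if __name__ == "__main__":
    release("storefront", "registry.example.com/storefront:1.4.2")
```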
Evaluation
If you have read this far, you might be asking, how do I determine
whether my project and organization are ready for multicloud? It is
not hyperbole, or even controversial, at this point to state that the future
of most IT projects will be in the cloud. The benefits of cost savings,
resiliency, and new feature enhancements that organizations realize
by moving to the cloud are too great to ignore. Moving to the cloud
allows an organization to focus on its core competencies instead of
managing a datacenter.
patching cycles are, and how to get support if something goes
wrong. You also need to share that information with your team so
that everyone knows the limitations of each provider and is empow‐
ered to open support tickets.
Conclusion
Migrating applications to a multicloud architecture will increase
resiliency while saving your organization money in the long run.
But multicloud migrations come with challenges and short-term
costs. Before jumping into a multicloud infrastructure, it is impor‐
tant to take a step back and understand the maturity of your appli‐
cation and your team, and to make sure the organization is ready for
the complexity of a multicloud deployment.
After completing an internal assessment, the next step is to under‐
stand which providers will meet your needs and what their offerings
are. Then, it’s time to design the architecture, working with a cloud
architect whenever possible. This is also the time to document the
new design for your application and make any changes needed to
support the new architecture.
CHAPTER 2
Multicloud Infrastructure
Orchestration and Management
organizations to easily redeploy an application from one cloud pro‐
vider to another.
Containers have additional security benefits, as well. By stripping
out unnecessary tools from the container and leaving just the appli‐
cation and its dependencies, you create a smaller footprint for bad
actors to attack. Even if an attacker does manage to exploit an
unpatched vulnerability on a container, they will have a difficult
time. That’s because the native tools an attacker normally uses to
move around the system will not be there. After the attacker is
detected, it is simply a matter of destroying the container and
replacing it with a newly patched version.
Even though containers bring added benefits, they also come with
challenges, including management, security, and complexity con‐
cerns, which we discuss in this chapter.
Kubernetes on Multicloud
Kubernetes is the most widely adopted container-management and
orchestration tool, especially when it comes to multicloud environ‐
ments. Originally developed by Google, Kubernetes is currently sup‐
ported by Google Kubernetes Engine, AWS, Microsoft Azure, Oracle
Cloud Infrastructure, OpenStack, and a host of other cloud
providers.
The key to Kubernetes is its flexibility. It allows you to deploy fully
configured systems across all cloud providers in your multicloud
architecture. You begin by building the various components necessary
to run the web application and clustering those components together.
You can then reuse these clustered components across different
workloads, deploying them as needed.
Using the Kubernetes application programming interface (API), it is
easy to quickly deploy new systems and clusters of systems to differ‐
ent cloud providers. It’s also easier to deal with security patches.
When a vulnerability is announced and patched, you can build and
automatically deploy a new container across your entire architec‐
ture. This makes it less likely that a vulnerable system will remain
exposed on the internet for long periods of time.
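The following sketch, which uses the official Kubernetes Python client, shows what "one manifest, many providers" can look like in practice. The kubeconfig context names and container image are hypothetical, and each context is assumed to point at a cluster hosted by a different cloud provider.

```python
# multicloud_deploy.py -- a minimal sketch of pushing the same Deployment to
# Kubernetes clusters hosted by different cloud providers, using the official
# Python client (pip install kubernetes). Context names and image are placeholders.
from kubernetes import client, config

CONTEXTS = ["provider-a-cluster", "provider-b-cluster"]   # one per cloud provider

def build_deployment(image: str) -> client.V1Deployment:
    container = client.V1Container(
        name="web",
        image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=spec,
    )

def deploy_everywhere(image: str) -> None:
    deployment = build_deployment(image)
    for context in CONTEXTS:
        # Each context points at a cluster on a different provider,
        # but the manifest pushed to each is identical.
        api_client = config.new_client_from_config(context=context)
        apps = client.AppsV1Api(api_client)
        apps.create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_everywhere("registry.example.com/web:2.0.1")
```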
Kubernetes is very intelligent in its management of deployed con‐
tainers. It maintains awareness of the state of each container and
monitors system resources being used on each container. As previ‐
ously mentioned, you can configure Kubernetes to deploy new con‐
tainers automatically when CPU resources reach a certain threshold.
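For example, the CPU-threshold behavior just described is typically expressed as a HorizontalPodAutoscaler. The sketch below creates one through the same Python client, targeting the hypothetical "web" Deployment from the previous example.

```python
# autoscale.py -- a minimal sketch of CPU-threshold scaling expressed as a
# HorizontalPodAutoscaler (autoscaling/v1) created through the Python client.
from kubernetes import client, config

def create_cpu_autoscaler(context: str) -> None:
    api_client = config.new_client_from_config(context=context)
    autoscaling = client.AutoscalingV1Api(api_client)
    hpa = client.V1HorizontalPodAutoscaler(
        api_version="autoscaling/v1",
        kind="HorizontalPodAutoscaler",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web"
            ),
            min_replicas=3,
            max_replicas=20,
            target_cpu_utilization_percentage=70,   # add pods past 70% CPU
        ),
    )
    autoscaling.create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa
    )

if __name__ == "__main__":
    for ctx in ["provider-a-cluster", "provider-b-cluster"]:
        create_cpu_autoscaler(ctx)
```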
Provisioning
One of the goals of DevOps is to quickly incorporate new require‐
ments into the application development process. This makes the
methodology a perfect fit for building container-based multicloud
Management
Proper management of resources is critical for successful DevOps
and multicloud deployments. Multicloud deployments are inher‐
ently complex with many moving components, which is why man‐
agement tools like Kubernetes are so important. But a tool is only as
good as the management policies behind it.
Proper multicloud management enhances the DevOps process by
providing a standardized view into the deployed systems and uni‐
versal logging of the deployed containers. Having access to logs for
troubleshooting and to better understand the resources being used
can be invaluable when it’s time to improve code or create new code.
Good management also empowers organizations to provide custom‐
ized information technology services to DevOps teams. Having mul‐
tiple cloud providers means that DevOps teams can access a menu
of services available to them. With proper management of these
services, organizations can essentially offer Infrastructure as a
Service (IaaS) to their own DevOps teams.
Conclusion
Truly adaptive multicloud architecture makes use of containers for
rapid deployment across different cloud providers. Although the use
of containers greatly improves the agility of your organization’s
deployment, it also means greater complexity.
The best way to deal with that complexity is to use a container
orchestration tool. The orchestration tool enables users to centralize
and automate the deployment, management, and monitoring of all
containers across all cloud providers.
This type of deployment requires well-documented underlying pro‐
cesses and policies, including security policies, to ensure that your
organization consistently and securely deploys that infrastructure
across all cloud providers.
CHAPTER 3
Security in
Multicloud Environments
Edge Management Principles
IT security has never been easy. But one thing that used to be easier
was the accepted definition of the network “edge.” The phrase edge
of the network used to refer to the firewall, or possibly the gateway
routers. Everything behind that easily defined demarcation was the
responsibility of the security team. Everything else was not. This
resulted in “castle and moat” analogies that persist today about
organizational boundaries. But those boundaries have eroded in
recent years.
As more operations have moved to the cloud, the definition of the
edge has changed. Regardless of your organization’s definition of the
edge, however, the principles of edge management have not
changed.
Network Monitoring
To this point we have discussed building complex multicloud envi‐
ronments with multilayered security protections in place. But we
also need to ensure that everything stays up and running, even
though the complexity of multicloud deployments makes monitor‐
ing infrastructure difficult. In this section, we discuss network mon‐
itoring; the next section discusses security monitoring.
The term “network monitoring” is actually a bit of a misnomer. Cer‐
tainly, monitoring traffic flows on the network is important, and a
DNS Resiliency
There are a couple of ways that the appropriate DNS architecture
can improve the resiliency of your multicloud architecture. The first
is by hosting primary and secondary DNS name servers on separate
networks and different provider platforms. When registering a new
domain, the registrar asks for a list of name servers. Most people
default to the servers the registrar provides to them. Or, if they have
a separate DNS provider, they use the name servers given by that
provider.
But the DNS protocol allows for primary and secondary name
servers. A primary name server is the authoritative name server that
hosts the zone file, which contains the relevant information for a
domain name or subdomain. The secondary name server receives
automatic updates from the primary and does not need to reside on
the same network. In fact, hosting the secondary name service on a
different network is highly recommended. This increases the resil‐
iency of the DNS setup and means that even if there were a complete
outage within the primary provider, the secondary DNS service
would continue to serve at least some traffic.
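A quick way to sanity-check this separation is to look up your domain's NS records and see whether their addresses fall on different networks. The following Python sketch does that with the dnspython library (version 2.0 or later); the domain is a placeholder, and grouping addresses by /24 is only a rough proxy for "different network," so treat the output as a prompt for further investigation rather than a verdict.

```python
# ns_diversity.py -- a minimal sketch that lists a domain's authoritative name
# servers and the networks their addresses fall in, as a quick check that
# primary and secondary DNS are not hosted on a single provider's network.
# Install dnspython first (pip install dnspython); the domain is a placeholder.
import ipaddress
import dns.resolver

def nameserver_networks(domain: str) -> dict:
    """Map each NS hostname to the /24 networks (IPv4) of its addresses."""
    results = {}
    for ns in dns.resolver.resolve(domain, "NS"):
        host = str(ns.target)
        addresses = [str(a) for a in dns.resolver.resolve(host, "A")]
        networks = {
            str(ipaddress.ip_network(addr + "/24", strict=False))
            for addr in addresses
        }
        results[host] = networks
    return results

if __name__ == "__main__":
    nets = nameserver_networks("example.com")
    for host, networks in nets.items():
        print(host, sorted(networks))
    all_networks = set().union(*nets.values())
    if len(all_networks) < 2:
        print("Warning: all name servers appear to sit on one network.")
```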
The second way that DNS providers promote resiliency is with the
implementation of anycast addressing on their authoritative name
servers. Anycast is a routing technique commonly (but not always, so
check with your provider) implemented by DNS providers as a way
to improve availability and speed up responses to DNS queries.
Anycast allows multiple, geographically diverse servers to
share the same IP address. For example, the “hints” file that sits on
every recursive server points to 13 root servers, but those 13 servers
actually mask more than 600 servers dispersed around the world.
The IP address for each root server is an anycast address for multi‐
ple servers.
When choosing or changing DNS providers, it is important to find
out whether the authoritative DNS servers used for your domain sit
behind anycast IP addresses. Anycast doesn’t just act as a force
multiplier; it also speeds up responses by relying on existing routing
protocols to deliver each DNS query to the closest server on the
network. DNS providers using anycast therefore don’t just increase
resiliency by placing multiple DNS servers behind an anycast
address; they also increase performance by ensuring that the closest
server answers each query.
For starters, because these services have seen thousands of bots, they
have the ability to detect bot traffic earlier than if you were trying to
do it yourself. They effectively identify patterns because they are
monitoring for signs of bot traffic across all of their customers, not
just you. They can even detect bot traffic that is operating in “low
and slow” mode, avoiding detection by accessing the target web
application infrequently and from a range of IP addresses designed
to look innocuous.
These services also have ways of challenging potentially suspicious
traffic, while not disrupting service if the traffic is legitimate. One
way that sites manage this type of behavior is through the use of
CAPTCHAs, which are little challenges that are designed to distin‐
guish human from bot. If you have ever seen the question, “How
many of these pictures have traffic lights?” or “How many images
contain cars?” you have experienced a CAPTCHA challenge.
Unfortunately, bots are getting very good at solving CAPTCHAs—
some bots are better at it than a lot of people. Rather than relying on
faulty CAPTCHAs to distinguish humans from bots, bot manage‐
ment services will try JavaScript challenges and other methods of
querying the browser to make that distinction. Because bots don’t
have full browsers behind them, they almost always fail these types
of challenges.
Bot management services can significantly reduce the amount of
malicious bot traffic that reaches your web application. Cloud-based
bot management services can be quickly deployed across a multi‐
cloud architecture, and you can easily add or remove them as you
scale up or scale down services within the multicloud environment.
API Protection
In Chapter 3, we discussed the importance of APIs in a multicloud
architecture. APIs are used to connect all of the disparate services
running in a multicloud environment and are critical for getting
information from one source to another and presenting it in a uni‐
fied manner to an end user or client.
This is why API protection is so important. Attackers have become
wise to the fact that APIs can provide them with a treasure trove of
sensitive information. As a result, these bad actors are constantly
looking for ways to exploit APIs, including the use of bots.
APIs are a critical component of multicloud architecture. It takes a
great deal of planning to deploy APIs securely and ensure that the
data shared between different systems via API calls remains
protected.
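As a small illustration of what application-level API protection involves, the following Python sketch requires an API key on every request and applies a crude per-key rate limit using Flask. In a real multicloud deployment these controls usually live in an edge, WAF, or API gateway service that fronts every provider; the key store, limits, and endpoint here are placeholders.

```python
# api_guard.py -- a minimal sketch of two application-level API protections:
# requiring a key on every call and applying a crude per-key rate limit.
# In production this logic typically runs at the edge or in an API gateway.
import time
from collections import defaultdict, deque
from flask import Flask, abort, jsonify, request   # pip install flask

app = Flask(__name__)

VALID_KEYS = {"demo-key-123"}        # placeholder; use a real secret store
RATE_LIMIT = 60                      # max requests per key...
WINDOW_SECONDS = 60                  # ...per rolling minute
recent_calls = defaultdict(deque)    # api key -> timestamps of recent requests

@app.before_request
def protect_api():
    key = request.headers.get("X-API-Key", "")
    if key not in VALID_KEYS:
        abort(401)                   # unknown or missing credential
    window = recent_calls[key]
    now = time.time()
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()             # discard calls outside the window
    if len(window) >= RATE_LIMIT:
        abort(429)                   # too many requests for this key
    window.append(now)

@app.route("/v1/orders")
def list_orders():
    return jsonify(orders=[])        # stand-in for a real data source

if __name__ == "__main__":
    app.run(port=8080)
```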
Conclusion
A multicloud architecture requires rethinking how security is
deployed across all cloud providers. By securing the installation at
the edge, organizations have the flexibility to deliver security across
the entire architecture while increasing visibility and improving the
performance of web applications.
By working closely with security partners, organizations can find
solutions that fit their specific needs. These solutions can grow
along with the web application and the organization itself.
To take full advantage of these solutions, organizations must first
understand what the needs are and ask the right questions. Failure
to do so can result in your being boxed into a solution that is not a
good fit. With proper training and research, your organization can
effectively secure multicloud architectures at the network edge.
About the Authors
Laurent Gil runs product strategy for internet security at Oracle
Cloud Infrastructure. A cofounder of Zenedge Inc., Laurent joined
Oracle Dyn Global Business Unit in early 2018 with Oracle’s acquisi‐
tion of Zenedge. Prior to that, Laurent was CEO and cofounder of
facial recognition software and machine learning company, Viewdle,
which was acquired by Google in 2012.
Laurent holds degrees from the Cybernetic Institute of Ukraine
(Doctorate Honoris Causa), the Wharton School of Business
(MBA), Supélec (M.Sc., Computer Science and Signal processing),
the Collège des Ingénieurs in Paris (postgraduate degree, Manage‐
ment), and graduated Summa Cum Laude from the University of
Bordeaux (B.S. Mathematics).
Allan Liska is a solutions architect at Recorded Future. Allan has
more than 15 years’ experience in information security and has
worked as both a blue teamer and a red teamer for the intelligence
community and the private sector. Allan has helped countless
organizations improve their security posture using more effective
and integrated intelligence. He is the author of The Practice of Net‐
work Security (Prentice Hall), Building an Intelligence-Led Security
Program (Syngress), and NTP Security: A Quick-Start Guide (Apr‐
ess), and the coauthor of DNS Security: Defending the Domain Name
System (Syngress) and Ransomware: Defending Against Digital
Extortion (O’Reilly).