
eBook

OpenStack Made Easy


What you will learn


We're entering a phase change
from traditional, monolithic
scale-up software to multi-host,
scale-out microservices. Welcome
to the age of big software.


Big software demands that infrastructure and
operations personnel approach the challenge
of deployment, integration and operations
from a different perspective. This eBook
explains how we used to do things, why that
is no longer an economically viable approach,
and what can be done to achieve technical
scalability without the large economic overhead
of traditional approaches to modern software.

By reading this eBook you will also gain a deeper
understanding of why there is a perceived
complexity to the installation and operation
of OpenStack-based clouds. You will learn that
this perceived complexity does not originate
from the software itself, but rather from the
outdated tools and methodologies used
to deploy it.

This eBook will explain how Canonical and
Ubuntu are uniquely positioned to facilitate
the needs of modern, scalable, repeatable
implementations of cloud infrastructure
based on OpenStack.

Many of the approaches discussed in this
eBook are also useful when addressing other
big software challenges, such as scale-out
applications and workloads, big data and
machine learning.


About the author


Bill Bauman, Strategy and Content, Canonical,
began his technology career in processor
development and has worked in systems
engineering, sales, business development, and
marketing roles. He holds patents on memory
virtualisation technologies and is published
in the field of processor performance. Bill
has a passion for emerging technologies and
explaining how things work. He loves helping
others benefit from modern technology.

Bill Bauman
Strategy and Content, Canonical



Contents

What is OpenStack
OpenStack challenges
Who is Canonical
OpenStack Interoperability Lab (OIL)
Ubuntu OpenStack in production
More than one cloud
The perception of difficulty with big software like OpenStack
Operational intelligence at scale
MAAS - the smartest way to handle bare metal
Juju - model-driven operations for hybrid cloud services
Autopilot - the fastest way to build an OpenStack cloud
Containers in OpenStack
LXD - the pure container hypervisor
Fan networking - network addressing for containers
Conjure-up - multi-node OpenStack deployment on your laptop
ZFS and software defined storage
BootStack - your managed cloud
Conclusion
CIO's guide to SDN, VNF and NFV


What is OpenStack
General overview

Modular

Core projects and more

OpenStack is a collection of open source
software projects designed to work together
to form the basis of a cloud. Primarily, it is used
for private cloud implementations, but it can
be just as applicable for cloud service providers
building public cloud resources. It's important
to understand that OpenStack is not a single
product, but rather a group of projects.

From its inception, OpenStack was designed
to be modular and to be integrated with
additional tools and plugins via APIs. You could
choose to use any single project from OpenStack
to accomplish a particular task, or several
of them, to build out a more complete cloud.

The core projects of OpenStack consist
of Nova (compute), Neutron (networking),
Horizon (dashboard), Swift (object storage),
Glance (image storage), and Keystone (identity).


Canonical integrates the projects, along
with additional components, into a fully
fledged enterprise cloud platform known
as Ubuntu OpenStack.

Beyond the core projects, there are additional
solutions and tools in the industry to enhance
the deployment, integration and daily operation
of an OpenStack cloud.


OpenStack challenges
Hardware configuration

Hardware integration

OpenStack installation

Most organisations still manage some
hardware. After racking and connecting it,
initial configuration must be done. Some
use vendor tools, some write proprietary
scripts, others leverage ever-growing teams of
people. Some use a combination of all of these
approaches and more.

Beyond the initial configuration, integration
must happen. Network services must be set
up and maintained, including DHCP or static IP
address pools for the host NICs, DNS entries,
VLANs, etc. Again, these integration tasks can
be accomplished with scripts, vendor tools or
personnel, but the same potential issues arise
as with configuration.

Another major obstacle to OpenStack success
is the initial installation. The aforementioned
scripting approach is common, as are growing
teams of expensive personnel.

The issue with these approaches is a lack of
economic scalability. If you change hardware
configuration in any way, you need to pay
to add to or modify an ever-growing collection
of scripts. If you change hardware vendor, you
need to add, configure and maintain a new
tool, while maintaining all previous hardware
management tools. If you add more servers,
you have to hire more people. None of this
scales with cloud economics.


There are also OpenStack projects to perform
installation, but they are often vendor-driven
rather than neutral, and they lack feature
completeness. Organisations that try to use
them often find themselves doing significant,
ongoing development work to make the
project useful.


Additional challenges

A scalable, practical approach

Ongoing challenges all add to increasing
cost and decreasing economic scalability.
Additional considerations include:

A better, easier approach is to use tools that
are vendor, hardware and platform neutral.
Tools that include APIs for automation of not
just software, but your datacenter as well.
Tools with graphical interfaces, designed
with scalable cloud economics in mind.

Upgrades
Rebuilding
New clouds
Repeatable best practices
Scaling out
Reducing cost of consultants


Putting the intelligence of installation and
integration complexity directly into the tools
themselves is how you make OpenStack easy
and achieve economic scalability.

OpenStack installation and integration
challenges are best solved by a thoughtful
approach, using technologies designed for
modern clouds. Legacy scripting technologies
might work now, but likely won't scale as your
cloud's needs change and grow. The same
goes for personnel.

This eBook will go into detail about the approach
and tools that make OpenStack easy.


Who is Canonical
The company

Market focus

Choice

Canonical is the company behind Ubuntu,
the underlying Linux server platform for a 65%
share of workloads on public clouds and a 74%
share of OpenStack deployments. We are the
leading platform for OpenStack, with 55%
of all production OpenStack clouds based on
Ubuntu OpenStack*.

Canonical is focused on cloud scalability,
both economic and technological. That
means focusing on density with containers,
operational efficiency with application
modeling, and financial scalability with
cloud-optimized pricing.

Solutions from Canonical are hardware
agnostic, from platform to processor
architecture and public cloud options. We
recognize that modern organisations require
flexibility and choice. The tools discussed
in this eBook that enable ease of use and
decreased operational costs are designed
to work across all platforms and major
clouds, not just select partners.

A founding member of the OpenStack
Foundation, Canonical also has a long history
of interoperability testing between OpenStack,
Ubuntu and partner technologies. Its
OpenStack Interoperability Lab (OIL) currently
tests over 3,500 combinations per month.

*Source: OpenStack User Survey 2016


We have proven success supporting large scale
cloud customers in production, with some
examples given on the Ubuntu OpenStack
in production page of this eBook.


OpenStack Interoperability Lab (OIL)

Proven integration testing

Canonical has a long history of interoperability
testing between OpenStack and Ubuntu.
The OpenStack Interoperability Lab (OIL) is
the world's largest OpenStack interoperability
and integration test lab. It is operated by
Canonical with over 35 major industry hardware
and software partners participating. Each
month we create and test over 3,500 cloud
combinations in the OIL lab. We could not
do this without the solutions described
in this eBook.


Sophisticated testing and integration processes

Our process tests current and future
developments of OpenStack against current
and future developments of Ubuntu Server
and Ubuntu Server LTS.

As our ecosystem has grown, we've expanded
it to include a wide array of guest operating
systems, hypervisors, storage technologies,
networking technologies and software-defined
networking (SDN) stacks.

Why OIL makes OpenStack easier

OIL ensures the best possible user experience
when standing up your Ubuntu OpenStack
cloud and maintaining it.

By testing up to 500,000 test cases per month,
you can run your Ubuntu OpenStack cloud and
technologies from our partner ecosystem
with greater ease and confidence.

Find OIL partners


Ubuntu OpenStack in production


Built on Ubuntu

Deutsche Telekom

Almost all OpenStack projects are developed,
built and tested on Ubuntu. So it's no surprise
that Ubuntu OpenStack is in production at
organizations of all sizes worldwide. Over
half of all production OpenStack clouds
are running on Ubuntu.

Deutsche Telekom, a German
telecommunications company, uses Ubuntu
OpenStack as the foundation of a
next-generation NFV (Network Functions
Virtualisation) infrastructure. Deutsche Telekom
leverages Canonical's toolchain even further,
using Juju as a generic Virtualised Network
Functions (VNF) manager. In this case, Juju
is used to model and deploy both OpenStack
and the critical workloads running within
the Ubuntu OpenStack environment.

To give you an idea of what organisations
are doing with Ubuntu OpenStack, we've
highlighted a few here.


"When I started working with OpenStack
it took 3 months to install. Now it takes
only 3 days with the help of Juju."
Robert Schwegler, Deutsche Telekom AG


Tele2

Walmart

Tele2, a major European telecommunications
operator with about 14 million customers in 9
countries, has also built an NFV infrastructure
on Ubuntu OpenStack. They have opted for
a BootStack cloud: a fully managed Ubuntu
OpenStack offering from Canonical.

Walmart, an American multinational retail
corporation, uses Ubuntu OpenStack as the
foundation of their private cloud. One of the
key factors of scalability is economics, and here
the economic scalability of Ubuntu OpenStack
cannot be overlooked. While the technology
is certainly designed to scale, it's just as critical
that the methodologies for deployment and
billing are also designed to scale.

BootStack dramatically reduces the time
it takes to bring OpenStack into production,
and allows Tele2 to focus their skilled
resources on telecoms solutions, rather than
on keeping their skills current with the
fast-paced changes of OpenStack.


"[Ubuntu] OpenStack met all the
performance and functional metrics
we set ourselves... It is now the
de facto standard and we can adapt
it to our needs."
Amandeep Singh Juneja, Walmart


And many more...

NTT, Sky Group, AT&T, eBay, Samsung and
many other organisations represent
customers that have elected to build clouds on
Ubuntu OpenStack.

Scalable technology, scalable economics,
ease of use and reduced time to solution
are the primary reasons that so many
organisations choose Ubuntu OpenStack.

"When we started our private cloud
initiative we were looking for a sustainable
cost base that makes it effective and
viable at scale... we needed a platform
that was robust and a platform that
brings innovation. Ubuntu OpenStack
helps us meet and realise those because
of the broad experience Canonical brings."
Will Westwick, Sky Group

"We're reinventing how we scale by
becoming simpler and modular, similar
to how applications have evolved in
cloud data centers. Open source and
OpenStack innovations represent a
unique opportunity to meet these
requirements and Canonical's cloud
and open source expertise make them
a good choice for AT&T."
Toby Ford, AT&T



More than one cloud

Value of repeatable operations

When building OpenStack clouds it's important
to understand the need for repeatable
operations.

One of the common misconceptions about
building and operating a cloud is that you do it
once and it's done. There is a tendency to put
tremendous time and effort into designing
both the physical and software infrastructure
for what is to be a static production cloud.
Often there is little thought put into rebuilding
it, modifying it, or doing it many times over.


The reality of modern clouds is that there is no
static production cloud that is never upgraded,
never expanded to more than one cloud, or
never rebuilt as part of a rolling upgrade.

Also, there is no one-size-fits-all cloud.
Successful early cloud adopters have come
to realize that remote locations may each
have their own small cloud infrastructure.
For scalability and redundancy, even within
a single datacenter, they will end up building
many, even dozens, of clouds.

Telcos, media and broadcast companies and
enterprise organisations distribute operations
globally, with potentially thousands of smaller,
off-site operations centers. All need their
own cloud to support localised and scalable
infrastructure.

Even smaller organisations build development,
test, staging and production clouds.
Everyone needs to do these builds consistently,
in a repeatable fashion, many times.


The perception of difficulty with
big software like OpenStack

There's a perception that OpenStack is
difficult to install and maintain without
expert knowledge. This perception largely
stems from a flawed approach. OpenStack
is big software, which means it has so many
distributed components that no single person
can understand all of them with expert
knowledge. Yet organisations are still looking
for individuals, or teams of people, who do.
The larger the cloud, and the more solutions run
on it, the more people they think they need.
This approach is not scalable, economically
or technically.

A modern look at the OpenStack perception
of difficulty reveals that the best practices
for installation, integration and operations
should be distilled into the software itself.
The knowledge should be crowdsourced and
saved in bundles that encapsulate all of the
operational expertise of the leading industry
experts, so that it can be easily and repeatably
deployed. That is what Canonical has done to
make Ubuntu OpenStack so successful.
In the pages ahead we will show how this
practice has been adopted for both hardware,
with MAAS and Autopilot, as well as software,
with Juju.

The challenge of big software

In his keynote at the OpenStack Summit
Austin 2016, Mark Shuttleworth, Executive
Chairman of Canonical and lead of the Ubuntu
project, demonstrated how big software like
OpenStack can be fast, reliable and economic.

Watch Video



Operational intelligence at scale

In order to scale, operational intelligence must
no longer be a function of the number of skilled
operators, but rather a function of the right
tools designed to focus on the right issues.
This is where Canonical's unique toolset makes
Ubuntu OpenStack relatively easy compared
to other offerings.

Tools built specifically for big software like
OpenStack are the only way to achieve cloud
economics in a private cloud. Adding personnel
won't scale as your cloud grows, and using
traditional scripting technologies requires
too many and too frequent updates within
a growing, dynamic cloud.


In the next section, we introduce MAAS,
to manage bare metal hardware; Juju, to
manage application design and deployment;
and Autopilot, to completely automate
the deployment and updates of an Ubuntu
OpenStack cloud.

Additional tools and solutions are introduced
as well. For a development/test environment,
conjure-up is ideal. It can deploy single-node or
multi-node OpenStack with a single command
and a menu walk-through.

Since containers are vital to system density
and return on investment, we will also discuss
how LXD, the pure container hypervisor, and
Fan networking play essential roles in solving
for server and IP network density.


MAAS - the smartest way to handle bare metal
Why MAAS?

Hardware configuration

Accessible

Hardware must still be installed in a datacentre.
The key to economic efficiency is to touch it as
few times as possible. Installing and operating
a bare metal OS at scale won't work if done by
hand or with custom scripts for every machine type.

With MAAS, you only touch the power button
once. During the initial startup of a new server,
MAAS indexes it, provisions it, and makes it
cloud ready. A catalog is maintained of not
only the servers, but also the inventory of
devices available in them. This is a key aspect
of future provisioning automation by Autopilot.

MAAS provides a REST API, a web-based interface
and a command line interface. It is designed
with automation and hardware-at-scale in
mind. DevOps teams can even leverage it for
bare metal workload management.

MAAS stands for Metal as a Service. MAAS
delivers the fastest OS installation times
on bare metal in the industry, thanks
to its optimised image-based installer.

Ongoing infrastructure operations

Beyond initial configuration, MAAS also handles
ongoing physical IP and DNS management.
A lights-out datacentre, with a near-zero need
for hands-on operations, is realized with MAAS.

Integration

Since there's an API as well as a CLI, automation
tools like Juju, Chef, Puppet, Salt, Ansible
and more are all easily integrated with MAAS.
That means legacy, scripted automation, like
Puppet and Chef, is easily integrated, whilst
modern modeling tools, like Juju, can naturally
rely on MAAS for hardware information.

Learn more about MAAS at maas.io



Juju - model-driven operations for hybrid cloud services
Why Juju?

It's challenging to model, deploy, manage,
monitor and scale out complex services in
public or private clouds. As an application and
service modelling tool, Juju enables you to
quickly design, configure, deploy and manage
both legacy and cloud-ready applications.

Juju has been designed with the needs of
big software in mind. That is why it is not
only leveraged by Autopilot for OpenStack
installation and updates, but can also be
used to deploy any scalable application. All
of this is possible from a web interface or with
a few commands.

Juju can be used to deploy hundreds of
preconfigured services, OpenStack, or your
own application to any public or private cloud.


Design

Configure

Deploy and manage


Web UI and Command Line Interface

Juju's user interface can be used with or
without the command line interface. It
provides a drag-and-drop ability to deploy
individual software or complex bundles of
software, like Hadoop or Ceph, performing
all the integration between the associated
components for you.

You have a graphical way to observe a
deployment and modify it, save it, export it.
All of this can be done at the command line,
as well.

Charms encapsulate best practices

Juju is the key to repeatable operations. Juju
uses Charms that encapsulate operational
intelligence into the software itself that is
being deployed. The best practices of the
best engineers are encapsulated in Charms.
With Juju, you don't need an expert in every
OpenStack project, and an expert in every big
software application, like Hadoop, in order to
achieve operational excellence. All you need
is an understanding of the application(s) once
they've been deployed using the crowdsourced
operational excellence in Juju's Charms and
bundles.

Learn more about Juju at jujucharms.com
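A Juju bundle is a plain YAML description of applications and the relations between them, which is what makes a deployment savable and repeatable. As a rough illustration only (the charm names, series and relation endpoints below are examples from the Juju 2.x era, not taken from this eBook), a minimal bundle might look like:

```yaml
# Illustrative Juju bundle sketch: a WordPress site backed by MySQL.
# Charm names and endpoints are assumptions for illustration.
series: xenial
services:
  mysql:
    charm: "cs:mysql"
    num_units: 1
  wordpress:
    charm: "cs:wordpress"
    num_units: 1
relations:
  - ["wordpress:db", "mysql:db"]
```

Deploying a saved bundle (for example with `juju deploy ./bundle.yaml`) recreates the same model on any supported cloud, which is the mechanism behind the repeatable operations described above.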



OpenStack Autopilot - the fastest way to build an on-premise cloud
Why OpenStack Autopilot?

A decision engine for your cloud

Many organisations find building a production
OpenStack environment challenging and are
prepared to invest heavily in cloud experts
to achieve operational excellence. Just
like Juju and Charms, OpenStack Autopilot
encapsulates this operational excellence.

While OpenStack Autopilot allows the user
to manually determine hardware allocations,
it's generally best left to the decision engine
within it. The underlying infrastructure is
modelled by MAAS and shared with Autopilot.
Availability zones are automatically created
for you.

As an integral feature of our Landscape
systems management software, OpenStack
Autopilot combines the best operational
practices with the best architectural practices
to arrive at a custom reference architecture
for every cloud.

Since Autopilot is part of Landscape, integration
with advanced systems monitoring like Nagios
is readily accomplished.

Build Canonical's OpenStack Reference Architecture

The reference architecture that OpenStack
Autopilot will automatically design for you
will accomplish maximum utilisation of the
resources given to the cloud.

OpenStack Autopilot is built for
hyperconverged architectures. It will use
every disk, every CPU core, and dynamically
spread load, including administrative overhead,
equally across all of them. As your cloud is
upgraded or nodes are added, OpenStack
Autopilot can make intelligent decisions as
to what to do with the new hardware resource
and where to place new workloads.

Learn more about Autopilot at ubuntu.com/cloud



Containers in OpenStack
Why containers?

Why now?

OpenStack containers made easy

Containers have many benefits, but there are
two things they do extremely effectively. One
is to package applications for easier distribution.
That's an application container, like Docker.
The other is to run both traditional and
cloud-native workloads at bare metal speed.
That's a machine container, like LXD. Application
containers can even run inside machine
containers, to potentially take full advantage
of both technologies.

As more workloads move to clouds like
OpenStack, the economies of scale are
affected not only by the right tools and the
right approach, but by the right workload density
as well. We run more workloads on a given
server than ever before. The fewer resources
a given workload needs, the greater the return
on investment for a cloud operator,
public or private.

While container technology is extremely
compelling, there can be some difficulties in
integration, operation and deployment. With
the nova-lxd technology in Ubuntu 16.04, a
pure container OpenStack deployment is easily
achieved. Nova-lxd provides native integration
for OpenStack with LXD machine containers.
That means that no extra management
software is needed to deploy both traditional
virtual machines and modern machine
containers from a native OpenStack API
or Horizon dashboard.



LXD - the pure container hypervisor

LXD, the pure container hypervisor, is the key
to delivering the world's fastest OpenStack, as
demonstrated at OpenStack Summit in Austin,
TX. It achieves the lowest latency and bare
metal performance.

Operational efficiency is furthered by the
ability to live migrate services from one
physical host to another, just like legacy
hypervisors, but with a pure container
hypervisor.

LXD helps enable a hyperconverged Ubuntu
OpenStack. It deploys in minutes. Instances
that run on top of OpenStack perform at bare
metal speed. Dozens of LXD instances can be
launched within that OpenStack cloud
in a matter of seconds.

Upgrading a host's LXD containers is as simple
as upgrading the underlying OS (Ubuntu),
migrating services off and back.

When using LXD, an entire OpenStack
environment can be snapshotted in about
2 seconds.


You can even run LXD containers inside other
LXD containers, all at bare metal speed, with
no performance degradation. Traditional
virtual machines must run on bare metal and
cannot practically be run inside other VMs.

There are prebuilt LXD images for running
CentOS, Debian, openSUSE, Fedora and
other Linux operating systems.

Security is implicit, with mandatory access
controls from AppArmor profiles. LXD pure
containers are as secure as Linux itself.

LXD can run virtually any Linux distribution
as a guest operating system. It doesn't require
special virtualisation hardware. It even allows
you to deploy all of OpenStack inside another
cloud, like on Amazon, for example.

Learn more about LXD at ubuntu.com/lxd


Fan networking - network addressing for containers
Why Fan networking?

The density and performance of both machine
containers (LXD) and application containers
(like Docker) are extremely compelling for
modern cloud economics. But their operation,
specifically when it comes to network
addressing, can be problematic.

In the case of application containers, each
application, or binary, requires a unique
IP address. With potentially hundreds
of containers on any individual server,
IP addresses are quickly depleted.

While there are network addressing
workarounds available, like port forwarding,
they just shift an administrative burden from
one technology to another. In this case,
it is now port management.


Network address expansion with Fan

A much more elegant solution is Fan
networking. The Fan is an address expansion
technology that maps a smaller, physical
address space into a larger address space
on a given host. It uses technologies built into
the Linux kernel to achieve near-zero loss of
network performance while providing unique
IPs to hundreds or even thousands
of container guests.
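The arithmetic behind this expansion can be sketched in a few lines. The following is an illustrative model only, not the Fan implementation (which lives in the kernel and its configuration tools): with an assumed 250.0.0.0/8 overlay mapped over a 10.0.0.0/16 underlay, the host bits of a machine's underlay address become extra subnet bits under the overlay prefix, so each host owns its own block of container addresses.

```python
import ipaddress

def fan_subnet(host_ip, overlay="250.0.0.0/8", underlay="10.0.0.0/16"):
    """Compute the slice of the Fan overlay a given host would own.

    The bits of the host address not covered by the underlay prefix
    (here the last 16 bits of 10.0.x.y) are placed directly after the
    overlay prefix, yielding a distinct /24 of overlay addresses per host.
    """
    over = ipaddress.ip_network(overlay)
    under = ipaddress.ip_network(underlay)
    host_bits = 32 - under.prefixlen                        # bits unique to this host
    host_part = int(ipaddress.IPv4Address(host_ip)) & ((1 << host_bits) - 1)
    prefixlen = over.prefixlen + host_bits                  # 8 + 16 = 24
    base = int(over.network_address) | (host_part << (32 - prefixlen))
    return ipaddress.ip_network((base, prefixlen))

# Host 10.0.3.7 would own 250.3.7.0/24: roughly 250 usable container IPs
# derived from a single host IP, with no tunnelling lookup tables.
print(fan_subnet("10.0.3.7"))  # → 250.3.7.0/24
```

Because the mapping is pure arithmetic on the address, a packet destined for any overlay address can be routed to the right host without a distributed lookup, which is why the overhead stays near zero.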

Fan networking is another example of how
Canonical is taking a thoughtful, meticulous
approach to big software and OpenStack
deployments. Instead of shifting the burden
of a given issue from one administrative
domain to another, the issue is addressed at its
core, using best practices and partnership with
the open source software community.

Learn more about Fan networking on the
Ubuntu Insights blog.


Conjure-up - multi-node OpenStack deployment on your laptop
Why Conjure-up?

Multi-node OpenStack using LXD

Get started with conjure-up

In the past, developing and testing software
in an OpenStack environment has meant using
OpenStack installers like DevStack. Whilst
convenient, DevStack's monolithic architecture
can't emulate a multi-node cloud environment.

Since LXD containers are like virtual machines,
each OpenStack control node service is
independent, even on a single physical
machine.

Conjure-up is a command line tool exclusive
to Ubuntu 16.04 that enables developers to
easily deploy real-world OpenStack on a single
laptop using LXD containers.

Multiple physical machines are also an option,
to further mimic production environments
in development and test, without the
complications of an entire datacenter
of hardware.

Using conjure-up is easy if you already have
Ubuntu 16.04. It's as quick as:

$ sudo apt install conjure-up
$ conjure-up

Learn more about conjure-up at conjure-up.io


ZFS and software defined storage

ZFS makes better containers

ZFS accelerates LXD on Linux. Specifically,
it provides:

Copy-on-write
Snapshot backups
Continuous integrity checking
Auto repairs
Efficient compression
Deduplication

All of these features improve the management
and density of containers.

Container characteristics

Critical aspects of a successful container
hypervisor are:

Density
Latency
Performance

The features of ZFS make innovative and
superior pure container technologies like
LXD even better.

All clouds store data

Ubuntu Advantage Storage provides support
for a number of software defined storage
solutions, all priced at cloud scale. Ceph object
storage is a popular technology that is readily
available within Ubuntu OpenStack and
provides massive scale-out storage for
organisations of all sizes.

Fast, secure, efficient

Another unique advantage of Ubuntu
OpenStack and Ubuntu Advantage Storage
is CephDash, which provides real-time data
analytics of Ceph deployments.

Learn more about Ubuntu cloud storage at
ubuntu.com/storage


BootStack - your managed cloud

Why BootStack?

Even with the most advanced tools and the
best teams, it can be a lot easier to get started
with some help from the experts that build
thousands of Ubuntu OpenStack clouds every
month.

BootStack (which stands for Build, Operate
and Optionally Transfer) is a managed service
offering that gets you an OpenStack private
cloud in a matter of weeks, instead of months.

Build
Build


Operate

Optionally transfer


Build, Operate and Optionally Transfer

Canonical's cloud experts will design and
build an Ubuntu OpenStack cloud to your
specifications. The hardware can be hosted
at your datacenter or a third-party provider.
When you feel comfortable managing your
OpenStack environment, there is an optional
transfer of administrative ownership over
to your internal team.

Another option is BootStack Direct, which
includes training as Canonical builds out
your OpenStack cloud. Once the cloud is
operational, administration of the cloud is
directly transferred to your team.


With BootStack and BootStack Direct, it has
never been easier to instantiate an Ubuntu
OpenStack cloud. Regardless of the BootStack
offer you choose, the cloud will be built with
the toolset described in this eBook, to best
practices and reference architecture standards,
as defined by those tools.

Learn more about BootStack at
ubuntu.com/bootstack


Conclusion

OpenStack may not be easy, but it doesn't have
to be difficult. The ease of OpenStack is in the
approach. Big software can't be tackled with
legacy tools and old-fashioned thinking.
With the right tools, OpenStack can be easy,
and it can reap financial rewards for your
organisation:

MAAS is the smartest way to handle
bare metal
Juju enables easy model-driven operations
for hybrid cloud services
Autopilot is the fastest way to build
an OpenStack cloud
LXD, the pure container hypervisor, ZFS and
Fan networking let you run traditional and
cloud-native workloads at bare metal speed
Conjure-up is the simplest way for developers
to build a multi-node OpenStack deployment
on their laptop
BootStack is the easiest way to stand up your
production cloud and have it managed by the
world's leading OpenStack experts

To learn more about a managed solution
for big data, download the datasheet
"BootStack Your Big Data Cloud".

If you want to start trying things out immediately,
we highly encourage you to visit
jujucharms.com

If you're excited to hear more and talk
to us directly, you can reach us on our
Contact Us page.

Enjoyed this eBook? You might also
be interested in...

CIO's guide to SDN, NFV and VNF
eBook

Why is the transition happening and why
is it important? Networking and communications
standards and methodologies are undergoing
the greatest transition since the migration
from analogue to digital. The shift is from
function-specific, proprietary devices to
software-enabled commodity hardware.

Read this eBook to:

Familiarise yourself with the three most
popular terminologies today: SDN, NFV,
and VNF
Learn why the transition is happening
Understand why it's important for anyone
responsible for a network to understand and
embrace this emerging opportunity
Learn about the potential benefits, and some
deployment and management solutions for
software-enabled networking

Download eBook
