IIB On The Cloud BuzzTalk
IBM
Coordinator: At this time, all participants will be in listen-only mode until the Q&A
session of today's conference.
When it is time to ask a question, press star 1 on your touchtone phone and
record your name.
This call is being recorded. If you have any objections, please disconnect at
this time.
Jack Carnes: And welcome, everybody, to this Friday's Buzz Talk. We've got a great topic
here. It's IIB, IBM Integration Bus on the Cloud.
Amy is the IIB on Cloud product manager, as well as covering the healthcare
industry and integration.
Amy: Welcome today, everybody. Thanks very much for joining us. We're going to
be talking about a new offering that's coming out called IBM Integration Bus
on the Cloud.
And joining me on this call I've got Jezz Kelway, who's the development
manager for this product, as well as Andy Coleman, who is the architect on
it.
So as Jack said, this is not yet announced. So everything we're talking about
today must remain confidential until we announce, which is going to be
September the 8th, 2015. So it's only a few short weeks to keep this confidential.
So for the session today, what we'd like you to end up with is an
understanding of the features and capabilities of this new offering, Integration
Bus on Cloud. We'd like you to be able to articulate the value of the
Integration Bus on Cloud product, as well as understand how you position it
with the wider hybrid integration portfolio. And thirdly, we’d like you to be
able to identify an opportunity and just be ready to start conversations with
customers, so that as soon as this product does announce, you can start having
those conversations and really start, you know, getting a lot of interest in this
product.
So here’s the agenda for this talk today. We’ve got about an hour. And we’d
like to have some questions and answers at the end. So we’re going to start off
with a very, very quick level set. So what is integration? What’s the value of
it? And a bit of a level set on Integration Bus for those of you who aren’t
aware of it.
The main chunk of this presentation is going to be all around that new
offering, the Integration Bus on Cloud. So what is it? What is the value? And
where can you access it?
Thirdly, we're going to look at positioning. So where does this fit within the
wider portfolio? And then as I said, we're going to go into questions and
answers at the end of the session.
The data needs to flow over a protocol (unintelligible), whether that's TCP/IP,
HTTP, something on the file system, FTP, so on and so forth. So lots and lots
of different protocol types.
Now the data itself is going to be in a certain format and that could be text
based. It could be HTML. It could be JSON. It could even be some kind of
custom binary or an industry standard messaging format. And all we mean
when we say "integration" is just allowing these different endpoints to work
together in meaningful ways, and the important point is that they accomplish
some business value.
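To make that concrete, here is a minimal sketch, not from the presentation, of what an integration does at its simplest: take a payload in the format one endpoint produces and re-express it in the format another endpoint expects. The field names and formats below are invented purely for illustration.

```python
import json
import xml.etree.ElementTree as ET

def json_order_to_xml(payload: str) -> bytes:
    """Transform a JSON order from one endpoint into the XML another expects."""
    order = json.loads(payload)
    root = ET.Element("Order", id=str(order["orderId"]))
    ET.SubElement(root, "Customer").text = order["customer"]
    ET.SubElement(root, "Total").text = f'{order["total"]:.2f}'
    return ET.tostring(root, encoding="utf-8")

if __name__ == "__main__":
    incoming = '{"orderId": 42, "customer": "Acme Ltd", "total": 199.5}'
    print(json_order_to_xml(incoming).decode())
```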
So listed on this chart are some traditional ways of doing integration and
some traditional use cases.
Here are some examples of some typical integration topologies. And this chart
describes three of the most common. So first on the top left, we have bridges.
And this is where we have something that sits between two different endpoints
and converts data from one format so it can be read by another. They're very
quick to configure and usually pretty cheap. But one of the disadvantages of
bridges is that they can be quite difficult to scale to multiple endpoints.
An enterprise service bus, an ESB, is a logical construct that takes data from
one endpoint and converts it for another, but does it in a way that's very
scalable and quite extensible as well. New endpoints can be added simply by
connecting them into the bus. And we call it an enterprise service bus. And it's
often used as a backbone for service-oriented architecture.
This way, you can define endpoints in terms of inputs, in terms of outputs and
in terms of the operations they expose. And for Integration Bus, while it can be
used for simple point-to-point connections, it’s much more commonly used as
an enterprise service bus.
And finally, it can provide insight into your applications and the business
value that they bring. So if you have data flowing between different
endpoints, you can look at that data and use it to provide analytics.
So IBM Integration Bus has been in the market for over 15 years. We have
thousands of production installations. I'm not going to spend too long going
through this because we really want to get to what we're going to talk about
today.
But so many customers in the market are really excited about this new
offering because Integration Bus is something that’s trusted so much in the
industry today. It's, you know, one of the most reliable, highest performance,
most scalable solutions around for enterprise integration.
So there’s an awful lot of talk in the marketplace about this Integration Bus on
Cloud offering. But just to give you an understanding here, you know, you can
see on this screen what the analysts were saying and just why Integration Bus
is so popular in the market today.
Now let's talk about the Integration Bus on Cloud product. So cloud is a huge
topic. It represents such a significant business growth opportunity for our
clients. And there are a number of use cases in relation to cloud that we'll go
through a bit later on. But one of the key things to remember about this new
offering is that it's very fast. It offers incredible flexibility for our clients. It's
going to be very scalable and also maintain that best-of-breed security that our
clients expect.
The main goal for Integration Bus on Cloud is to provide a ready-to-use
Integration Bus topology, so that you can have an integration solution
deployed literally in minutes. You don't have to have those traditional, very
large startup costs; it really reduces (unintelligible). Plus, it's managed by
IBM itself, so it's a fully managed service. So our clients can take some
comfort knowing that we are the ones who are making sure that it's secure,
that the system is up and running and so on.
What I'll do now is I'll pass the call over to Jezz and Andy, who are going to
take you through the next couple of charts and really talk to you and give you
a good idea about what this product is, what it looks like and what it does.
Jezz Kelway: So this is Jezz Kelway. I'm the development manager for IIB on Cloud. And
I've got Andy Coleman here with us today, who's the architect.
So what we're hopefully going to provide for you is some insight into how
we're approaching the development of this product and some architectural
detail. I think primarily we're going to focus on some of the more technical
aspects. So hopefully that's of use to you, and it's a way to get in touch if you
need more information. We've also got a beta running, so we're very interested
if you're going to have conversations with your clients and they would like to
know more; we can get on a call and talk through some of our more
forward-looking thoughts.
But for now, how we've approached this in development is to look at realistic
scenarios for our primary customer base. That's IIB customers in this
instance, but we're not limited to that; any integration customer will face these
challenges.
So we've looked to the customer base that we understand, and we've worked
through the development process in terms of the scenarios that we're
looking at.
So the primary one we found that we think is going to resonate the most is
when an enterprise needs to increase its capacity. Now, there are two likely
reasons why someone would want to increase that capacity into the cloud. I'm
sure you can think of a number of examples where someone wants to grow
their enterprise, but why specifically into the cloud?
Well, if you need to provision quickly, your business has a very quick desire
to increase its integration capability. It doesn't want a bottleneck. It doesn't
want to hold things up. And ordinarily, you might buy and deploy your
software and hardware on a sort of capital, CapEx-type basis, right?
But in this scenario, you don't necessarily have time to buy, ship, bring in,
configure and deploy all of your hardware, and you might not necessarily
have the skills to do that, so you might have a rough timescale. So it's kind of
a timescale question.
In addition to that, cost is obviously a big one in the cloud. What a managed
cloud service can do is take care of all of that hardware maintenance and
take away a lot of the capital funding restrictions.
Now, we'll dig a little bit more into very specific customer scenarios. But
this is what's driving the primary scenario that we're going to use to take you
through this. And it's essentially about enabling somebody who's got an
existing IIB flow to seamlessly deploy it into a managed IIB offering.
So I think if we can move to the next slide, Andy can take you through a few
of the more technical details.
Andy Coleman: So this one that Jezz has just introduced -- we've sort of verbalized it as our
first goal: an IIB administrator can provision an IIB environment and deploy
an integration to it in less than ten minutes with no upfront cost.
So what I'm going to do over the next few slides is take you through a user
journey that we've designed and that we're currently developing, and show
how different it is from a typical user experience with the on-prem product.
So essentially, the user will use their IBM ID to log in and sign up via the
Cloud marketplace. So there's no need to provision or configure any
infrastructure, whether it's physical hardware or infrastructure as a service.
There's no need to design or worry about the IIB topologies that you would
need to concern yourself with if you were creating an on-prem installation.
And it's the same BAR file. So a BAR file that you have running today on
premise, we'd be able to take and upload into IIB on Cloud and start running.
So if we go to the next slide, we'll have a look at the start page. When you
first log on using your IBM ID, you might be presented with a page that looks
like this. So this is our working design.
So you can see it's fairly simple. There are essentially a couple of options
here. You can either upload the BAR file that you'll have, or you can start
exploring some samples that we'll provide for you.
So if we click on the Upload button -- next slide, Amy -- it will launch a
dialog. You just select the BAR file that you have on your file system. So
we'll open that one. We've got holidaybooking.bar.
Next page.
Okay. And we'll see a progress bar as it starts uploading that into the IIB on
Cloud environment.
Once it's uploaded, it will go through a very quick validation process.
Essentially, what that's doing is looking inside the BAR file and making sure
that there aren't any features, any nodes, being used that in the first instance
will be unsupported within the cloud environment. So in our initial release,
there'll be a few restrictions, and I'll briefly discuss those later.
So you might get a validation error there, because obviously we want users to
be warned as soon as possible if there's a problem they're going to encounter.
The second thing you'll see on the screen is that we have a list, a preview of
the contents of that BAR file. So you can see that there are a few applications
there, and within those applications there are some flows and libraries. And
really that's there just to give the user a visual clue as to what it is they've
uploaded, just to reassure them that they have uploaded the file that they
intended to.
So the user will then confirm that, click on “Save.” So if we go to the next
slide.
We'll see then that the "holiday booking" integration has been loaded. And it's
there ready to start. And all the user does then is click "Start" and that
integration will then start running. So the status changes to running. There's
an indicator there that it's been running for two minutes. And there's also an
indicator of when it was last used; in other words, when it last processed
some data.
And there are options there to stop the integration and to delete the integration.
You can drill into it and view the content. You'll be able to view things
like logs and get access to trace and those sorts of problem determination-type
activities.
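For readers who think in code rather than screenshots, the same upload-and-start journey could be sketched as a script. This is purely illustrative: IIB on Cloud is described here through its web UI, and the host name, REST endpoints and response fields below are assumptions, not a documented API.

```python
import requests

BASE = "https://example-iibcloud.ibm.com/api"   # hypothetical host
TOKEN = "..."                                    # obtained after IBM ID sign-in (assumption)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def upload_and_start(bar_path: str) -> None:
    # Step 1: upload the BAR file (mirrors the "Upload" dialog in the walkthrough).
    with open(bar_path, "rb") as bar:
        resp = requests.post(f"{BASE}/integrations", headers=HEADERS,
                             files={"file": (bar_path, bar)})
    resp.raise_for_status()
    integration_id = resp.json()["id"]           # hypothetical response shape

    # Step 2: start it (mirrors clicking "Start"); status should move to running.
    requests.post(f"{BASE}/integrations/{integration_id}/start",
                  headers=HEADERS).raise_for_status()

if __name__ == "__main__":
    upload_and_start("holidaybooking.bar")
```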
Okay. So that's kind of the basic user experience. And if you're already used
to using the on-premise product, you'll see that there are lots of concepts that
you really don't have to worry about; things like topologies are what IBM, as
the provider and manager of this service, can worry about. The IIB
administrator only has to worry about their IIB integration.
So if we talk very briefly about some of the technology that's underpinning
this: basically, all of our components we're building into Docker containers.
Now Docker has gained popularity drastically over the last couple of years,
really motivated by the need to run various applications within a cloud
environment. So we've built all our administration components into Docker.
We've also taken the IIB product and we've decoupled some of the
components, such that components like the admin agents and the control
service we're not even using in the cloud environment, okay? So we're taking
the data flow engine process, that core process that actually runs the message
flows, and we've built it into a Docker container. And so we can rapidly bring
up and shut down and independently scale the various DFE containers to run
our users' integrations.
So that’s sort of a very high level summary of the technology that we’re using.
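As an illustration of the container approach Andy describes, and not IBM's actual build, the general pattern of running one data flow engine per integration in Docker might look like the sketch below, using the Docker SDK for Python. The image name, environment variable and memory size are assumptions.

```python
import docker

client = docker.from_env()

def start_integration_container(bar_name: str):
    """Run one engine container dedicated to a single uploaded BAR file."""
    return client.containers.run(
        image="example/iib-dataflowengine:latest",  # hypothetical image
        environment={"BAR_FILE": bar_name},         # hypothetical configuration
        mem_limit="4g",                             # matches the 4 GB container size discussed later
        labels={"integration": bar_name},           # lets us find these containers again
        detach=True,
    )

if __name__ == "__main__":
    container = start_integration_container("holidaybooking.bar")
    print(container.status)
```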
Jezz Kelway: Good. Thanks, Andy. So there's an idea of how we're meeting the scenario of
having an existing BAR file and deploying it, and some insight into our
container environment. So we utilize the Alchemy BlueMix service.
So, digging a little bit more into the scenario, the next steps, if you like,
along the road: we're looking at situations that are likely to occur where a
managed cloud environment will be a viable option for these enterprise
customers.
And we always end up at the sort of peak load situation. I've got two
examples here, the first one being a standard retail outlet that is expecting a
significant increase in traffic for one reason or another. We've got a couple of
examples there; Christmas is an obvious one, or a Black Friday scenario.
And what they have is what we're referring to as a predictable peak, because
they know it's coming and they want to enhance or extend their enterprise to
be able to cope with that, to make sure everything functions properly, sales go
through and so on.
So you've got predictable timing and you've probably got an idea of what kind
of capacity you're going to need to undertake that, but you don't necessarily
want to buy in a whole new bunch of hardware and software in order to
facilitate that one-time or two-time event.
You could conceivably end up with a lot of hardware that sits around not
being active for long periods of time. So you have a predictable peak. You
want to manage that peak, but you don't necessarily want all of the overhead
of idle hardware and software to do it.
Now that also applies to perhaps a more emergency situation, and we've got an
example here where you've got a charity. There may have been,
unfortunately, a disaster somewhere, and suddenly a charity organization has a
peak demand for donations and needs to process a lot of transactions around
donations, moving to a lot of systems across countries; you know, a global
problem.
So that's our peak load situation. If we move to the next slide, Andy can give
a little insight into how we're solving that.
Andy Coleman: Yes. So we've translated that requirement into our second goal, which is: an
IIB administrator can increase and decrease the throughput of his IIB
integrations without creating new nodes on existing hardware, provisioning
new hardware, or paying for any hardware that he doesn't need.
So we've already seen from the initial goal that the IIB administrator doesn't
need to worry about hardware, doesn't need to worry about topologies. It's
just a case of uploading integrations and running them. So this really extends
from running them to scaling them.
Now in IIB today, there are various ways of scaling your integrations. One
of them is additional instances, which is a property on the flows themselves
that allows you to increase and balance the number of threads that you have
servicing the various flows.
And that's just as relevant within a cloud environment. But that's still within a
single process. So the thing that we're adding here is the ability to take the
integration processes, the DFEs, and scale those up or down dynamically.
And our investment in Docker really pays off here, because we now have the
ability to just increase the number of Docker containers that are servicing a
particular BAR file, and being in a container service we can rapidly increase
or subsequently decrease those running containers. And of course, customers
are only paying for the containers that are running at that particular time.
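A minimal sketch of that scaling idea, again using the Docker SDK for Python and the same hypothetical image as the earlier sketch. This is not the product's scaler, just the reconcile-to-a-replica-count pattern it implies.

```python
import docker

client = docker.from_env()

def scale_integration(bar_name: str, replicas: int) -> None:
    """Bring the number of running containers for one BAR file up or down to `replicas`."""
    running = client.containers.list(filters={"label": f"integration={bar_name}"})
    if len(running) < replicas:
        # Scale up: start additional engine containers for this integration.
        for _ in range(replicas - len(running)):
            client.containers.run("example/iib-dataflowengine:latest",  # hypothetical image
                                  environment={"BAR_FILE": bar_name},
                                  labels={"integration": bar_name},
                                  detach=True)
    else:
        # Scale down: stop the surplus containers; the customer stops paying for them.
        for container in running[replicas:]:
            container.stop()
            container.remove()

if __name__ == "__main__":
    scale_integration("holidaybooking.bar", replicas=3)  # e.g. ahead of a Black Friday peak
```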
Jezz Kelway: Thanks. So, extending our scenario a little further, and perhaps this is looking
further down the road. We're talking about deploying IIB flows into a
managed IIB environment, and it's very likely that customers utilizing this
option will have on-premise data for one need or another. So it's not an
unusual scenario to think they've got sensitive data that they don't want to
share into the cloud, or store in the cloud rather.
And so they've got a firewall protecting it, and they're going to want to punch
through that to run the necessary transformations or communications to those
endpoints using their IIB flow in the cloud.
So we're going to need a way to manage firewall situations. Even if the data
isn't particularly sensitive, they may not be happy, or it may not be within
their processes, to store that data in the cloud. They might already have an
existing solution. It's likely that if we've got an enterprise customer that
already has their database model set up and they want to retain that as is, it's
not worth further investment just to move it into the cloud. So they're going
to want some way of connecting to on-premise endpoints from their
cloud-based IIB.
So if we move to the next slide, we can talk a little bit about how we’re
approaching that.
Andy Coleman: Thanks. So we translate that into Goal 3: an IIB administrator can securely
connect IIB on Cloud integrations with all the endpoints that he can connect to
with IIB Version 10.
So we have a couple of options here. Essentially, what we're asking for is to
have a secure connector from the cloud through to the customer's on-premise
systems.
So the first option, and perhaps the simplest option, is really a case of:
whenever a node in the flow attempts to access a particular data source, the
infrastructure intercepts that connection and forwards it across the secure
channel through to the on-premise system; in this case, the database.
Now, this really is a case of TCP/IP port forwarding. Okay? So if you've got,
say, a compute node that's trying to access, maybe via JDBC, a DB2 system,
then when it tries to open up the port, something needs to intercept that and
channel that communication across a secure connector of some sort to the
on-premise system, such that it can then connect to port 50000 on the
on-premise system.
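The port forwarding idea itself is simple enough to sketch. Below is a bare-bones TCP relay, assuming a hypothetical on-premise DB2 host and port; a real secure connector would add TLS, authentication and multiplexing on top of this.

```python
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 15000)              # where the cloud-side flow connects (example)
TARGET_ADDR = ("onprem-db2.example", 50000)   # hypothetical on-premise DB2 host

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from one socket to the other until the sender closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def serve() -> None:
    listener = socket.create_server(LISTEN_ADDR)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(TARGET_ADDR)
        # Relay traffic in both directions until either side closes.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```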
And then there's another approach we could take, where a customer might
have an integration running in the cloud but they might also have an
integration running on premise. The ability to connect those two together
seamlessly, again via this secure channel, would allow your cloud-based flow
to connect across to on-premise in order to invoke another sub-flow that's
hosted in your on-premise IIB. Then all of your connections to sensitive data
can be done in that sub-flow, such that the cloud flow has no exposure to that
whatsoever.
So for our 3Q delivery in September, we are building the first option, the
direct port forwarding approach, which we'll deliver first. So if we look at the
next slide, we’ve got a sort of a very high-level visualization of what a secure
connector is.
Now, in order to set up the secure channel, we need the user to download and
run a secure connector agent on the on-premise side. And this will almost
certainly be embedded within a data flow engine process, so it's all
self-contained and easy to configure.
And what happens is that the secure connector agent running on premise will
call out; it will effectively call home back to the cloud, back to the secure
connector server. So it goes out via their firewall, okay, into the cloud, and
establishes the connection.
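The distinguishing step is that the on-premise agent dials out, so no inbound firewall rule is needed. A simplified sketch of that call-home step follows; the addresses and the registration handshake are invented, and security and bidirectional framing are omitted for brevity.

```python
import socket

CLOUD_SERVER = ("secure-connector.example-iibcloud.ibm.com", 443)  # hypothetical cloud endpoint
ONPREM_DB2 = ("localhost", 50000)                                   # local target behind the firewall

def call_home() -> None:
    # Outbound connection through the customer's firewall to the cloud-side server.
    cloud = socket.create_connection(CLOUD_SERVER)
    cloud.sendall(b"REGISTER agent-1\n")   # invented handshake, for illustration only
    # From here the cloud end pushes traffic down this established channel and the agent
    # relays it to the local endpoint; responses would be relayed back the same way
    # (omitted here), as in the forwarding sketch above.
    onprem = socket.create_connection(ONPREM_DB2)
    while data := cloud.recv(4096):
        onprem.sendall(data)

if __name__ == "__main__":
    call_home()
```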
Okay. So that's a very high-level overview of what the architecture looks like.
Obviously, the details are much more complex. Again, if anyone is interested,
and I know customers are always very interested when it comes to security
aspects like this, we can set up calls with our security experts here to do deep
dives into this.
So from 3Q, we won't have every single node available, and there are various
reasons for this. But there will be a large selection of core nodes available
from day one to run in the cloud.
So many of the input/output nodes, especially around MQ, HTTP, SOAP Web
services, REST services, MQ client connections, those sorts of things, will be
available from day one.
All of the transformation nodes will be available: the graphical mapper, the
compute nodes (if you've got ESQL artifacts in your flows, you'll be able to
use those), Java compute nodes, and XSLT as well. All of the routing nodes,
exception handling, all those popular nodes will be supported.
There are a number of endpoints which will require extra work going into the
secure connector, and we will iterate on those over time. So we'll deliver basic
MQ connectivity and database connectivity in our initial release, and we'll
build on that as we go through.
There are a few nodes, again, that we won't support from day one: things like
the aggregation nodes, the collection nodes, some of those nodes that store
state. Okay? Because of the nature of the Docker data flow engine, and the
fact that it's stateless and can be rescaled and restarted at any time, we would
need to re-engineer those to go out to an external cache. So again, that's work
that we'll do beyond the initial release.
Jezz Kelway: Thank you, Andy. I mean, you bring up an excellent point, and I just want to
add to this point that we are iterating on this. You know, this is seen very
much as a framework for us to move forward from, so it will grow and it will
change, and the nodes are a good example of where there are certain things we
can do and where we have certain limitations due to technology.
So I think that's it from the technical perspective. We're going to sign off
there and hand you back to Amy. So thank you.
Amy: Okay, yes, so I was told that this presentation was predominantly for a
technical audience. But I thought I would put a little bit about pricing in here
as well.
So we are offering a trial, as Jezz said. It's going to be free for 30 days. You
can use it for evaluation purposes. As Andy said, there are going to be some
samples there. Existing customers can take any of the BAR files they have
that use supported integration nodes and run those during the trial as well.
In terms of paid-for pricing, so how we're going to make the money here,
there are two options. I'll start with the Integration Bus on Cloud on demand
part first. So this is the one at the bottom. This is a pay-as-you-go part. It's
based on the number of gigabyte hours used. So as Andy talked about, the fact
that we're using containers, that's the way we are going to be partitioning
space on the SoftLayer service. So for every gigabyte hour that's used: for
instance, one 4-gigabyte container running for an hour will cost you $4, or the
equivalent in your currency of course.
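The arithmetic behind the gigabyte-hour charge is straightforward; a quick worked example, assuming the implied rate of $1 per gigabyte hour:

```python
# Worked example of the gigabyte-hour charge described above. The rate is implied by
# "one 4 gig container running for an hour will cost you $4", i.e. $1 per gigabyte hour.
RATE_PER_GB_HOUR = 1.00  # USD (assumption derived from the quoted example)

def charge(container_gb: float, hours_running: float) -> float:
    """Cost = container size (GB) x hours running x rate per gigabyte hour."""
    return container_gb * hours_running * RATE_PER_GB_HOUR

# A single 4 GB container running around the clock for a 30-day month:
print(charge(4, 24 * 30))   # 2880 gigabyte hours -> 2880.0 dollars
```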
So basically, this will be billed monthly; they'll be billed at the end of every
month. The term is one month to start with. And basically, to provision access
to the service, they'll initially order that part and then they'll just be charged
as they're using those containers and as they're actually using the resources on
the SoftLayer service.
Support for this particular offering is community-based to start with. I'll just
mention that this is an ongoing process. We are making this first release in the
third quarter and it is going to be lightweight to start with; when you look at
what IBM Integration Bus has in the product today, it's massive. There's an
awful lot of content, an awful lot of capability in there.
We also have a subscription part coming out. Really, what we're going to be
using this part for to start with is the Bridge to SaaS sales play. So it is a
subscription part. It does include IBM support through RETAIN, because
Bridge to SaaS is aimed at existing customers who want to take existing S&S
licenses and move them to SaaS. The term is 12 months, with payment upfront
or quarterly billing.
So those are the two parts that we have to start with. We are looking later on
at adding additional subscription parts. We expect these to come online before
the end of the year. But obviously we’ll keep everybody updated on this as we
go through.
Now we’re going to look at positioning. So where does it fit within the wider
portfolio?
So like I said earlier on about the pricing, there's the free trial, the
pay-as-you-go and the subscription.
Later, in 2016, one of the plans is to look at how we can enhance and improve
the development experience. So IIB on Cloud, in its initial form, will be a
deployment and monitoring solution for integrations developed using the
Integration Toolkit, the toolkit that's used with IIB today.
Let's take a look at where it fits in with the wider portfolio. So there's a lot of
stuff on this chart and I'm going to break it down. You can see here that
we've got two speeds of IT. We have our fast speed of IT, which is
traditionally looking at things like BlueMix; we have different SaaS
applications that are very dynamic, updated frequently and so on. And then we
have the steady speed of IT, looking more at enterprise integration, enterprise
messaging and so on.
If we look at the components of a hybrid solution here, one of the key
components, first of all, is the Integration Bus. We've talked a little bit about
that, the fact that it's there really sitting in between data and applications,
connecting those different endpoints in a way that achieves business value.
Now, with those on-premise data sources, if you're looking at moving to cloud
through bring-your-own-software-license or IBM Cloud, for instance, you
could be moving that data around, but more commonly the data is actually
going to be stored on premise. So you need
a way to be able to, you know, either send or receive data from outside of your
enterprise.
And the way we would suggest doing that is using a gateway. So in this case,
it would be DataPower. In order to expose and receive that data, there needs
to be that security on the edge.
Next, we have API Management. So the API economy is just a huge topic
today; everybody is talking APIs. And one of the key things about that is how
you provide access to that secure data on premise without cloning the data or
taking copies of the data and providing it to those developers who are more in
the fast speed of IT. So how do you do that while keeping it secure and
maintaining all of the governance that you already have? A lot of people are
looking to APIs to do that.
So the way these two connect is through API Management. What API
Management does is provide you with the right way to expose APIs, to
manage who is calling those and who actually has access to use those, and
also to view how they're performing.
And next we have Cast Iron. So if you have a customer who has a SaaS
application such as Salesforce or SugarCRM, or any of the many SaaS
applications that are available today, a lot of customers are going to be asking
you, "How do we connect those SaaS applications with our on-premise
data or our on-premise systems and everything else?" And the way we would
suggest is Cast Iron.
What Cast Iron does is provide you with tons and tons of connectors to some
of the (unintelligible) SaaS applications, as well as some of the more
specialized SaaS applications, to really enable customers to bridge between
their on-premise enterprise integration (and it could be on-cloud integration as
well now) and those SaaS applications.
Now, has anybody got any questions at all before I hand it back to Jack?
Coordinator: To ask a question, you may press star 1 on your touchtone phone. Please
unmute your phone and record your first and last name clearly when
prompted.
Jack Carnes: Okay. Well, we can end the call then. But this was an excellent discussion. I'd
like to thank the presenters, Amy, Jezz and Andy. This is going to be very
helpful for our tech salespeople in the near future, before we announce and
even after we announce. But this was a great discussion on the positioning and
the technical aspects of IIB on the Cloud. So I appreciate your participation
and thanks a lot. See you next Friday for the next Buzz Talk.
Coordinator: Thank you for your participation. You may now disconnect.
END