NServiceBus, by Pluralsight

Sunday, February 11, 2018, 13:37

Course Overview
Hello everyone, my name is Roland Guijt, and welcome to my course, Scaling Applications with
Microservices and NServiceBus. I'm a Microsoft MVP, independent software architect, developer,
and trainer based in the Netherlands. Letting each service communicate reliably in a distributed
application is a real challenge. In this course, you'll see that it doesn't have to be difficult if you use
the NServiceBus Framework. Some of the major topics I will cover include distributed applications
theory, and where microservices and NServiceBus come in, message-based microservice
architecture using NServiceBus, modeling workflows with NServiceBus sagas, and using NServiceBus
tools to monitor message workflow. By the end of this course, you will be ready to create a
distributed application with messaging using NServiceBus. Before beginning the course, you should
be familiar with .NET and the C# language. Get ready to create microservices using one of the best
service bus frameworks out there, here at Pluralsight.

Distributed Applications, Microservices, and the Service Bus


Introduction
Hello, this is Roland Guijt. Welcome to the course, Scaling Applications with microservices and
NServiceBus. The microservices architecture is a great option if you want to keep complex
applications manageable, and at the same time optimize the user experience. As we will see, the
NServiceBus Framework is a great tool to support this architecture. Let's go and have a look at what
I'm going to talk about in the first module. After following the first module, you will be able to
determine if microservices are suitable for your solution. To illustrate, I want to start by looking at
other architectures to give you a sense of what problems microservices are trying to solve. The first
one is the monolithic application, where everything is developed as one big package. Then we expand
these thoughts by entering the distributed computing realm: first some theory about distributed
applications, and then the techniques RPC and REST within service-oriented architecture. This leads
to the topic where we want to be, microservices, which is also a distributed computing architecture,
and the service bus, but first things first, in the next clip I will talk about monoliths.

Monolithic Applications

A monolithic application is an application where everything is contained in a single program using


one technology. This can, for example, be one executable with DLLs, or a web application running
with one IIS host process. Examples of monolithic applications are Microsoft Word and WordPress.
We'll see that a monolith has benefits and is certainly still usable in certain scenarios, but it also has
downsides that start surfacing when the monolith becomes complex. These downsides are solvable
using other architectures like microservices. When I started object-oriented development, my
mentor made sure that I followed a general way of composing an application. The application has a
component, which can be seen as a logical group of code within the application. A component could,
for example, be a main feature or a department of a company, such as sales. In a monolithic
application, this could mean a certain namespace you use for each component. Some
frameworks also support componentizing; MVC, for example, uses areas to split up
functionality. Components consist of layers; layers split the code within a component into logical
segments. Typical examples of layers are a UI layer, a data layer, and a business layer. Within a layer
there are classes that form the content of the layer. Besides giving structure to your application,
classes are used to reuse code using inheritance, for example. This is a cross section of a component
depicting its layers. Let's say this component uses a UI layer, a business layer, and a data layer. Now
let's say a field in a database is added, and the application has to reflect this change. First the data
layer should change to make the field available to the other layers. Some business logic probably
should be added in the business layer, and the users should be able to provide input for the field,
thus affecting the UI layer. So the layers are vertically coupled. This way of programming is very
commonly used in monolithic applications, and you're probably familiar with it, but fear not, the
good news is this is still a good thing to do within an individual microservice. In a more complex
monolith, there could be more than one component, each with their own layers and classes. When
you have a complex monolith, you have hopefully structured it like this, or used another way to keep
the code in logical groups, for example with namespaces. The components are communicating with
each other. Within a monolith, this is probably done by just calling methods on the different classes
within the different components. There is a tight horizontal coupling going on between the
components. Let's pretend we have a monolithic web application. The user does a request, let's say
he's entering a new order. That order is sent to the server, the server processes, and the response
comes back in the form of a new page saying thank you for the order, perhaps. In between, there
are components, here shown as puzzle pieces. In this example, the components are departments of
a company. The fact that a new order is received is of interest to all components, so through their
coupling they call each other in a synchronous way, and each does its share of processing. Within
this process it's common to use one and the same database for all of the components. A
benefit of a monolith is the ease of deployment. You just copy it to the desired location, and you're
done. Monoliths are by far the most commonly built applications in the world, and tend to be simple in
architecture. Junior developers are therefore able to work on these applications without much
trouble. Because there are no external dependencies, the application is easy to test as a whole, and
setting up multiple environments should be fairly straightforward. Also, a project is easily shared
among developers via a source control system. Finally, it is very easy for IDEs: the whole project can
be loaded in one go, because the code base is all together. But what are the possible problems with
the monolithic approach? First, because all components are called synchronously, it can take some
time before the user sees a result. It's difficult to code for the fact that something goes wrong in
one of the components. If the first two components were already done with the processing, should
they be rolled back or not? How should the user be notified, what happens to the order, should the
user retry, or should the system retry? How many times, and should the server be aware that if it
receives an order, it's a retry? The web server can only handle so many requests at the same time,
because it's very busy going through all the components each time a request is made. The risk of
overloading the server is substantial. Actually the main task of the web server should be serving
pages and not processing other tasks. And finally when building a complex application in a team, it's
difficult to have multiple teams doing each component separately. Because of the tight coupling
between components, teams have to constantly be aware of how other components work, and
when one component changes, the whole application has to be retested, because the components
are so tightly coupled. Some problems may occur as late as release time, because only then do all
components come together. So here are the downsides in a list. I'm an independent software
architect and developer. When doing projects with customers, I'm often asked to develop a quick,
simple solution for a certain problem. Over time, features are added to the solution, and it grows
and grows, and before you know it, the architecture can't support all the features anymore, and a
rewrite is required, but a solution is hard to maintain when it gets complex. The problem is that
monoliths have the tendency to get complex. What first works great as a quick solution quickly
becomes an unmanageable beast. Also release hardening is required. At release time, all the
different components made by different teams come together, and chances are it doesn't work as
expected, so planning a release of a complex monolith is difficult. A complex monolith is also likely to
have performance problems, because components get called synchronously after each other.
Reliability could suffer, especially in web applications when it gets busy. You will probably have to
scale out quickly to reduce the load on the server. And finally a monolith is using one technology
stack. Different components cannot use different programming languages, for example, or use
different JavaScript frameworks, or use a different database.

Demo: Simplified Monolithic App

Fire On Wheels is a startup that we'll follow throughout this course. Startups tend to have small
budgets, so Fire On Wheels is in business now with a simple web application written in C# with the
ASP.NET MVC Framework. A customer can enter a delivery order, and then the person responsible
for delivering the package is notified via email. This is the application. When we fill out the form, a
review screen appears, in which the filled out fields are repeated, and the price is shown. When the
button is pressed, the order is sent to the delivery person. This application is a simplified monolith.
All functionality is in one application. In this case, it's contained in one Visual Studio project, but
even when layers were present in separate assemblies or DLLs, this would still be a monolithic
architecture. How MVC works is outside the scope of this course, but I'll show you the controller.
When the form is completed by the user, and the button is pressed, the Index method with the
HttpPost attribute is called. It gets the Order as a parameter. Then the internal class PriceCalculator
is invoked to calculate the price of the order, which is added to the order object, and the Review
page with the order information is sent back to the user. When the user presses the Confirm button,
the Confirm method is called, again with the order. The email is sent via a local class, and the user
receives a thank you page. You can see the controller not only handles the sending and receiving of
pages, but it also controls how and when the price is calculated, and the email is sent. And while all
these methods are called, the user is waiting for the server to respond with the page. Before we go
into distributed system techniques, in the next clip, I'll talk about distributed systems in general.
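
For reference, here is a minimal sketch of what such a controller might look like; the class and member names (Order, PriceCalculator, EmailSender) are taken from the description above, and the exact signatures and stub bodies are assumptions, not the actual course code.

using System.Web.Mvc;

// Hypothetical sketch of the monolithic controller described above.
public class OrderController : Controller
{
    [HttpPost]
    public ActionResult Index(Order order)
    {
        // The price is calculated synchronously inside the web application.
        order.Price = new PriceCalculator().Calculate(order);
        return View("Review", order);
    }

    [HttpPost]
    public ActionResult Confirm(Order order)
    {
        // The email is sent by a local class while the user waits for the response page.
        new EmailSender().Send(order);
        return View("ThankYou");
    }
}

// Minimal stand-ins so the sketch compiles on its own.
public class Order { public decimal Price { get; set; } }
public class PriceCalculator { public decimal Calculate(Order order) => 10m; }
public class EmailSender { public void Send(Order order) { /* notify the delivery person */ } }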

Distributed Applications

A distributed system is a software application. Now the term application can be a little bit confusing.
In this definition, an application is a complete software solution. This could be a monolith, but it
could also consist of multiple programs taking care of a component in the application. The different
components of the application could be a collection of executables or web services from now on
called services. Each of the services could be located on other computers. In order to call and listen
to each other, there has to be some form of communication in place that is more complicated than
just method calls on classes. As we will see, there are several techniques that implement that in
different ways. Here we are back to the puzzle piece model we saw earlier. With distributed
computing, we take one or more components out of the monolith, and let them exist as a separate
executable or web service. The component is then called in a certain way using some technique. I'll
explain some of these techniques further on in this course. When all or many of the components in
an application are services, we speak of a service-oriented architecture. A possible implementation
could be our web application that calls a web service, that calls another web service, etc. When all
services are called, the first service returns a status or response, which is processed by the web
application and sent to the user. Back in the day, L. Peter Deutsch wrote what can be seen as a
warning to all of us distributed system designers. He noted that we tend to make a couple of
assumptions that are not true, mainly because we tend to test our applications on a local dev
machine or a test server that's in the local network. He listed these assumptions as the fallacies of
distributed computing. As we'll see further on, some distributed computing techniques are more
susceptible to these pitfalls than others. But ideally, you should design the application in such a way
that these fallacies become evident, solve them, or at least take them into consideration. So what
makes a good service? Two important parts of the answer to that question are loose coupling and
high cohesion. I'll talk about cohesion first. Cohesion is about where to put all the pieces of
functionality that make up the application as a whole. We want functionality that is related to each
other to stick together in a service, and unrelated functionality to sit elsewhere. Also, because of
maintainability, it's important to hide as much complexity as possible from the consumers of the
service, and to expose as small and simple an interface to talk to the service as possible. That's
important because when we want to change some logic in the application, we want to do it within
one service, and we don't want to go through all or many of the services, and have to redeploy
everything because of one change. So it's a good idea to first think about which boundaries a service
should have before building it. In the beginning of the module, I talked about grabbing separate
components and making services out of them. But is it really that simple? The answer is no. There's
always some degree of overlap across the components and across the services. It's a very good
practice to first make a diagram of the boundaries of services that overlap before you begin
programming, and make sure you have a high cohesion in your architecture. In this way, you are also
thinking about the architecture as a whole, determining how many services should exist, and what
specific functionality each comprises. What is within the boundaries of a service is known as the bounded
context. It's a term used in domain-driven design, or DDD. Here's an example of bounded context in
an architecture. So here we could be planning to create a sales service and a support service. As we
can see, a lot of what is in each service exists only within that service, but there's also some overlap,
because customers and products are entities that must be known by both sales and support.
Coupling is the way different services depend on each other. There are different kinds of coupling.
One is platform coupling, when a service can only be called by an application that is built with the
same technology, we speak of a high degree of platform coupling. It's difficult to expose the service
to the outside world, which uses a multitude of programming platforms, when the service is tied to
one particular technology. Platform coupling also ties all current and future development within an
organization to one particular platform. There's also behavioral coupling: when the caller has to
know exactly the name of the method it's calling and what the parameters are, we speak of a high
degree of behavioral coupling. Here the caller determines what should be done, and probably has
some knowledge of how it's done by the receiving service. The downside of behavioral coupling
is that when you at some point want to change the way the service is called, all surrounding services have
to be adjusted and redeployed. The next type of coupling is called temporal coupling. What happens
if the receiving service is down? When the application as a whole can't function when the service is
down, because the caller is demanding a response and waits for it, then we speak of a high degree
of temporal coupling. With temporal coupling, different calls are handled synchronously, and
services in an architecture rely on other services to be up. The downside of having a high degree of
temporal coupling is that this can easily lead to disaster, which is very hard to program against. Let's
go back to the service-oriented architecture slide for a second. Let's imagine that this is an order
entry application, and something goes wrong with the third service being called. How is the user
notified of the error, how do we roll back the other services, how do we handle retries, and perhaps
more importantly, what happens to the order we don't want to miss out on? When the architecture
as a whole has a high degree of temporal coupling, services tend to be called synchronously. That
means the web server has to wait until all the calls to all services have been completed before it can
return a response to the user. Note that this has nothing to do with synchronous or asynchronous
code, like the async and await syntax in C#. Using that pattern, resources for the web server are made
available for other requests while our services do their work, but the user still has to wait for them all
to complete. Next I'll show you the commonly-known techniques RPC and REST.
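
As a tiny, hypothetical illustration of that last point (the client interface and names are made up): the await keyword frees the web server thread between calls, but the user's response is still delayed by the sum of all the sequential service calls.

using System.Threading.Tasks;
using System.Web.Mvc;

public class OrderController : Controller
{
    private readonly IOrderServiceClient salesClient, billingClient, shippingClient;

    public OrderController(IOrderServiceClient sales, IOrderServiceClient billing, IOrderServiceClient shipping)
    {
        salesClient = sales; billingClient = billing; shippingClient = shipping;
    }

    [HttpPost]
    public async Task<ActionResult> Confirm(Order order)
    {
        // Each awaited call releases the request thread for other requests,
        // but the response to the user is only sent after all three calls complete.
        await salesClient.SendAsync(order);
        await billingClient.SendAsync(order);
        await shippingClient.SendAsync(order);
        return View("ThankYou");
    }
}

public interface IOrderServiceClient
{
    Task SendAsync(Order order);
}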

RPC and REST

One of the earlier forms of distributed computing was Remote Procedure Call, or RPC. RPC is a
way to call a class' method over the wire. At first, the different programming platforms each developed
their own way to do this. Examples of frameworks are .NET Remoting and Java RMI. RPC has a high
degree of platform coupling, because the caller and the receiver must use the same technique. And
also a high degree of behavioral coupling, because the caller must know exactly what the names of
the methods on the other side are, and the types of the parameters must be known. There's also a
high temporal coupling. If one service is down, the application can't function, because all the calls to
different services are handled synchronously. So with traditional RPC, all three types of coupling are
evaluated as high, and we can speak of a very tightly-coupled architecture. In terms of the fallacies of
distributed computing, RPC could be a dangerous practice. Often proxy classes are generated that
can be used in exactly the same manner as a call to the local class. Sounds convenient, but
remember the danger of the assumptions developers tend to make. The network is reliable, for
example, is a very easy assumption to make in this case. When RPC techniques like .NET
Remoting had been used for a while, another type of RPC was introduced, called SOAP, or Simple Object
Access Protocol. SOAP is a way to standardize the way a method is called on a component. SOAP-
implementing components are called web services. A method is called by using XML, which is
platform agnostic. By using another technique called WSDL, the sending components, now
called consumers, can also discover what methods are available at the receiving end, and what the
parameter types look like. Because the way a method is called is standardized, many programming
frameworks and languages can consume the services. This solves the problem with platform
coupling. This is great when the service is available via the internet, and different kinds of consumers
exist, for example, programmed in Java and .NET. Also within an organization, this doesn't tie us to a
specific technique anymore. But note, I've not set platform coupling to 0, that's because although
SOAP is a standard, it tends to be implemented slightly differently by the different platforms, and this
can lead to problems and complexities during development. Believe me, I'm talking from experience
here. Recognizing some of the downsides of RPC, a new technique for distributed systems was
introduced, Representational State Transfer, or better known as REST. There are many properties of
REST explained by the Richardson Maturity Model shown here. RPC with SOAP is shown here as the
swamp of POX, where POX stands for Plain Old XML. Where RPC mostly
ignores the underlying protocol like TCP, REST uses the semantics of the transport
protocol. The most commonly used protocol is HTTP. One of the properties is that the methods in
the service are not directly exposed. All resources, like data, are available at specific URIs. What you
want to do with the resource is partly determined by how the call to the URI is made. An HTTP
request is composed of a verb together with the URI. The most commonly used verb in HTTP is GET,
but POST, PUT, and DELETE are also verbs. A pattern commonly used in REST is that GET gets you
data, POST will introduce new data, PUT will update it, and DELETE, well, you guessed it, will delete
it. Hypermedia controls are on the next level. This is a way to get the URIs from the service, so a
consumer knows where a certain resource is located. These work in the same way as hyperlinks on a
web page. You can click through to the next relevant page. In the REST model, when creating data with a POST
call, the response returns the unique URL where the new resource is located. It's also common to
have a well-known URI that exposes some starting points in the service. In this way, it is possible to
discover the URIs by using the service, and to surf the service, so to speak, without knowing all the
URIs up front. In terms of coupling, we score even lower with platform coupling, because almost all
programming platforms can deal with HTTP requests. Behavioral coupling is still present. Well, the
coupling is still there, but a bit looser. How loose depends on how you implement REST. With the
Richardson Maturity Model implemented to the max, behavioral coupling can get very low. Some
remains because of the initial URL that has to be known, but it's brought back to a minimum. But
temporal coupling is still at a maximum, because REST services still have to be up to do their jobs,
and consumers still have to wait for the response. Regarding the fallacies of distributed computing,
we're somewhat safer here than in the RPC realm. In most platforms, a distinct HTTP request has
to be made, so we are fully aware of the call over the network. However, we still need to provide for
network errors, timeouts, and bandwidth problems, of course.
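
As a rough illustration of that verb-to-operation pattern, here is a hypothetical ASP.NET Web API controller (not code from the course; the Order type and the route are assumptions):

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class OrdersController : ApiController
{
    private static readonly List<Order> Orders = new List<Order>();

    // GET /api/orders -> reads data
    public IEnumerable<Order> Get() => Orders;

    // POST /api/orders -> creates data; the response contains the URI of the new resource
    public IHttpActionResult Post(Order order)
    {
        Orders.Add(order);
        return Created($"/api/orders/{order.Id}", order);
    }

    // PUT /api/orders/5 -> updates existing data
    public IHttpActionResult Put(int id, Order order)
    {
        var existing = Orders.FirstOrDefault(o => o.Id == id);
        if (existing == null) return NotFound();
        existing.Destination = order.Destination;
        return Ok(existing);
    }

    // DELETE /api/orders/5 -> deletes data
    public IHttpActionResult Delete(int id)
    {
        Orders.RemoveAll(o => o.Id == id);
        return Ok();
    }
}

public class Order
{
    public int Id { get; set; }
    public string Destination { get; set; }
}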

Demo: Simplified SOA App

Business is going great at Fire On Wheels. Even Amazon wants Fire On Wheels to deliver packages
for them. But of course, they don't want to fill out a web form for every order, it has to be
automated. So Fire On Wheels decides to make a REST-based service available. Here is the same
application as we saw before, but now there's an extra project in the solution. This is an ASP.NET
Web API REST web service. It contains a controller, which can receive an order and process it. The
class that sends the email has been moved to the service. In the controller for the web application,
the Confirm method is changed. Instead of calling the method on the local class, now an HTTP
request is made. With the REST service in .NET, it is very clear that I'm going over the wire here. With
an RPC system, this would probably be like calling a local class again. The advantage of this approach
is that dispatching of an order is now in a separate deployable unit in the form of the web service.
Other parties can now enter an order with a different application running on another platform. So
there is no platform coupling. And the web service in the web application can be separately
developed. There is some overlap in terms of code, because they both have to know what an order is. The
web application has to know what URL and what HTTP method is needed to send the order, so
there's behavioral coupling. But the user is still waiting for the web page, just like with the monolithic
application. It is even taking longer because the request and the response have to go over the wire
now, and there's a vulnerability: what happens when the web service is down? We're assuming a
good outcome in the code, but it just throws an exception if something goes wrong, so there's a
strong temporal coupling. We have to come up with some error handling and retry mechanism,
which could get very complicated. In the next clip, you will see microservice architecture explained.
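
Here is a minimal sketch of what the changed Confirm action might look like, assuming an HTTP POST to a hypothetical api/dispatch endpoint with HttpClient; the URL and names are illustrative, and the Order model is assumed to be the existing one, so this is not the actual course code.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class OrderController : Controller
{
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("http://localhost:5000/")
    };

    [HttpPost]
    public async Task<ActionResult> Confirm(Order order)
    {
        // The dispatching now happens in the separate web service, but the
        // web application still waits for the call, and if the service is
        // down this simply throws an exception: strong temporal coupling.
        var response = await Client.PostAsJsonAsync("api/dispatch", order);
        response.EnsureSuccessStatusCode();
        return View("ThankYou");
    }
}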

Microservices

Here is a definition of microservices. Firstly, just like SOA, this is an architectural style. Because the
microservice architecture is actually a new version of the original service-oriented architecture idea,
it is sometimes called SOA 2.0. Microservices architecture is an architecture for complex
applications. The services are small and dedicated to doing a single task, for example created around
features or departments in the business, and autonomous, which means there is no shared
implementation code, and also no shared data. Every microservice has its own database or other
data store, so each service can have a data store that suits the particular kind of service. They
communicate using language-agnostic APIs. So the services are loosely coupled and don't have to
use the same language or platform. Some communication mechanism has to be in place that is
consumable by all these different apps. So should you use the microservice architecture? Even the
microservices architecture is not the holy grail of system design. It is more complex than a monolith
or a traditional service-oriented architecture. So that's something a development team should be
able to handle. A less complex application with a monolithic architecture wins over microservices in
terms of productivity. But when complexity increases, productivity with monoliths starts to fall
rapidly, whereas with the microservices architecture it tends to stay relatively stable. That's because
although the architecture as a whole is more complex, the architecture of each individual
microservice tends to be simple. So is your application complex? If you think your application in a
monolithic architecture should be composed of many components that are maybe intertwined in a
complex manner, then the answer is probably yes. Here's a practical example of how microservices
could work. We have the web application again, where the user submits an order from the web browser to the web
server. This could be ASP.NET, for example. To process the order, ASP.NET sends a command in the
form of a message to the microservice responsible for processing the order. Messages are routed and
queued by a backend system, more on that later. The ASP.NET application waits for a confirmation
that the message has been received, and returns the response to the user's browser. This response
now is not "the order has been processed", but more along the lines of "the order is being processed". The
order is picked up from the queue when the service has time to process, and after the order is
successfully saved in the database, or some other processing occurs, the order service will emit a
message saying I'm done. Contained within the message is the relevant order information. This kind
of message is called an event. It isn't directed to particular receivers, so the services are loosely
coupled, but other interested services could subscribe to that event. The message is then put in the
queue for these services. The ASP.NET application itself could also be interested in that event,
because it wants to notify the user that the order has been processed. ASP.NET could, for example,
relay that information with a web sockets technique like SignalR. The picking service listens to that
event as well, and starts the picking process, emitting another event when it's done picking, which is
again picked up by the ASP.NET application, and relayed to the user. The microservices architecture
enables a very loose coupling. The only platform coupling that could be left is the inability of some
platform to connect to the message queue. With RPC and REST, the behavior was in part
dictated by the caller, and the caller had to have some knowledge about how the request was
handled by the receiver. All we have left now is a message with some data. How it is handled is
entirely determined by the service that has to process it. When sending a command or receiving an
event, the service doesn't have to be up, because the message is safe in a queue until the service
becomes available. In terms of the fallacies of distributed computing, we still have to watch out; we
still have a network dependency, for example. The network could be used even more than in a SOA
app, and we still have the same concerns about latency and security. Let's look at some properties of
microservices. Microservices are easier to maintain than other techniques. Because they are very
loosely coupled and have a high cohesion, the responsible team can do most of the work separately
from the other teams working on other services. The overall architecture is more complex, but an
individual microservice's architecture is probably not complex. Versioning of services is easier than
with SOA, because they are so loosely coupled, a new version of the service could run side-by-side
with an old version. It is also possible to let one microservice have multiple message-processing
endpoints, each supporting a different version. And it's easy to expand the architecture with some
try-out feature in the form of a service. If it doesn't work, just throw it away. Each service can use a
technology best suited for that service. Programming language, framework, platform, and database can
be chosen per service. Hosting is flexible: physical machines, virtual machines in the cloud, or
a container format like Docker are all possible. And each service can be individually scaled.
When one service fails, the application as a whole keeps functioning. Orders are still being accepted
in our earlier example; when the order service is down, the message stays in the queue, and can
be picked up as soon as the service is running again. And when running each service on a separate
VM or Docker container, they are highly observable, because CPU and memory pressure can be
monitored for each service individually, and it will become immediately apparent which service should
be fixed or optimized. To keep track of services, I would recommend you take a look at products like
ZooKeeper and Consul. To make a microservice completely autonomous, each service should have
and expose its own user interface. That way from a UI perspective, when one service fails, all UI
elements are still shown, except for the UI from that one service. It's a technique Amazon uses, for
example. So we need a way to compose the interface from all services. One way to do it is for a web
application to use a single-page application, also known as a SPA. Using a JavaScript framework like
Angular, Durandal, or Aurelia, the browser reaches out to each service to get the UI, and brings them
together in one UI. I would recommend a way for services to discover each other in the form of
name-to-IP-address resolution, for example. This prevents a tight coupling between the service and its
URI. This could be done with DNS or some other product like Consul or ZooKeeper. Where a
normal web application could maybe get by with a simple cookie, now some security
mechanism has to be in place that supports the multi-service scenario. A way to do this is with OAuth
2 and OpenID Connect. One particular team can work on one microservice, and also deploy it
separately from the other teams. This is a big advantage compared to monolithic applications, where
everything has to come together and be tested before deployment. Continuous deployment,
essentially deployment after each check-in, is very straightforward to implement. Microservice
architecture solves a lot of problems, but it also introduces complexity: security, UI, discoverability,
hosting, messaging infrastructure, and monitoring all become much more complex than
programming a monolithic application. It's all solvable, but I'd like to emphasize here that this is not
for every application, and not for every team. Also, something about reuse: when I started
programming, I was taught, whatever you do, don't repeat yourself. Implementing the same logic in
multiple places was considered a dire sin, but business rules in a SOA and microservice architecture
have to exist in every service that needs them, because services must be autonomous. So my team and I came up with a lot of
complex solutions to maintain the DRY principle. All these great ideas eventually led to an
application that is not maintainable anymore, because of complexity or some new feature that
doesn't fit in the existing framework. Should a business rule change, source control is your friend; it
can help you find all the places where you have implemented the rule. We also have to let go of the
big database that contains everything there is to know. Data is and should be duplicated in a
microservice architecture. Let's take the classic example of a customer. The order entry service
probably has to know something about the customer. The picking service has to have info about the
same customer, but it only stores information relevant for picking. The customer is communicated
in a message, and the individual services take the information they need from the message and
store it in their own database. In the next clip, we'll see what part the service bus plays in
microservice architecture.
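
As a small, hypothetical sketch of that last idea, two services keep their own, differently shaped view of the same customer, each populated from the data carried in a message (all type and property names here are illustrative):

using System;

// Message contract carrying the customer data (data only, no behavior).
public class CustomerRegistered
{
    public Guid CustomerId { get; set; }
    public string Name { get; set; }
    public string BillingAddress { get; set; }
    public string DeliveryAddress { get; set; }
}

// The order entry service stores what it needs, for example for billing.
public class OrderEntryCustomer
{
    public Guid CustomerId { get; set; }
    public string Name { get; set; }
    public string BillingAddress { get; set; }
}

// The picking service stores only what is relevant for picking.
public class PickingCustomer
{
    public Guid CustomerId { get; set; }
    public string DeliveryAddress { get; set; }
}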

The Service Bus

The service bus enables the different services to send and receive messages in a loosely-coupled
way. Every service uses a framework that enables programmers to send the messages and to receive
the messages. The service bus is in every service, and it's just a tool, a set of classes around the
sending and receiving of messages. Every service has an endpoint with which it can receive and send
messages. A service can send or publish a message. The sending of a message is like a command to
one specific service. The publishing of a message is like radio broadcasting, everyone who wants to
listen can listen, but nobody has to. The message is sent to an underlying messaging system, for
example MSMQ or RabbitMQ. That messaging system should be dumb and contain no business
logic, everything smart should be in the service itself. The messaging system only routes the
message to the inbox of another service, or multiple services in the case of a published event. If the
receiving service is up, it will be notified via the service bus that a new message has arrived, and if
processed successfully, it will be deleted from the queue, after which the service can process the
next one. Each endpoint is connected to one particular queue, but one service can contain multiple
endpoints. The service bus framework we will use throughout this course is a framework for .NET
called NServiceBus. NServiceBus can only be used for .NET services, but because NServiceBus lies on
top of a messaging system, bus implementations for other platforms can be used as well, as long as
they are compatible with the same underlying messaging system. Don't confuse the service bus with
an enterprise service bus. An enterprise service bus is an attempt to connect many different kinds of
services and applications together, and to implement orchestration and business rules in the bus
itself. The main difference here is that in our service bus, there is no logic in the pipes, meaning the
thing that is between the services is dumb, all the logic when programming with the service bus and
microservices is always in the service behind the endpoint. So where are the demos for
microservices and the service bus? Well, I have to do a bit more explaining in the next module before I
can demo this for you, so hang on and watch the next module first.

Summary

Here is the summary. Monolithic applications are fine, but when they become more complex, they
tend to become unmaintainable and rewrites are required. RPC and REST are great techniques to get
started with the service-oriented architecture, but they are not very loosely coupled. This could lead
to maintainability issues as well. Microservice architecture solves many of these problems, but it's
not a holy grail, it's not for everyone and for every app, because it gets very complex in the
infrastructure. And finally, the service bus is something you use in code to make the sending and
receiving of messages possible. Messaging is the way to let autonomous services talk to each other.
The service bus utilizes some messaging backend like MSMQ or RabbitMQ. In the next module,
we'll look at NServiceBus as a framework, and you'll learn all the ins and outs of how to start with
that framework.

Messaging with NServiceBus

Introduction

After this module, you'll know everything that is needed to implement a microservice architecture
with NServiceBus. First you'll have to know what NServiceBus is and how to prepare your machine
for it. Then I'll dive right in with a demo, after which we will take a look at the different kinds of
messages and how they are routed. There's also lots to tell you about the configuration of
NServiceBus. In this section, you'll also see the different transports NServiceBus can use.
NServiceBus is fault tolerant by default, and we'll see the ins and outs of that. And lastly, we'll look at the
request/response messaging pattern.

What Is NServiceBus?

When .NET came out, there was minimal support for MSMQ, the messaging system embedded in
Windows since 1997. The beginnings of what later would be named NServiceBus were created to fill
the gap. In 2007, NServiceBus was released as an open-source project. Udi Dahan is the creator of
the framework. He wanted to ensure a great future for NServiceBus, but at the same time saw a lot
of good open-source projects die because of lack of attention. Therefore he decided to charge a license
fee so he could found a company that could further develop NServiceBus, ensure its future, and
attach professional support to the product. I'm going to show you many features of NServiceBus
throughout the course, but first things first: NServiceBus is a .NET framework that enables you to
implement communication between apps using messaging. One possible route you could take with
this is the microservice architecture. I'm focusing on microservices in this course, but if microservices
are not for you, you can use everything I'm telling you about NServiceBus for your own scenario.
NServiceBus is maintained by a company called Particular Software, and it's part of a suite called the
Particular Service Platform. The other applications in this suite are supporting NServiceBus, and I'll
cover them later in the course. The framework lies on top of messaging backends or transports. You
could say it's an abstraction of the messaging backends. While NServiceBus started out as a
framework supporting only MSMQ, it is now supporting other transports like RabbitMQ and Azure. It
can even use SQL Server as a transport. Which transport you're using is a configuration detail; you
can, for example, supply it in a config file. The usage of NServiceBus remains the same regardless of
the transport you're using, that means you're not limited to a particular transport. And NServiceBus
is open source, but not free: a license fee is required. In the next clip, I will show you how to prepare
to use NServiceBus.
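
As a rough sketch of what that looks like in code (this follows the NServiceBus 6-style configuration API and belongs inside your endpoint setup code; treat the exact type names as assumptions and check the documentation for your version):

var endpointConfiguration = new EndpointConfiguration("FireOnWheels.Order");

// Swapping the transport is a one-line configuration change (plus the matching
// NuGet package); the rest of your NServiceBus code stays the same.
endpointConfiguration.UseTransport<MsmqTransport>();
// endpointConfiguration.UseTransport<RabbitMQTransport>();
// endpointConfiguration.UseTransport<SqlServerTransport>();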

Preparation

Let's dive in straightaway with a demo, but first you should install NServiceBus. To do this, I
recommend downloading the entire Particular platform from this URL. When you start the
downloaded setup.exe, you'll see this dialog. I'll cover all of the applications in the platform, but if
you're only interested in NServiceBus, feel free to uncheck the rest. Notice that NServiceBus isn't
actually installed, your machine is just prepared for it. This configures MSMQ and the DTC, more on
these later, correctly and automatically, and installs NServiceBus Performance Counters. You'll use
NuGet packages to actually download the framework in the demo project. NServiceBus is very
pluggable and extensible, so you can download a lot of different NuGet packages supporting things
like dependency injection frameworks and databases. However, the core of NServiceBus consists
of three packages. You'll need the first NuGet package, called NServiceBus, the most. It contains the
complete framework, with MSMQ as a transport as well. Other transports are supported with other
NuGet packages. It also contains everything you need to self-host NServiceBus with an app type of
your choice, for example ASP.NET MVC or WPF. It's not necessary to self-host; you can also create a
DLL containing the service, and let the NServiceBus host package do the hosting work for you. That package
contains an executable that behaves as a command-line application during debugging, and it can be
easily installed as a Windows service. There's also a package called NServiceBus.Testing to help with
unit testing, especially with sagas. And in the next clip, I will tell you the background story that you
need to know before I go into the next demo.

Adapting an Architecture for an Existing Service

While we were talking, a big problem arose at Fire On Wheels. Business is getting better and better, but
Amazon is complaining that the web service is unavailable, and customers using the website are
complaining that after they filled out the web form, an error page appeared telling them the order
couldn't be processed, and they have to type in the whole order again. Upon investigation, the
problem lies in the fact that the web service is overloaded, because everything is handled
synchronously. The web server is waiting until the order is actually processed before it returns to
the user. In the meantime, Amazon is shooting orders at the service as well. The web service is
failing because it can't handle all the load. The CEO hires a new software architect, who is told
about the problems. The CEO wants her to think of a solution that takes further growth of the
company into account, and in the meantime optimizes the user experience. He wants to minimize the
risk of ever losing an order again as well. The first thing the architect thinks about is just scaling up or
scaling out the web service. But given the demands by the CEO and the fact that processing the
order isn't really time critical in the sense that it doesn't matter if the order takes 1 or 5 seconds to
process, she comes up with a solution that will queue the orders, after which they will be picked up
one by one by the order service. Since the company will probably grow exponentially, and business
processes will become more complex, she comes up with a solution that could be the beginnings of
a complete microservice architecture. She's going to use messaging with NServiceBus. I left the Fire
On Wheels architecture in the previous module looking like this. A user can enter an order through
the website, which then calls the web service, which processes the order, or an automated process
like Amazon can call the web service directly. And you know the shortcomings of this method if
you've watched the previous module. In the new architecture using messaging, the landscape looks
like this. Amazon still needs to connect to the web service the way it was, and from a user
perspective, the web page should remain the same. So we're going to create a new service that is
going to process the orders, and both the website and the web service are going to command that
service to process the orders. The command is in the form of a message that is queued by the
underlying transport. There are two senders of the message, the existing web service used by
Amazon and the MVC web application where the users manually enter the order.

Demo: Fire on Wheels Goes NServiceBus

So here's the finished Fire On Wheels Visual Studio solution with the new architecture. I've prepared
it up front to save you from watching me type. I'll guide you through the code step by step. First I
renamed the projects to more sensible names. The second step is to create an assembly that can be
shared between the website, the web service, and the new service. It contains the messages we will
send and receive as C# classes that only contain properties. They will form the contract defining how
the applications communicate. I also added the NServiceBus core NuGet package to the new library.
I want to command the new service to process the order with a message, so I created a class called
ProcessOrderCommand. I let this class implement the ICommand interface from the NServiceBus
assembly. This is just a marker interface without the need to implement something in the class for
the interface. It lets the NServiceBus framework know this is a command, and it registers it
automatically because of the interface. There's a more elegant way to do this by using Unobtrusive
mode, which I'll show you later in the course. In the message, I want all the relevant order
information in the form of properties. Next I created the new service as a console application, and I
added the NServiceBus NuGet package. The Program.cs creates an endpointConfiguration object,
specifying the name of the endpoint. This results in the automatic creation of a queue in MSMQ with the
same name. As you can see, I configure MSMQ as a transport, and I set it up to use
InMemoryPersistence. I will cover the other configuration settings later. The endpoint is created by
calling Start on the static Endpoint class. From NServiceBus 6 on, almost everything happens
asynchronously. As you can see, the starting of the endpoint is also done asynchronously. We get a
further performance gain by specifying ConfigureAwait as false, which prevents the calling thread's
context from being passed to the continuation, which we don't need for sending a message. Once
the endpoint is live, I just wait for someone to press a key. If that's done, I stop the endpoint. Next
I've created a class that will consume, or in other words respond to, the command, called
ProcessOrderHandler. It will automatically be recognized by NServiceBus as handling messages of our
message type ProcessOrderCommand, because NServiceBus scans the assembly for usable types. I
let the class implement the IHandleMessages of ProcessOrderCommand interface. When I
implement that interface, you can see a Handle method is created with the message and the
IMessageHandlerContext as parameters. As we'll see shortly, we can use the context object to send
subsequent messages, and much more. In the body of the method, I need to do the work the
web service did before, so I brought over the EmailSender class, made a method inside it async, and
send the email. I also logged the fact that the order has been received. My EmailSender in the demo
doesn't do anything; in real life, you'll probably want to leave the sending of the email to a
separate service. The next step is to send the message from the REST web service. In the Global.asax
file of this Web API service, from the Application_Start method, I'm calling ConfigureEndpoint. I
configure and start the endpoint in the same way as I did with the Order service. Although NServiceBus
uses its own dependency injection container internally, I use Autofac (or any other dependency
injection container) to set up injection into the Web API controllers, because that's not supported by
NServiceBus' container. The builder class for Autofac is called ContainerBuilder. Here I register the controllers with
the container, and here the endpoint is registered. Finally I'm building the container, and tell Web
API to use it by setting the DependencyResolver on its Configuration object. But from now on, I can
inject the endpoint instance in a controller to do operations with messages. Now I need to replace
the previous code in the DispatchController with the sending of the command. For that, I inject the
Endpoint object into the controller, and send a new instance of the ProcessOrderCommand message
containing the order details. As the first parameter, I specify where to send it to. We will later see
that the routing for messages can also be done in the web.config file, instead of specifying the
destination of the message here directly. The ProcessOrderCommand will be sent to the
FireOnWheels.Order queue. Note that the Send method on the endpoint is also asynchronous, so I
await it in an async method. The big advantage of doing things asynchronously like this is that the
actual work involved, in this case sending the message, doesn't block the thread the
controller runs on, so while the message is sent, the controller is able to process other requests. What
about the website? I could leave the code as it was, and it will still work, because the website calls
the web service, which sends the message, which is then picked up by the order service. But the
need to call the web service is gone now, it is just extra overhead. I can just add the package of
NServiceBus to the website, add the configuration in the Global.asax like I did with the REST API, and
inject the Endpoint instance in the controller, and send the same message to the OrderEndpoint. I
left the pricing process of an order intentionally as it is, so setting a price won't work in the web
service. I'll come back to that later in the module. When running the NServiceBus service for the first
time, it will use MSMQ as a transport by default. The missing queues will be created automatically,
but the service needs to run under a user that has permission to do that. The easiest way to get
around that is by just starting Visual Studio as administrator before running. When I run the solution
with multiple starter projects in Visual Studio, I see that everything is running as before, but now
with the Order service running as a console application. Of course you can also use other project
types like a Windows service to host your endpoints. When I put a breakpoint in the new service, I can
see that the message is picked up. Next I'll show you the details about messages.
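
To tie the steps above together, here is a compact sketch of the pieces just described, using the NServiceBus 6-style API; the property names on ProcessOrderCommand, the EmailSender.SendAsync call, and the Order model are assumptions for illustration, not the actual course code.

using System;
using System.Threading.Tasks;
using System.Web.Http;
using NServiceBus;

// Shared messages assembly: a command is a plain class with only properties.
public class ProcessOrderCommand : ICommand
{
    public string Origin { get; set; }
    public string Destination { get; set; }
    public int Weight { get; set; }
}

// Order service: Program.cs of the console application hosting the endpoint.
// (async Main needs C# 7.1+; on older compilers, wrap it in a synchronous Main.)
class Program
{
    static async Task Main()
    {
        var endpointConfiguration = new EndpointConfiguration("FireOnWheels.Order");
        endpointConfiguration.UseTransport<MsmqTransport>();
        endpointConfiguration.UsePersistence<InMemoryPersistence>();

        var endpointInstance = await Endpoint.Start(endpointConfiguration)
            .ConfigureAwait(false);

        Console.WriteLine("Press any key to stop.");
        Console.ReadKey();

        await endpointInstance.Stop().ConfigureAwait(false);
    }
}

// Order service: the handler, found automatically by assembly scanning.
public class ProcessOrderHandler : IHandleMessages<ProcessOrderCommand>
{
    public async Task Handle(ProcessOrderCommand message, IMessageHandlerContext context)
    {
        // Do the work the web service did before: send the email.
        await new EmailSender().SendAsync(message).ConfigureAwait(false);
        Console.WriteLine("Order received.");
    }
}

// Web service / web application: sending the command from a controller,
// with the endpoint instance injected by the container.
public class DispatchController : ApiController
{
    private readonly IEndpointInstance endpoint;

    public DispatchController(IEndpointInstance endpoint)
    {
        this.endpoint = endpoint;
    }

    public async Task<IHttpActionResult> Post(Order order)
    {
        var command = new ProcessOrderCommand
        {
            Origin = order.Origin,
            Destination = order.Destination,
            Weight = order.Weight
        };
        // The destination can also be configured via routing instead of here.
        await endpoint.Send("FireOnWheels.Order", command).ConfigureAwait(false);
        return Ok();
    }
}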

Messages: Commands and Events

Without explaining much, I immediately took a dive into a working demo. Now let's take some time
to explore everything you saw in more detail. Here's the MSMQ tool to show what queues were
created in MSMQ. You can reach this screen by going to Computer Management, Services and
Applications, Message Queuing, and then Private Queues. You can see NServiceBus has a main
queue for each endpoint, as well as some supporting queues. You can also see an error queue and
an audit queue are created, which are specified in the config file of the new service. In the demo, I
introduced you to commands. Commands are messages in the form of a C# class containing data in
the form of properties. Commands can have multiple senders, but always have one receiver. They are
sent by using the Send method. You need to tell NServiceBus that a specific class is a command by
using the ICommand interface, or by using Unobtrusive mode, which I'll reveal later in the course.
Names of commands are in the imperative, like ProcessOrderCommand or
CreateNewUserCommand. Before we dive into events, first some things you need to know up front. One of these
things is dependency injection. If you're not familiar with it, please watch one of the courses on it,
because NServiceBus relies heavily on it. NServiceBus uses a built-in dependency injection system,
which is a lean version of Autofac contained in the core. It will only inject NServiceBus-related types
into objects managed by NServiceBus, such as a class implementing IHandleMessages. If you want
more, such as injecting IBus instances into MVC controllers, etc., you can plug in virtually any existing
container. Maybe you've asked yourself a question during the previous demo, how does
NServiceBus know that it should call the class implementing IHandleMessages? Well as a starter,
NServiceBus by default scans all the assemblies that are in the same BIN directory, finding and
registering all the types it needs. If you don't want to scan all assemblies, the scanning can be limited
in the config file to scan only certain assemblies. So here are events. Like commands, events are also
messages. Events are implemented as C# interfaces. They are different because they always have
one sender and multiple receivers, so it's just the other way around in comparison with commands.
Send an event by using the Publish method. Events implement the publish/subscribe pattern, so
receivers interested in a specific event must register themselves with the sender. The MSMQ
transport doesn't support this directly, so NServiceBus stores the subscriptions in the configured
storage. You must mark all event types with the IEvent marker interface. Events are typically
published to signal something that is done, so name them in the past tense like
OrderProcessedEvent or NewUserCreatedEvent. Please look at the previous module if you want to
know more about how this works in microservices architecture. Routing is up for the next clip.
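
A small sketch of the two kinds of messages and the naming conventions just described (hypothetical names; the commented send and publish lines assume an IEndpointInstance and an IMessageHandlerContext are available inside an async method):

using NServiceBus;

// A command: a class, can have many senders but one logical receiver,
// named in the imperative.
public class CreateNewUserCommand : ICommand
{
    public string UserName { get; set; }
}

// An event: an interface, one publisher and any number of subscribers,
// named in the past tense.
public interface INewUserCreatedEvent : IEvent
{
    string UserName { get; set; }
}

// Sending versus publishing:
//   await endpoint.Send("FireOnWheels.Users", new CreateNewUserCommand { UserName = "jan" });
//   await context.Publish<INewUserCreatedEvent>(e => e.UserName = "jan");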

Routing Messages

In the previous demo, I was sending a command message by specifying a specific endpoint name as a string in one of the overloads of the Send method. But instead of specifying the destination every time you send a command, there is a routing option that lets you send certain kinds of messages to the same endpoint through configuration. For events you also have to specify where your endpoint should register the subscription. There are two choices for routing: the config file or the routing API. I'll show you both, but the routing API is recommended, since future versions of NServiceBus are moving to that option. Here's the tricky part of the routing story. When doing routing for a command, it is quite obvious: we just route the message to the endpoint that should receive it. With events, keep in mind that the routing must point to the publisher of the event, in other words, to the place where the subscription should be registered. For the routing API you must first get the object that does the routing. You get that object by calling the Routing extension method at the time you configure the transport. For commands you can use the RouteToEndpoint method. In the first example, all messages in the assembly the AcceptOrder class is in are routed to the Sales endpoint. You can also limit that to a certain namespace in that assembly, or just configure routing for one message type. To register your event subscriptions, use the RegisterPublisher method on the routing object. The rest of the syntax is the same as with commands. Here is what to do if you choose the config file option. The config file should contain custom config sections for NServiceBus. You then specify the rules of the routing, called mappings, in the UnicastBusConfig section in the MessageEndpointMappings node. Every mapping is contained in an add node. There's no separate syntax to route commands and events when you use the config file option. Here again are examples that map all messages in an assembly to one endpoint, only the messages in a certain namespace, and one specific type in an assembly. Next, I'll demo Events and Routing.
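Before that demo, here is a rough sketch of what those routing API calls could look like in code. The endpoint and message names follow the examples above, but the snippet is an illustration under the assumption of the MSMQ transport and a recent NServiceBus version, not the exact course code.

    using NServiceBus;

    static class RoutingConfiguration
    {
        public static void Apply(EndpointConfiguration endpointConfiguration)
        {
            var transport = endpointConfiguration.UseTransport<MsmqTransport>();
            var routing = transport.Routing();

            // Commands: route a whole assembly, a namespace within it, or a single type.
            routing.RouteToEndpoint(typeof(AcceptOrder).Assembly, "Sales");
            routing.RouteToEndpoint(typeof(AcceptOrder).Assembly, "MyMessages.Commands", "Sales");
            routing.RouteToEndpoint(typeof(AcceptOrder), "Sales");

            // Events: point to the publisher, so the subscription is registered there.
            routing.RegisterPublisher(typeof(IOrderProcessedEvent), "FireOnWheels.Order");
        }
    }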

Demo: Events and Routing

In this demo, I want to show you how an event is published. I use our existing Fire On Wheels
solution to do it. When the order is successfully processed, we're publishing an event that is picked
up by the web application, that has a subscription to the event. I've already added the event to the
messages assembly. Note that this is an interface, so only interfaces are shared between publisher
and subscriber, not concrete classes. We mark the interface with the marker interface IEvent. Now we switch to the ProcessOrderHandler in the order service. I use the context instance passed into the Handle method to publish the IOrderProcessedEvent message. NServiceBus will create the implementing class behind the scenes; I just have to tell it what the content of the message should be by specifying a lambda. In the web application, I've added a folder called Handlers, and added a class implementing
IHandleMessages exactly like we did with the receiving of a command. In the Handle method, I
could, for example, notify the user with SignalR or a similar technology, but this is outside the scope
of this course. The final step is to add the routing. As I mentioned, the routing of an event must have
the sender of the message as a destination, and the routing is done in the config file. Here it is: I specify the exact message type and the assembly the type resides in, and I tell
NServiceBus to go subscribe to this event at the FireOnWheels.Order endpoint. Let's put a
breakpoint in the handler for the event in the MVC application. Now we run. Notice that in the
commandline window in the order service, there is now a notification about the new subscription.
When we fill out the order screen and confirm the order, the breakpoint is hit. Configuration is a
topic for the next clip.

Configuring NServiceBus

The configuration of NServiceBus relies on defaults. The configuration can be a combination of code and the config file. That way, options that could change after deployment can be in the config file. Looking at the FireOnWheels.Order service, there is not much explicit configuration going on. In the config file, I specified the error and the audit queue, as well as the routing. The only thing that is really needed in the code is the UsePersistence call. When not specified, the endpoint name is taken
from the namespace the configuration class resides in. The default transport is MSMQ, and there are
lots of other defaults. The EndpointConfig class implements an interface called
IConfigureThisEndpoint. Classes implementing this interface are picked up when NServiceBus scans
the assemblies, and they are called first when NServiceBus starts up. There's also an
INeedInitialization interface. One purpose of implementing this interface is to make an assembly
with one or more classes implementing this interface, and define company defaults and
conventions, and then reference the assembly from all projects that use NServiceBus. In that way,
IConfigureThisEndpoint can be used to just override these if necessary. You can also create the endpoint as send-only for endpoints that only send messages and never receive them. NServiceBus will not waste processing and resources on the receiving of messages if you create the endpoint as SendOnly. Now
let's look at serialization in the next clip.
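Before that, here is a rough sketch of how a self-hosted endpoint could be configured in code. The endpoint and queue names are the ones used in this course; the individual calls are my own illustration and assume a recent NServiceBus version.

    using System.Threading.Tasks;
    using NServiceBus;

    class Program
    {
        static async Task Main()
        {
            // The endpoint name determines the name of the main input queue.
            var endpointConfiguration = new EndpointConfiguration("FireOnWheels.Order");

            // Options the course keeps in the config file can also be set in code.
            endpointConfiguration.SendFailedMessagesTo("error");
            endpointConfiguration.AuditProcessedMessagesTo("audit");

            // Persistence always has to be chosen explicitly.
            endpointConfiguration.UsePersistence<InMemoryPersistence>();

            // For endpoints that only send messages, skip the receiving infrastructure:
            // endpointConfiguration.SendOnly();

            var endpointInstance = await Endpoint.Start(endpointConfiguration);
            // ... run the application ...
            await endpointInstance.Stop();
        }
    }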

Message Serialization

NServiceBus has to serialize the message classes. The serialized classes form the body of the
message in the underlying transport. The default serialization used is XML, but in future versions of
NServiceBus, this will change to JSON. You can set the serialization by using the UseSerialization
method on the configuration object. Other serialization types supported out of the box are BSON
and Binary, or you can write your own if needed. In the next clip, I'll show you logging.

Logging

NServiceBus features a built-in logging mechanism, but you can easily install your favorite logging
framework by downloading a supporting NuGet package for it. The default logging contains five
logging levels. When running in NServiceBus-hosted mode, all logging messages are output to the console. They're also written to the trace object, for which you can configure the output yourself using standard .NET techniques. A rolling log file is also supported, with a default maximum of 10
MB per file, and 10 physical log files. The default log level threshold for messages going to trace and
the file is info, but it can be adjusted in the config file. In the next clip, I'll talk about persistence.

Persistence Options

The features of the transport define what NServiceBus should store, for example, MSMQ doesn't
support subscriptions, so the handling of that must be done in some other way. Also the state of
sagas has to be stored somewhere. There is no default persistence, so you always have to define it,
or else NServiceBus will throw an exception at startup. Again you can write your own persistence.
Here are the ones supported by Particular out of the box. InMemoryPersistence is the only persistence class built into the core assembly; it is of course only suitable for testing and demo purposes. All other persistence options require an additional NuGet package. NHibernate is an ORM that
supports many relational databases, but only SQL Server and Oracle are officially supported. When
using NHibernatePersistence, additional configuration is needed in the form of a connection string in
the connection string section of the config file. NServiceBus will automatically create the schemas
necessary when they are not present, provided it has permission to do so. RavenDbPersistence supports
the document database RavenDb, and Azure Storage Persistence leverages the very cheap and easy
Azure Table Storage. Azure Service Fabric is also supported. It's also possible to use multiple
persistence mechanisms at once. You can instruct NServiceBus to use separate storages for
subscriptions, sagas, saga timeouts, and the Outbox feature, for example. You'll learn about
transports in the next clip.
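As a small sketch of how a persistence choice is expressed in code (the NHibernate line assumes its separate NuGet package and a connection string in the config file):

    using NServiceBus;
    using NServiceBus.Persistence;

    static class PersistenceConfiguration
    {
        public static void Apply(EndpointConfiguration endpointConfiguration)
        {
            // Built into the core; fine for demos, everything is lost on restart.
            endpointConfiguration.UsePersistence<InMemoryPersistence>();

            // For example, NHibernate persistence instead (separate NuGet package):
            // endpointConfiguration.UsePersistence<NHibernatePersistence>();

            // Mixing is possible too, for instance a separate store for subscriptions only:
            // endpointConfiguration.UsePersistence<InMemoryPersistence, StorageType.Subscriptions>();
        }
    }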

Configuring Transports

Let's look into transports. The ones I'm talking about in this course are the ones supported by
Particular Software, but there are more community-produced transports. We already saw MSMQ at
work, it is the default transport for NServiceBus, and the only one built into the core. MSMQ is native to every Windows server. It works in a decentralized way, meaning there's nothing between the services; every server has its own queues stored locally. When a message is sent, it is placed in an outgoing queue local to the server, and then the MSMQ system takes care of delivering the message to the incoming queue of another server, or in our demo, of the same server. This is called store and forward, and it implies that once a message is sent, it will arrive at the destination sooner or later. When the sender of the message goes down right after sending the message, or the receiver is down, the message will stick around in the queue until it can be delivered. Events with publish/subscribe are not supported in MSMQ, so NServiceBus has its own mechanism for that, using its persistence setting. Keep in mind that every time a message is published, the persistence storage is checked to see if there are any
subscribers. RabbitMQ uses a different style of message processing. It is a broker, that means there's
one server or cluster running RabbitMQ on which the messages and queues reside. It is centralized.
When a message arrives at RabbitMQ, it is processed by something called an exchange first, which
will route the message to one or more other queues. RabbitMQ has much more configurable routing built in compared to MSMQ. NServiceBus uses this routing mechanism, configuring it when
needed. But because it's centralized, it could be the single point of failure for your application. Be
sure to cluster in a production environment. RabbitMQ runs on multiple platforms such as Windows,
Linux, and Mac OS. There are also clients available for nearly every programming language. In the first module, I talked about the possibility to run microservices on different operating systems. Using MSMQ will limit you to Windows, but RabbitMQ doesn't have that limitation. But since NServiceBus isn't officially supported on other platforms, using Mono for example, you probably have to use something other than NServiceBus on the microservices running another OS. You could try an official RabbitMQ client,
for example. RabbitMQ also supports AMQP, which is a protocol defining a standard for messaging.
This could be an advantage over MSMQ, should you use other products that rely on this
standard. SQL Server can also be used as a transport. Messages are placed in a table when sent, and
the receiving side is polling the table to look for new messages. When it processes one successfully,
it just deletes the message from the table. The table can be on a separate SQL instance or the same one the application uses. NServiceBus uses a back-off strategy to do the polling. When no
messages are in the queue, it will wait longer and longer before trying again, up to a maximum of a
configurable amount of time. The default maximum is 1 second. Finally, Microsoft Azure. There are two distinct transport options here. One is called Azure Storage Queues, which is a simple storage mechanism that supports the queuing and dequeuing of messages; NServiceBus can support that and use its internal routing. Storage Queues are simple and low cost, and chances are, if you are already an Azure user, you already have a storage account set up. The second transport option, Azure Service Bus, is more advanced and more costly, but enables bigger messages and lower _____, among more options on the message level. Keep in mind
that a great thing about NServiceBus is that it is an abstraction of the underlying transport. You can
start out with, for example, SQL Server Transport, as long as the project is small. When more
messages are flowing through the system, just switch to a more robust transport like RabbitMQ.
Switching to another transport is way more than a mere configuration detail, however. Before you do it, you should think about the operational side of things, because each transport's architecture tends to be very different.
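As an illustration of how little code a transport switch involves (the connection strings are placeholders, and the RabbitMQ and SQL Server transports each come from their own NuGet package):

    using NServiceBus;

    static class TransportConfiguration
    {
        public static void Apply(EndpointConfiguration endpointConfiguration)
        {
            // MSMQ: the default, decentralized, Windows only.
            endpointConfiguration.UseTransport<MsmqTransport>();

            // RabbitMQ: a centralized broker, multi-platform.
            // endpointConfiguration
            //     .UseTransport<RabbitMQTransport>()
            //     .ConnectionString("host=localhost");

            // SQL Server: messages in a table, polled with a back-off strategy.
            // endpointConfiguration
            //     .UseTransport<SqlServerTransport>()
            //     .ConnectionString("Data Source=.;Initial Catalog=transport;Integrated Security=True");
        }
    }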

Installers

Installers are built into NServiceBus to, for example, create the queues in MSMQ upon startup, or
create the schema when using a relational database as a persistence mechanism. NServiceBus
wouldn't be NServiceBus if you couldn't create your own installers. Use the INeedToInstallSomething
interface. Environments can be, for example, Azure or NHibernate. How installers behave exactly depends on how you're hosting the service. When debugging, the installers will run by default every time you start a debugging session, unless you override this in the configuration. So your custom installer classes should check if the thing you're installing is already there. When self-hosting outside the debugger, the running of installers depends on the EnableInstallers configuration setting. The
next clip is about the fault tolerance features of NServiceBus.

Retries and Fault Tolerance

In the previous module, I talked about the fallacies of distributed computing. We always tend to think of the happy path for a process and tend to ignore the path that fails. But don't tell your boss: as
a software engineer, you know software will fail, and you know the network, servers, and drives will
fail. And you and your boss don't want any loss of data, no order that ends up being lost, for
example. We want the software to be resilient to exceptions. You have to take all of these things
into account when programming. This is another reason to use SOA with messages using
NServiceBus, because NServiceBus has you covered. When looking at the receiving end of a
message, the happy path looks like this. Many things you'll look at in the coming slides are of course
configurable, for example, you can switch transactions off. I'm showing you the most common and
default way of doing things. First the queue is peeked to see if there is a message waiting. Peeking is
looking at a message without actually dequeing it. If there is a message, a transaction is started, and
the message is actually dequeued. NServiceBus makes sure only one thread receives the message.
The message is deserialized, and the handlers are invoked. With handlers here, I mean the handlers
written by you, but also everything that surrounds them in the form of NServiceBus infrastructure code.
When this has a successful result, the transaction is committed. Now let's say something goes wrong
during deserialization of the message, for example, the message is in a format that can't be
deserialized for some reason. Because there is no chance that this kind of error will ever go away by
itself, the message is immediately sent to the error queue, on which I will elaborate in a few slides.
When something goes wrong in the handlers, it could be that the error is transient, that basically
means that it can go away by itself. An example of such an error could be a web service you're
calling in a handler is temporarily down, or not reachable, or a deadlock occurs in the database
you're writing to. Because there's a possibility the error can go away, NServiceBus' retry sub-process
is started, which you will see in the next slide. First NServiceBus is going to re-invoke the handler the
configured number of times. This kind of retry is called immediate retries. Five times is the default,
but just trying five times right after each other might be too quick for some errors, for example, if
the web server you're calling is down. If immediate retries don't resolve the problem, delayed retries
kick in. The message is moved to a special retry queue, and NServiceBus schedules the reprocessing
of the message in 10 seconds. After that time, NServiceBus will repeat the immediate retry
sequence, so five retries in a row are done again. If there's still no solution to the problem,
NServiceBus will wait 20 seconds, and again do the immediate retries, and then after 30 seconds.
When all of this fails, the message is sent to the error queue. Here is some extra information about
delayed retries. They are of course completely configurable. You can set the time increase, 10
seconds by default, and the number of delayed retries you want. But watch out, if the error is indeed
transient, then it won't show up in the error queue. The only way to know it was there is to check
the NServiceBus logs, and it will take at least a minute by default before the message actually fails
and is present in the error queue. This may or may not be okay for you. The error queue holds
messages that can't be processed, and keeps these out of the way of the normal message flow in the
queues. Even when delayed retries are activated, there is still a chance that this is a transient error,
for example, if the web service you are calling is down for more than a minute. It could also be an
error in the handler code, which you'll want to fix, or maybe the sending of the message itself was not needed or was done in error. In that case, you want to just throw away the messages in the error queue. In
the other cases I mentioned, you would like to place the message back into the active queue. We
don't want to lose our order, remember. This can be done manually or by another process. The
other members of the Particular Software suite, ServiceInsight and ServicePulse, might be of some help, so I'll talk about them in the final module of this course. Now let's see the request/response message pattern in action in the next clip.
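Before moving on, here is a sketch of how the retry behavior described above could be tuned in code; the numbers shown are the defaults mentioned in this clip, and the API assumes a recent NServiceBus version.

    using System;
    using NServiceBus;

    static class RecoverabilityConfiguration
    {
        public static void Apply(EndpointConfiguration endpointConfiguration)
        {
            var recoverability = endpointConfiguration.Recoverability();

            // Immediate retries: re-invoke the handler right away.
            recoverability.Immediate(immediate => immediate.NumberOfRetries(5));

            // Delayed retries: wait, then repeat the immediate retries,
            // increasing the delay by 10 seconds each round.
            recoverability.Delayed(delayed =>
            {
                delayed.NumberOfRetries(3);
                delayed.TimeIncrease(TimeSpan.FromSeconds(10));
            });
        }
    }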

The Request/Response Pattern

NServiceBus supports another messaging pattern besides the ones you already saw. It's the good old
request/response pattern. This pattern sends a message with the send method, but waits for a
response message to come back. Well this is somewhat against the nature of NServiceBus, because
everything is handled asynchronously natively. It reintroduces temporal coupling, and because of that, I'm not a big fan of this pattern. I recommend you look at alternatives such as using sagas with
SignalR. I'll tell you about sagas in the next module.

Demo: Request/Response and Bus.Reply

In the previous demo, I promised to come up with a solution for the price calculation problem we
introduced when we refactored to NServiceBus. After this demo, the order service will determine
the price, instead of the web application. I've created two new messages in the messages assembly.
PriceRequest contains the weight of the package needed to calculate the price. Notice that it's
implementing IMessage, this is the interface ICommand and IEvent derive from. I also created
PriceResponse, which contains the resulting price. In the Order service, I added a new class called
PriceRequestHandler, which implements the IHandleMessages of PriceRequest interface. In the
handler of the message, I use a Reply method to send a PriceResponse message back to the sender
of the message. I calculate the price by using the PriceCalculator class that is now contained in the
order service. I don't have to configure routing for the message. We don't see it, but the message
contains the endpoint name of the sender, and NServiceBus will use this info to send back the
message. Finally let's look at the code in the controller of the MVC application, which is called when
the user has filled out the form and presses the Submit button. I call Request on the endpoint object.
As a generic parameter, I specify what I expect back as a response, in this case, priceResponse. Then
I just pull out the price information and enrich the model with it. Of course, I have to set up the routing in the web.config for PriceRequest, too. For the Request method to work, you'll have to install
the NServiceBus.Callbacks NuGet package. In addition to that, you also have to configure a uniqueId
in the endpoint configuration for this service. After this, I can return the View called Review with the
order that has the price information.
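Sketched out, the two messages and the web application's Request call might look roughly like this. The property names Weight and Price are my own guesses, and the snippet assumes the NServiceBus.Callbacks package plus the unique instance ID and routing mentioned above.

    using System.Threading.Tasks;
    using NServiceBus;

    public class PriceRequest : IMessage
    {
        public int Weight { get; set; }
    }

    public class PriceResponse : IMessage
    {
        public decimal Price { get; set; }
    }

    // In the order service: reply with the calculated price.
    public class PriceRequestHandler : IHandleMessages<PriceRequest>
    {
        public Task Handle(PriceRequest message, IMessageHandlerContext context)
        {
            var price = 10m * message.Weight; // placeholder for the PriceCalculator
            return context.Reply(new PriceResponse { Price = price });
        }
    }

    // In the web application: send the request and wait for the response.
    public static class PriceClient
    {
        public static async Task<decimal> GetPriceAsync(IEndpointInstance endpoint, int weight)
        {
            var options = new SendOptions();
            var response = await endpoint.Request<PriceResponse>(new PriceRequest { Weight = weight }, options);
            return response.Price;
        }
    }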

Summary

Here's the summary. In this module, you learned that NServiceBus is a great framework to enable
asynchronous messaging in your application. It lies on top of transports, which can easily be
switched by using configuration. You also looked at other configurable features such as the routing
and the serialization. I showed you two types of messages, commands and events. Commands tell a
service to do something, whereas events are transmitted when something is done. We also looked at
the request/response pattern. Also fault tolerance is a first-class feature in NServiceBus, and I
showed you how that works. In the next module, we'll look at defining workflows with NServiceBus
called sagas.

Modeling Workflows with NServiceBus Sagas

Introduction

After watching this module, you will be able to model sagas in NServiceBus. Sagas are long-running
business processes. First I'll explain what a saga is, then you'll see how to define a saga right away.
The next topic is about the various patterns you can use. Timeouts are next, which are like an alarm
clock, and you should know a few things about persistence in relation to sagas.

What Is a Saga?

With everything you've learned thus far, you can probably imagine a scenario where you have many
services sending and receiving messages. Chances are the process is some kind of workflow, for
example an order that is entered, and then approved, and then picked, and then shipped. When the
process is defined in code like in this slide, it becomes harder and harder to figure out which service
does what, and what happens next. There seems to be a need of some kind of coordination. Also
you might've come across an application where things have to be done automatically, without user
interaction, such as change the status of a customer to gold when the total order amount exceeds a
certain amount. This is typically handled in what's often called the daily job. Every day at the same
time during off peak hours, the processing occurs, like a newspaper that is delivered every day at
6:00 AM. But what if you or the user can't wait for the daily job, or if there's no such thing as off
peak hours? Sagas come to the rescue. Sagas are long-running business processes modeled in code.
They support a certain workflow with steps to be executed. The saga itself maintains state in the
form of an object you define until the saga finishes. In the NServiceBus context, a saga's purpose is
to coordinate the message flow in a way you implement. I will show you a few possible design
patterns in a minute. As long as the saga runs, it persists its state in a durable storage. In the
previous module, I showed you how to configure this. The way sagas are implemented in
NServiceBus leave a lot of room for your own creativity. They have a very open design. Here's a
typical way of how you could use a saga. Some kind of message triggers the creation of the saga.
Here is an order message. When it's created, the saga immediately sends an ApproveOrder
command to a service. When that service emits the event IOrderApproved, the saga will send a
PickOrder message, and when the picking completes, it marks itself as complete. But when should
you use a saga? The simple answer is when your process requires some coordination, and that is
true if the process comprises more than one message send and receive cycle. Also, sagas support timeouts and scheduling, so they are great when you, for example, want to send a user an email when a certain amount of time has passed and she hasn't approved the order yet. I will come back to that
topic in a separate clip in this module. Curious what a saga looks like in code? I'll cover that in the
next clip.

Defining Sagas

You define a saga by using the Saga base class contained in the NServiceBus core. Saga is a generic
class, it needs the type of the object that you wrote to maintain the state of the saga, as long as it's
running. This object is persisted along the way. Define which message triggers the creation of a new
saga by using the IAmStartedByMessages interface. As a generic parameter, the message type is
supplied, so when the saga receives the StartOrder message, it starts the workflow. A new state
object is created and persisted, but only when no existing saga data can be found for the message.
Other messages are handled in the same way I showed you earlier using the IHandleMessages
interface. So now you know when a saga is created, here's how to end it. Note that the ending of the
saga is not a requirement; it can potentially run forever. Here the implementing code is shown for the IHandleMessages of CompleteOrder interface from the previous slide. In this handler, you could write the code to handle the order completion, and then call the MarkAsComplete method on the saga. This signals NServiceBus to throw away the data object in the storage. All messages that arrive after MarkAsComplete is called are ignored. But how does NServiceBus know what saga belongs to what
messages? NServiceBus needs your help with that. An abstract method in a saga class is called
ConfigureHowToFindSaga, in which you have to tell NServiceBus the match between all messages
that are received by the saga and the data object that the saga has persisted.
ConfigureHowToFindSaga provides a SagaPropertyMapper object as a parameter. On that object, we can call the generic method ConfigureMapping; as the generic parameter, we supply the message type that you want to map to the SagaData object. With a lambda, we tell NServiceBus which property to
use in the message for the mapping. With ToSaga, we supply the property to use on the other side in
the SagaData object. This property has to be unique of course. Using this mapping, a query is
constructed to the underlying data store to fetch the object. If it doesn't exist, it is assumed that no
saga exists for the message. If you want to handle that in code, please look at the
IHandleSagaNotFound interface. Please note that you have to create a mapping for every message
that is received by the saga in this way. When I showed you the request/response pattern at the end
of the previous module, you already saw how to work with the reply method. In the saga context, it
is also used in the handlers of messages the saga sends to services. It will reply directly to the saga.
NServiceBus knows where to send it, because the saga details are invisibly present in the message.
You don't need to specify a mapping for the message in ConfigureHowToFindSaga, because the
details of how to find the saga are also known. You already looked at a data class a saga uses. The
abstract base class also contains the address of the originator. The originator is the service that started
the saga. By using the ReplyToOriginator method in the saga class, you can reply directly to the
originator without the need for routing. Designing sagas is a topic for the next clip.
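Putting those pieces together, a bare-bones saga along the lines of the slides could look like this; the StartOrder and CompleteOrder messages and the OrderId mapping are taken from the description above, and the rest is my own sketch.

    using System.Threading.Tasks;
    using NServiceBus;

    public class OrderSagaData : ContainSagaData
    {
        public int OrderId { get; set; }
    }

    public class OrderSaga : Saga<OrderSagaData>,
        IAmStartedByMessages<StartOrder>,
        IHandleMessages<CompleteOrder>
    {
        protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
        {
            // Every message the saga receives needs a mapping to the saga data.
            mapper.ConfigureMapping<StartOrder>(message => message.OrderId)
                  .ToSaga(sagaData => sagaData.OrderId);
            mapper.ConfigureMapping<CompleteOrder>(message => message.OrderId)
                  .ToSaga(sagaData => sagaData.OrderId);
        }

        public Task Handle(StartOrder message, IMessageHandlerContext context)
        {
            Data.OrderId = message.OrderId;
            // Kick off the workflow here, e.g. send an ApproveOrder command.
            return Task.CompletedTask;
        }

        public Task Handle(CompleteOrder message, IMessageHandlerContext context)
        {
            // Done: NServiceBus throws away the stored saga data.
            MarkAsComplete();
            return Task.CompletedTask;
        }
    }

    public class StartOrder : ICommand { public int OrderId { get; set; } }
    public class CompleteOrder : ICommand { public int OrderId { get; set; } }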

Design Patterns

A very important point to keep in mind when you're designing your saga is that sagas are designed to coordinate the message flow and make decisions using business logic. The actual work is
delegated to services using the messages. Think hard about which messages should start the saga.
There might be more than one. You might think messages will always arrive in the same order, but
what if a service is delayed because a server is down somewhere? Could other messages for the
same saga arrive in the meantime, that would normally arrive after the delayed message? You
should design the saga with the fallacies of distributed computing in the back of your head. In the
next few slides, you see some examples of saga patterns, which are just a few ideas of how you
could use sagas. You can mix and match them the way you want. The first one is the command
pattern, which is the most commonly used. You saw it in action already. Some message or messages
come in that start a saga instance. Then a command is being sent to the service. When a
confirmation comes back in the form of an event, a new command is sent executing the next step,
and so on. In the meantime, there could be some decision making going on like what command to
send with what data. That's perfectly fine to implement in a saga, as long as the heavy lifting is done
by other services. With the observer pattern, the saga waits until all steps are done by the different services.
So in this example, the saga makes sure the order is approved and picked. Only when both events
generated by other services have been received, it sends out the command to ship to the shipping
service. You could also use an enum in the saga data to keep track of the step or state the saga is in.
And only when a certain step is reached, listen for certain events and send certain commands.
Another pattern is a routing slip. Some process decides what steps have to be taken for a certain
order. The steps to take are contained in the message. So when, for example, an order comes in that
only has the pick step, the approval step will be skipped. Of course nothing prevents you from using
multiple sagas. If picking an order is a process by itself that involves multiple microservices, just
create a saga for it. The saga could then be activated by the same pick order message, and report
back with a message or event when it's complete. In the next clip, you see a nice feature of sagas
called timeouts.

Persistence

Although I've already covered persistence in the previous module, it is necessary to talk some more about it in the context of sagas, as each supported storage works differently in the background. First RavenDB. RavenDB is a great fit because the data object of the saga can be serialized and stored
as a document. NServiceBus will create an index based on the property that is marked with a unique
attribute in the data class. NServiceBus will fetch the document using this index, so it doesn't have to
go through all the documents to find the right one. NHibernate persistence supports relational
databases. You should know that child objects in the saga's data object are serialized and put in one
column. And each collection used is going to result in extra tables; the more tables, the more chance locks will occur. By marking the properties in the data object as virtual, a derived class is created
behind the scenes that checks if the data from the extra tables is really needed. Azure saga
persistence is a storage mechanism built on Azure table storage. It is very low cost and easy to set up
in Azure. It supports the storage of any type table storage can handle. There are some other options to choose from. There's SQL Persistence, which uses Json.NET to persist sagas in a standard SQL database such as SQL Server or MySQL. Azure Service Fabric is one of the newer technologies supported, and there are several other options developed by the community.

Demo: From Individual Services to a Saga

Fire On Wheels keeps on growing, and the package delivery requests in the form of orders are pouring in. Dispatchers are delivering so many packages per day now that planning the route for a package delivery person would increase efficiency greatly, so a planner is hired. Of course the
software should support this change, so we add another microservice, the planning service. But now
we have more than one service, and the architect foresees more services, because of the rapid
expansion rate Fire On Wheels goes through. So it's a good idea to implement coordination of the
workflow and message flow now, before the architecture descends into chaos with all kinds of
messages flying around. The new architecture looks like this. I'm going to create a saga called
ProcessOrderSaga. The saga is started by the ProcessOrderCommand we used earlier. It is sent by
the web application and the REST service. The code to send the message in these applications
remains unchanged. There's a new service called Planning. Once a saga is started, it will send a
command to the planning service, and when the planning service is ready, it will reply to the saga
with a message called IOrderPlannedMessage. Note that this could've been an event with the
possibility to be picked up by more than one service, but I chose to just reply to the saga, so this
message will implement IMessage. When the IOrderPlannedMessage is received, the saga sends a
command to the existing order service, now renamed to Dispatch service. And when it's ready, it will
reply to the saga. The saga will then send an OrderProcessed message to the originator, which could
be the web application or the REST service, and then marks itself as complete. But let's follow the
workflow in code. I've created a new console application hosting a new endpoint called
FireOnWheels.Saga. In it, there is a ProcessOrderSaga class deriving from Saga. Saga takes a generic
type parameter with the data object it should store for this saga. The Data class is a simple class with
the properties you saw earlier in the command message. I only added an OrderId property to easily
make a certain order unique. Let's switch back to the saga. ConfigureHowToFindSaga is an abstract
method in the base class, and I tell NServiceBus how to look for saga data when it receives a
ProcessOrderCommand. I tell it to read the OrderId property from the command, and match it with
the OrderId property present in the saga data. In SQL terms, this will create a SELECT from the saga data where the OrderId is the OrderId in the message. ProcessOrderSaga implements the
IAmStartedByMessages of ProcessOrderCommand interface, so a new saga is started when that
message arrives. Its implementation is the Handle method you've already seen from
IHandleMessages. In the method, I'm first logging the fact that the message has been received, and
then I copy all the data from the message to the saga. When it's done, I send the
PlanOrderCommand routed in the app.config file to the input queue of the new planning service. A
nice thing is that I can now be selective about what data I send to the planning service. I only fill the
message with the data it needs. The planning service is also a commandline application hosting an
endpoint, and in its handler I do the planning work. This microservice could store the order in its
own data store, and supply a web interface for the planner to work with. When the planning is done,
I use the reply method on the context object with an IOrderPlannedMessage, which will send it to
the saga the PlanOrderCommand message came from. IOrderPlannedMessage doesn't have any
properties. It is just used to signal the saga it's done. The saga has all the data about the order
anyway, so only if new data was introduced by the planning process, it has to be sent back to the
saga. Back to the saga, the class implements IHandleMessages of IOrderPlannedMessage. Nothing
new here, but note there is no mapping in the ConfigureHowToFindSaga or IOrderPlannedMessage.
There is also no routing in the app.config for this message. This is because I used reply. NServiceBus
handles this for us. After logging the message, I again send the command. This time it's routed in the
app.config to the Dispatch service. The message only contains what Dispatch needs to know. When
the IOrderDispatchedMessage comes back, I log again. I want to let the application that caused the saga to instantiate know that the order has been processed, so I use the ReplyToOriginator method of the saga. Again, no routing needed. Finally I tell the saga it's done with the MarkAsComplete method.
The saga will throw away the data object in the configured storage. When we debug the solution
with multiple starter projects enabled, you can see the three services coming up. When someone
fills out the web form or uses the REST service, you can see the saga starting and running through all
the steps.

Timeouts: Saga Reminders

Timeouts are a powerful feature of sagas that are like a reminder you get of an agenda entry. When
you set a timeout, a message is sent to NServiceBus' internal Timeout Manager. With the sending of
the message, you specify a certain time span like in 2 hours, or an absolute time like August 8 at 7:00
PM. When it's time, the Timeout Manager sends the same message back to the saga which
requested a timeout. This way you can, for example, send a registered user an email after a certain
amount of time when he forgot to confirm his email address. When the saga has been completed in
the meantime, the timeout message, like all messages for the saga, is ignored. Let's say we have
somebody who has to approve of an order, that person will receive an email with the request to
approve. But people tend to forget things, so we want to remind the person to approve in two
days. In the saga, we can use the RequestTimeout method. The method uses a class that represents
a message that is eventually sent back to the saga. In the first overload of RequestTimeout, we don't
specify any properties for ApprovalTimeout. That's okay, maybe the saga data already contains
everything you need, and you just need to be reminded. The only parameter we pass in to a
RequestTimeout is the absolute time at which we want the message back. In the second overload,
we don't need to use generics, because we're creating a new instance of ApprovalTimeout
ourselves, filling the SomeState property. In the third overload, the generics are back, and we can
use an action delegate to fill the ApprovalTimeout instance. To handle the timeout being sent back
to the saga, we implement the IHandleTimeouts generic interface. It lets us create a method called
Timeout with the timeout type as a parameter. Here we take the action needed when the time is up.
In this case, send the approval person a reminder message.
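A minimal version of that approval reminder in code might look like this; ApprovalTimeout and SomeState come from the slide, while StartApproval and the saga data class are hypothetical names I added to make the sketch self-contained.

    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    public class ApprovalTimeout
    {
        public string SomeState { get; set; }
    }

    public class ApprovalSaga : Saga<ApprovalSagaData>,
        IAmStartedByMessages<StartApproval>,
        IHandleTimeouts<ApprovalTimeout>
    {
        protected override void ConfigureHowToFindSaga(SagaPropertyMapper<ApprovalSagaData> mapper)
        {
            mapper.ConfigureMapping<StartApproval>(m => m.OrderId).ToSaga(s => s.OrderId);
        }

        public async Task Handle(StartApproval message, IMessageHandlerContext context)
        {
            Data.OrderId = message.OrderId;
            // Ask the Timeout Manager to send ApprovalTimeout back in two days.
            await RequestTimeout<ApprovalTimeout>(context, TimeSpan.FromDays(2));
        }

        public Task Timeout(ApprovalTimeout state, IMessageHandlerContext context)
        {
            // Time is up and the saga is still running: send the reminder here.
            return Task.CompletedTask;
        }
    }

    public class ApprovalSagaData : ContainSagaData { public int OrderId { get; set; } }
    public class StartApproval : ICommand { public int OrderId { get; set; } }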

Summary

Here's the summary. Sagas are workflows or business processes that can run a long time and never
have to end. They should coordinate the message flow between other services and maybe decide
which services get what message. The heavy lifting must be done by the services themselves within
the safety of a transaction. I talked about time-related features of sagas, and the things you have to
know about saga persistence. In the next module, you will learn about advanced topics like
messaging and configuration.

Beyond the Basics with NServiceBus

Introduction

This module is about advanced NServiceBus topics. I'll dive into messaging a bit deeper, and we'll see
that NServiceBus offers not only flexibility in configuration, but is also fairly easy to customize. First
I'll look at how transactions work in NServiceBus with the DTC and Outbox. We'll also see a lot of
features of messaging we didn't cover in the second module. I'll tell you about the messaging
pipeline of NServiceBus with steps and behaviors, and performance counters are a great way to
monitor what's going on in your applications, but what if a service can't handle what you throw at it?
You learn the answer in the section about the scaling of services. And finally I'm going to unit test
the Fire On Wheels solution.

Distributed Transactions

This slide will probably look familiar if you watched the section on fault tolerance in the second
module. It shows the path that NServiceBus follows to process a message. I'll focus on start
transaction and commit transaction in this clip. A transaction makes sure everything you're doing in
your message handler either succeeds as a whole or fails as a whole if an exception occurs in your
handler. Again, this is default behavior, but it can be switched off. In this happy path slide, the
transaction will commit in the end. That means, for example, the record in a database you have
inserted will get committed, and the outgoing command message you have sent via the bus is
actually sent. When something throws an exception in the handler, everything you did in the handler
before the exception occurred gets rolled back, after which NServiceBus' retry mechanism kicks in,
which I explained in the second module. But how does NServiceBus know that when you inserted
the record, for example, SQL Server has to be contacted to commit or roll back? This is where
distributed transactions come in. Just like MSMQ, the DTC is also natively present in every Windows
installation, and DTC stands for Distributed Transaction Coordinator. It is by default in use with the
MSMQ transport. It is often not correctly configured, and that's why the NServiceBus platform
installer I showed you in module 2 automatically configures it in the right way. You don't have to
know anything about the DTC to work with it, because NServiceBus takes care of this for you behind
the scenes. But here is some background information. Internally the DTC works with preregistered
resource managers, but know about the resources participating in a distributed transaction. For each
distributor transaction, a number of resource managers are in play. So for example, there is a
resource manager responsible for the inserting of the SQL record, and there's a resource manager
sending an MSMQ message. If it's time to commit, the DTC will ask all resource managers if
everything is okay and ready. Only when all resource managers give the green light, it will tell all of
them to commit. If not, the DTC will give the rollback command to all resource managers eventually.
But what about transports that do not support the DTC? Outbox is an NServiceBus feature, so it's not
tied to any particular transports or operating systems the transports run on. The end result of
Outbox is the same as with the DTC, but Outbox achieves the result in a different way. It uses
Deduplication of the incoming messages in your handler. I will go into a little bit more detail about
the process in the next slide. Outbox needs some kind of data storage with a history of messages.
This database must be the same database as your business data resides in, because only then both
business data manipulation and updating message history can be executed as one transaction. The
deduplication records are kept for a default of 7 days, and the purging process runs every minute.
This is of course all configurable. Outbox is enabled by default for the RabbitMQ transport. For all
other transports, you have to explicitly enable it. In the middle of this slide is your handler. When
Outbox is enabled, when a message comes in, Outbox checks in the data store if this message was
already processed. If not, the handler logic is executed. During that phase, other messages are
probably sent. When this occurs, the messages are placed in the data store instead of sending them.
This occurs in the same transaction as the database interaction logic in the handler. The messages in
the data store are called the outbox of the handler. When the handler is done with everything else,
NServiceBus will send them. Back to the incoming message. If it's detected that the message was
already received by the handler, the handler logic will be skipped, and if needed, Outbox will send
messages that have not been sent yet.
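Enabling the feature itself comes down to a single call; the persistence choice below is just an example, and the point is that it must store the outbox data in the same database as your business data.

    using NServiceBus;

    static class OutboxConfiguration
    {
        public static void Apply(EndpointConfiguration endpointConfiguration)
        {
            // Persistence that shares the database with the business data,
            // for example NHibernate persistence (separate NuGet package).
            endpointConfiguration.UsePersistence<NHibernatePersistence>();

            // Turn on deduplication and deferred sending of outgoing messages.
            endpointConfiguration.EnableOutbox();
        }
    }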

Message Expiration

Sometimes when a message isn't handled within a certain amount of time, it is no longer relevant,
for example, a message that contains traffic jams on the highway is probably not relevant after 5
minutes when the road situation has changed, and probably another message has been generated
replacing the old one. Or maybe there are lots of messages flying around in your system, and you
don't want them in the way after a certain amount of time. You can control the lifespan of
unhandled messages with the TimeToBeReceived attribute. When the message isn't processed
within the given timeframe, it will be deleted by the transport. Messages in the error and audit
queue are considered handled, so these messages will not be deleted. Here's an example, just
decorate the message with the TimeToBeReceived attribute specifying the time span in the
constructor as a string. In this example, the unhandled lifespan is 1 minute.
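In code, that example could look like this (the message class itself is made up for illustration):

    using NServiceBus;

    // Discard this message if it hasn't been processed within one minute.
    [TimeToBeReceived("00:01:00")]
    public class TrafficInfoMessage : IMessage
    {
        public string RoadCondition { get; set; }
    }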

Handler Order

When a service contains multiple handlers for the same message, they are not executed in a
particular order by default, but you can specify an order using the configuration object. For the
handlers that you don't specify in the sequence, you still don't know when they are going to run. They could run before or after the sequence. You supply the types of the handlers, in order, to the ExecuteTheseHandlersFirst method.
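A sketch of that call, with two made-up handler types:

    using NServiceBus;

    static class HandlerOrderConfiguration
    {
        public static void Apply(EndpointConfiguration endpointConfiguration)
        {
            // These handlers run first, in this order; handlers not listed here
            // may still run before or after this sequence.
            endpointConfiguration.ExecuteTheseHandlersFirst(
                typeof(ValidateOrderHandler),
                typeof(ProcessOrderHandler));
        }
    }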

Stopping, Deferring, and Forwarding Messages

To stop a message from being processed further by the current handler and all handlers that come
after, use the DoNotContinueDispatchingCurrentMessageToHandlers method on the context object
that's passed into your handle methods. The message is still treated as successfully processed, and a
transaction is committed, meaning that, for example, all database changes you've done this far are
committed as well. To handle a message later, you can use the sendOptions object that you specify
when sending a message. With the DelayDeliveryWith method, you can specify a time span after
which a message is redelivered, for example, 1 hour. With DoNotDeliverBefore, a DateTime object is
needed for an absolute date and time, for example, 12 noon tomorrow. NServiceBus' Timeout
Manager handles this, which should put the message back in the queue when it's time. Please note
that in both cases, the handler transaction is committed. You can also forward a message to another
queue, just like the audit feature does, by calling ForwardCurrentMessageTo. This will not stop the
current handler from executing further. Next up is property encryption.
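Here is a rough illustration of those three calls inside a handler; FollowUpCommand is a made-up message, and the exact option names assume a recent NServiceBus version.

    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    public class FollowUpCommand : ICommand { }

    public class OrderHandler : IHandleMessages<ProcessOrderCommand>
    {
        public async Task Handle(ProcessOrderCommand message, IMessageHandlerContext context)
        {
            // Stop the remaining handlers; the message still counts as processed.
            context.DoNotContinueDispatchingCurrentMessageToHandlers();

            // Defer another message instead of sending it right away.
            var options = new SendOptions();
            options.DelayDeliveryWith(TimeSpan.FromHours(1));
            // options.DoNotDeliverBefore(DateTimeOffset.Now.AddDays(1));
            await context.Send(new FollowUpCommand(), options);

            // Hand the current message to another queue as well.
            await context.ForwardCurrentMessageTo("some.other.queue");
        }
    }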

Property Encryption

Use the property encryption feature of NServiceBus if a property contains sensitive data like a credit
card number. This data will be encrypted so that the data isn't visible when sent over the wire and
stored in the transport queue. Instead of specifying the string as the data type for the property, use
WireEncryptedString. It's also needed to configure the encryption in the config file or in code. The
Rijndael algorithm is used by default. Rijndael is a symmetrical algorithm. That means the key for
encryption and decryption is the same, and therefore has to be known at the sender side, as well as
the receiver side. Although possible, it's probably not wise to configure the key in the config file of
each microservice. Shared configuration is recommended here by using the IProvideConfiguration
interface in a shared DLL, for example. In the shared class, you can then pull the key out of some
secured storage. If you don't want to use Rijndael, you can implement your own encryption by using
the IEncryptionService interface, and configure NServiceBus to use the custom algorithm. Here's an
example of the configuration of Rijndael encryption in code. This class implements the generic
variant of IProvideConfiguration, specifying what kind of configuration this class provides. You have
to provide the object of the generic type in the GetConfiguration method.
RijndaelEncryptionServiceConfig consists of a key property containing the current key and a
collection of ExpiredKeys. All properties that have the WireEncryptedString type will be encrypted
using the current key, but lingering messages might be arriving at this endpoint that were encrypted
using an older key, and the older keys are in the ExpiredKeys collection. If decryption of the property fails, NServiceBus will try the keys in ExpiredKeys to decrypt.
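On the message side, using the feature is as simple as changing the property type; depending on the NServiceBus version, WireEncryptedString may live in the core or in a separate encryption package, and the message class below is made up for illustration.

    using NServiceBus;

    public class PayOrderCommand : ICommand
    {
        public int OrderId { get; set; }

        // Travels and is stored in the queues encrypted; the rest stays readable.
        public WireEncryptedString CreditCardNumber { get; set; }
    }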

DataBus: Supporting Large Messages

You might come across the need to handle large messages with images, for example. With some
transports, there is a maximum allowed message size. For MSMQ, it's 4 MB. Message size limits
aside, handling large messages might be a bad idea because of potential performance problems and
resource consumption. NServiceBus helps you out again. Properties large in size can be stored in a
location that is accessible by both the sender and the receiver of the message. The contents of the
large property is stored at that location, and the message travels with a pointer to the data location
instead of the data itself. The good news is that NServiceBus takes care of the heavy lifting for you.
What you have to do is use a wrapper around the type of the property that should use the DataBus. I'll show you that in the next slide. Also, activate DataBus in the configuration by using the
UseDataBus method on the configuration object. As a generic parameter, you have to specify a
Databus type. Out of the box DataBuses are the FileShareDataBus that needs a path to a share, and
in a NuGet package for the Azure transport, there is also AzureDataBus using Azure Blob Storage.
And you could again create your own DataBus by implementing IDataBus and registering it with
NServiceBus. Here is a message class with a byte array property called LargeBlob. The type of the
property is wrapped with DataBusProperty. Apart from the configuration, that's basically it. Using a
TimeToBeReceived attribute on a message with a DataBus property might be a good idea. The
FileShareDataBus doesn't throw away the data in the DataBus automatically, because it has no way
of knowing when the message has been received by all endpoints consuming the message.
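A compact sketch of both sides of the feature, with a made-up message class and a placeholder file share path:

    using NServiceBus;

    public class ImageUploadedMessage : IMessage
    {
        // Only a pointer travels in the message body; the bytes go to the DataBus location.
        public DataBusProperty<byte[]> LargeBlob { get; set; }
    }

    static class DataBusConfiguration
    {
        public static void Apply(EndpointConfiguration endpointConfiguration)
        {
            // A share that both sender and receiver can reach.
            var dataBus = endpointConfiguration.UseDataBus<FileShareDataBus>();
            dataBus.BasePath(@"\\fileserver\databus");
        }
    }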

Unobtrusive Mode

So far you've seen that every message in NServiceBus must implement an interface. You've watched
me use IMessage, ICommand, and IEvent. These interfaces all reside inside the NServiceBus core
assembly. That means every assembly that uses messages must have a reference to that assembly.
And when a new version of NServiceBus comes out, the assembly has to be kept up to date. It also
prevents you from using Plain Old C# classes, because you always have to implement the interface.
But with NServiceBus, you can also define conventions in a configuration of your endpoint. You
could, for example, tell NServiceBus that every class with a name that ends with command and
resides in a certain namespace is a command, and in that way you don't need ICommand. In the
same way, you can also specify which messages use TimeToBeReceived without using the attribute.
The convention feature also operates at property level for DataBus without the DataBus property
wrapper, and for encryption without the WireEncryptedString type. You could, for example, configure it
to use DataBus for every property which has a name that ends with DataBus. Here's how it's done.
First call conventions on the endpointConfiguration object for a Conventions object, and call
methods on it, which have a name that starts with Defining. Here I'm defining all classes with a name that ends with Command and resides in a namespace called MyNamespace as commands. In the same way, you can give messages that end with the word Expires a TimeToBeReceived, specifying a time span of 30
seconds here. All other messages will have a time span of basically forever in this example.
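Roughly, those conventions could be set up like this; the namespace and naming rules are the ones from the example, and the exact method names assume a recent NServiceBus version.

    using System;
    using NServiceBus;

    static class ConventionConfiguration
    {
        public static void Apply(EndpointConfiguration endpointConfiguration)
        {
            var conventions = endpointConfiguration.Conventions();

            // Plain C# classes become commands based on name and namespace.
            conventions.DefiningCommandsAs(type =>
                type.Namespace == "MyNamespace" && type.Name.EndsWith("Command"));

            // Messages ending in "Expires" live for 30 seconds, the rest basically forever.
            conventions.DefiningTimeToBeReceivedAs(type =>
                type.Name.EndsWith("Expires")
                    ? TimeSpan.FromSeconds(30)
                    : TimeSpan.MaxValue);
        }
    }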

Auditing Messages

Microservices are a bit harder to debug than, for example, web services; that's because of their asynchronous nature. Therefore it's probably a good thing to configure an audit queue on your
endpoints. This will send a copy of all messages processed by that endpoint to a separate queue. It is
recommended that this queue is on another central machine, so the different endpoints can have
the same audit queue. An audit queue is also a requirement for the Particular platform tools you can
use to monitor your messages. I will be showing you these in the final module of this course. You can
configure auditing using the IProvideConfiguration interface or by using the Configuration object,
but the most common way to configure it is by using the app.config or the web.config file. Notice
that if you'll use MSMQ as a transport, you can specify the queue on another system by using the @
symbol, followed by the machine name. There's also an option to override the TimeToBeReceived
setting that could be already on the message.
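When configuring it in code rather than in the config file, it comes down to a single call; the machine name here is a placeholder.

    using NServiceBus;

    static class AuditConfiguration
    {
        public static void Apply(EndpointConfiguration endpointConfiguration)
        {
            // Copy every processed message to a central audit queue;
            // with MSMQ, @MachineName points at a queue on another server.
            endpointConfiguration.AuditProcessedMessagesTo("audit@AuditServer");
        }
    }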

Scheduling Messages

In sagas, there is a timeout functionality, but what if you only want to execute a task every 5
minutes? Well, you want to do that outside the saga. You can do this by using the Schedule object,
which can be injected in your handler class. The task you specified is stored in an in-memory
dictionary together with a unique ID. It's in memory, so the schedule entry won't survive an
endpoint restart. Then a message is sent to the Timeout Manager. When the time is up, the message
is sent back to the endpoint, and the task is looked up in a dictionary and executed. Should the
schedule entry not be present in the dictionary anymore, the message is ignored, but a log entry is
made. You can schedule a task by calling ScheduleEvery on your endpoint instance object. This is the
object you get back when starting the actual endpoint. The method takes a TimeSpan and a func,
which defines what should be done. The context is passed in, so it's easy to send a message like in
this example, but the task can really be anything. There is also an overload, allowing you to specify a
name for the task, which will show up in the log. In the next clip, I'll talk about polymorphic message
dispatch.
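A small sketch of such a scheduled task; CheckForNewOrders is a made-up message, and the delegate shape assumes a recent NServiceBus version.

    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    public class CheckForNewOrders : ICommand { }

    static class MaintenanceSchedule
    {
        public static Task Register(IEndpointInstance endpointInstance)
        {
            // Runs every 5 minutes; the entry does not survive an endpoint restart.
            return endpointInstance.ScheduleEvery(
                TimeSpan.FromMinutes(5),
                context => context.Send(new CheckForNewOrders()));
        }
    }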

Polymorphic Message Dispatch

Let's say you have created version 1 of a service using the IOrderPlannedEvent. It turns out that the
timestamp indicating when the order was placed exactly is necessary. So you create version 2 of the
service using an interface that is derived from IOrderPlannedEvent with the extra property added.
This scenario fits the polymorphic message dispatch feature in NServiceBus. You could just publish
the new IOrderPlannedEvent as normal in the publishing service. All other services using the old
interface in the handler will receive the event as well. Using microservice architecture, you probably
have a number of services running that handle the old version of IOrderPlannedEvent, but you don't
want to update all handlers in all services with a new version at once. So initially you just update the
publish service, and all other services will continue to get the event in their handler with the old
version. So now you can update the services gradually or only when needed. But this feature also
enables other cool things with polymorphic event handling. Let's say you have VIP customers. You
could create an IOrderPlannedVipEvent deriving from IOrderPlannedEvent. For non-VIP users, you
could then publish IOrderPlannedEvent, which will only trigger handlers for the specific event, and
for VIP users, publish the derive one, which will trigger handlers handling IOrderPlannedVipEvent
and IOrderPlannedEvent. So the handlers with IOrderPlannedEvent could do the stuff needed for
every user, and the IOrderPlannedVipEvent handlers could do the extra stuff needed for VIPs. Also
remember that interfaces support multiple inheritance. If you inherit your event interface from
multiple other event interfaces that have handlers, you can enable even more interesting scenarios.
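As a sketch, the event interfaces for those scenarios could be declared like this; the property names are illustrative.

    using System;
    using NServiceBus;

    public interface IOrderPlannedEvent : IEvent
    {
        int OrderId { get; set; }
    }

    // Version 2 adds a property without breaking subscribers of the old version.
    public interface IOrderPlannedV2Event : IOrderPlannedEvent
    {
        DateTime PlannedAt { get; set; }
    }

    // Publishing this also triggers handlers that listen for IOrderPlannedEvent.
    public interface IOrderPlannedVipEvent : IOrderPlannedEvent
    {
        string VipLevel { get; set; }
    }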

Polymorphic Message Dispatch Demo

Here's the solution as I left it in module 2. In this stage, the Fire On Wheels solution was still without
the saga. In the ProcessOrderHandler, I processed the ProcessOrderCommand sent by either the
web application or the REST service. After the work is done, the IOrderProcessedEvent is published.
Here is the OrderProcessedEvent handler in the web application, that event is picked up. Please
notice the breakpoint. Now let's say we want a service that monitors all order activity, not only if it's
processed, but on all activity that is going to be implemented in the future. I created the
FireOnWheels.OrderActivity service. In it there is an OrderActivity handler that listens for
IOrderActivityEvent. Again notice the breakpoint. Let's see what IOrderActivityEvent looks like. It
implements IEvent and has all the properties of the order. I changed IOrderProcessedEvent. It now
inherits from IOrderActivityEvent, which means it also implements IEvent and has all the order
properties. The code for sending the event remains unchanged. Now let's run the application.
When the IOrderProcessedEvent is published, both breakpoints are hit, and now it's easy to
introduce a new event that has something to do with the order, which derives from
IOrderActivityEvent, and it will be automatically picked up by our new service.

The Message Pipeline

Let's talk about NServiceBus' message pipeline. The message pipeline is a series of steps NServiceBus
executes when a message comes in or a message goes out. A step has pipeline awareness: it knows
where to fit in the pipeline and when to execute. A step always contains a behavior that is executed
when it's the step's turn. Let's say this is the incoming message pipeline. When a message comes in,
the first step is executed. The behavior class contained in this step has an Invoke method. The two
parameters of the Invoke method are context, used to communicate with other behaviors, and an
action delegate called next. When next is called, the behavior of the next step is triggered. So one
behavior can do something before or after the underlying steps with behaviors are executed. When
the last step in the line calls next, NServiceBus walks back in the stack to execute all the logic that
comes after the call to next. The default NServiceBus pipeline consists of a number of steps, and
there's a different pipeline for incoming and outgoing messages. In this slide, I chose three of the
default steps for each pipeline. There's a step that serializes and deserializes messages, one that
takes care of executing the registered mutators for incoming and outgoing messages, and the
registered unit of work object. Hang on for an explanation of these two. There's a step that takes
care of invoking the handlers in the incoming message pipeline, and a DispatchMessageToTransport
step that takes care of delivering a message to the transport in the outgoing message pipeline.
Next I'll explain steps and behaviors you can implement yourself in the
message pipeline.

Custom Behaviors

The pipeline can be changed. You can insert new steps with behaviors somewhere in the existing
pipeline or replace steps. First I'll show you how to make a new behavior. You can create a class that
derives from Behavior. Behavior is generic, and you specify whether the behavior is for the
incoming or outgoing pipeline by using the right context type. Next I implement the Invoke method,
which has the chosen context object and a Func of Task called next as parameters. Behaviors are great
if you want to create some sort of disposable object such as a data context and dispose it after all
other behaviors down the pipeline are done with it. To let the other behaviors access the data
context, put it in the context by calling the Set method on the Extensions object. Other behaviors can
pull it out using the Get method. Next register the step in the pipeline. Just use the
endpointConfiguration object for that, calling Register on the Pipeline object, specifying an instance
of the behavior and a description. Or derive a class from RegisterStep, and pass an ID, the type of the
behavior, and a description to the base constructor. A class deriving from RegisterStep will
automatically be picked up by NServiceBus when the endpoint is created, because of its assembly
scanning capability. If you don't want a new step, but want to replace an existing one, you just have
to create a new behavior, and then replace the existing step using the Replace method on the Pipeline object. This
time you have to tell it what the ID is of the step you want to replace. Let's go on with message
mutators.
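
Here's a sketch of such a behavior and its registration, assuming NServiceBus 6-style pipeline APIs; the OrderDataContext type is made up for the example.

    public class DataContextBehavior : Behavior<IIncomingLogicalMessageContext>
    {
        public override async Task Invoke(IIncomingLogicalMessageContext context, Func<Task> next)
        {
            using (var dataContext = new OrderDataContext()) // hypothetical disposable resource
            {
                // Make the data context available to handlers and behaviors further down the pipeline.
                context.Extensions.Set(dataContext);

                await next(); // execute the rest of the pipeline

                // Code after next() runs once the downstream steps are done.
                await dataContext.SaveChangesAsync();
            }
        }
    }

    // Registration on the endpoint configuration:
    endpointConfiguration.Pipeline.Register(
        new DataContextBehavior(),
        "Creates a data context for each incoming message and disposes it afterwards");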

Message Mutators

Use message mutators to manipulate a message before it hits the handler for incoming messages,
and before it's handed over to the transport for outgoing messages. Mutators are also used
internally by NServiceBus. Features like the data bus and property encryption make use of them.
Applicative message mutators do their work after deserialization in an incoming scenario and before
serialization in an outgoing scenario, so the message is available as an object. A good use case
for applicative message mutators is if you want to do some kind of validation on the message. You
can program applicative message mutators for the incoming or the outgoing pipeline, or use
IMessageMutator, which inherits from both the other interfaces. Transport mutators are kicked off
before deserialization for the incoming pipeline and after serialization in the outgoing one. You have
access to the raw byte array of the message, so they are ideal for, for example, programming
compression and decompression of messages. Use IMutateIncomingTransportMessages to create a
mutator just for incoming messages or IMutateOutgoingTransportMessages for outgoing messages.
IMutateTransportMessages is both for incoming and outgoing. Don't forget to register a mutator
with NServiceBus using the RegisterComponents method that accepts a delegate with a parameter
that has the Component object on which you can call ConfigureComponent. The generic parameter
is your message mutator type. You can also specify what the lifecycle of the mutator object should
be. Here a new mutator object is created every time it's needed. Up next is the unit of work.
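
A sketch of an applicative mutator and its registration; the command type and its NumberOfPackages property are assumptions used only to show the shape of the code.

    public class OrderValidationMutator : IMutateIncomingMessages
    {
        public Task MutateIncoming(MutateIncomingMessageContext context)
        {
            // The message is available here as a deserialized object.
            var order = context.Message as ProcessOrderCommand;
            if (order != null && order.NumberOfPackages <= 0)
            {
                throw new Exception("Invalid order: at least one package is required.");
            }
            return Task.CompletedTask;
        }
    }

    // Register the mutator; a new instance is created every time it's needed.
    endpointConfiguration.RegisterComponents(components =>
        components.ConfigureComponent<OrderValidationMutator>(DependencyLifecycle.InstancePerCall));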

Unit of Work

A unit of work allows you to execute code when a message begins processing in the pipeline, in the
Begin method, and after it ends processing, in the End method. To make use of a unit of work,
implement the IManageUnitsOfWork interface. When an exception occurs in the pipeline, it is
passed to the End method. Units of work are great to execute code that always has to be executed
with every message; you don't want to repeat that everywhere in your handlers. For example, call
SaveChanges on an ORM or database context object. They are easier to implement than custom
behaviors, but less powerful. You can't wrap the Begin and End in a using statement, for example.
Units of work must be registered in the same way as mutators. Next I'll talk about headers.
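
Before moving on, here's a minimal sketch of such a unit of work, assuming a hypothetical OrderDataContext as the ORM context.

    public class SaveChangesUnitOfWork : IManageUnitsOfWork
    {
        OrderDataContext dataContext; // hypothetical ORM context

        public Task Begin()
        {
            dataContext = new OrderDataContext();
            return Task.CompletedTask;
        }

        public Task End(Exception ex = null)
        {
            // Only persist when message processing did not throw.
            if (ex == null)
            {
                dataContext.SaveChanges();
            }
            dataContext.Dispose();
            return Task.CompletedTask;
        }
    }

    // Registered in the same way as a mutator.
    endpointConfiguration.RegisterComponents(components =>
        components.ConfigureComponent<SaveChangesUnitOfWork>(DependencyLifecycle.InstancePerUnitOfWork));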

Message Headers

Headers contain information about the message that is not directly related to its business purpose. If
you've done web development, you probably know about HTTP headers, which are used for the
same purpose. Headers should only contain metadata, so if the message represents an order,
headers should contain data that is not directly related to that order, but is needed for the
infrastructure. NServiceBus itself relies heavily on headers to do its magic. An example of a good
header you could add to the message yourself could be a security token used by a security
mechanism like OAuth2. Apart from handlers, the header collection of a message can be
manipulated and read in behaviors and mutators, so the header logic can be easily shared. To give
an idea of how NServiceBus uses headers internally, here are just a few examples. Every message
gets a unique MessageId. The CorrelationId is used when using the Bus.Reply method. It contains the
Id of the message that triggered the reply, so the receiver of the reply knows what the original
message was. MessageIntent can be send, publish, subscribe, unsubscribe, or reply, and
ReplyToAddress is the explanation behind the magic of Bus.Reply. It contains the address of the
endpoint to reply to, so no routing is needed. When sending messages to the audit queue, but also
the error queue, certain headers are added by NServiceBus. For example with auditing, there is
information about when the handling of the message started and ended, and there is also
information about the endpoint name and the machine it's on. To read a header in the handler, get
the header dictionary by accessing the MessageHeaders property on the message handler context.
You can then read an NServiceBus header by using the Headers helper type, or just use a string if it's
a custom header. For mutators and behaviors, the process is the same; the only difference is where
the dictionary comes from. For mutators, use context.Headers in the MutateIncoming method,
and for behaviors, use context.Message.Headers in the Invoke method. To set a header, use the
SendOptions class, which has a SetHeader method; IMessageHandlerContext.Send has an overload
that accepts the SendOptions object. For behaviors, just write directly to the dictionary you get
from context.Headers, and for mutators, write to the dictionary you get from accessing the
OutgoingHeaders property of the context object. In the next clip, I'll show you the gateway feature.
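
Here's a rough sketch of reading and setting headers in a handler; the custom header name is an assumption for illustration, and the handler body is not the course's actual code.

    public class ProcessOrderHandler : IHandleMessages<ProcessOrderCommand>
    {
        public async Task Handle(ProcessOrderCommand message, IMessageHandlerContext context)
        {
            // Built-in header, read via the Headers helper type.
            var origin = context.MessageHeaders[Headers.OriginatingEndpoint];

            // Custom header, read by a plain string key (assumed name).
            string token;
            context.MessageHeaders.TryGetValue("FireOnWheels.SecurityToken", out token);

            // Setting a header on an outgoing message.
            var options = new SendOptions();
            options.SetHeader("FireOnWheels.SecurityToken", token);
            await context.Send(new DispatchOrderCommand(), options);
        }
    }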

Gateway: Multi-site Messaging

Enterprises often have multiple physical sites, for example headquarters and sales are in different
locations that have their own IT infrastructure. The obvious solution to send messages across is to
use a VPN, but if that for some reason isn't an option, you can use NServiceBus' gateway feature.
Gateway is for sites that are logically different, so it's not meant for replicating data between sites that
are logically the same. Use the regular ways to replicate within the IT infrastructure, such as SAN
snapshots, SQL Server replication, or RavenDB replication. When using gateway, you're going to send
messages that are specifically meant to cross the gateway using a special method on the bus object.
This is intentionally designed this way with the
fallacies of distributed computing in mind. Events use the publish subscribe mechanism, which is not
supported, because that is meant to be used within one site. As a channel, gateway uses HTTP with
SSL out of the box, but it's also possible to create custom channel support. Here's a possible gateway
setup. There is a Headquarters site, and we have a SiteA, which could be sales. Each endpoint has
the gateway enabled, which has its own in and out queues, through which messages can be received
from and sent to the outside world. Once a message is received, it is put into the right queues to be
handled by the handlers of the endpoint. In code, you specify the different sites by using a key.
Configure them in the Config file like this. Each site has an address and a channel it should use. Next
enable the gateway in each endpoint using it by using the EnableFeature method on the
Configuration object like this. Now you're ready to send the message. Use the SendToSites method
for that. It accepts an array of site keys and the message. Gateway has the following features, most
of which I already talked about in the course: automatic retries, deduplication of incoming
messages, SSL, data bus support, and the ability to listen to multiple channels of different types at
the same time. Although there's only HTTP support out of the box, you can create your own
channels. Performance counters are up next.
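
A rough sketch based on the methods mentioned above; the exact calls and where they live differ between NServiceBus versions, and the site key and message type are assumptions.

    // Enable the gateway on each endpoint that uses it.
    endpointConfiguration.EnableFeature<NServiceBus.Features.Gateway>();

    // Send a message that is specifically meant to cross the gateway
    // (called on the bus / endpoint instance, depending on the version).
    await endpointInstance.SendToSites(new[] { "SiteA" }, new UpdateSalesPricesCommand()); // hypothetical command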

Performance Counters

Windows comes with a built-in performance counter system. You can view them by going to
Computer Management, System Tools, Performance, Monitoring Tools, and then Performance
Monitor. On the screen you'll see a realtime graph showing one or more performance counters. An
obvious one is, for example, CPU load. Performance counters can not only be read by humans, but
also by software because they're exposed with something called Windows Management
Instrumentation or WMI. The .NET class library contains classes to read from them and write to
them. MSMQ has a lot of performance counters to monitor things like number of messages in
queues, but they're not focused on performance, and as you know, MSMQ is not the only transport
that can be used by NServiceBus. For these reasons, NServiceBus has its own performance counters.
They are automatically installed when you run the setup for the Particular Software Suite I showed
you in the second module, but you can also install them with a PowerShell command. Here are the
performance counters for NServiceBus. Successful message processing rate, queue message receive
rate, and failed message processing rate, measure the rate of messages per second. They exist for
every queue individually, and they are automatically used if present. Critical time measures the time
from the sending of a message on the client machine until the service has successfully processed it.
This way you can monitor if your architecture adheres to SLAs, for example. This counter is
automatically used when using NServiceBus hosting. With self hosting, it has to be enabled by
configuration. SLA violation countdown acts as an early warning system to warn you if the SLA is in
danger of being breached. It tells you the number of seconds left until the SLA is violated. It is also
enabled by default when using NServiceBus hosting, but requires explicit activation with self hosting. Let's see how
to scale services next.
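
For self hosting, the counters are switched on in code. This is a sketch assuming the extension methods available in NServiceBus 5/6; the SLA value is an arbitrary example.

    // Enable the critical time counter for this endpoint.
    endpointConfiguration.EnableCriticalTimePerformanceCounter();

    // Enable the counter that drives the SLA violation countdown.
    endpointConfiguration.EnableSLAPerformanceCounter(TimeSpan.FromSeconds(30));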

Scaling Your Services

Maybe with the help of performance counters, you reach the conclusion that a service is overloaded
in the sense that it takes too long for a message to process because of a queue that is becoming too
full. If the cause of this is the processing by the service that is slow and not the infrastructure, then
you can scale your service. Scaling up is upgrading the hardware or virtual hardware it runs on, or
placing the service on the server with more muscle. We won't talk about that in this clip. Scaling out
is placing the same service on multiple machines. How to scale out depends on what transport
you're using. If you're using a broker style transport like RabbitMQ or SQL Server Transport, just
deploy your service to multiple servers, and you're done. Because the transport is centralized in
nature, all services will use the same queue, and NServiceBus will take care of the fact that a
message is only processed by one instance of the service. With MSMQ, queues are on the machines
the services run on, so just placing another instance of the service on multiple machines won't work.
Luckily the sender-side distribution feature helps out. Just put multiple instances of the service on
different machines. These instances are called workers. Then there are two parts of configuration to
do. In code, map a specific message to a logical endpoint, which is like a virtual endpoint. Next
create a configuration file called instance-mapping.xml, and map the logical endpoints to multiple
physical machines. The idea behind using a config file is that you can easily change it without the
need to deploy a new version of the service. The sending service will just loop through the list of
configured machines to determine the destination of the message. There's no feedback on the
availability of workers, so when one worker is down, messages for that worker will just pile up into
its queue. Here's how to configure the logical endpoints. You do this at the time you configure the
transport for the endpoint. You can get a routing object by calling Routing on the transport object.
Next configure a logical endpoint for a message by calling RouteToEndpoint on the routing object. To
map the logical endpoints to physical machines, use the instance-mapping.xml file. In it you use the
name of the logical endpoint to map to multiple physical machines. In the next clip, I'm going to tell
you about NServiceBus unit testing.
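
To make this concrete, here's a sketch of sender-side distribution; the endpoint name and machine names are made up for the example.

    // In code: map the message type to a logical endpoint.
    var transport = endpointConfiguration.UseTransport<MsmqTransport>();
    var routing = transport.Routing();
    routing.RouteToEndpoint(typeof(DispatchOrderCommand), "FireOnWheels.Dispatch");

    <!-- instance-mapping.xml: map the logical endpoint to multiple physical machines -->
    <endpoints>
      <endpoint name="FireOnWheels.Dispatch">
        <instance machine="DISPATCH-01" />
        <instance machine="DISPATCH-02" />
      </endpoint>
    </endpoints>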

Unit Testing of Sagas and Handlers

Unit testing has become a common practice in every company. Unit testing of NServiceBus handlers
and sagas is hard without any help from NServiceBus. How do you test what message should come
out of a handler when sending in a command, for example? It wouldn't be possible without touching
the bare metal of the transport. NServiceBus helps you with the NServiceBus.Testing NuGet package.
It makes the unit testing of handlers and sagas a breeze, and there is no specific testing framework
required to make use of it.

Unit Testing Demo

Let's see some examples of unit testing by looking at the demo. After the growth Fire On Wheels
went through, they are now hiring multiple developers who apply many changes to the microservices
architecture every day. To be confident everything keeps working after a change, handlers and sagas
should be unit tested. I've created a new unit test project in the Fire On Wheels solution. This uses
the out of the box unit testing framework called MSTest, supplied by Microsoft with Visual Studio. It
uses the attributes TestClass and TestMethod to mark classes and methods, so that the test runner
knows which methods to run. My first test tests the DispatchOrderHandler. To test the handler, the
Test class has a static method Handler. Next I'm defining what to expect when the test runs. I'm expecting a
reply of the type IOrderDispatchedMessage. As a parameter, I supply a delegate with the message as
input parameter and a Boolean as output. With the Boolean evaluation, you could test if a message
has a certain content, but my IOrderDispatchedMessage doesn't have any properties, so I just give it
an expression that is always true. Next I'm specifying the message that I want to send to the handler.
In this case, an empty DispatchOrderCommand. So in short, this test tests if I get an
IOrderDispatchedMessage back if I send in an empty DispatchOrderCommand. Let's see how unit
testing works with sagas. I've added another test class called SagaTest with two test methods. The
first one tests if a PlanOrderCommand is returned when I send it a ProcessOrderCommand.
ProcessOrderCommand was the message that started the saga. I use the Test.Saga method that
needs a generic type parameter for the saga. At the end of the line, you see a When clause. It has an
action delegate as a parameter. With the saga test instance as input, I can tell it to go handle a
ProcessOrderCommand. I specify what I expect to happen above the When clause. Here I expect the sending
of a PlanOrderCommand. The next test tests the scenario a little bit further in the process. Instead of
When, I'm using WhenHandling. This is useful when a message is just an interface without a
concrete type. When IOrderDispatchedMessage is handled by the saga, I expect a reply to the originator
with an OrderProcessed message. I also expect that when the handling is done, the saga is marked as
complete. Let's run the tests to be sure everything is okay, and you can see the test runner. You can
see that the two tests have passed. I just showed you a few test options supplied by the
NServiceBus.Testing assembly, but there are many more, for instance, to test timeouts and the
publishing of events.
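
Here's a rough sketch of what such tests can look like with the NServiceBus.Testing fluent API; exact signatures vary between versions, and the saga class name is an assumption.

    [TestClass]
    public class HandlerAndSagaTests
    {
        [TestMethod]
        public void DispatchOrderHandler_replies_with_IOrderDispatchedMessage()
        {
            Test.Handler<DispatchOrderHandler>()
                .ExpectReply<IOrderDispatchedMessage>(m => true) // any reply of this type will do
                .OnMessage(new DispatchOrderCommand());
        }

        [TestMethod]
        public void Saga_sends_PlanOrderCommand_when_it_starts()
        {
            Test.Saga<OrderSaga>() // assumed saga class name
                .ExpectSend<PlanOrderCommand>()
                .When((saga, context) => saga.Handle(new ProcessOrderCommand(), context));
        }
    }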

Summary

In this module, you learned about the different transaction mechanisms NServiceBus uses. I showed
you many message features applicable to a lot of scenarios. I also covered the message pipeline with
its customizable steps. Monitoring should be straightforward with the performance counters
NServiceBus supplies. And scaling out can be done in a straightforward way. Finally unit testing is
easy, as long as you use the test classes supplied by NServiceBus. In the next module, we'll learn
about the applications in the Particular Software Suite that support NServiceBus.

Monitoring Your NServiceBus Services

Introduction and Preparation

This module is about the applications the Particular Platform offers that support NServiceBus. After
watching this module, you will be well on your way to using these applications and even writing
customizations for them. The applications I'm going to talk about are ServiceControl, ServicePulse,
and ServiceInsight. Remember the screen you saw when you installed NServiceBus. If you left all the
checkmarks, you already have everything installed to get going. If not, please download the setup
file from Particular, and run it with all the checks selected. In the next clip, we'll start with the spider
in the web called ServiceControl.

ServiceControl: The Spider in the Web

ServiceControl is an application that gathers information about messages flowing through your
application and its endpoints. It can also run custom checks, which you can create yourself. All the
data ServiceControl gathers is stored in an embedded version of RavenDB. If you've installed the
entire Particular Platform Suite, ServiceControl is already active and installed as a Windows service.
The data is exposed via a REST API that ServiceControl offers. So basically ServiceControl is a service
that gathers data and then exposes this data again to be used by other applications. It's also an
endpoint exposing messages as events. You can respond to these events in your own handlers. Here
we see the role of ServiceControl. Your endpoints are on top. They generate messages and can
report their health to ServiceControl. ServiceControl sponges up all the data and offers a REST API to
other members of the particular platform, ServicePulse and ServiceInsight, which we will cover in
this module. But you or other developers could also benefit from the REST API ServiceControl offers.
It's also an endpoint which publishes events where you can subscribe with your own endpoint, and
you can make use of the REST API yourself. Here is a slide to give you a way to start should you be
interested in creating applications that use ServiceControl. The default URL for ServiceControl is
shown in the title of the slide. When you go to that URL, ServiceControl will give you more URLs that
you can use to fetch certain data. It's a typical starting point for a REST web service. I'm only
displaying one example here. It's a URL to read the error messages from a certain endpoint. When
calling this API in your own application, you have to specify the name of the endpoint. Paging is also
supported. Of course there's also a URL that gives you a list of endpoints. Here are some other things
you should know about ServiceControl. It only stores messages that are sent to the audit queue and
the error queue. The configuration of an error queue is compulsory, but the audit queue is not. So
be sure to enable the audit queue on your endpoints. The data ServiceControl stores is by
default retained for 30 days, and the purging process runs every minute. This is of course
configurable in the config file of the application, where you can also configure ServiceControl to run
centralized, preferably on a cluster. If you install ServiceControl with the platform installer, it is
configured to use MSMQ as a transport. If you use another transport, you have to uninstall it,
download additional DLLs, and reinstall it using a different transport type. Next I'll show you how to
respond to ServiceControl events in a demo.

Demo: Responding to ServiceControl Events

Fire On Wheels has grown to a medium-sized company. They have an operations team now, and
they want to be notified if messages go to the error queue. Since ServiceControl is an endpoint
publishing event, I can create my own endpoint that subscribes to these events. Let's see how to do
this. I have created another self-hosted command-line service called FireOnWheels.Monitoring.
There are a few things to set up. First, install the NuGet package ServiceControl.Contracts, which
contains the event classes. Secondly, add routing in the config file, which registers the subscriber with
the ServiceControl endpoint. Here a subscription is created for all events in the
ServiceControl.Contracts assembly. The event classes are not marked with IEvent, so we need to
enable unobtrusive mode. Here's the configuration code that tells NServiceBus that all classes in the
ServiceControl.Contracts namespace are events. Note that I also configured the endpoint to use
JsonSerialization, since ServiceControl serializes using JSON. The final step is of course to create
handlers. The MessageFailed handler handles the MessageFailed event. The type is present in the
ServiceControl.Contracts assembly. Here I can put the code to send a notification. Servicecontrol
passes in the ID as the message, as well as the exception that caused the message to end up in the
error queue. ServiceControl.Contracts contains also other events that let you respond to heartbeats
that stop and restart, and custom checks that fail or succeed. I will tell you about these in the clip
about ServicePulse, which is next.
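
A sketch of the setup just described; the exact property names on MessageFailed may differ slightly per ServiceControl.Contracts version, and the notification logic is only illustrative.

    // Endpoint configuration: unobtrusive mode for the contract events, plus JSON serialization.
    var conventions = endpointConfiguration.Conventions();
    conventions.DefiningEventsAs(type =>
        typeof(IEvent).IsAssignableFrom(type) ||
        (type.Namespace != null && type.Namespace.StartsWith("ServiceControl.Contracts")));
    endpointConfiguration.UseSerialization<JsonSerializer>();

    // Handler that reacts to failed messages.
    public class MessageFailedHandler : IHandleMessages<ServiceControl.Contracts.MessageFailed>
    {
        public Task Handle(ServiceControl.Contracts.MessageFailed message, IMessageHandlerContext context)
        {
            // Notify operations here, e.g. by mail; for the sketch we just write to the console.
            Console.WriteLine(
                "Message {0} failed: {1}",
                message.FailedMessageId,
                message.FailureDetails.Exception.Message);
            return Task.CompletedTask;
        }
    }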

ServicePulse: Monitoring on a Web Page

ServicePulse is a web application intended for operations personnel to monitor the state of
endpoints. It focuses on messages that ended up in the error queue, but it also has a heartbeat
feature that monitors if endpoints are up. You can also program custom checks, which I will show
you later on. The default URL for ServicePulse is localhost on port 9090. When you type in that URL
in your browser, you'll see this dashboard. First we switch to the configuration of ServicePulse, and
you can see our endpoints were detected. ServicePulse communicates with ServiceControl to get
this information. An endpoint is detected when ServiceControl sees an audit message or error
message for it. So you should configure an audit and error queue for each of your services. Switch
each endpoint you want to monitor on. Now back to the dashboard. You'll see three indicators. The
heartbeat indicator monitors if endpoints are up. It knows this because ServiceControl expects to
receive heartbeat messages from each endpoint every 30 seconds. When I
click on heartbeat, you can see ServicePulse warns me about the fact that services don't have the
heartbeat plug-in installed. This is just an extra NuGet package I have to install with the endpoint,
and everything will be fine. The next indicator is about failed messages. It shows the number of
messages in the error queue. When you click on the indicator, you'll see this screen. All messages in
the error queue are grouped by the type of error that caused it. In this example there are 5
messages that ended up in the error queue due to an exception in DispatchOrderHandler. You can
retry or archive the whole group. Clicking on the group leads to this screen. It shows an overview of
the messages in the group. And when you click on a message you get this screen. It lets you examine
the messages in the error queue by showing the complete stacktrace, the message headers, and the
message body. You can also copy the ID of the message to the clipboard or open this message in
ServiceInsight, which I'll cover next. By checking the message, you can also retry it, or press on the
Retry all messages button. Note that this message has probably already been retried several times
by the first and second-level retry mechanisms. You can see here an exception was thrown, which of
course I did intentionally in the service. So now I have the option to go fix the bug in the service,
redeploy it, and then press Retry. The order will be processed further, and nothing will be lost. The
third indicator on the dashboard is Custom Checks. All custom checks are passing right now because
I don't have any. Custom checks are ServiceControl extensions written by you. They could
monitor things outside of NServiceBus. A custom check could, for example, monitor the availability
of your REST web service. In this way, I can monitor my whole infrastructure with just a single tool.
And let's see how to implement a custom check.

Implementing CustomChecks

I've implemented a custom check by adding a class to the monitoring class library I created earlier. I
added the ServiceControl custom checks NuGet package to it. You need a specific one for the
NServiceBus version you're using. Derive your class from CustomCheck. By default, it will run every
time your services start, but you can also optionally indicate a time interval. The check will run
continuously with the indicated time between runs. The final thing to do is to implement the actual
check in the PerformCheck method. In the constructor, we call the base constructor, giving the
CustomCheck an ID, a category, and we specify the time interval. Here it's 5 minutes. I programmed
the actual check in the override of the abstract method PerformCheck. In here, I return either
CheckResult.Pass or CheckResult.Failed with an indicator of why it failed. Custom checks are
registered automatically by the assembly scanning NServiceBus does. So when we run the services
and open up ServicePulse, I can see the custom check failing on the dashboard. When I click on the
indicator, I can find out the details of what went wrong. You can see here that the REST service could
not be reached.
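
Here's a sketch of what such a custom check can look like; the URL is made up, and the PerformCheck signature may be synchronous or Task-based depending on the custom checks package version.

    public class RestServiceAvailabilityCheck : CustomCheck
    {
        public RestServiceAvailabilityCheck()
            : base("FireOnWheels REST service availability", "Infrastructure", TimeSpan.FromMinutes(5))
        {
        }

        public override async Task<CheckResult> PerformCheck()
        {
            try
            {
                using (var client = new System.Net.Http.HttpClient())
                {
                    var response = await client.GetAsync("http://localhost:8700/health"); // assumed URL
                    return response.IsSuccessStatusCode
                        ? CheckResult.Pass
                        : CheckResult.Failed("The REST service returned status code " + (int)response.StatusCode);
                }
            }
            catch (Exception ex)
            {
                return CheckResult.Failed("The REST service could not be reached: " + ex.Message);
            }
        }
    }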

ServiceInsight: Message Flow in Detail

ServiceInsight is a desktop application that lets you visualize message flows. It is intended for
developers and architects. The best way to tell you about ServiceInsight is to show it to you, so let's
switch over to the application. By default, ServiceInsight will connect to the default ServiceControl
URI, but you can connect to another ServiceControl instance by going to Tools, Connect To Service
Control. You can arrange the panels in ServiceInsight the way you please, but the initial screen looks
like this. The upper side of the screen lists messages, with columns for the ID and the type of
message, as well as the time it was sent, its critical time, the processing time, and the delivery time.
It's already very useful to see the processing time of the messages, so you can immediately spot if an
endpoint falls short in terms of performance. To see more information about the message, select
one and expand the Message Property pane. On the left side, you can expand the Endpoint Explorer
to filter the messages that originate from a particular endpoint. In the lower part of the screen, you
can view all kinds of information about the message by selecting tabs. The first one is already active,
it's the Flow Diagram. It shows the complete lifetime of a message across different endpoints. With
the button Show Endpoints, you can see the incoming and outgoing endpoints. By clicking the type
name of the message, you can see information about processing times, and you have options to
retry a message or copy a conversation ID or URI to the clipboard, for example, to send to another
developer. If you send the URL and somebody clicks on it, ServiceInsight will be opened with the
right message active. To see something in the next tab called Saga, you have to do something extra.
The saga service needs to have the appropriate saga audit NuGet package installed that sends
additional information to ServiceControl. The screen shows detailed information about the saga
lifetime. For example, that the ProcessOrderCommand caused the saga to come into existence. You can see
the time that it happened and the contents of the saga data object at the time. When the saga was
created, the PlanOrder command was sent out, and three seconds later, the IOrderPlannedMessage
calls the saga to update and do the next step. Let's go to the next tab, the Sequence Diagram. It
shows you all the endpoints involved in the first horizontal line. And you can easily see which
endpoints were touched in the process. The next tab, Headers, shows you all the headers of the
message. I've selected a message that went to the error queue, so you can see the extra information
added to the header of the message when it failed. You can, for example, examine detailed
information here about the exception that was thrown. The Body tab will show you the raw body of
the message in the serialization format it was sent in. And last but not least, the Logs tab shows you
the log entries that were involved with the message.

Summary

In this module, we explored the tools that surround NServiceBus and are packaged with it in the
form of the Particular Platform. ServiceControl is the spider in the web. It gathers information
and exposes it using an API. ServicePulse uses that API to give you a web application that provides
error message monitoring for operations. ServiceInsight is more for the developer and the architect,
showing you message flows and response times. Thank you for watching this course, and have fun
using NServiceBus.
