AD221 7.10 Student Guide
The contents of this course and all its modules and related materials, including handouts to audience members, are
Copyright © 2023 Red Hat.
No part of this publication may be stored in a retrieval system, transmitted or reproduced in any way, including, but
not limited to, photocopy, photograph, magnetic, electronic or other record, without the prior written permission of
Red Hat.
This instructional program, including all material provided herein, is supplied without any guarantees from Red Hat,
Inc. Red Hat, Inc. assumes no liability for damages or legal action arising from the use or misuse of contents or details
contained herein.
If you believe Red Hat training materials are being used, copied, or otherwise improperly distributed, please send
email to training@redhat.com or phone toll-free (USA) +1 (866) 626-2994 or +1 (919)
754-3700.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, JBoss, OpenShift, Fedora, Hibernate, Ansible, CloudForms,
RHCA, RHCE, RHCSA, Ceph, and Gluster are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries
in the United States and other countries.
Linux™ is the registered trademark of Linus Torvalds in the United States and other countries.
XFS™ is a registered trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or
other countries.
MySQL™ is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js™ is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent
Node.js open source or commercial project.
The OpenStack™ Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/
service marks of the OpenStack Foundation, in the United States and other countries and are used with the
OpenStack Foundation's permission. Red Hat is not affiliated with, endorsed or sponsored by the OpenStack
Foundation or the OpenStack community.
Introduction xi
Cloud-native Integration with Red Hat Fuse ................................................................. xi
Orientation to the Classroom Environment .................................................................. xii
Developing Transactional Routes ............................................................................ 155
Guided Exercise: Developing Transactional Routes ..................................................... 160
Quiz: Implementing Transactions ............................................................................. 163
Summary ............................................................................................................. 167
7. Building and Consuming REST Services 169
Implementing REST Services with the REST DSL ...................................................... 170
Guided Exercise: Implementing REST Services with the REST DSL ............................... 175
Consuming HTTP Services ..................................................................................... 178
Guided Exercise: Consuming HTTP Services ............................................................. 185
Quiz: Building and Consuming REST Services ............................................................. 191
Summary ............................................................................................................. 197
8. Integrating Cloud-native Services 199
Deploying Camel Applications to Red Hat OpenShift ................................................ 200
Guided Exercise: Deploying Camel Applications to Red Hat OpenShift ......................... 205
Integrating Cloud-native Services Using Camel Quarkus ............................................ 209
Guided Exercise: Integrating Cloud-native Services Using Camel Quarkus ...................... 212
Integrating Cloud-native Services Using Camel K ...................................................... 216
Guided Exercise: Integrating Cloud-native Services Using Camel K ............................... 221
Quiz: Integrating Cloud-native Services .................................................................... 224
Summary ............................................................................................................ 228
Document Conventions
This section describes various conventions and practices that are used
throughout all Red Hat Training courses.
Admonitions
Red Hat Training courses use the following admonitions:
References
These describe where to find external documentation that is relevant to
a subject.
Note
Notes are tips, shortcuts, or alternative approaches to the task at hand.
Ignoring a note should have no negative consequences, but you might
miss out on something that makes your life easier.
Important
Important sections provide details of information that is easily missed:
configuration changes that apply only to the current session, or
services that need restarting before an update applies. Ignoring these
admonitions will not cause data loss, but might cause irritation and
frustration.
Warning
Do not ignore warnings. Ignoring these admonitions will most likely
cause data loss.
Inclusive Language
Red Hat Training is currently reviewing its use of language in various areas to help remove any
potentially offensive terms. This is an ongoing process and requires alignment with the products
and services that are covered in Red Hat Training courses. Red Hat appreciates your patience
during this process.
Introduction
Course Objectives
• Students will work with a number of use cases that utilize the major
features and capabilities of Camel to develop realistic cloud-native Camel
integration applications.
Audience
Prerequisites
Orientation to the Classroom Environment
In this environment, the main computer system used for hands-on learning activities is
workstation.
All student computer systems have a standard user account, student, which has the password
student. The root password on all student systems is redhat.
Classroom Machines
The bastion system acts as a router between the network that connects the student machines
and the classroom network. If bastion is down, then other student machines might not function
properly, or might even hang during boot.
The utility system acts as a router between the network that connects the Red Hat OpenShift
cluster machines and the student network. If utility is down, then the Red Hat OpenShift
cluster does not function properly, or might even hang during boot.
The students use the workstation machine to access a dedicated Red Hat OpenShift cluster,
for which they have cluster administrator privileges.
API https://api.ocp4.example.com:6443
The Red Hat OpenShift cluster has a standard user account, developer, which has developer
as the password. The administrative account, admin, has redhatocp as the password.
The git instance has a standard user account, developer, which has developer as the
password.
The student user present in the workstation machine uses the internal cache of Maven
artifacts in the /home/student/.m2/repository directory. If the dependency is not
present, then Maven downloads it from the Nexus artifacts repository in the nexus-infra.apps.ocp4.example.com server.
The workspace directory must contain the AD221-apps directory. This is the directory that
contains the necessary files for each activity in this course. The AD221-apps directory content
is present in the Git repository at the git.ocp4.example.com internal server, and in a public
GitHub repository at https://github.com/RedHatTraining/AD221-apps.
The lab script uses the locally cloned AD221-apps repository to create a directory structure
relevant to a particular guided exercise or lab activity.
For example, the lab start route-messages command does the following:
You can find the solution for each activity in the /home/student/AD221/AD221-apps
repository. For example, for the route-messages guided exercise, see the /home/student/AD221/AD221-apps/route-messages/solutions directory.
Troubleshooting
Cannot log in to the Red Hat OpenShift cluster
The Red Hat OpenShift cluster can take a long time to start. Consequently, you might
encounter authentication errors when executing lab scripts, for example:
You might also encounter errors when executing the oc login command, for example:
In that case, wait for the Red Hat OpenShift cluster to come online, and try again. This can
take up to 20 minutes, depending on the classroom load.
• /tmp/log/labs: This directory contains log files. The lab script creates a unique log file for each activity. For example, the log file for the lab start intro-setup command is /tmp/log/labs/intro_setup.
Machine States
active The virtual machine is running and available. If it just started, it still
might be starting services.
stopped The virtual machine is completely shut down. On starting, the virtual
machine boots into the same state it was in before shutdown. The disk
state is preserved.
Classroom Actions
CREATE Create the ROLE classroom. Creates and starts all the virtual
machines needed for this classroom. Creation can take several minutes
to complete.
CREATING The ROLE classroom virtual machines are being created. Creation can take several
minutes to complete.
DELETE Delete the ROLE classroom. Destroys all virtual machines in the
classroom. All saved work on those systems' disks is lost.
Machine Actions
OPEN CONSOLE Connect to the system console of the virtual machine in a new browser
tab. You can log in directly to the virtual machine and run commands,
when required. Normally, log in to the workstation virtual machine
only, and from there, use ssh to connect to the other virtual machines.
ACTION > Shutdown Gracefully shut down the virtual machine, preserving disk contents.
ACTION > Power Off Forcefully shut down the virtual machine, while still preserving disk
contents. This is equivalent to removing the power from a physical
machine.
ACTION > Reset Forcefully shut down the virtual machine and reset associated storage
to its initial state. All saved work on that system's disks is lost.
At the start of an exercise, if instructed to reset a single virtual machine node, click ACTION >
Reset for only that specific virtual machine.
At the start of an exercise, if instructed to reset all virtual machines, click ACTION > Reset on
every virtual machine in the list.
If you want to return the classroom environment to its original state at the start of the course, then
click DELETE to remove the entire classroom environment. After the lab has been deleted, then
click CREATE to provision a new set of classroom systems.
Warning
The DELETE operation cannot be undone. All completed work in the classroom
environment is lost.
To adjust the timers, locate the two + buttons at the bottom of the course management page.
Click the auto-stop + button to add another hour to the auto-stop timer. Click the auto-destroy +
button to add another day to the auto-destroy timer. Auto-stop has a maximum of 11 hours,
and auto-destroy has a maximum of 14 days. Be careful to keep the timers set while you are
working, so that your environment is not unexpectedly shut down. Be careful not to set the timers
unnecessarily high, which could waste your subscription time allotment.
Chapter 1. Introducing Red Hat Fuse and Camel
Objectives
• After completing this section, you should be able to discuss integration concepts with Red Hat
Fuse and Camel.
This approach was a powerful solution to decouple services in the waterfall era, but presents
scalability and elasticity problems in modern digital environments. These new environments
require flexibility and faster delivery speed, which are critical to attain a competitive advantage in
digital and continuously evolving markets.
Modern applications embrace the agile goals of adaptability and quick delivery. They take
advantage of the flexibility that small components, usually microservices, provide in terms of
agility. Therefore, using an isolated, monolithic piece of infrastructure, such as an ESB, impacts the
ability of a modern system to scale.
Containers
Organizations should deploy integration services as containers on the cloud. Containers
enable cloud-native deployments, and allow development and infrastructure teams to deploy
distributed integrations and APIs as microservices.
By using these pillars, integration becomes more scalable and more flexible, providing
organizations with better opportunities to quickly respond to customer needs.
A Camel application normally implements how information flows from an origin endpoint into a
destination endpoint. This is called a route.
A route uses Camel components to connect to the endpoints. Camel is based on an ecosystem of
more than 300 pluggable components and a strong community that maintains these components.
The following are examples of Camel components.
• Databases, middleware, and web and network protocols, such as FTP and MySQL
For example, with Camel, you can define multiple, reusable integration applications and use them
to integrate the different services of your application.
Figure 1.2: Example application using Apache Camel for distributed integration
developers, and nontechnical users alike to create a unified solution that meets the principles of
agile integration.
The core of Red Hat Fuse is Apache Camel, which gives Fuse its agile integration capabilities.
On top of Camel, Fuse provides a set of capabilities and features to support and improve Camel
development.
Fuse Standalone
To package your application as a JAR and run it on a runtime, such as Spring Boot. This
distribution supports Apache Karaf, Spring Boot, and Red Hat JBoss Enterprise Application
Platform.
Fuse on OpenShift
To build and deploy your application and its dependencies as a container image on OpenShift.
In contrast to Fuse Standalone, Fuse on OpenShift packages all runtime components in a
container image.
Note
This course focuses on Fuse Standalone with Spring Boot as the runtime.
Additionally, some activities explore integration development with Quarkus and
Camel K.
Apache Camel K
Apache Camel K is an open-source lightweight platform, which currently uses Camel version 3.
Developers can easily run cloud-native integrations on OpenShift by using Camel K.
With Camel K, you do not need to develop and maintain an entire integration application. You only
need to provide the Camel K CLI with a Java Camel DSL file. The following are just some of the
key capabilities of Camel K:
• Knative support.
• Reusable high-level integrations with Kamelets. Kamelets are integration template documents in
YAML format.
References
Apache Camel User Manual
https://camel.apache.org/manual/
For more information, refer to the Fuse 7.10 Product Overview chapter in the Release
Notes for Red Hat Fuse 7.10 guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
release_notes_for_red_hat_fuse_7.10/index#Product
For more information, refer to the Introduction to Camel K chapter in the Getting
Started with Camel K guide at
https://access.redhat.com/documentation/en-us/red_hat_integration/2021.q4/
html/getting_started_with_camel_k/introduction-to-camel-k
Guided Exercise
Outcomes
You should be able to configure the classroom environment and clone the sample code.
Instructions
The following procedure describes how to provision and configure a new classroom from the
Red Hat Online Learning platform. You must complete the following procedure before attempting
any of the course activities.
3.1. Open a new terminal window, and use the lab command to initialize the workspace.
This script configures the lab environment, and the connection parameters to
access the OpenShift cluster.
Do not change the API endpoint, the username, or the password.
4. Verify that the ~/AD221-apps directory contains the applications to use in the course.
Then, return to the workspace directory.
Finish
This exercise has no command to finish it.
Objectives
• After completing this section, you should be able to describe the basic concepts of Camel and
enterprise integration patterns.
Camel supports most of the Enterprise Integration Patterns (EIPs). The following are some of the
most important EIPs.
Splitter
The Splitter pattern defines separating a list or collection of items in a message payload into
individual messages. This pattern is useful when you need to process smaller, rather than
larger, bulk messages.
Aggregator
The Aggregator pattern defines collapsing multiple messages into a single message. This is
helpful to aggregate many business processes into a single event message to be delivered
to another client or to rejoin a list of messages, which were previously split using the Split
pattern.
Content Enricher
The Content Enricher pattern defines enriching the data when sending messages from one
system to the target system, which requires more information than the source system can
provide.
Content Filter
The Content Filter pattern defines filtering the data when sending messages from one system
to the target system, which requires refined information.
Messaging Gateway
The Messaging Gateway pattern defines delegating the responsibility of messaging to a
dedicated messaging layer so that business applications do not need to contain additional
logic specific to messaging. The dedicated messaging layer becomes the source or
destination of messages from EIPs.
You can implement an EIP in many ways with Camel, depending on the EIP and on which Camel
components are suitable.
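For example, the following Java DSL sketch combines a Splitter and a Content Filter. The endpoint URIs and the filter expression are illustrative assumptions, not part of the course applications.

from("file:orders/incoming")                          // consume each order file
    .split(body().tokenize("\n"))                     // Splitter: one message per line
    .filter(simple("${body} contains 'PRIORITY'"))    // Content Filter: keep priority lines only
    .to("log:orders");                                // log the filtered messages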
You can examine the Camel architecture to have a better understanding of the Camel concepts
and components.
Message
A Message is the smallest entity in a Camel architecture. It carries the information exchanged
between external systems and Camel. A Message can contain headers, attachments, and a body.
• Headers are the name-value pairs that are related to the message. The values can be
anything that is important for the message such as sender identifiers, content-encoding,
and so on.
• Attachments are the optional fields, which Camel typically provides for web service and
email components.
• Body is the body of the message, which is of the java.lang.Object type. Thus, a message
body can hold content of any type and size.
Exchange
An Exchange is a container of the Message in Camel. The following image demonstrates the
Exchange structure.
• Exchange ID: The unique identifier of the exchange. Camel automatically generates this.
• MEP: Message Exchange Pattern. This is a pattern that describes various types of
interactions between systems. You can use either the InOnly or InOut messaging style.
InOnly is a one-way message. A JMS (Java Message Service) message is an example of
InOnly messaging. InOut is a request-response message. HTTP-based messaging is an
example of InOut messaging.
• Exception: If an error occurs at any time, Camel sets the exception in this field.
• Properties: Properties are similar to message headers, but there are a few differences.
Properties last for the duration of the entire exchange. Thus, properties can contain global-level
information, whereas message headers are specific to a particular message.
• Out message: Output message. This is an optional part of an exchange and exists only if the
MEP is InOut. The Out message contains the reply message.
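The following processor sketch shows how these parts of an exchange can be read and modified through the Camel API. The endpoint URIs and names are illustrative assumptions; the snippet also needs an import for org.apache.camel.ExchangePattern.

from("direct:orders")
    .process(exchange -> {
        // Exchange ID and MEP are available directly on the exchange.
        String id = exchange.getExchangeId();
        ExchangePattern mep = exchange.getPattern();    // InOnly or InOut

        // The In message carries the body and the headers.
        String body = exchange.getIn().getBody(String.class);

        // Properties are exchange-scoped; headers belong to a particular message.
        exchange.setProperty("receivedAt", System.currentTimeMillis());
        exchange.getIn().setHeader("summary", id + " " + mep + " " + body.length());
    })
    .to("log:orders");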
CamelContext
The CamelContext is Camel's runtime system. It is the context that keeps all the
conceptual pieces together.
Routing Engine
The Routing Engine is the under-the-hood mechanism that moves the messages. It ensures that the
messages are routed properly.
Routes
A Route is a chain of processors that delegates the message routing to the Routing Engine.
You must create at least one Route to create an integration system with Camel. Routes have
inputs and outputs. To define a Route, you must use a Domain-specific Language (DSL) in
Camel.
Processor
A Processor is the unit of execution in Camel. It is capable of creating or modifying an
incoming exchange. During a routing process, Camel passes the exchanges from one
processor to another. Thus, the output of a processor is the input of another.
Producer
A producer, which is a different concept from an external system's producer, is the entity
that sends messages to an endpoint. When the endpoint receives the message, the producer
delivers the message to the real system. For example, a FileProducer writes the message
body to a file.
XML DSL
This DSL is in the form of XML. In Camel, developers can create XML DSLs in more than one
form depending on the framework.
You can create an OSGi Blueprint XML file that contains the Camel DSL. Also, you can create
a Spring XML file if you are using the Spring Framework or the Spring Boot Framework with Camel.
This makes the routes usable as Spring beans in the Spring context. The form you choose depends
on the framework you prefer to use for Camel development.
<route>
<from uri="file:path/orderInbox"/>
<to uri="kafka:orders"/>
</route>
Note
In this course, although we use the Spring Boot Framework for most of our examples, we
do not cover the XML DSL. We use the Java DSL in our examples and exercises.
Java DSL
The Java DSL is a fluent styled DSL, which uses chained method calls to define routes and
components in Camel.
A route in a Java DSL contains no procedural code. If you need to embed complex conditional
or transformation logic inside a route, then you can invoke a Java Bean or create a custom
Camel processor.
from("file:path/orderInbox")
.to("kafka:orders");
References
Enterprise Integration Patterns, by Gregor Hohpe and Bobby Woolf
https://www.enterpriseintegrationpatterns.com/
Camel in Action, Second Edition (2018) by Claus Ibsen and Jonathan Anstey;
Manning. ISBN 978-1-617-29293-4.
Guided Exercise
HealthGateway LLC. must provide the latest Covid-19 cases and vaccination data to external
developers and its customers. The developers of HealthGateway LLC. plan to consume data
from external resources and expose them via a REST API. They decided to work with two
official data sets provided by the European Union. One data set is a CSV file that has daily
Covid-19 case data of the European countries. The second data set is an XML file that has
the weekly Covid-19 vaccination data of the European countries.
Because of the different source formats and potential data conversion, transformation
and integration scenarios, developers of HealthGateway LLC. decide to use a powerful
integration technology: Red Hat Fuse.
• Fetch the Covid-19 data from the files that reside in an SFTP server.
• Perform the suitable transformations of the data, thus handling the data as lists and
objects.
• Filter the data, store it in a database and expose it via REST endpoints.
• Aggregate the Covid-19 cases and vaccination data and publish it to an Apache Kafka
topic.
• Consume the aggregated data, and enrich it by using an external REST service, which
provides the general data of European countries.
• Expose the final enriched Covid-19 data to be used by the front-end application and by
the end-users.
Outcomes
In this exercise you should be able to examine a series of Red Hat Fuse capabilities such as:
• Sending and receiving messages from Apache Kafka by using the kafka component.
• Performing an HTTP call to another REST endpoint for fetching data by using the http
component.
• Exposing the data in JSON format by using the rest component.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/intro-demo/apps directory, into the ~/AD221/intro-demo directory.
Instructions
1. Examine the high-level architecture diagram.
2. Navigate to the ~/AD221/intro-demo directory and open the project with an editor, such
as VSCodium.
3. Compile and run the health-gateway application by using the ./mvnw package
spring-boot:run command.
The application exposes two REST endpoints.
4. Explore how the health-gateway application works. The following image shows how the
routes work in the application.
5. Compile and run the health-backend application by using the ./mvnw package
quarkus:dev command.
The application exposes two REST URL endpoints.
6. Explore how the health-backend application works. The following image shows how the
routes work in the application.
8. Observe the Covid-19 data that the health-front application consumes and exposes in
the web UI.
Note
The SIGINT signal might not work in the health-gateway application because of
pending inflight exchanges. In this case, send a SIGTERM or SIGKILL signal to stop
the application.
Finish
Return to your workspace directory and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Summary
• About Apache Camel and Red Hat Fuse, and their roles in Agile Integration.
Chapter 2. Creating Camel Routes
Objectives
• After completing this section, you should be able to create a route that reads and writes data to
the file system.
Visual Studio Code is not required for performing the lab exercises, but it is recommended.
Maven Configuration
Fuse applications are typically built with Maven. To access artifacts that are in Red Hat Maven
repositories, you must add those repositories to Maven's settings.xml file in the .m2 directory of
your home directory. The system-level settings.xml file at M2_HOME/conf/settings.xml is
used if a user specific file is not found. Add the Red Hat repositories as illustrated in the following
example.
<?xml version="1.0"?>
<settings>
<profiles>
<profile>
<id>extra-repos</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<repositories>
<repository>
<id>redhat-ga-repository</id>
<url>https://maven.repository.redhat.com/ga</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>redhat-ea-repository</id>
<url>https://maven.repository.redhat.com/earlyaccess/all</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>jboss-public</id>
<name>JBoss Public Repository Group</name>
<url>https://repository.jboss.org/nexus/content/groups/public/</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>redhat-ga-repository</id>
<url>https://maven.repository.redhat.com/ga</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>redhat-ea-repository</id>
<url>https://maven.repository.redhat.com/earlyaccess/all</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>jboss-public</id>
<name>JBoss Public Repository Group</name>
<url>https://repository.jboss.org/nexus/content/groups/public</url>
</pluginRepository>
</pluginRepositories>
</profile>
</profiles>
<activeProfiles>
<activeProfile>extra-repos</activeProfile>
</activeProfiles>
</settings>
To align the versions of the Fuse dependencies, import the Red Hat Fuse Bill of Materials (BOM) in
the dependencyManagement section of your project's pom.xml file, as in the following example.
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.jboss.redhat-fuse</groupId>
<artifactId>fuse-springboot-bom</artifactId>
<version>${fuse.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
In addition to the BOM, the Spring Boot Maven Plugin is also required. The Spring Boot Maven
Plugin implements the build process for a Spring Boot application in Maven. This plugin is
responsible for packaging your Spring Boot application as an executable JAR file. The Spring Boot
Maven plugin is provided by the following pom.xml configuration.
<plugin>
<groupId>org.jboss.redhat-fuse</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>${fuse.version}</version>
<executions>
<execution>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
Spring Boot
Red Hat Fuse includes a Spring Boot Starter module. With this module, you can use Camel in
Spring Boot applications by using starters.
To use the starter, add the following dependency to your pom.xml file.
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-spring-boot-starter</artifactId>
<version>${camel.version}</version> <!-- use the same version as your Camel
core version -->
</dependency>
With Spring Boot, you can add classes with your Camel routes, by annotating a RouteBuilder
class with the org.springframework.stereotype.Component annotation. The annotation
allows Spring Boot to find the class, register it as a Java Bean, and start a camel context that
includes the route. The following example defines a simple route:
package com.example;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;
@Component
public class MyRoute extends RouteBuilder {
@Override
public void configure() throws Exception {
from("timer:foo").to("log:bar");
}
}
An application.properties file is used to customize Spring Boot. By design, you will often
find that the default values are sufficient for your needs.
Camel Routes
In Camel, a route describes the path of a message from one endpoint (the origin) to another
endpoint (the destination). The origin of a route is associated with the from method in the Java
DSL and normally consumes messages from a source.
The from method uses an integration component configured as a consumer endpoint. Likewise,
the destination is associated with the to method in the Java DSL and produces, or sends, messages
to a destination. The to method uses an integration component configured as a producer
endpoint.
Routes are a critical aspect of Camel because they define integration between endpoints. With
the help of components, routes can move, transform, and split messages. Traditionally, integration
implementation requires lots of complicated and unnecessary coding. With Camel, routes are
defined in a few simple, human-readable lines of code in either Java DSL or XML DSL.
A route starts with a consumer, which receives the data from a point of origin. With Camel,
the consumer refers to where and how the initial message is picked up.
The origin determines which type of consumer endpoint Camel component is used, such as a
location on the file system, a JMS queue, or even a tweet from Twitter. The route then directs the
message to the producer, which sends data to a destination. By abstracting the integration code,
developers can implement Enterprise Integration Patterns (EIPs) that manipulate or transform the
data within the Camel route without requiring changes to either the origin or the destination.
Components
One of the most compelling reasons to use Camel is for the library of over 180 components.
Each component typically has an exhaustive set of options that allow you to customize how the
component interacts with the origin or destination. In this course a subset of these components
is used for labs and demonstrations. The following items are examples of components used in this
course.
Some core Camel components are especially helpful to use when developing Camel applications.
The Direct component is a Camel core component that can be used to create consumer or
producer endpoints for receiving and sending messages within the same CamelContext. External
systems cannot send messages to direct component endpoints. In this example XML DSL route,
the from element is using the direct component to receive messages from other routes running
within the same CamelContext as this route. The log_body context provided in the uri attribute
specifies the identity for this direct component. Another route can send a message to this
direct component, within the same CamelContext by using a producing direct component
with the same uri value.
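The XML route that this description refers to is not shown here; a minimal reconstruction consistent with the description (the mock endpoint name is an assumption) looks like this:

<route>
    <from uri="direct:log_body"/>
    <to uri="mock:result"/>
</route>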
The producer in the to element in this route uses a mock component. The mock component makes
testing routes easier by simulating a real component, and it is often used when a real component
is not available.
Complete coverage of all components is out of scope for this course. However, a general
understanding of how components are used is enough to work with any of the other components,
because all of them conform to the same usage pattern. Refer to the Camel documentation
for complete coverage of any component and all of its available options.
Endpoints
A Camel endpoint consists of a component and a URI. The URI defines how the component is used
to consume new messages from an origin or produce exchange messages to a destination. The
syntax of the URI endpoint consists of three parts: the scheme, the context path, and the options.
For example:
ftp://services.lab.example.com?delete=true&include=order.*xml
In this example URI, the scheme instructs Camel to use the ftp component. The context path of
services.lab.example.com provides the address of the ftp service to use. After the ?, two
options are specified, separated by the & character, to provide additional details about how the
component is used.
Each Camel route must have a consumer endpoint and can have multiple producer endpoints. The
most powerful way of creating the routes is via Camel's Java Domain-specific Language (DSL).
@Component
public class FileRouteBuilder extends RouteBuilder {
@Override
public void configure() throws Exception {
from("file:orders/incoming")
.to("file:orders/outgoing");
}
}
In this example, the consumer uses the file component to read all files from the orders/
incoming path. Likewise, the producer also uses a file component to send the file to the
orders/outgoing path.
Inside a RouteBuilder class, each route can be uniquely identified by using a routeId method.
Naming a route makes it easy to verify route execution in the logs, and also simplifies the process
of creating unit tests. Add additional calls to the from method to create multiple routes in the
configure method.
from("file:orders/incoming")
.routeId("route1")
.to("file:orders/outgoing");
from("file:orders/new")
.routeId("routefinancial")
.to("file:orders/financial");
}
Each component can specify endpoint options to further configure how the component should
function at that endpoint. The endpoint options are listed after the ? character as illustrated in the
next example. Refer to the Camel documentation for component-specific attributes.
from("file:orders/incoming?include=order.*xml")
.to("file:orders/outgoing/?fileExist=Fail");
}
The route consumes only XML files with a name starting with order.
In addition to Java DSL, routes can be created via XML DSL files. Java DSL is a richer language
to work with because you have the full power of the Java language at your fingertips. Often,
messages require customized handling that is beyond the scope of Camel routes. Java provides
an elegant solution as discussed later in this course. Also, some Java DSL features, such as value
builders (for building expressions and predicates), are not available in the XML DSL.
On the other hand, using XML DSL routes gives a convenient alternative for externalizing route
configurations.
To use the Spring DSL with Spring Boot, declare a routes element, using the custom Camel
Spring namespace, inside an XML configuration file located in a camel folder on the Java
classpath. Inside the routes element, declare one or more route elements starting with a from
element and usually ending with a to element. These from and to elements are similar to the
Java DSL from and to methods.
<routes xmlns="http://camel.apache.org/schema/spring">
<route id="XML example">
<from uri="file:orders/incoming"/>
<to uri="file:orders/outgoing"/>
</route>
</routes>
References
Apache Camel Spring Boot Documentation
https://camel.apache.org/camel-spring-boot/3.12.x/spring-boot.html
For more information, refer to the Fuse Tooling Support for Apache Camel chapter in
the Red Hat Fuse 7.10 Release Notes at
https://access.redhat.com/documentation/en-us/
red_hat_fuse/7.10/html-single/release_notes_for_red_hat_fuse_7.10/
index#fuse_tooling_support_for_apache_camel
For more information, refer to the Getting Started with Fuse on Spring Boot guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
getting_started_with_fuse_on_spring_boot/index
Guided Exercise
Outcomes
You should be able to use Spring Boot to create Camel routes.
The solution files for this exercise are in the AD221-apps repository, within the route-build/solutions directory.
From your workspace directory, use the lab command to prepare your system for this
exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/route-build/apps directory, into the ~/AD221/route-build directory.
Instructions
1. Navigate to the ~/AD221/route-build directory, and open the project with your editor
of choice.
2. Open the project's POM file, and add the following dependencies:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-test-spring</artifactId>
<scope>test</scope>
</dependency>
import org.apache.camel.builder.RouteBuilder;
...
public class SchedulerRouteBuilder extends RouteBuilder
4. Add an XML DSL route to receive the messages produced by the Java DSL route. Create
the XML route by updating the camel-context.xml file.
Note
Although the course focuses on the Java DSL, this step uses the XML DSL for
illustrative purposes. You could create the same route by using the Java DSL.
In this example the mock component is used because a real component is not
available.
5.1. Run the ./mvnw clean package spring-boot:run command to start the
Spring Boot application.
5.2. Observe that the Java DSL route sends exchanges to the XML DSL route, and that
the timestamps in the log messages differ by 2 seconds for every exchange.
Finish
Stop the Spring Boot application, return to your workspace directory and use the lab command to
complete this exercise. This is important to ensure that resources from previous exercises do not
impact upcoming exercises.
Objectives
• After completing this section, you should be able to create a route that reads from an FTP
server.
The following class is an example of a Camel route that uses the file and ftp components. The
route downloads files from an FTP server to a local directory.
@Override
public void configure() throws Exception {
from("ftp://localhost:21/documents")
.to("file:downloads/docs");
}
The file component uses the following URI format:
file:directoryName
• The directoryName part is required. It specifies the base directory to use for the file endpoint.
• Additionally, you can specify endpoint options. For a complete list of endpoint options, refer to
the File component documentation.
For example, to copy files from one directory to another, you can use the following
implementation:
@Override
public void configure() throws Exception {
from("file:orders/incoming")
.to("file:orders/outgoing");
}
}
Note how the example uses the file component both as the origin and destination of messages.
The file component reads all the files from the orders/incoming directory.
The file component writes all the files to the orders/outgoing directory.
Filtering Files
The file component requires a directory as the only endpoint parameter. If you want to process a
specific file, either as the origin or destination of a route, then you must use the fileName option,
as the following example shows.
from("file:datasets?fileName=covid_cases.csv")
.to("file:tmp/data/");
Likewise, you can configure the endpoint to include only a subset of files, by using the include
option. The following example demonstrates a file component endpoint that only reads product
JSON files.
from("file:warehouse/incoming?include=product.*json")
.to("file:warehouse/outgoing");
Note
The include parameter supports regex patterns, both for the file and ftp
components.
Message Headers
Similar to other components, the file component provides producer and consumer headers in
the message. For example, you can use the CamelFileLastModified header to inspect the last
modified time of each consumed file.
from("file:orders/")
.log("File: ${header.CamelFileLastModified}")
The File component documentation provides the full list of supported headers.
The ftp component uses the following URI format:
ftp://[username@]hostname[:port]/directoryName
• The component allows the use of ftp, sftp, and ftps protocols.
• The hostname parameter is required, but the username and port are optional. You can also
specify the username as an endpoint query option.
• Similar to the file component, you can include further options in the endpoint. For example,
you can use options to specify additional connection parameters, such as the authentication
password.
from(
"ftp://localhost:21/documents?" +
"username=myuser&password=mypass"
)
.to("file:docs/");
The preceding Camel route reads files from the documents directory in an FTP server and copies
the files into the docs directory of the local file system. Likewise, you can use the ftp component
as a producer, to write files to an FTP endpoint.
To use the ftp component, you must add the camel-ftp dependency to your project's POM file, as follows:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-ftp</artifactId>
</dependency>
Filtering Files
Similar to the file component, you can use the fileName and include parameters to select
or filter specific files in an FTP server. For example, you can select a subset of the files as the
following example shows:
from(
"ftp://localhost:21/documents?" +
"username=myuser&password=mypass&" +
"include=recipe.*txt"
)
.to("file:docs/recipes");
To activate the passive mode, set the passiveMode endpoint option to true, as follows:
from(
"ftp://localhost:21/documents?" +
"username=myuser&password=mypass&" +
"include=recipe.*txt&" +
"passiveMode=true"
)
.to("file:docs/");
Message Headers
The ftp component provides producer and consumer headers for each message. For example,
you can use the CamelFileName header to inspect the produced or consumed file name.
from("ftp://localhost:21/?include=record.*txt&")
.log("File: ${header.CamelFileName}")
.to("file:records");
The FTP component documentation provides the full list of supported headers.
References
For more information, refer to the File Component chapter in the Red Hat Fuse 7.10
Apache Camel Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#file-component
For more information, refer to the FTP Component chapter in the Red Hat Fuse 7.10
Apache Camel Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#ftp-component
Guided Exercise
The company stores customer support requests as text files in an FTP server. Their data
science team uses these files to train a natural language processing model. The team must
download the files every time they retrain the model, to make sure that they use the latest
available data in the training process. This is a repetitive, network-intensive, time-consuming
task.
You must create a Camel route to make the latest data continuously available to the data
science team. This route must copy the files from the FTP server to the local file system.
Outcomes
You should be able to create a Camel route that reads files from an FTP server and writes
the files to the local file system.
The solution files for this exercise are in the AD221-apps repository, within the route-files/solutions directory.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/route-files/apps directory, into the ~/AD221/route-files directory.
Instructions
1. Navigate to the ~/AD221/route-files directory and open the project with an editor,
such as VSCodium.
2. Add the Camel FTP component dependency in the project's POM file.
3. Edit the FtpToFileRouteBuilder class. Extend the RouteBuilder class and override
the configure method.
@Component
public class FtpToFileRouteBuilder extends RouteBuilder {
4. To define the route, implement a method called configure(), which consumes files from
an FTP endpoint. Specify the FTP endpoint URI by using the following parameters:
Parameter Value
Host localhost
Port 21721
Username datauser
Password fuse
5. Set the ID of the route to ftpRoute. The unit tests expect your route to use this ID. Any
other ID would make unit tests fail.
6. Log the processed file names. To log the file name, use the header.CamelFileName
header.
"include=ticket.*txt&" +
"passiveMode=true"
)
.routeId("ftpRoute")
.log("File: ${header.CamelFileName}")
}
7. Write the files to the local file system. You must write the files to the
customer_requests/ directory, within the project root.
8. Open a new terminal window and run the FTP server by using the following command:
The FTP server serves the files contained in the ftp/data directory of the project root.
Note that this folder contains a README.txt file. Your route must ignore this file.
10. Run ./mvnw clean test to execute the unit tests. Verify that one unit test passes.
11. Stop the Spring Boot application and the FTP server.
Finish
Return to your workspace directory and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• After completing this section, you should be able to use processors to manipulate exchange
messages in routes.
Camel provides many ways to implement the Translation step in a route. If a built-in Camel
component is available to perform the required function, then this option is the recommended
approach. However, when custom Java code is required to provide the needed business logic, a
Camel Processor can be used to make changes to any part of an exchange message.
Built-in Processors
Red Hat Fuse provides many built-in processors that perform a wide variety of functions. The
following list is a sample of these built-in processors.
bean
Processes the current exchange by invoking a method on a Java object (or bean).
convertBodyTo
Converts the In message body to the specified type.
filter
Uses a predicate expression to filter incoming exchanges.
log
Logs a message to the console.
marshal
Transforms the In message body into a low-level or binary format by using the specified data
format, in preparation for sending it over a particular transport protocol.
unmarshal
Transforms the In message body from a low-level or binary format to a high-level format, by
using the specified data format.
setBody
Sets the message body of the exchange's In message.
setHeader
Sets the specified header in the exchange's In message.
removeHeaders
Removes the headers matching the specified pattern from the exchange's In message. The
pattern can have the form prefix*, in which case it matches every name starting with
prefix; otherwise, it is interpreted as a regular expression.
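As a brief illustration, the following route sketch chains several of these built-in processors together. The endpoints and the header name are illustrative assumptions.

from("file:orders/incoming")
    .convertBodyTo(String.class)                                      // convert the file content to a String
    .setHeader("processedAt", simple("${date:now:yyyyMMdd-HHmmss}"))  // add a header with the current time
    .log("Processing ${header.CamelFileName}")                        // log the consumed file name
    .to("file:orders/outgoing");                                      // write the result to another directory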
Custom Processors
Through integration with the Camel API, a custom processor can access any part of the Camel
exchange message or even the broader CamelContext.
Implementing the Processor interface requires implementing a single method called process:
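// The org.apache.camel.Processor interface from the Camel API:
public interface Processor {
    void process(Exchange exchange) throws Exception;
}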
The exchange argument allows access to both the input and output messages and the parent
Camel context. Processor implementation classes take advantage of other Camel features, such
as data type converters and fluent expression builders.
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
exchange.getIn().setHeader("orderDate", formatedOrderDate);
}
}
Adds a message header called orderDate with the value of the formatedOrderDate variable.
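For reference, a self-contained processor along the same lines might look like the following sketch. The class name, the incoming rawOrderDate header, and the date formats are illustrative assumptions, not the course's exact code.

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class OrderDateProcessor implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        // Read the raw date from an incoming header (assumed name).
        String rawOrderDate = exchange.getIn().getHeader("rawOrderDate", String.class);

        // Reformat it from ISO format (2023-06-13) to a day/month/year display format.
        LocalDate date = LocalDate.parse(rawOrderDate, DateTimeFormatter.ISO_LOCAL_DATE);
        String formatedOrderDate = date.format(DateTimeFormatter.ofPattern("dd/MM/yyyy"));

        // Set the result as a message header, as in the example above.
        exchange.getIn().setHeader("orderDate", formatedOrderDate);
    }
}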
The exchange object instance includes an output message that can be accessed using the
getOut method. In practice, the outgoing message is often not used because it does not include
the message headers and attachments. You can copy the headers and attachments from the
incoming message to the outgoing message, but this can be tedious. The alternative is to set the
changes directly on the incoming message, and to not use the outgoing message as illustrated in
the preceding example.
To use a processor inside a route, insert a process step in the Java DSL:
from("file:inputFolder")
.process(new com.example.MyProcessor())
.to("activemq:outputQueue");
Before writing your own Processor implementation, determine first whether there are ready-to-
use Camel components that might provide the same result with less custom code, for example:
1. The transform component allows for changing of a message body by using any expression
language supported by Camel.
3. The bean component allows the calling of any Java bean method from inside a route.
Note
Camel is so powerful that there is a risk of embedding business logic inside a
route, as a processor or by other means. To avoid doing that, keep your routes just
about integration, and leave business logic to application components that are
interconnected by Camel routes.
Compared to Java Beans, a Camel processor is preferred when there is a need to call Camel APIs
from the custom Java code. A Java Bean is preferred when a transformation can reuse code that
has no knowledge of Camel APIs.
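For example, a plain bean with no Camel imports can be invoked from a route by using the bean method. The class, method, and endpoints in this sketch are illustrative assumptions.

import java.time.Instant;

public class OrderFormatter {
    // Plain Java method: the message body is bound to the single String parameter.
    public String addTimestamp(String body) {
        return Instant.now() + " " + body;
    }
}

// Inside a RouteBuilder configure() method:
from("file:orders/incoming")
    .bean(OrderFormatter.class, "addTimestamp")
    .to("file:orders/outgoing");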
References
For more information, refer to the Processors chapter in the Red Hat Apache Camel
Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#FMRS-P
For more information, refer to the Implementing a Processor chapter in the Red Hat
Apache Camel Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#Processors
Guided Exercise
Your department produces text files that contain the details of processed orders. In these
files, each line corresponds to an order. Recently, the business intelligence team of your
company has required the lines of these files to be numbered. Therefore, you must process
the input files to add a line number to each line.
Outcomes
You should be able to modify the message content to meet the requirements of the next
component in the Route.
The solution files for this exercise are in the AD221-apps repository, within the route-processor/solutions directory.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/route-processor/apps directory, into the ~/AD221/route-processor directory.
Instructions
1. Navigate to the ~/AD221/route-processor directory, open the project with your editor
of choice, and examine the code.
3. Update the route configure method of the FileRouteBuilder class to add the
Processor code.
3.1. Add the processor step to the route pipeline and create a new
org.apache.camel.Processor instance as the parameter:
3.3. Create an external thread safe counter to use in the lambda function:
3.4. Using the counter, process the lines to add the counter value at the beginning:
3.5. Set the modified lines as the content of the output message.
exchange.getIn().setBody( processedLines );
The whole section of the process function should look like the following lines:
exchange.getIn().setBody( processedLines );
}
} )
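The listing for this step is abbreviated above; a possible sketch of the complete process step follows. It is not the official solution: the endpoint URIs, the stream-based line splitting, and the numbering format are assumptions. It also needs imports for java.util.Arrays, java.util.concurrent.atomic.AtomicInteger, and java.util.stream.Collectors.

from("file:orders/incoming")
    .process(exchange -> {
        // Thread-safe counter, external to the mapping lambda (step 3.3).
        AtomicInteger counter = new AtomicInteger(1);

        // Prefix each line of the file with the counter value (step 3.4).
        String body = exchange.getIn().getBody(String.class);
        String processedLines = Arrays.stream(body.split("\n"))
                .map(line -> counter.getAndIncrement() + " " + line)
                .collect(Collectors.joining("\n"));

        // Set the modified lines as the content of the output message (step 3.5).
        exchange.getIn().setBody(processedLines);
    })
    .to("file:orders/outgoing");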
4.1. Run the route by using the ./mvnw clean spring-boot:run Maven goal.
4.2. Give the route a few moments to process the incoming files, and then terminate it by
using Ctrl+C.
4.3. Inspect the output folder to verify that the output files contain line numbers at the
beginning of each line. Expected output is:
Notice the line number at the beginning of each line of the processed output.
Note
If you need to start over to test something, or to retry after a mistake, then run the lab
finish command and begin again with the lab start command.
Finish
Return to your workspace directory and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Routing Messages
Objectives
• After completing this section, you should be able to implement routes that include dynamic
routing.
Content-based Router
The Content-based Router (CBR) EIP routes messages to the correct destination based on the
message contents.
For example, a logistics system that processes fulfillment orders can implement this EIP. A
fulfillment order for the ACME provider goes to the schema:acme-destination endpoint, and
the orders for the PACME provider go to the schema:pacme-destination endpoint.
To support this pattern, Camel has the choice DSL element. The choice DSL element contains
multiple when DSL elements, and optionally an otherwise DSL element.
from("schema:origin")
.choice()
.when(predicate1)
.to("schema:acme-destination")
.when(predicate2)
.to("schema:pacme-destination")
...
.otherwise()
.to("schema:destinationN");
The when DSL element requires a predicate that, when true, sends the message to a specific
destination. If the predicate is false, then the route flow moves to the next when element. The
predicate can use Simple language expressions, XPath expressions, or any other expression
language supported by Camel.
The otherwise DSL element defines a destination for messages that fail to match any of
the when predicates.
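For example, if the provider name is available as an element of an XML message body, the when predicates might be XPath expressions. The following is an illustrative sketch; the provider element name and the unmatched destination are assumptions:

from("schema:origin")
    .choice()
        .when(xpath("/order/provider = 'ACME'"))
            .to("schema:acme-destination")
        .when(xpath("/order/provider = 'PACME'"))
            .to("schema:pacme-destination")
        .otherwise()
            .to("schema:unmatched-destination");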
Routing Slip
The Routing Slip EIP routes a message consecutively through a series of processing steps. The
sequence of steps is unknown at design time, and varies for each message.
For example, a pipeline that processes security checks for a credit card company might have
different processing steps. A purchase from the same country requires standard security checks.
But, an international purchase requires additional endpoints to verify that the purchase is not
fraudulent.
In this pattern, a header field (the slip) contains the list of endpoints required in the processing
steps. At run time, Apache Camel reads this header and constructs the pipeline.
Note
A pipeline is a route in which all the intermediate steps are endpoints.
from("schema:origin")
.routingSlip(
header("destination")
);
The header method extracts the list of endpoints from the destination header.
You can use a bean to compute the header that contains the list of endpoints.
from("schema:origin")
.setHeader("destination")
.method(MyBeanImplementation.class)
.routingSlip(header("destination"));
The setHeader method creates a header named destination to store the list of
endpoints.
The method method uses the MyBeanImplementation Java bean to calculate the
sequence of endpoints, and adds the result to the destination header.
The routingSlip method constructs a pipeline from the list of endpoints stored in the
destination header, and sends the message to those endpoints.
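The bean only has to return the slip. A minimal sketch, assuming the slip is computed from the message body and that the class exposes a single public method (the method name, logic, and endpoint URIs are illustrative), might be:

public class MyBeanImplementation {

    // Returns a comma-separated list of endpoint URIs;
    // routingSlip uses "," as the default URI delimiter.
    public String computeDestinations(String body) {
        if (body.contains("international")) {
            return "schema:standard-check,schema:fraud-check";
        }
        return "schema:standard-check";
    }
}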
Dynamic Router
The Dynamic Router EIP routes a message consecutively through a series of processing steps.
This pattern does not require the series of steps to be predetermined, as with the Routing Slip
EIP. Each time the message returns from an endpoint, the dynamic router recalculates the next
endpoint in the route.
To use the Dynamic Router EIP, create a Java bean with the logic that determines where the
message should go next. Each time the endpoint process finishes, the route uses the same
method to recalculate the next step.
from("schema:origin")
.dynamicRouter(
method(MyBeanImplementation.class, "calculateDestination")
);
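A minimal sketch of such a bean, using an illustrative decision based on an invocation counter (a real implementation typically inspects the exchange contents instead), might look like the following:

public class MyBeanImplementation {

    private int invocations;

    // Camel calls this method each time the message returns from an endpoint.
    // Return the next endpoint URI, or null to signal the end of the route.
    public String calculateDestination(String body) {
        invocations++;
        if (invocations == 1) {
            return "schema:standard-check";          // illustrative endpoint
        }
        if (invocations == 2 && body.contains("international")) {
            return "schema:fraud-check";             // illustrative endpoint
        }
        return null;                                 // no more destinations
    }
}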
Dynamic To
The toD DSL method allows you to send a message to a single dynamically computed endpoint.
In this method, the parameter must be a String, or a Simple language expression that resolves to a
destination.
The following example resolves the key named destination from the exchange header to
identify the destination.
from("schema:origin")
.toD("${header.destination}");
References
For more information, refer to the Bean Integration chapter in the Apache Camel
Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#BasicPrinciples-BeanIntegration
For more information, refer to the Content-based Router chapter in the Apache
Camel Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#MsgRout-ContentBased
For more information, refer to the Routing Slip chapter in the Apache Camel
Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#MsgRout-RoutingSlip
For more information, refer to the Dynamic Router chapter in the Apache Camel
Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#DynamicRouter
For more information, refer to the Dynamic To chapter in the Apache Camel
Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#topic-dynamicto
Guided Exercise
Routing Messages
In this exercise, you will develop a pipeline that routes messages for a book publishing
company dynamically.
The company stores the books as DocBook files, and uses a shared file system for the
publishing process. A team of editors and graphic designers review the book manuscripts
before they are ready for printing.
At the moment, the company publishes technical and novel books. Editors review all types
of books, and graphic designers review only the technical ones.
Selecting the books for each team to review is a repetitive, manual, and time-consuming
task. You must use Red Hat Fuse to create a Camel route that sends the correct type of
book to the correct team.
The company also has printing services. The printing services use different machines
depending on the book type. You must create a Camel route to dynamically route the
reviewed books to the correct printing system.
Outcomes
You should be able to create Camel routes that route messages dynamically by using the
Routing Slip EIP, and the toD component.
The solution files for this exercise are in the AD221-apps repository, within the route-
messages/solutions directory.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/route-
messages/apps directory, into the ~/AD221/route-messages directory.
Instructions
1. Navigate to the ~/AD221/route-messages directory, open the project with your editor
of choice, and examine the code.
2. Create a destination algorithm for the book review pipeline. This algorithm must use the
book type to decide the destination endpoint. Edit the RoutingSlipStrategy class, and
implement an algorithm that implements the following rules:
Book type   Review destinations
technical   file://data/pipeline/graphic-designer, file://data/pipeline/editor
novel       file://data/pipeline/editor
switch (type) {
case "technical":
destinations.add("file://data/pipeline/graphic-designer");
// No break
case "novel":
destinations.add("file://data/pipeline/editor");
}
4. Verify the correctness of the book-review-pipeline route by executing the unit tests.
Run the ./mvnw clean -Dtest=BookReviewPipelineRouteBuilderTest test
command, and verify that three unit tests pass.
7. Run the Spring Boot application by using the ./mvnw clean package spring-
boot:run command.
8. Wait for the application to process the books stored in the file://data/manuscripts
endpoint, and manually verify the correctness of the book review pipeline:
9. To simulate the end of the book review process by the editors, copy the book-01.xml
and book-02.xml files from the data/pipeline/editor directory to the data/
pipeline/ready-for-printing directory. Wait for the application to process the files,
and manually verify the correctness of the book printing pipeline:
10. Stop the Spring Boot application, run the tests by executing the ./mvnw test command,
and verify that five unit tests pass.
Finish
Return to your workspace directory, and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Quiz
1. The first step is to connect to the FTP server to retrieve the orders. Regarding the
following line, which two of the sentences are true? (Choose two.):
from( "ftp://ftpserver/integration?include?include=incoming_orders.*csv" )
2. The next step is to split orders by type. Regarding the following code, which three
sentences are true? (Choose three.)
.choice()
.when( header( "CamelFileName" ).contains( "parts") )
.to( "direct:parts" )
.when( header( "CamelFileName" ).contains( "accessories" ) )
.to( "direct:accessories" )
.otherwise()
.log( "File: ${header.CamelFileName} does not have correct type." )
a. Files containing parts in their name are logged in the parts log.
b. Files containing parts in their name are sent to the direct:parts endpoint.
c. Files containing accessories in their name are sent to the direct:accessories
endpoint.
d. Files with neither parts nor accessories in their name are logged.
e. All files are logged.
3. After the splitting step, orders in the accessories consumer need to be processed to
attend to the specifications of a legacy system. Regarding the following processing,
which of the following sentences is true?
from( "direct:accessories" )
.process( new Processor() {
    public void process( Exchange exchange ) throws Exception {
        String inputMessage = exchange.getIn().getBody( String.class );
        ...code omitted...
        exchange.getIn().setBody( output );
    }
})
.to( "direct:accessories2" )
a. The processor replaces each newline character with legacy characters in messages
received from the accessories consumer, and the route sends the result to the
accessories2 producer.
b. The processor replaces each newline character with legacy characters in the
accessories consumer.
c. The processor replaces each newline character with legacy characters in the
accessories2 producer.
d. The processor replaces each return carriage character with newline characters in the
accessories consumer.
e. The processor replaces each newline character with legacy characters in the
accessories consumer to the accessories2 producer.
4. To finish the route, you must store each type of order in files. Regarding the following
code, which two sentences are true? (Choose two.)
from( "direct:accessories2" )
.to( "file:/shared/?fileName=accessories_${date:now:yyyyMMdd}.csv" )
from( "direct:parts" )
.to( "file:/shared/?fileName=parts_${date:now:yyyyMMdd}.csv" )
a. The route stores orders in the accessories and parts consumers into their respective
files.
b. The route stores orders in the accessories2 and parts producers into their respective
files.
c. The route stores orders in the accessories2 and parts consumers into their
respective files.
d. The route appends the current date to each file name.
Solution
1. The first step is to connect to the FTP server to retrieve the orders. Regarding the
following line, which two of the sentences are true? (Choose two.):
from( "ftp://ftpserver/integration?include?include=incoming_orders.*csv" )
2. The next step is to split orders by type. Regarding the following code, which three
sentences are true? (Choose three.)
.choice()
.when( header( "CamelFileName" ).contains( "parts") )
.to( "direct:parts" )
.when( header( "CamelFileName" ).contains( "accessories" ) )
.to( "direct:accessories" )
.otherwise()
.log( "File: ${header.CamelFileName} does not have correct type." )
a. Files containing parts in their name are logged in the parts log.
b. Files containing parts in their name are sent to the direct:parts endpoint.
c. Files containing accessories in their name are sent to the direct:accessories
endpoint.
d. Files with neither parts nor accessories in their name are logged.
e. All files are logged.
3. After the splitting step, orders in the accessories consumer need to be processed to
attend to the specifications of a legacy system. Regarding the following processing,
which of the following sentences is true?
from( "direct:accessories" )
.process( new Processor() {
    public void process( Exchange exchange ) throws Exception {
        String inputMessage = exchange.getIn().getBody( String.class );
        ...code omitted...
        exchange.getIn().setBody( output );
    }
})
.to( "direct:accessories2" )
a. The processor replaces each newline character with legacy characters in messages
received from the accessories consumer, and the route sends the result to the
accessories2 producer.
b. The processor replaces each newline character with legacy characters in the
accessories consumer.
c. The processor replaces each newline character with legacy characters in the
accessories2 producer.
d. The processor replaces each return carriage character with newline characters in the
accessories consumer.
e. The processor replaces each newline character with legacy characters in the
accessories consumer to the accessories2 producer.
4. To finish the route, you must store each type of order in files. Regarding the following
code, which two sentences are true? (Choose two.)
from( "direct:accessories2" )
.to( "file:/shared/?fileName=accessories_${date:now:yyyyMMdd}.csv" )
from( "direct:parts" )
.to( "file:/shared/?fileName=parts_${date:now:yyyyMMdd}.csv" )
a. The route stores orders in the accessories and parts consumers into their respective
files.
b. The route stores orders in the accessories2 and parts producers into their respective
files.
c. The route stores orders in the accessories2 and parts consumers into their
respective files.
d. The route appends the current date to each file name.
Summary
• A Camel route describes the path of a message from an origin endpoint to a destination
endpoint.
• You can define routes with the Java DSL or the XML DSL.
• For complex message routing, Camel provides built-in enterprise integration pattern (EIP)
implementations.
Chapter 3
Implementing Enterprise Integration Patterns

Goal: Implement enterprise integration patterns using Camel components.
Objectives
• After completing this section, you should be able to invoke data transformation automatically
and explicitly by using a variety of different techniques, and develop a route that filters
messages.
The following terms are commonly used to describe transformation features in Camel.
Data Formats
Data Formats in Camel are the various forms in which your data can be represented, either
binary or text. Each data format has a class that you must instantiate, and optionally
configure, before you can use it in a route.
Marshaling Data
Marshaling is the process where Camel converts the message payload from a memory-based
format (for example, a Java object) to a data format suitable for storage or transmission (XML
or JSON, for example). To perform marshaling, Camel uses the marshal method, which
requires a DataFormat object as a parameter.
Unmarshaling Data
Unmarshaling is the opposite process of marshaling, where Camel converts the message
payload from a data format suitable for transmission such as XML or JSON to a memory-
based format, typically a Java object. To perform unmarshaling explicitly in a Camel route, use
the unmarshal method in the Java DSL.
To marshal Java objects to and from XML by using JAXB, add the camel-jaxb dependency to your POM file:

<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jaxb</artifactId>
</dependency>
The camel-jaxb library also provides the JAXB annotations that you must use to annotate your
model class. JAXB matches fields on the model class to elements and attributes contained in the
XML data by using the information provided by the JAXB annotations.
The following JAXB model class contains the necessary annotations to marshal the XML to a Java
object:
package com.redhat.training;
import java.io.Serializable;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAttribute;
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Order implements Serializable {
@XmlAttribute
private String id;
@XmlAttribute
private String description;
@XmlAttribute
private double value;
@XmlAttribute
private double tax;
}
Note that this model class implements the java.io.Serializable interface, which is required by
JAXB to execute the marshaling and unmarshaling process.
In the previous example, each field in the Java class represents an attribute on the root Order
element. By default, if no alternate name is specified in the JAXB annotation parameters, the
marshaller uses the name of the field directly to map the XML data.
Note
JAXB annotations are beyond the scope of this course. You can find more
documentation in the JAXB user guide.
In the following sample route, an external system places Java objects in the body of a message
and then sends the message to an ActiveMQ queue called itemInput. The Camel route
consumes the messages, and JAXB marshals the object to XML data and replaces the contents of
the exchange body with the corresponding XML equivalent. The route then sends the XML data in
a message to an ActiveMQ queue called itemOutput.
from("activemq:queue:itemInput)
.marshal().jaxb()
.to("activemq:queue:itemOutput);
Notice that in the previous example route, no JAXB data format is instantiated. This is possible
because camel-jaxb automatically finds all classes that contain JAXB annotations if no context
path is specified, and uses those annotations to marshal to the XML data.
To marshal Java objects to and from JSON data by using the Jackson library, add the camel-jackson dependency to your POM file:

<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jackson</artifactId>
</dependency>
Similar to JAXB, Jackson also provides a set of annotations you use to control the mapping of
JSON data into your model classes.
For example, consider the following JSON representation of an order:

{
  "ID": "1",
  "value": 5.00,
  "tax": 0.50,
  ...
}
The following example is a Jackson-annotated model class which you can use to marshal the
JSON data:
package com.redhat.training;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonProperty;

public class Order {

    @JsonProperty("ID")
    private String id;

    @JsonIgnore
    private String description;

    // remaining fields omitted
}
Use the @JsonProperty annotation to override the field name with a name for Jackson to
use when marshaling and unmarshaling JSON data for that property.
Use the @JsonIgnore annotation to make Jackson ignore a field entirely when marshaling
an instance of the model class into JSON data.
The following is a Java DSL example of using Jackson to marshal JSON data before writing the
JSON data to a file in the outbox directory:
from("queue:activemq:queue:itemInput")
.marshal().json(JsonLibrary.Jackson)
.to("file:outbox")
When you use the camel-xmljson library, the terms marshaling and unmarshaling
are not as obvious because there are no Java objects involved. The library defines XML as the
high-level format or the equivalent of what your Java model classes typically represent, and
JSON as the low-level format more suitable for transmission or storage. This designation is mostly
arbitrary for the purpose of defining the marshal and unmarshal terms.
Marshaling
Converting from XML to JSON
Unmarshaling
Converting from JSON to XML
To use the XmlJsonDataFormat class in your Camel routes you must add the following
dependencies to your POM file:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-xmljson</artifactId>
</dependency>
<dependency>
<groupId>xom</groupId>
<artifactId>xom</artifactId>
</dependency>
Note
The XOM library cannot be included by default because its license is incompatible with
the Apache Software Foundation licensing policy. You must add this dependency manually
for the camel-xmljson module to function.
The following example Camel route includes the use of the XmlJsonDataFormat:
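A minimal sketch of such a route, with illustrative endpoint URIs, might be:

XmlJsonDataFormat xmlJsonFormat = new XmlJsonDataFormat();

from("file:xml-orders")
    .marshal(xmlJsonFormat)      // converts the XML body to JSON
    .to("file:json-orders");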
Filtering Messages
Camel defines the Message Filter pattern to remove some messages during route execution
based on the content of the Camel message.
from("<Endpoint URI>")
.filter(<filter>)
.to("<Endpoint URI>");
The <filter> must be a Camel predicate, which evaluates to either true or false based on the
message content. The filter drops any messages that evaluate to false and the remainder of the
route is not processed. Predicates can be created using expressions, which Camel evaluates at
runtime.
Because integration systems support many data formats, you can use various expression
technologies to filter information. For example, in an XML-based message, you can use
XPath to identify fields in the XML data.
To evaluate if a certain value is available at a specific XML element, use the following syntax:
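A representative predicate, assuming an <order> document with a status element (the element name and value are illustrative), is:

xpath("/order/status = 'confirmed'")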
Likewise, the Simple expression language can be used to filter Java objects.
from("direct:a")
.filter(simple("${header.foo} == 'bar'"))
.to("direct:b")
Implementing Predicates
Camel uses expressions to look for information inside messages. Expressions support a large
number of data formats, including Java-based data and common data format exchanges (XML,
JSON, SQL, and so on).
Predicates in Camel are essentially expressions that must return a Boolean value. This is often
used to look for a certain value in an Exchange instance. Predicates can be leveraged by Camel in
conjunction with expression languages to customize routes and filter data in a route. For example,
to use the Simple expression language in a filter, the Simple expression must be called inside a
route calling the simple method. Similarly, to use an XPath expression, there is a method called
xpath.
<order>
<orderId>100</orderId>
<shippingAddress>
<zipCode>22322</zipCode>
</shippingAddress>
</order>
To navigate in an XML file, XPath separates each element with a forward slash (/). Therefore, to
get the text within the <orderId> element, use the following XPath expression:
/order/orderId/text()
To get the zipCode from the previous XML, use the following expression:
/order/shippingAddress/zipCode/text()
To get all XML contents where zipCode is not 23221, use the following expression:
/order/shippingAddress[not(contains(zipCode,'23221'))]
To use the expression as a predicate in a Camel route, the XPath method parses the XPath
expression and returns a Boolean:
xpath("/order/shippingAddress/[not(contains(zipCode,'23221'))]")
To filter XML messages sent to a destination, use the following Java DSL example:
from("file:orders/incoming?include=order.*xml")
.filter(xpath("/order/orderItems/orderItem/orderItemQty > 1"))
.to("file:orders/outgoing/?fileExist=Fail");
An expression language (EL) used to identify Java objects is the Simple EL. It uses a syntax that
resembles many other scripting languages, using dots to step through nested Java objects. For
example, in a class called Order, with an address attribute that contains a ZIP code, the Simple EL
to search for the zip code 33212 is:
${body.address.zipCode} == '33212'
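A route could use this predicate in a filter; a minimal sketch, assuming the message body is an Order object (endpoint URIs are illustrative):

from("direct:orders")
    .filter(simple("${body.address.zipCode} == '33212'"))
    .to("direct:local-deliveries");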
from("activemq:queue:orders.in")
.wireTap("file:backup")
.to("direct:start");
The preceding example uses the Wire Tap pattern, and sends a copy of every message received on
the activemq:queue:orders.in endpoint to the file:backup endpoint.
References
For more information, refer to the JAXB DataFormat chapter in the Apache Camel
Component Reference Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#jaxb-dataformat
For more information, refer to the JSON Jackson DataFormat chapter in the Apache
Camel Component Reference Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#json-jackson-dataformat
For more information, refer to the Message Filter chapter in the Apache Camel
Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#MsgRout-MsgFilter
For more information, refer to the Wire Tap chapter in the Apache Camel
Development Guide
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#WireTap
Guided Exercise
Outcomes
In this exercise you should be able to:
The solution files for this exercise are in the AD221-apps repository, within the pattern-
filter/solutions directory.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/pattern-
filter/apps directory, into the ~/AD221/pattern-filter directory. The lab
command also creates a Red Hat AMQ instance.
Note
You can inspect the logs for the AMQ instance at any time with the following
command:
Instructions
1. Navigate to the ~/AD221/pattern-filter directory and open the project with an
editor, such as VSCodium.
2. Open the project POM file and add the dependencies that are required.
Dependencies to Add

XML to JSON Transformations

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-xmljson</artifactId>
</dependency>
<dependency>
    <groupId>xom</groupId>
    <artifactId>xom</artifactId>
    <version>1.3.7</version>
</dependency>

JSON Paths

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jsonpath</artifactId>
</dependency>

XML Transformation

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jaxb</artifactId>
</dependency>
4. Build and run the application to verify that the route works. Note that the route processes
the XML for the five sample Order objects.
The XmlJsonDataFormat class is used by the marshal method of the Java DSL to convert the
XML data into JSON data.
from("jms:queue:orderInput")
.routeId("Transforming Orders")
.marshal().jaxb()
.log("XML Body: ${body}")
.marshal(xmlJson)
.log("JSON Body: ${body}")
.to("mock:fufillmentSystem");
7. Using the delivered field on the Order object, filter out any orders that were already
delivered.
Add the following method to filter out Order objects that have a property of delivered
set to true.
from("jms:queue:orderInput")
.routeId("Transforming Orders")
.marshal().jaxb()
.log("XML Body: ${body}")
.marshal(xmlJson)
.log("JSON Body: ${body}")
.filter().jsonpath("$[?(@.delivered !='true')]")
.to("mock:fufillmentSystem");
Since the data is now in JSON format, you can use a predicate that uses the jsonpath
method to only allow messages where the value of the delivered field is not equal to
true.
8. Send all undelivered orders to a mock endpoint representing an order logging system.
Add the wireTap DSL method to the route definition to send a copy of all undelivered
orders to a separate direct endpoint:
from("jms:queue:orderInput")
.routeId("Transforming Orders")
.marshal().jaxb()
.log("XML Body: ${body}")
.marshal(xmlJson)
.log("JSON Body: ${body}")
.filter().jsonpath("$[?(@.delivered !='true')]")
.wireTap("direct:jsonOrderLog")
.to("mock:fufillmentSystem");
10. Build and run the application with the ./mvnw clean spring-boot:run command.
In the sample data, two of the five orders have the delivered field set to false. Thus,
you should find two of these Log Order messages resulting from the wire tap.
12. Use the ./mvnw clean test command to run the unit tests, and verify that the two tests
pass.
Finish
Return to your workspace directory, and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• After completing this section, you should be able to create custom type converters, to convert
the message payloads into custom object types.
For the data type transformation, Camel has a built-in type-converter system that automatically
converts between well-known types. This system allows Camel components to work together
without having type mismatches.
When routing messages from one endpoint to another, it is often necessary for Camel to convert
the body payload from one Java type to another. Conversions frequently occur between the
following types:
• File
• String
The Message interface defines the getBody helper method to allow such automatic conversion.
For example:
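A representative call, assuming the incoming payload is a file-based message body:

// Request the body as a byte array; Camel converts the payload on demand
byte[] data = exchange.getIn().getBody(byte[].class);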
In this example, Camel converts the body payload during the routing execution from a data format
such as File to a Java byte[] array.
If you need to route files to a JMS queue by using javax.jms.TextMessage objects, then you
must convert each file to a String, which forces the JMS component to use the TextMessage
class. Use the convertBodyTo method to convert the body to String:
from("file://orders/inbox")
.convertBodyTo(String.class)
.to("activemq:queue:inbox");
Note
Camel implements more than 350 out-of-the-box type converters, which are
capable of applying type conversion for the most commonly used types.
A Camel context can have more than one converter active. Camel keeps track of these converters
in a registry called TypeConverterRegistry where all the type converters are registered when
Camel is started. This allows Camel to pick up type converters not only from camel-core, but also
from any of the Camel components, including the custom converters in your Camel applications.
To develop a type converter, use the Camel annotation @Converter in any class that implements
custom conversion logic. A custom converter must have a static method whose signature must
meet the following requirements:
• The return value must be a type that is compatible with the target object type you are
converting to.
• The first parameter must be a type that is compatible with the format you wish to convert.
You must annotate both the converter class and the static method by using the @Converter
annotation.
The following code is an example of a custom type converter class that converts an Order object
to an encrypted InputStream.
package com.redhat.training.ad221.converters;
// imports omitted
@Converter
public class OrderConverter {

    // The 'encryptor' helper object used below is not shown in this listing
@Converter
public static InputStream toInputStream(Order order) {
String encryptedOrderInfo = encryptor.encrypt(order.toString());
return new ByteArrayInputStream(encryptedOrderInfo.getBytes());
}
}
The toInputStream method takes an Order object instance as a source and returns an
InputStream as the target of the conversion.
Note
In this example, the Order class has a toString method that returns the details
of an order in String format. The OrderConverter class uses this method to get
the order information as String and encrypt it.
The following excerpt shows the toString method of the Order class:

// imports omitted

public class Order {

    // fields and constructor(s) omitted

    @Override
    public String toString() {
        return "Order [description=" + description + ", id=" + id + ", price=" +
            price + ", tax=" + tax + "]";
    }
}
Additionally, to allow Camel to register your custom type converter classes in the
TypeConverterRegistry, you must include your custom converter package name in the META-
INF/services/org/apache/camel/TypeConverter file.
TypeConverter is a service discovery file that has a list of fully qualified class names or packages
that contain Camel type converters. Each record in this file must be on a new line.
com.redhat.training.ad221.MyConverter
com.redhat.training.ad221.MyOtherConverter
com.redhat.training.ad221.converters
A converters package definition that might have custom converters in it. This enables the
OrderConverter in the preceding example to be discovered.
After setting up the type converter registry for your converters, you can create a route that uses
the OrderConverter implicitly as follows:
// imports omitted
@Override
public void configure() throws Exception {
from("kafka:orders")
.unmarshal(new JacksonDataFormat(Order.class))
.to("http4://localhost:8081/orders");
}
The route consumes messages from the orders Kafka topic in JSON format.
The http4 component requires InputStream as the input data format. In the previous step,
you unmarshaled the JSON data to Order data, so a conversion must happen before this
step.
Camel searches for a converter that converts from an Order type to an InputStream type.
Camel finds your custom converter OrderConverter in the type converter registry and runs
the conversion implicitly.
References
For more information, refer to the Type Converters chapter in the Apache Camel
Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#TypeConv
Camel in Action, Second Edition (2018) by Claus Ibsen and Jonathan Anstey;
Manning. ISBN 978-1-617-29293-4.
Guided Exercise
The IoT software company sends commands to air purifier devices for configuration. The
current air purifier devices accept these commands in JSON data format. However, the
company is renewing the devices and introducing a second version of the air purifier.
The new version of the devices accepts a custom format instead of the JSON format.
You must update the Camel route in the command-router application to send the
configuration commands to the version-2 air purifier devices in the required custom format.
Outcomes
In this exercise you should be able to:
• Create a Camel route that reads the configuration data from a CSV file.
• Unmarshal the configuration data into Java objects.
• Create a custom type converter for converting the objects to the required command
object format.
The solution files for this exercise are in the AD221-apps repository, within the pattern-
converter/solutions directory.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/pattern-
converter/apps directory, into the ~/AD221/pattern-converter directory.
Instructions
1. Navigate to the ~/AD221/pattern-converter/air-purifier-v2 directory, compile
and run the application by using ./mvnw package quarkus:dev.
This is the application that represents the new version of the air purifier device. You do not
need to modify this application in this exercise.
3. In the command-router application directory, run ./mvnw clean test to execute the
test.
Verify that the unit test fails.
The unit test fails because the current command-router application sends JSON
formatted messages to the air-purifier-v2 application. The air-purifier-v2
application does not accept the JSON format; it accepts a custom data format instead, so
the Quarkus application throws an exception.
5. Open the
com.redhat.training.route.converter.CommandConfigurationConverter
class and add the @Converter annotations both for the class level and the method level.
Implement the convertToCommandConfiguration method by setting the attributes of
the commandConfig object instance.
The commandConfig is an instance of the CommandConfiguration class. The code uses
the toString method to obtain formatted results.
Use the csvRecord object instance to obtain the attribute values.
private CommandConfigurationConverter() {}
8. Run ./mvnw clean test to execute the test of the command-router application again.
Verify that the test passes.
Finish
Return to your workspace directory and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• After completing this section, you should be able to use the Splitter pattern to break a message
into a series of individual messages, and merge multiple messages by using the Aggregator
pattern.
The Splitter pattern breaks a single message into a series of individual messages. To
implement this pattern, Camel provides the split method, which you can use when creating
the route. The following example demonstrates the use of an XPath expression to split XML
data stored in the exchange body:
from("activemq:queue:NewOrders")
.split(xpath("/order/product"))
.to("activemq:queue:Orders.Items");
The split method also supports splitting certain Java types by default, without any
expression specified. A common use case is to split a Collection, Iterator, or array from
the exchange body.
The split method creates separate exchanges for each part of the data that is split and then
sends those exchanges along the route separately.
For example, consider a route that has unmarshaled CSV data into a List object. Once the data is
unmarshaled, the exchange body contains a List of model objects, and a call to split creates a new
exchange for each record from the CSV file.
The following examples demonstrate splitting the exchange body as well as splitting an exchange
header:
from("direct:splitUsingBody").split(body()).to("mock:result");
from("direct:splitUsingHeader").split(header("foo")).to("mock:result");
This example splits the body containing an Iterable Java object (List, Set, Map, etc) into
separate exchanges.
This example splits a header containing an Iterable Java object into separate exchanges
with the same body. For example, if the exchange header contains a list of users then split
could transform it into multiple exchanges, one per user in the list, all containing the same
exchange body.
The following code snippet is an example of a route that uses the tokenizer expression by
using the tokenize method. The route reads a text-based file, splits the message body by
line, and then sends the lines to a Kafka topic one by one.
from("file:messages.txt")
.split(body().tokenize("\n"))
.to("kafka:messages");
Additionally, if you are splitting XML data, Camel provides the DSL method tokenizeXML as an
optimized version of the tokenizer.
from("file:inbox")
.split().tokenizeXML("order")
.to("activemq:queue:order");
If streaming is enabled then Camel splits the input message into chunks, instead of attempting to
load the entire body into memory at once, and then splitting it. This reduces the memory required
for each invocation of the route.
When dealing with extremely large payloads, it is recommended that you enable streaming. If data
is small enough to fit in memory, streaming is an unnecessary overhead.
You can split streams by enabling the streaming mode using the streaming builder method.
from("direct:streaming")
.split(body().tokenizeXML("order")).streaming()
.to("activemq:queue.order");
If the data you are splitting is in XML format then be sure to use tokenizeXML instead of an
XPath expression. This is because the XPath engine in Java loads the entire XML content into
memory, negating the effects of streaming for very big XML payloads.
The Aggregator pattern combines multiple related messages into a single message. Use this
pattern, for example, when you want to batch process data that you receive in fragments,
such as combining individual orders that the same vendor must fulfill. By using this
pattern, you can define custom aggregation behavior to control how Camel uses the source
data fragments to build the final aggregated message.
To build the final aggregated message, the Aggregator needs three pieces of information:
• A correlation expression, which determines the messages that belong together.
• An aggregation strategy, which defines how to combine the correlated messages.
• A completion condition, which determines when the aggregated message is ready.
To use this pattern in your Camel route, use the aggregate DSL method, which requires two
parameters:
.aggregate(correlationExpression, AggregationStrategyImpl)
Additionally, you must define a completion condition to specify when to send the
aggregated exchange. You can define the completion condition by using methods from the
Java domain-specific language (DSL). These methods are described later in this section.
For example, in the following route, the messages with a matching header field called
destination are aggregated using the MyNewStrategy AggregationStrategy
implementation:
from("file:in")
.aggregate(header("destination"),new MyNewStrategy())...
.to("file:out");
• The aggregate method requires two exchange parameters. The first parameter is the old
exchange. The second parameter is the new exchange. The first parameter is always null for
the first message. This is because when you receive the first message exchange you have not
yet created the aggregated message. Therefore, when implementing the aggregate method,
you must verify whether the old exchange is null. If this condition is true, then the aggregate
method must ignore the old exchange and return only the new exchange.
• The aggregate method must return an exchange object, which Camel passes as the old
exchange to the next method execution, with body contents that represent the aggregation
of the two exchange objects passed into the aggregate method execution.
The exchange object that was previously processed and returned by this
AggregationStrategy implementation.
The exchange object containing the newest exchange object received by this
AggregationStrategy implementation.
Retrieves the body contents from the exchange object sent by the previous execution of this
AggregationStrategy implementation and transforms it into a String.
Merge the body content from both exchanges. This implementation uses a simple string
concatenation to merge the two exchange bodies.
Updates the body of the exchange object with the merged body content to be sent to the
next execution of this AggregatorStrategy implementation.
Sends the updated exchange object to the next execution of this AggregationStrategy
implementation.
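A minimal AggregationStrategy implementation that matches the preceding descriptions, offered here as a sketch rather than the original listing, is:

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class MyNewStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // First message of the group: there is nothing to merge yet
        if (oldExchange == null) {
            return newExchange;
        }

        // Merge the two bodies with a simple string concatenation
        String oldBody = oldExchange.getIn().getBody(String.class);
        String newBody = newExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(oldBody + newBody);

        // The returned exchange is passed as oldExchange on the next call
        return oldExchange;
    }
}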
The Java DSL provides the following completion condition methods:

completionInterval(long completionInterval)
Build the aggregated message after a certain time interval (in milliseconds).
completionPredicate(Predicate predicate)
Build the aggregated message if the predicate is true.
completionSize(int completionSize)
Build the aggregated message when the number of messages defined in the
completionSize is reached.
completionSize(Expression completionSize)
Build the aggregated message when the number of messages processed by a Camel
expression is reached.
completionTimeout(long completionTimeout)
Build the aggregated message when there are no additional messages for processing and the
completionTimeout (in milliseconds) is reached.
completionTimeout(Expression completionTimeout)
Build the aggregated message when there are no additional messages for processing and the
timeout defined by a Camel expression is reached.
In the following route, the completionSize method is used to trigger the aggregated message
creation:
from("file:in")
.aggregate(header("destination"),new MyNewStrategy())
.completionSize(5)
.to("file:out");
Note
The completionSize method waits until the number of messages defined in
the completionSize parameter is reached. If this number is not reached, the
aggregate method hangs when you try to stop the Camel context. To avoid this,
you must set the AGGREGATION_COMPLETE_ALL_GROUPS header to true in the
implementation of your aggregation strategy:
newExchange.getIn().setHeader(Exchange.AGGREGATION_COMPLETE_ALL_GROUPS, true);
It is also possible to use multiple completion conditions, as shown in the following example:
from("file:in")
.aggregate(header("destination"),new MyNewStrategy())
.completionInterval(10000)
.completionSize(5)
.to("file:out");
When multiple completion conditions are defined, whichever condition is met first triggers
the completion of the aggregation. In the previous example, the batch completes when
either five exchanges are processed or 10 seconds pass, whichever occurs first.
References
For more information, refer to the Splitter chapter in the Apache Camel
Development Guide
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#MsgRout-Splitter
For more information, refer to the Aggregator chapter in the Apache Camel
Development Guide
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#MsgRout-Aggregator
Camel in Action, Second Edition (2018) by Claus Ibsen and Jonathan Anstey;
Manning. ISBN 978-1-617-29293-4.
Guided Exercise
Your department produces text files that contain the details of processed orders. The
number of orders that each file contains is too large to be processed by other departments
in your company. Each line of an orders file represents an order. You must split the input
files by order, and then produce batches of 10 orders, so that other departments can easily
process the data.
Outcomes
You should be able to split incoming messages into smaller parts and implement a custom
aggregation strategy to aggregate batches of lines every 10 lines.
The solution files for this exercise are in the AD221-apps repository, within the pattern-
combine/solutions directory.
From your workspace directory use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/pattern-
combine/apps directory, into the ~/AD221/pattern-combine directory.
Instructions
1. Navigate to the ~/AD221/pattern-combine directory, open the project with your editor
of choice, and examine the code.
2. Update the route configure method of the CombineRouteBuilder class to add the
splitting and aggregation code.
2.1. Add the split step to the route pipeline that uses the system line separator as the
tokenizer for the message body:
2.2. Add the aggregating step to the route pipeline and create a new
org.apache.camel.processor.aggregate.AggregationStrategy instance
as the parameter (a consolidated sketch of steps 2 and 3 follows step 3):
2.3. Inside the aggregate method, return the next element in the queue if it is the first
one:
if (oldExchange == null) {
return newExchange;
}
2.4. Otherwise, concatenate the previous message body with the new one by using the
line separator:
return oldExchange;
3. Accumulate the lines, batching them every 10 lines. After the aggregate method, call the
completionSize method with a batch size of 10.
.completionSize( 10 )
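Putting steps 2 and 3 together, the relevant part of the configure method might look like the following sketch. The correlation expression (constant(true)) and the endpoint URIs are assumptions; the lab code might differ:

from("file:data/orders")                              // hypothetical input endpoint
    .split(body().tokenize(System.lineSeparator()))
    .aggregate(constant(true), new AggregationStrategy() {
        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            // First line of the batch: nothing to merge yet
            if (oldExchange == null) {
                return newExchange;
            }
            // Concatenate the previous body with the new one, using the line separator
            String oldBody = oldExchange.getIn().getBody(String.class);
            String newBody = newExchange.getIn().getBody(String.class);
            oldExchange.getIn().setBody(oldBody + System.lineSeparator() + newBody);
            return oldExchange;
        }
    })
    .completionSize(10)
    .to("file:data/batches");                         // hypothetical output endpoint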
4. Test the CombineRouteBuilder class. The project contains a test for the route that you
can execute to verify that the execution is correct.
Finish
Return to your workspace directory and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Quiz
The company has an online Software as a Service (SaaS) solution for file conversion.
The company's Apache Camel based services have all the conversion and integration logic.
In some integration services, the legacy developer left some unfinished code for you to
complete.
You must choose the correct answers to the following questions to complete the relevant
part of the services:
1. In the file service, you must implement a custom converter that converts any metric
type to an InputStream. The Metric class, which represents the metric type, has a
toString method that returns the metric values in String format. Regarding the
following code, which two of the following options are true? (Choose two.)
...imports omitted...
a. You must annotate the class with @ClassConverter and the method with @Converter.
b. The return statement must be as follows: return new
ByteArrayInputStream(metric.toString().getBytes());.
c. The return statement must be as follows: return metric.toString();.
d. You must both annotate the class and the method with @Converter.
e. The return statement must be as follows: return new
ByteArrayInputStream(metric);.
2. In the XML converter service, you must refactor a Camel route that converts XML data
to JSON. The legacy developer implemented this by first converting the XML to an
object, and converting that object to JSON. The following code shows the Camel route
the legacy developer developed before. You must refactor the route to use the camel-
xmljson library to directly convert from XML to JSON. The legacy developer added
all the required dependencies in the pom.xml file. Regarding this code, which of the
following options is true for the new implementation? (Choose one.)
...code omitted...
JaxbDataFormat jaxbDataFormat =
new JaxbDataFormat(JAXBContext.newInstance(CustomObject.class)); //[1]
from("file:xml-in")
.unmarshal(jaxbDataFormat) //[2]
.marshal().json(JsonLibrary.Jackson) //[3]
.to("kafka:json-out");
...code omitted...
a. You must replace the code marked as [1] with XmlJsonDataFormat xmlJsonFormat
= new XmlJsonDataFormat();. Also you must remove the codes marked as [2] and
[3] and leave the code like that.
b. You must only remove the code marked as [2] and [3] and add .unmarshal() instead.
c. You must remove the code marked as [1] and put XmlJsonDataFormat
xmlJsonFormat = new XmlJsonDataFormat(); instead. Also you must remove the
codes marked as [2] and [3] and add .unmarshal(xmlJsonFormat) instead.
d. You must keep the code marked as [2] but delete the codes marked as [1] and [3].
e. You must only add .unmarshal(xmlJsonFormat) after the code marked as [3].
3. In the XML converter service, you have another route that reads big XML files. The
route is not complete, because it has to process the data one by one for each <object>
of the XML file. The route should filter the split XML, and skip all the <object>
elements that have the type attribute empty. Regarding the following code, which two
of the following options are true? (Choose two.)
...code omitted...
from("file:objects-xml")
// TODO: Split the XML
// TODO: Filter the split XML
.to("activemq:queue:filtered-objects");
...code omitted...
...code omitted...
from("file:source-file")
.aggregate(header("content"),new AppenderAggregationStrategy())
.to("file:appended-file");
...code omitted...
Solution
The company has an online Software as a Service (SaaS) solution for file conversion.
The company's Apache Camel based services have all the conversion and integration logic.
In some integration services, the legacy developer left some unfinished code for you to
complete.
You must choose the correct answers to the following questions to complete the relevant
part of the services:
1. In the file service, you must implement a custom converter that converts any metric
type to an InputStream. The Metric class, which represents the metric type, has a
toString method that returns the metric values in String format. Regarding the
following code, which two of the following options are true? (Choose two.)
...imports omitted...
a. You must annotate the class with @ClassConverter and the method with @Converter.
b. The return statement must be as follows: return new
ByteArrayInputStream(metric.toString().getBytes());.
c. The return statement must be as follows: return metric.toString();.
d. You must both annotate the class and the method with @Converter.
e. The return statement must be as follows: return new
ByteArrayInputStream(metric);.
2. In the XML converter service, you must refactor a Camel route that converts XML data
to JSON. The legacy developer implemented this by first converting the XML to an
object, and converting that object to JSON. The following code shows the Camel route
the legacy developer developed before. You must refactor the route to use the camel-
xmljson library to directly convert from XML to JSON. The legacy developer added
all the required dependencies in the pom.xml file. Regarding this code, which of the
following options is true for the new implementation? (Choose one.)
...code omitted...
JaxbDataFormat jaxbDataFormat =
new JaxbDataFormat(JAXBContext.newInstance(CustomObject.class)); //[1]
from("file:xml-in")
.unmarshal(jaxbDataFormat) //[2]
.marshal().json(JsonLibrary.Jackson) //[3]
.to("kafka:json-out");
...code omitted...
a. You must replace the code marked as [1] with XmlJsonDataFormat xmlJsonFormat
= new XmlJsonDataFormat();. Also you must remove the codes marked as [2] and
[3] and leave the code like that.
b. You must only remove the code marked as [2] and [3] and add .unmarshal() instead.
c. You must remove the code marked as [1] and put XmlJsonDataFormat
xmlJsonFormat = new XmlJsonDataFormat(); instead. Also you must remove the
codes marked as [2] and [3] and add .unmarshal(xmlJsonFormat) instead.
d. You must keep the code marked as [2] but delete the codes marked as [1] and [3].
e. You must only add .unmarshal(xmlJsonFormat) after the code marked as [3].
3. In the XML converter service, you have another route that reads big XML files. The
route is not complete, because it has to process the data one by one for each <object>
of the XML file. The route should filter the split XML, and skip all the <object>
elements that have the type attribute empty. Regarding the following code, which two
of the following options are true? (Choose two.)
...code omitted...
from("file:objects-xml")
// TODO: Split the XML
// TODO: Filter the split XML
.to("activemq:queue:filtered-objects");
...code omitted...
...code omitted...
from("file:source-file")
.aggregate(header("content"),new AppenderAggregationStrategy())
.to("file:appended-file");
...code omitted...
Summary
• You can use the JAXB library to transform messages to and from XML.
• You can use the Jackson library to transform messages to and from JSON.
• A message filter selectively removes message exchanges from a route based upon message
contents.
• Camel type converters run implicitly to convert data from the source data format to the target
data format.
• You can create a custom type converter and register it with Camel's type conversion system.
• Camel supports the Splitter pattern, enabling you to break messages into chunks for separate
processing.
• You can use the Aggregator pattern to combine multiple related messages into a single
message.
Chapter 4
Creating Tests for Routes and Error Handling
Objectives
• After completing this section, you should be able to develop tests for Camel routes with Camel
Test Kit.
Similar to testing any other piece of software, you should distribute your testing efforts by using
various types of tests. The required effort to adopt each test type is defined by the testing
pyramid. The testing pyramid suggests an agile testing strategy including unit, integration, and
end-to-end acceptance tests.
With the Camel Test Kit, you can implement unit and integration tests. Higher-level tests, such as
user interface or end-to-end tests require additional testing frameworks, which are not covered in
this course.
The following are the available Camel Test Kit modules in Red Hat Fuse 7.10 for Spring Boot.
camel-test
The main testing module, which provides a number of helpers for writing JUnit tests for Camel
routes.
camel-test-spring
This module wraps camel-test to make the Camel testing helpers available in Spring and
Spring Boot tests.
Note
Camel 3 replaces the preceding libraries with camel-test-junit5 and camel-
test-spring-junit5. These JUnit 5 libraries are not covered in this course,
because Red Hat Fuse 7.10 is based on Camel 2.
1. Add the spring-boot-starter-test dependency to your POM file.

<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
2. Add the camel-test-spring dependency to your POM file. This module also adds camel-
test as a transitive dependency.
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-test-spring</artifactId>
<scope>test</scope>
</dependency>
Important
If you do not use the Fuse Spring Boot BOM to control the versions of your
dependencies, then you must explicitly declare the spring-boot-starter-test
and camel-test-spring versions.
For example, consider the following route builder, which formats a double number and writes the
result to a file:
@Component
public class DoubleNumbersRouteBuilder extends RouteBuilder {
@Override
public void configure() throws Exception {
from("direct:doubleNumber")
.process(exchange -> {
Double number = exchange.getIn().getBody(Double.class);
exchange.getIn().setBody(String.format("%,.2f", number));
})
.to("file:formatted");
}
}
Note that the @Component Spring annotation is required for your Spring Boot tests to discover
the route.
Next, consider that you want to test that, given a double number, the preceding route produces
the correct string representation. You can write a Spring Boot test case as follows:
@RunWith( CamelSpringBootRunner.class )
@SpringBootTest( classes = Application.class )
public class DoubleNumbersRouteBuilderTest {
@Autowired
private ProducerTemplate producerTemplate;
@Autowired
private ConsumerTemplate consumerTemplate;
@Test
public void testRouteFormatsDoubleNumbers() {
producerTemplate.sendBody("direct:doubleNumber", Math.PI);
String formatted = consumerTemplate.receiveBody("file:formatted", String.class);
assertEquals("3.14", formatted);
}
}
@CamelSpringBootRunner enables the Camel test helpers in Spring Boot test cases and
automatically handles the Camel context.
@SpringBootTest specifies the configuration class to use for the Spring Boot application
context. Note that you must also annotate the main configuration class of your application
with @SpringBootApplication.
@Autowired injects the ProducerTemplate object. Use this template to send messages
to a route endpoint.
@Autowired injects the ConsumerTemplate object. Use this template to receive messages
from a route endpoint.
The producer template sends the Pi number to the direct:doubleNumber input endpoint.
The ProducerTemplate class provides additional methods, other than sendBody, to easily
send messages to endpoints.
The consumer template receives the resulting message body from the file:formatted
destination endpoint. The ConsumerTemplate class provides additional methods, other
than receiveBody, to easily receive messages from endpoints.
The test verifies that the resulting message body is a 2-decimal string representation of Pi.
You can also inject the CamelContext object in your test class, for example to inspect the
endpoints that the context contains:
@RunWith( CamelSpringBootRunner.class )
@SpringBootTest( classes = Application.class )
public class MyRouteBuilderTest {
@Autowired
private CamelContext context;
@Test
public void testRouteParsesLatestWarningText() {
Collection<Endpoint> endpoints = context.getEndpoints();
...
}
}
Testing Utilities
The TestSupport class provides a number of static testing utility methods. For example, if
you want to create a directory before each test runs, then you can use the createDirectory
method.
@Before
public void setUp() {
TestSupport.createDirectory( "my/testing/dir" );
}
Similarly, you can delete a directory after each test, with the deleteDirectory method.
@After
public void clean() {
TestSupport.deleteDirectory( "my/testing/dir" );
}
References
The Practical Test Pyramid
https://martinfowler.com/articles/practical-test-pyramid.html
For more information, refer to the Testing with Camel Spring Boot section in the
Red Hat Fuse 7.10 - Deploying into Spring Boot Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
deploying_into_spring_boot/index#test-with-camel-spring-boot
Guided Exercise
Consider a legacy application that exposes its errors and warnings as HTML files. The
provided code for this exercise implements a route that parses HTML markup and extracts
the text of the most recent error messages. The routing logic is as follows:
• For errors, the route must write the latest error text to the out/latest-error.txt file.
• For warnings, the route must write the latest warning text to the out/latest-warning.txt file.
• The HTML markup for errors is different from the markup for warnings.
Outcomes
You should be able to implement a test case for a Camel route, by using the Camel Test Kit.
The solution files for this exercise are in the AD221-apps repository, within the test-kit/
solutions directory.
From your workspace directory, use the lab command to prepare your system for this
exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/test-kit/
apps directory, into the ~/AD221/test-kit directory.
Instructions
1. Navigate to the ~/AD221/test-kit directory and open the project with an editor, such as
VSCodium.
<!-- TODO: Add camel test dependencies by uncommenting the following code -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-test-spring</artifactId>
<scope>test</scope>
</dependency>
6. Run the tests with ./mvnw clean test and verify that tests fail.
...output omitted...
Tests in error:
testRouteParsesLatestWarningText(...)
testRouteWritesLatestWarningToFile(...)
testRouteWritesLatestErrorToFile(...)
Tests do not pass because the test class is missing the annotations required for Spring Boot
and Camel Test Kit.
7. Annotate the HtmlRouteBuilderTest class to use the Camel Test Kit in Spring Boot.
• @RunWith( CamelSpringBootRunner.class )
• @SpringBootTest( classes = Application.class )
7.2. Rerun the tests. Only the testRouteParsesLatestErrorText test case should
fail.
8.2. Send the errors HTML as the body to the direct:parseHtmlErrors endpoint.
Use the producerTemplate.sendBody method.
8.3. Read the resulting body from the file:out endpoint. Use the
consumerTemplate.receiveBody method.
8.4. Assert that the resulting body contains the expected fragment of the first
<article> in the test_errors.html file.
Finish
Return to your workspace directory and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• After completing this section, you should be able to create realistic test cases with mock
components.
The mock endpoint only supports producer endpoints. Use this endpoint by specifying the mock:
prefix, followed by a name that identifies the mock. The following example shows how to create a
mock endpoint called out.
from("direct:in")
.to("mock:out");
In your test class, you can inject the corresponding MockEndpoint instance by using the
@EndpointInject annotation:
@EndpointInject(uri = "mock:out")
MockEndpoint outEndpoint;
...implementation omitted...
}
The uri parameter of the @EndpointInject annotation determines the specific mock endpoint
to inject. This URI must match the URI of the endpoint that you want to test.
In this particular example, the uri parameter is mock:out, which matches the endpoint defined
in the preceding route example. Therefore, the outEndpoint variable is the MockEndpoint
instance that defines the behaviors and expectations of the mock:out endpoint.
@Test
public void yourTestCase() throws InterruptedException {
outEndpoint.expectedMessageCount(1);
outEndpoint.expectedBodiesReceived("Hello Eduardo");
outEndpoint.assertIsSatisfied();
}
Define the mock expectations as preconditions before sending messages to the route.
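A complete test typically defines the expectations, then sends a test message to the route, and
finally asserts the expectations. The following is a minimal sketch that combines the preceding
snippets; it assumes an injected ProducerTemplate named producerTemplate and the direct:in
route shown earlier:
outEndpoint.expectedMessageCount(1);
outEndpoint.expectedBodiesReceived("Hello Eduardo");
// Trigger the route so that the mock endpoint receives the message.
producerTemplate.sendBody("direct:in", "Hello Eduardo");
// Verify that the mock received exactly what was expected.
outEndpoint.assertIsSatisfied();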
Defining Behavior
You can also tell a mock how to behave to simulate test scenarios. For example, you can configure
an HTTP mock endpoint to simulate a specific HTTP response body, and later make your test
verify how the route processes this response.
@Test
public void yourTestCase() throws InterruptedException {
httpMockEndpoint.whenAnyExchangeReceived(e -> {
e.getOut().setBody("Hello, Randy!");
});
template.sendBody("direct:start", null);
...assertions omitted...
}
With the whenAnyExchangeReceived function, you can define the preprogrammed behavior
and responses required to set up the test scenario.
Note
The mock component exposes numerous methods to define expectations and
behaviors, other than the ones covered in this lecture. Refer to the mock component
documentation for a complete list of methods.
Camel supports the replacement of endpoints in routes with the adviceWith feature. With this
feature, you can advise the route to intercept messages and substitute endpoints with mock
endpoints, before starting the Camel context.
@RunWith( CamelSpringBootRunner.class )
@SpringBootTest
@UseAdviceWith
public class YourTest {
@EndpointInject(uri = "mock:file:customer_requests")
MockEndpoint fileMock;
@Autowired
CamelContext context;
@Before
public void setUp() throws Exception {
context
.getRouteDefinition("myRoute")
.adviceWith(context, new AdviceWithRouteBuilder() {
@Override
public void configure() {
replaceFromWith("direct:origin");
interceptSendToEndpoint("file:.*customer_requests.*")
.skipSendToOriginalEndpoint()
.to("mock:file:customer_requests");
}
});
context.start();
}
@After
public void tearDown() throws Exception {
context.stop();
}
The @UseAdviceWith annotation marks the use of adviceWith in the test class. This
annotation deactivates the automatic Camel context start/stop feature. Deactivating the
context autostart is a prerequisite, because Camel needs to know route advice before
starting the context.
The replaceFromWith call replaces the from endpoint with another component.
The skipSendToOriginalEndpoint call skips the original endpoint and just sends
messages to the mock.
The mock endpoint where Camel should send intercepted messages. Note that this endpoint
must match the URI of the MockEndpoint object injected in the test class.
Automocking Endpoints
Instead of using interceptSendToEndpoint, you can use the simpler
mockEndpointsAndSkip call to easily replace endpoints with mocks. For example:
context
.getRouteDefinition( "myRoute" )
.adviceWith( context, new AdviceWithRouteBuilder() {
@Override
public void configure() {
replaceFromWith( "direct:start" );
mockEndpointsAndSkip("file:.*customer_requests.*");
}
} );
The mockEndpointsAndSkip method replaces all the endpoints that match the given pattern by
following these steps:
Alternatively, you can enable the same automocking behavior at the class level with the
@MockEndpointsAndSkip annotation:
@MockEndpointsAndSkip("file:.*customer_requests.*")
public class YourTest {
...implementation omitted...
Another way to decouple routes from specific endpoints is to use property placeholders in the
endpoint URIs:
from("{{myroute.queue}}")
.to("{{myroute.api}}")
Next, you can use your application.properties file to define values for these properties.
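For example, the production configuration could define values similar to the following sketch;
the myroute.queue and myroute.api values shown here are only illustrative:
myroute.queue=jms:queue:orders
myroute.api=http4://api.example.com/orders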
Additionally, you can define specific property values for your tests, such as mock or direct
endpoints, as follows:
@SpringBootTest(properties = {
"myroute.queue=direct:start",
"myroute.api=mock:api"
})
public class YourTest {
...implementation omitted...
}
References
For more information, refer to the Mock Component chapter in the Red Hat
Fuse 7.10 Apache Camel Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#mock-component
For more information, refer to the Property Placeholders section in the Red Hat
Fuse 7.10 Apache Camel Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/
html/apache_camel_development_guide/basicprinciples#BasicPrinciples-
PropPlaceholders
Guided Exercise
Assume you have developed a route that reads data from an HTTP endpoint and writes the
result to a file. You must test this route in isolation from the external HTTP service and the
file system.
Outcomes
You should be able to use mocks to test routes in isolation and decoupled from external
systems.
The solution files for this exercise are in the AD221-apps repository, within the test-
mock/solutions directory.
From your workspace directory, use the lab command to prepare your system for this
exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/test-mock/
apps directory, into the ~/AD221/test-mock directory.
Instructions
1. Navigate to the ~/AD221/test-mock directory and open the project with an editor, such
as VSCodium.
from("direct:start")
.to("http4://my-external-service/greeting")
.to("file:out?fileName=response.txt");
}
The file component writes the response body to the out/response.txt file.
3. Open the HttpRouteBuilderTest class and configure the test to have access to the
mock endpoint instances required for testing the route.
Use the @EndpointInject annotation to inject the mock endpoints. You must inject a
mock for the HTTP endpoint and a mock for the file endpoint. The mock URIs should be as
follows:
4.1. Configure the HTTP mock endpoint to return the Hello test! response body.
4.3. Configure the fileMockEndpoint mock to expect the Hello test! message
body.
5. Run ./mvnw test. The test fails with an UnknownHostException error because the
route is still using the http4://my-external-service/greeting endpoint instead of
mock:http4:my-external-service/greeting.
6. Enable the automock feature to automatically advise the route endpoints and replace them
with mock endpoints.
6.3. Review the test logs. Verify that the logs show that the original endpoints have been
advised with mock endpoints.
7. Use property placeholders to decouple the route from specific URIs and components.
7.1. In the HttpRouteBuilder class, replace the from and HTTP endpoint URIs with
property placeholders.
from("{{http_route.start}}")
.to("{{http_route.server}}/greeting")
.to("file:out?fileName=response.txt");
7.2. Open the HttpRouteBuilderTest class and assign the property values specific to
the test.
@SpringBootTest(properties = {
// TODO: add properties
"http_route.start=direct:start",
"http_route.server=http4://test-fake"
})
7.3. In the same class, update the URI of the injected HTTP mock to use the correct
URI. Remember that @MockEndpointsAndSkip replaces http4://test-fake/
greeting with mock:http4:test-fake/greeting.
Finish
Return to your workspace directory, and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• After completing this section, you should be able to create reliable routes that handle errors
gracefully.
Recoverable errors
Errors caused by transient problems that can be solved by retrying the delivery, such as a
connection timeout. Normally, these errors generate a Java exception, which Camel attaches
to the Exchange object transmitted by a route.
Irrecoverable errors
Failures without an immediate solution, such as file system failure. These errors set a fault
message in the Exchange object of a route. By default, Camel does not handle these errors.
A channel is the Camel construct that moves messages from one route step to the next. If an error
occurs in a step, then the channel that precedes the step captures the error and passes the error
to the error handler.
The following example sets a context-level error handler in the route builder, and overrides it for
one specific route:
public void configure() throws Exception {
errorHandler(loggingErrorHandler());
from("file:inputDir")
.routeId("first")
.to(...);
from("file:anotherDir")
.routeId("second")
.errorHandler(defaultErrorHandler())
.to(...);
}
}
The Camel context uses the LoggingErrorHandler class as the error handler.
The first route inherits LoggingErrorHandler from the context as the error handler.
The second route uses the DefaultErrorHandler class as the error handler.
DefaultErrorHandler
The default error handling strategy if none is explicitly set. This handler does not redeliver
exchanges and notifies the caller about the failure.
LoggingErrorHandler
Logs the exception to the default output.
NoErrorHandler
Disables the error handler mechanism.
DeadLetterChannel
Implements the dead letter channel enterprise integration pattern (EIP).
Camel implements a type-specific exception trapping policy with the onException method. With
this method, you can trap exceptions by type, and customize the exchange, routing, and redelivery
policies. The following example illustrates the use of the onException method.
@Override
public void configure() throws Exception {
onException(LogException.class)
.to("file:log")
.handled(true)
.maximumRedeliveries(3);
from("file:in")
...
}
Sets the exception as handled. Keep in mind that, by default, the onException clause does
not mark exceptions as handled.
Note
Notice the difference between trapping an exception and catching an exception.
• Catching implies handling errors raised by a specific code fragment, as you would
do with a regular Java try/catch block.
• Trapping implies handling errors raised at any point of the context or the route.
With the onException method, you trap exceptions.
In Camel routes, you cannot use the ordinary try/catch/finally Java mechanism. This is because
Camel executes the configure method of your route builder only once, when registering the
route definition in the context. Instead, you must use the Camel doTry/doCatch/doFinally
syntax, as follows:
@Override
public void configure() throws Exception {
from("file:in?noop=true")
.doTry()
.process(new LogProcessor())
.doCatch(LogException.class)
.process(new FileProcessor())
.doFinally()
.to("mock:finally")
.end();
...
}
}
As the preceding example shows, you can use the doTry/doCatch/doFinally Camel
mechanism in a similar way to regular Java try/catch/finally blocks. This syntax presents a few
differences, when compared to onException:
• The doTry/doCatch/doFinally syntax is restricted to the route where you use it. In
contrast, the onException clause applies to the context and route levels.
References
Apache Camel Manual - Exception Clause
https://camel.apache.org/manual/exception-clause.html
For more information, refer to the Exception Handling section in the Red Hat
Fuse 7.10 Apache Camel Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#BasicPrinciples-ExceptionHandling
Guided Exercise
The company wants to integrate the internal financial system with an external service that
generates employee payslips. To validate the generated payslips, you will add error handling
to the application, and flag the payslips that contain errors.
Outcomes
You should be able to provide exception management capabilities by using Camel's doTry/
doCatch blocks, error handlers, and onException mechanisms.
The solution files for this exercise are in the AD221-apps repository, within the test-
error/solutions directory.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/test-
error/apps directory, into the ~/AD221/test-error directory.
Instructions
1. Navigate to the ~/AD221/test-error directory, open the project with your editor of
choice, and examine the code.
...output omitted...
...output omitted...
4. Verify the correctness of the changes made to the route by executing the unit tests. Run
the ./mvnw clean -Dtest=AmountProcessRouteTest test command, and verify
that two unit tests pass.
6. Implement an error handler that uses the Dead Letter EIP. Capture any exception raised,
and send the messages to the file://data/validation/error-dead-letter
endpoint. You must disable the redelivery policy.
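The following is a minimal sketch of one way to implement this step, assuming the endpoint URI
given above; the exercise solution may differ:
errorHandler(deadLetterChannel("file://data/validation/error-dead-letter")
    // Disable the redelivery policy so failed messages go straight to the dead letter endpoint.
    .disableRedelivery());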
7. Verify the correctness of the changes made to the route by executing the unit tests. Run
the ./mvnw clean test command, and verify that five unit tests pass.
8. Execute the ./mvnw clean package spring-boot:run command to start the Spring
Boot application.
9. Wait for the application to process the payslips stored in the file://data/payslips
endpoint, and manually verify the correctness of the application logic:
Finish
Stop the Spring Boot application, return to your workspace directory, and use the lab command
to complete this exercise. This is important to ensure that resources from previous exercises do
not impact upcoming exercises.
Quiz
Consider a Camel route that retrieves weather forecasts from an API and sends the data to
ActiveMQ.
The route parses the API response by converting this raw response into a format supported
by your application.
Finally, the route sends the processed forecast to ActiveMQ, which delivers the forecast to
other parts of your application.
1. You must develop your first test case for the Camel route in Spring Boot. Which two of
the following annotations are required for the test class? (Choose two.)
a. @Autowired
b. @RunWith( CamelSpringBootRunner.class )
c. @Test
d. @BeforeEach
e. @SpringBootTest( classes = Application.class )
2. Assuming that the route code is as follows, what statement should you use in your test
to trigger the execution of the route?
from("direct:start")
.to("http://api.example.com/weather/forecast")
.process(new WeatherApiResponseProcessor())
.to("activemq:weatherForecasts");
a. Use the Java File API to send test data to the direct:start endpoint.
b. Inject a ProducerTemplate instance in the test class, and send a message body to the
activemq:weatherForecasts endpoint.
c. Inject a ProducerTemplate instance in the test class, and send a message body to the
direct:start endpoint.
d. Inject a ProducerTemplate instance in the test class, and send a message body to the
file:direct:start endpoint.
class YourTest {
@EndpointInject(uri = "mock:http:api.example.com/weather/forecast")
MockEndpoint mockHttp;
@EndpointInject(uri = "mock:activemq:weatherForecasts")
MockEndpoint mockActiveMQ;
@Test
public void test...() {
// Expectations/Behaviours definition
template.sendBody("direct:start", null);
mockActiveMQ.assertIsSatisfied();
}
}
4. The HTTP endpoint sometimes returns invalid message bodies. You observe that these
messages generate InvalidResponseException errors in your route. You want to
handle these exceptions and send invalid messages to a file for subsequent processing.
How should you handle this?
a. Use onException(InvalidResponseException.class).to("file:invalid-
responses")
b. Use onException(InvalidResponseException.class).to("file:invalid-
responses").handled(true)
c. Use errorHandler(loggingErrorHandler())
d. Use errorHandler(defaultErrorHandler())
Solution
Consider a Camel route that retrieves weather forecasts from an API and sends the data to
ActiveMQ.
The route parses the API response by converting this raw response into a format supported
by your application.
Finally, the route sends the processed forecast to ActiveMQ, which delivers the forecast to
other parts of your application.
1. You must develop your first test case for the Camel route in Spring Boot. Which two of
the following annotations are required for the test class? (Choose two.)
a. @Autowired
b. @RunWith( CamelSpringBootRunner.class ) (correct)
c. @Test
d. @BeforeEach
e. @SpringBootTest( classes = Application.class ) (correct)
2. Assuming that the route code is as follows, what statement should you use in your test
to trigger the execution of the route?
from("direct:start")
.to("http://api.example.com/weather/forecast")
.process(new WeatherApiResponseProcessor())
.to("activemq:weatherForecasts");
a. Use the Java File API to send test data to the direct:start endpoint.
b. Inject a ProducerTemplate instance in the test class, and send a message body to the
activemq:weatherForecasts endpoint.
c. Inject a ProducerTemplate instance in the test class, and send a message body to the
direct:start endpoint. (correct)
d. Inject a ProducerTemplate instance in the test class, and send a message body to the
file:direct:start endpoint.
class YourTest {
@EndpointInject(uri = "mock:http:api.example.com/weather/forecast")
MockEndpoint mockHttp;
@EndpointInject(uri = "mock:activemq:weatherForecasts")
MockEndpoint mockActiveMQ;
@Test
public void test...() {
// Expectations/Behaviours definition
template.sendBody("direct:start", null);
mockActiveMQ.assertIsSatisfied();
}
}
4. The HTTP endpoint sometimes returns invalid message bodies. You observe that these
messages generate InvalidResponseException errors in your route. You want to
handle these exceptions and send invalid messages to a file for subsequent processing.
How should you handle this?
a. Use onException(InvalidResponseException.class).to("file:invalid-
responses")
b. Use onException(InvalidResponseException.class).to("file:invalid-
responses").handled(true) (correct)
c. Use errorHandler(loggingErrorHandler())
d. Use errorHandler(defaultErrorHandler())
Summary
• Apache Camel provides a collection of modules and JUnit extension classes known as the
Camel Test Kit. With this kit, you can implement unit and integration tests of Camel routes.
• When implementing route tests, you should replace endpoints with mocks or other test doubles
to decouple tests from route dependencies.
Chapter 5
Integrating Services using Asynchronous Messaging
Objectives
• After completing this section, you should be able to create routes that use the JMS and AMQP
components to receive and send asynchronous messages.
JMS acts as a generic wrapper layer for accessing many different kinds of messaging systems
that support the JMS API. For example, ActiveMQ, MQSeries, Tibco, and Sonic all implement
JMS. Red Hat Fuse uses the JMS component to send and receive messages to and from any of
these messaging systems.
JMS uses queues or topics for message channels. The syntax of the endpoint URI, for the JMS
component, uses the following format:
jms:[queue:|topic:]destinationName[?options]
For example, the following Java DSL receives order objects from a queue called
jms_order_input, transforms the message body to JSON, and sends the JSON formatted
order to a queue called json_order_input.
from("jms:queue:jms_order_input")
.routeId("ROUTE_NAME")
.marshal().json(JsonLibrary.Jackson)
.to("jms:queue:json_order_input");
The camel-jms library provides the JMS component. Spring Boot users use the camel-jms-
starter and can configure options by specifying camel.component.jms.* properties in the
application.properties file.
For example, to consume messages concurrently in multiple threads, add an entry like the
following:
camel.component.jms.concurrent-consumers = 20
There are more than 80 configuration options for the JMS component. See the component
documentation for a complete list of component options.
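Many options can also be set directly on an endpoint URI instead of in the properties file. As a
sketch, the same concurrency option could be applied to a single endpoint as follows; the queue
and destination names are illustrative:
from("jms:queue:jms_order_input?concurrentConsumers=20")
    .to("direct:process_orders");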
The following example shows how to define a JMS component bean that uses a connection
factory to connect to the message broker:
...imports omitted...
@Configuration
public class MessagingConnectionFactory {
@Bean
public JmsComponent jmsComponent() throws JMSException {
// Creates the connection factory that connects to Artemis
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
connectionFactory.setBrokerURL("tcp://localhost:61616");
connectionFactory.setUser("admin");
connectionFactory.setPassword("admin");
// Creates the JMS component that uses the connection factory
JmsComponent jms = JmsComponent.jmsComponentAutoAcknowledge(connectionFactory);
return jms;
}
}
The Configuration annotation for defining Spring framework beans. This class can contain
one or more bean methods that you must annotate by using @Bean.
Spring framework takes the name of the method as the bean ID. In this case the bean's name
is jmsComponent.
To use a defined JMS component with a connection factory configured, you must add the
jmsComponent bean in the route endpoint as follows:
from("jmsComponent:queue:jms_order_input")
.routeId("ROUTE_NAME")
.marshal().json(JsonLibrary.Jackson)
.to("jmsComponent:queue:json_order_input");
To integrate multiple brokers with Camel, you must define multiple JMS component beans. For
example, if you want to use two different brokers in a Camel context, you must define two JMS
component beans. This is because you must configure different connection factories for each
broker.
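The following sketch illustrates the idea; the bean names and broker URLs are assumptions for
the example. Each bean is then referenced from route endpoints by its bean ID, for example
brokerA:queue:orders:
@Bean
public JmsComponent brokerA() throws JMSException {
    ActiveMQConnectionFactory factoryA = new ActiveMQConnectionFactory();
    factoryA.setBrokerURL("tcp://broker-a:61616");
    return JmsComponent.jmsComponentAutoAcknowledge(factoryA);
}

@Bean
public JmsComponent brokerB() throws JMSException {
    ActiveMQConnectionFactory factoryB = new ActiveMQConnectionFactory();
    factoryB.setBrokerURL("tcp://broker-b:61616");
    return JmsComponent.jmsComponentAutoAcknowledge(factoryB);
}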
To connect to a Red Hat AMQ (Artemis) broker, you must also add the Artemis JMS client
dependency to your POM file:
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>artemis-jms-client</artifactId>
</dependency>
Camel implements the AMQP component by inheriting the JMS component. The AMQP
component provides access to messaging providers that require the AMQP protocol. The
component supports the AMQP 1.0 protocol by using the JMS Client API of the Qpid project.
The camel-amqp library provides the AMQP component. The URI syntax for the AMQP
component is identical to the syntax for the JMS component and supports all of the options of the
JMS component.
amqp:[queue:|topic:]destinationName[?options]
See the component documentation for details on configuration options for the AMQP
component.
References
Apache Camel Component Documentation
https://camel.apache.org/components/2.x/index.html
For more information, refer to the JMS Component chapter in the Apache Camel
Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#jms-component
For more information, refer to the AMQP Component chapter in the Apache Camel
Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#amqp-component
Guided Exercise
The first system allows integration via the JMS protocol, while the other system uses the
AMQP specification. In both cases you will use Red Hat Fuse to simplify the integration via
the messaging components.
Outcomes
In this exercise you should be able to:
The solution files for this exercise are in the AD221-apps repository, within the async-
jms/solutions directory.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/async-jms/
apps directory, into the ~/AD221/async-jms directory. The lab command also creates a
Red Hat AMQ instance.
You can inspect the logs for the AMQ instance at any time with the following command:
Instructions
1. Navigate to the ~/AD221/async-jms directory and open the project with an editor, such
as VSCodium.
2. Open the project's POM file, and add the required dependencies.
Dependencies to Add
3. Open the JmsRouteBuilder class, and add a route to receive orders from the
jms:queue:jms_order_input endpoint. Marshall the message body into a JSON
format.
Send the message to the direct:log_orders endpoint if the delivered field is set to
true. Otherwise, forward the order to the amqp:queue:amqp_order_input endpoint.
Set the ID of the route to jms-order-input and use the log method to track the
progress of messages through the route.
Note
You can use the following JSON path expression to filter orders by the delivered
field: $[?(@.Delivered == false)]
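The following is a minimal sketch of one way to implement this step inside the configure() method
of the JmsRouteBuilder class. The exercise solution may differ; the log message is illustrative, and
the JSON path predicate is the one from the preceding note:
from("jms:queue:jms_order_input")
    .routeId("jms-order-input")
    // Convert the incoming order to JSON before routing it.
    .marshal().json(JsonLibrary.Jackson)
    .log("JmsRouteBuilder: Received an order")
    .choice()
        // Orders that are not delivered yet are forwarded to the AMQP queue.
        .when().jsonpath("$[?(@.Delivered == false)]")
            .to("amqp:queue:amqp_order_input")
        // Delivered orders go to the logging route.
        .otherwise()
            .to("direct:log_orders")
    .end();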
return jms;
}
5. Open the AMQPRouteBuilder class, and add a route to receive orders from
the amqp:queue:amqp_order_input endpoint. Send the messages to the
direct:log_orders endpoint. You can use the log method to track the progress of the
route, and set amqp-order-input as the ID of the route.
// TODO: receive messages from AMQP queue and send to the log-orders route
from("amqp:queue:amqp_order_input")
.routeId(ROUTE_NAME)
.log("AMQPRouteBuilder: Processing Non-delivered Orders")
.to("direct:log_orders");
Red Hat AMQ supports the AMQP 1.0 specification. The AMQ broker, used in this lab,
accepts AMQP connections on port 61616. Thus, the amqp component can use the
connection factory provided in the previous step.
6. Open the OrderLogRouteBuilder class and observe the steps of this route.
7. Use the ./mvnw clean test command to run the unit tests. The application has a unit
test for each route.
Results :
8. Build and run the application. Inspect the logs, and verify that the logs display received
orders.
Finish
Stop the Spring Boot application, return to your workspace directory and use the lab command to
complete this exercise. This is important to ensure that resources from previous exercises do not
impact upcoming exercises.
Objectives
• After completing this section, you should be able to create routes that use the Kafka component
to send and receive durable asynchronous messages.
Kafka is composed of several servers that are called brokers. To be horizontally scalable, Kafka
distributes messages with copies through brokers. These messages are also known as records.
Kafka categorizes messages into topics. Messages within the same topic are usually related. That
is why a topic is conceptually similar to a table in a relational database.
Topics consist of partitions which are distributed into brokers. Kafka distributes messages into
partitions for scalability.
Kafka has clients that either write messages to the topics or read messages from topics. A client
that writes messages to a topic is called a producer, and a client that reads messages from a topic
is called a consumer.
Kafka accepts messages in binary format. That is why it provides a serialization and
deserialization (SerDe) mechanism with its client API. Producers serialize messages before
sending them, and consumers deserialize messages after receiving them.
You can either use the SerDe classes provided by the Kafka client API for some basic types such
as String, or you can create your custom SerDe classes depending on your requirements.
With the Kafka component, Camel can take advantage of Kafka benefits such as resilience, high-
performance and durability, which traditional message brokers usually lack.
As an example, Kafka is durable, so it provides a message replay feature. You can use this
feature to consume previous messages in case of a delivery failure. With a traditional broker
implementation of Camel, such as JMS or AMQP, you have to implement the Dead Letter Channel
enterprise integration pattern for resiliency. You do not have to implement the same pattern when
using the Kafka component.
The camel-kafka library provides the Kafka component. To use the component in a Spring Boot
application, add the following Maven dependencies to your POM file:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-kafka-starter</artifactId>
</dependency>
The Spring Boot starter dependency uses the core dependency as well but extends it for
Spring Boot usage. This dependency is very useful when adding Kafka configurations for a
Spring Boot based Camel application. The subsequent parts of this lecture cover more about
how to use the autoconfiguration.
By adding the Maven dependency, the Kafka component becomes ready to use and ready to
configure. The Camel URI format for the Kafka component is as follows:
kafka:my-topic[?options]
The endpoint URI must start with the kafka: prefix. Then as a mandatory path parameter, the
topic name must follow. In the example the topic name is my-topic. You can add additional
options. Each option must follow the URI parameter format.
These options are the parameters that configure the Camel Kafka component. For example, you
can define the brokers option and an optional consumer parameter autoCommitEnable as
follows:
kafka:my-topic?brokers=localhost:9092&autoCommitEnable=false
If you are using the Spring Boot starter dependency, then you do not have to specify the
options in the URI. You can define the configurations in the application.properties file
of the application. The Spring Boot based Red Hat Fuse application uses its autoconfiguration
mechanism to apply the configuration. The following snippet applies the same configuration of the
preceding example:
camel.component.kafka.configuration.brokers=localhost:9092
camel.component.kafka.configuration.auto-commit-enable=false
Note
For more information about the configuration options of the Kafka component,
refer to the Camel Kafka Component reference, which is in the references list of this
lecture.
Consuming messages
You can consume messages from Kafka by using the from method of Camel. The following
code snippet is a minimal example of a route that reads messages from Kafka.
from("kafka:my-topic")
.log("Message received from Kafka : ${body}")
.log("on the topic ${headers[kafka.TOPIC]}")
.log("on the partition ${headers[kafka.PARTITION]}")
.log("with the offset ${headers[kafka.OFFSET]}")
.log("with the key ${headers[kafka.KEY]}");
The Camel message body, which is also the received Kafka message.
The Camel Kafka component carries the information that returns for the consumed Kafka
message by using the message header. You can access this information by using the
headers array and the kafka.* prefixed keys. In this example, the client returns the
topic name, the partition, offset and key information of the consumed message.
You can consume from more than one topic with a single Camel Kafka component by
separating the topic names with commas:
from("kafka:my-topic,other-topic,another-topic")
.log("Message received from Kafka : ${body}");
Producing messages
You can produce messages to Kafka by using the to method of Camel. The following code
snippet is a minimal example of a route that writes messages to Kafka.
from("direct:kafka-producer")
.setBody(constant("Message from Camel"))
.setHeader(KafkaConstants.KEY, constant("Camel"))
.to("kafka:my-topic");
A String key that the Camel Kafka component must carry in the message header. A
key is an optional part of a message so this setting is not mandatory. Headers have an
important role for carrying message related data for producers, like they do for the
consumers.
The producing part of the route. The route sends the defined message and the key to the
my-topic topic.
Note
For the preceding consumer and producer route examples, notice that there are no
configuration parameters in the URIs. These examples assume that you configure the
component through Spring Boot autoconfiguration.
References
For more information, refer to the Kafka Component chapter in the Apache Camel
Component Reference Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#kafka-component
Guided Exercise
The association requires the integration service to be asynchronous and resilient. They hired
you to refactor the integration service, which is based on Apache Camel, to use Apache
Kafka as a message backbone. In this guided exercise you are expected to make the relevant
changes in the integration service.
Outcomes
You should be able to configure the Camel application for Kafka broker access, create
a Camel route that uses the Kafka component and set it up for sending messages to a
particular Kafka topic.
The solution files for this exercise are in the AD221-apps repository, within the async-
kafka/solutions directory.
From your workspace directory use the lab command to start this exercise. This will make a
Kafka cluster and a MySQL instance available for the exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/async-
kafka/apps directory, into the ~/AD221/async-kafka directory.
Instructions
1. Navigate to the ~/AD221/async-kafka/emergency-location-service project
directory and examine the EmergencyLocationRouteBuilder class.
2. Run the following command to execute the test for the emergency-location-route
route.
The test must run successfully. The emergency-location-route route gets the location
data from the vendor file system and saves the data in the MySQL database.
3. Run the podman stop async-kafka_mysql_1 command to stop the MySQL database
that you started with the startup lab script. This simulates a database service down
situation.
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-kafka-starter</artifactId>
</dependency>
Property               Value
brokers                localhost:9092
auto-offset-reset      earliest
value-deserializer     com.redhat.training.emergency.serde.LocationDeserializer
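Following the camel.component.kafka.configuration.* pattern shown in the lecture, these values
would translate into application.properties entries similar to the following sketch; the exercise
solution file may differ:
camel.component.kafka.configuration.brokers=localhost:9092
camel.component.kafka.configuration.auto-offset-reset=earliest
camel.component.kafka.configuration.value-deserializer=com.redhat.training.emergency.serde.LocationDeserializer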
...output omitted...
.split(body())
.to("kafka:locations")
.to("direct:logger");
from("kafka:locations")
.routeId("kafka-consumer-route")
.setBody(simple("insert into locations values('${body.latitude}','${body.longitude}')"))
.to("jdbc:dataSource")
.to("direct:logger");
10. Run the podman start async-kafka_mysql_1 command to start the database.
11. Run the following command to execute the test for the kafka-consumer-route route.
Finish
Return to your workspace directory and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Quiz
Consider that you are working on a development team at a startup company called The Q LLC.
This company provides queues as a service and publish-subscribe APIs for commercial
companies.
The development team has to implement some important parts of the application before the
deadline.
You must choose the correct answers to the following questions to complete the project:
1. The registration microservice has a Camel route that consumes the registration
data of customers from a Kafka topic called registrations. Another service called
accountancy microservice, which can only use the JMS protocol, must access this
data but Kafka does not support the JMS protocol. You must send the consumed Kafka
messages to the ActiveMQ broker's registrations queue by using the JMS protocol.
Regarding this information, which two of the following options are true? (Choose two.)
a. You must create the outgoing endpoint of the route as
to("jms:queue:registrations").
b. You must create the outgoing endpoint of the route as
to("jms:topic:registrations").
c. The incoming endpoint of the route must be
from("kafka:topic:queue:registrations").
d. The incoming endpoint of the route must be from("kafka:topic:registrations").
e. You must configure a serializer for the jms component
as camel.component.jms.configuration.value-
serializer=com.theq.serde.RegistrationSerializer in the
application.properties file.
2. Your team has recently converted one of the microservices to a Spring Boot
application, but they have not changed the Camel route code or applied the
configuration in the application.properties file. You must refactor the following
Camel route and apply the required configuration. Regarding this information, which of
the following options is FALSE? (Choose one.)
from("kafka:announcements?brokers=broker1:9092
&valueDeserializer=com.theq.services.kafka.serde.AnnouncementDeserializer
&heartbeatIntervalMs=1500")
.to("mongodb:camelMongoClient");
3. You must complete a Camel route code that uses the Camel AMQP component. Which
of the following options is true for the AMQP component? (Choose one.)
a. You can use the camel-amqp-runner dependency to configure the Camel AMQP
component in a Spring Boot microservice.
b. The AMQP component uses the Kafka client API to provide messaging topics.
c. camel.component.amqp.concurrent-consumers=2 is a valid Spring Boot
autoconfiguration for the route that uses the AMQP component. This configuration sets
the concurrent consumers count to two.
d. amqp:application-topic is a valid URI for the AMQP component.
e. You can consume from more than one topic such as
from("amqp:topic:applicants-topic,registration-topic,accounts-
topic").
4. The team realized that the monitoring system shows a failure in one of the Camel
microservices occasionally. The Camel route gets the applied patient data from the
government system and saves it in a relational database by sorting it by the case
urgency. The database occasionally becomes unresponsive and there is no time to fix
or replace it. Regarding this information, which two of the following options are true?
(Choose two.)
a. You can solve this issue by using an in-memory queue mechanism. The SEDA component
makes the system resilient.
b. You use the Kafka component to queue and sort the messages in a Kafka system before
saving them to the database. Kafka's durability makes the system resilient.
c. You can use the AMQP component to queue the messages in an ActiveMQ Broker system
and sort them before saving to the database. You can use the Dead Letter Channel
implementation to provide resiliency.
d. You can use the AMQP component to queue and sort the messages in an ActiveMQ
Broker system before saving to the database. ActiveMQ's durability makes the system
resilient.
Solution
Consider that you are working on a development team at a startup company called The Q LLC.
This company provides queues as a service and publish-subscribe APIs for commercial
companies.
The development team has to implement some important parts of the application before the
deadline.
You must choose the correct answers to the following questions to complete the project:
1. The registration microservice has a Camel route that consumes the registration
data of customers from a Kafka topic called registrations. Another service called
accountancy microservice, which can only use the JMS protocol, must access this
data but Kafka does not support the JMS protocol. You must send the consumed Kafka
messages to the ActiveMQ broker's registrations queue by using the JMS protocol.
Regarding this information, which two of the following options are true? (Choose two.)
a. You must create the outgoing endpoint of the route as
to("jms:queue:registrations").
b. You must create the outgoing endpoint of the route as
to("jms:topic:registrations").
c. The incoming endpoint of the route must be
from("kafka:topic:queue:registrations").
d. The incoming endpoint of the route must be from("kafka:topic:registrations").
e. You must configure a serializer for the jms component
as camel.component.jms.configuration.value-
serializer=com.theq.serde.RegistrationSerializer in the
application.properties file.
2. Your team has recently converted one of the microservices to a Spring Boot
application, but they have not changed the Camel route code or applied the
configuration in the application.properties file. You must refactor the following
Camel route and apply the required configuration. Regarding this information, which of
the following options is FALSE? (Choose one.)
from("kafka:announcements?brokers=broker1:9092
&valueDeserializer=com.theq.services.kafka.serde.AnnouncementDeserializer
&heartbeatIntervalMs=1500")
.to("mongodb:camelMongoClient");
3. You must complete a Camel route code that uses the Camel AMQP component. Which
of the following options is true for the AMQP component? (Choose one.)
a. You can use the camel-amqp-runner dependency to configure the Camel AMQP
component in a Spring Boot microservice.
b. The AMQP component uses the Kafka client API to provide messaging topics.
c. camel.component.amqp.concurrent-consumers=2 is a valid Spring Boot
autoconfiguration for the route that uses the AMQP component. This configuration sets
the concurrent consumers count to two.
d. amqp:application-topic is a valid URI for the AMQP component.
e. You can consume from more than one topic such as
from("amqp:topic:applicants-topic,registration-topic,accounts-
topic").
4. The team realized that the monitoring system shows a failure in one of the Camel
microservices occasionally. The Camel route gets the applied patient data from the
government system and saves it in a relational database by sorting it by the case
urgency. The database occasionally becomes unresponsive and there is no time to fix
or replace it. Regarding this information, which two of the following options are true?
(Choose two.)
a. You can solve this issue by using an in-memory queue mechanism. The SEDA component
makes the system resilient.
b. You use the Kafka component to queue and sort the messages in a Kafka system before
saving them to the database. Kafka's durability makes the system resilient.
c. You can use the AMQP component to queue the messages in an ActiveMQ Broker system
and sort them before saving to the database. You can use the Dead Letter Channel
implementation to provide resiliency.
d. You can use the AMQP component to queue and sort the messages in an ActiveMQ
Broker system before saving to the database. ActiveMQ's durability makes the system
resilient.
Summary
• You can send and receive messages by using the Camel JMS component.
• You can configure the Camel JMS component for the connection factory.
• You can send and receive messages by using the Camel AMQP component, and configure it in a
Spring Boot application.
• You can create a producer for writing messages to Kafka and a consumer for reading messages
from Kafka, by using the Camel Kafka component.
• You can configure the Camel Kafka component for almost any client option.
• Kafka is durable, and its messages are replayable. This provides a resilient system for Camel
applications.
Chapter 6
Implementing Transactions
Goal Provide data integrity in route processing by
implementing transactions.
Objectives
• After completing this section, you should be able to use the JDBC, JPA and SQL components in
Camel to retrieve data from, or persist data into an external database.
JDBC
Provided by the camel-jdbc library. This component uses Java Database Connectivity
(JDBC) queries through the JDBC API. When using this component, you specify the database
query as the message body. By default, the component returns query records in the message
body as a list of Map objects.
SQL
Provided by the camel-sql library. This component uses JDBC queries through the
spring-jdbc dependency. By default, it returns the result of queries as a list of Map objects.
In contrast to the JDBC component, which uses the Camel message body for queries, the
SQL component specifies the query in the Camel endpoint URI. Whether you use the JDBC or
the SQL component depends on the integration use case. For example, if you have static or
simple queries that only require a few parameters, then the SQL component is easier to use
and maintain.
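For example, a simple query can, as a sketch, be expressed directly in a sql endpoint URI instead
of in the message body; the surrounding route endpoints are illustrative:
from("direct:getUsers")
    // The query lives in the endpoint URI rather than in the message body.
    .to("sql:select * from users")
    .to("direct:processUsers");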
In a Spring Boot application, you can configure the default data source by defining the
spring.datasource.* properties in the application.properties file:
spring.datasource.url=jdbc:mysql://localhost/my_database
spring.datasource.username=dbuser
spring.datasource.password=dbpass
spring.datasource.platform=mysql
The URI syntax for the JDBC component is as follows:
jdbc:dataSourceName[?options]
Camel looks for the dataSourceName bean in the Camel registry. If dataSourceName is
dataSource or default, then Camel attempts to use the default data source from the
Camel registry. You can define the default data source by using the spring.datasource.*
configuration properties.
The jdbc component only supports producer endpoints, as the following example shows:
from("direct:getUsers")
.setBody(constant("select * from users"))
.to("jdbc:dataSource")
.to("direct:processUsers")
The message body contains the query. This particular query selects all users from the
database.
The jdbc component runs the query. The component uses the default data source.
The route sends the resulting list of users to another endpoint, for further processing.
The URI syntax for the SQL component is as follows:
sql:query[?options]
You can use exchange properties in a query, by prepending the property name with the #
character. For example, you can select a user by using the syntax shown in the following example.
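For example, a query that selects a user by ID might look like the following sketch, which is
reconstructed from the userId parameter described below:
sql:select * from users where id = :#userId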
If the Camel message body is an instance of java.util.Map, then Camel looks for the userId
property in the body. If the userId property is not in the body, or if the body is not a Map object,
then Camel looks for the userId property in the message headers.
Alternatively, you can use an external file to define your query by using the
classpath:file_path expression:
sql:classpath:path/to/my_query_file.sql
Similar to the jdbc component, the result of a SELECT query is a list of Map objects. Both
the jdbc and sql components, however, provide options to control the output type, such as
outputType and outputClass. Refer to the documentation for more details about these
options and the output of other SQL statements, such as INSERT or UPDATE.
Note
In Spring Boot applications, you can use the camel-sql-starter dependency to
extend the camel-sql library capabilities.
The JPA component, provided by the camel-jpa library, uses Java Persistence API (JPA) entities
to read data from, and persist data into, a database. The URI syntax for the JPA component is as
follows:
jpa:entityClassName[?options]
The following example shows how to periodically read Order entities from a database.
from("jpa:com.redhat.training.entity.Order"
+ "?persistenceUnit=mysql"
+ "&consumeDelete=false"
+ "&consumer.namedQuery=getPending"
+ "&maximumResults=5"
+ "&consumer.delay=3000"
+ "&consumeLockEntity=false"
)
.process(new OrderProcessor())
.to("file:out");
The Order class is a JPA entity used to query orders from the database.
Use a named query from the JPA entity. In this particular example, the getPending named
query of the Order entity retrieves pending orders only. If you do not specify this option,
then the jpa component selects all records.
Do not set an exclusive lock on each entity bean while processing the results from polling.
Similar to the preceding components, the jpa component provides options to control the output
of different operations and queries. Refer to the documentation for a full list of configuration
options.
Note
In Spring Boot applications, you can use the camel-jpa-starter dependency to
extend the camel-jpa library capabilities.
References
For more information, refer to the JDBC Component chapter in the Red Hat
Fuse 7.10 Apache Camel Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#jdbc-component
For more information, refer to the SQL Component chapter in the Red Hat Fuse 7.10
Apache Camel Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#sql-component
For more information, refer to the JPA Component chapter in the Red Hat Fuse 7.10
Apache Camel Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#jpa-component
Guided Exercise
Your company is working on a payment fraud detection system. Payments are stored in a
database table called payments. The outcome of the fraud detection algorithm is stored in
a table called payment_analysis.
You must develop a Camel integration that retrieves payments from the payments table,
processes each payment by running the fraud detection algorithm, and stores the results in
the payment_analysis table.
Outcomes
You should be able to implement a route that retrieves payments from a database table
by using the camel-jpa component, and updates another table by using the camel-sql
component.
The solution files for this exercise are in the AD221-apps repository, within the
transaction-database/solutions directory.
From your workspace directory, use the lab command to prepare your system for this
exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/
transaction-database/apps directory, into the ~/AD221/transaction-database
directory. This command also starts a containerized MySQL server and creates a database
called payments.
Instructions
1. Navigate to the ~/AD221/transaction-database directory, and open the project with
your editor of choice.
• The src/main/resources directory contains SQL files to create and seed the
payments and payment_analysis tables. Spring Boot executes these SQL scripts on
application startup.
• The Payment class implements a Java Persistence API (JPA) entity, which maps to the
payments table.
...output omitted...
Payment [id=1, userId=11, amount=41.0, currency=EUR]
Payment [id=2, userId=12, amount=500000.0, currency=USD]
...output omitted...
8. Add the SQL endpoint to update the fraud scores value for each payment in the
payment_analysis table.
Edit the PaymentAnalysisRouteBuilder class and add the producer SQL endpoint to
update each payment_analysis row. For each payment, you must do the following:
• The WHERE clause must select rows with a payment_id field equal to the payment ID of
the message body.
• Set the fraud_score field to the value of the fraudScore header. This header is set
by the PaymentFraudAnalyzer processor.
• Set the analysis_status field to Completed.
9. Run the tests to verify that the route has written fraud scores to the payment_analysis
table of the database. Use the ./mvnw clean test command for this.
Verify that two tests pass.
Finish
Return to your workspace directory and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• After completing this section, you should be able to implement transaction management in
Camel routes by using the Spring Transaction Manager.
For example, a route that processes bank transfers between accounts must ensure that each message completes processing across the different endpoints without data loss. Using transactions during route processing alleviates many common integration issues by rolling back the transfer operation when a failure occurs.
When developing a Camel route with transactions, each component must support transactions,
and all of them must use the same transaction manager. The JMS messaging and SQL database components support the use of transactions by implementing the Transactional Client EIP.
Local transaction
A transaction that spans over one single resource, such as one database.
Global transaction
A transaction that spans over multiple resources, such as one database and one messaging
system.
Note
Camel supports transactions by using Spring Transactions, or a JTA transaction
manager.
Transaction Managers
A transaction manager is the part of an application responsible for coordinating transactions across endpoints. In an integration pipeline, the transaction manager restores the processing state immediately following a failure. This enables a two-phase commit (2PC) approach, in which each system participates in the transaction. The transaction is committed only when the entire route execution completes and all the participating systems are ready to commit.
Spring offers a number of transaction managers for local transactions. The following snippet
creates a DataSourceTransactionManager instance to use as the application local
transaction manager in Spring Boot.
@Configuration
public class MyCustomTransactionManager {
@Bean
public PlatformTransactionManager transactionManager(DataSource dataSource) {
return new DataSourceTransactionManager(dataSource);
}
}
To implement global transactions, you must use a third-party JTA transaction manager, or
implement an event-driven architecture.
Transactions Implementation
To enable transaction management in a route, the route must have a transacted DSL method
after the from method.
from("schema:origin")
.transacted()
...
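As a more complete, hedged sketch, the following route consumes orders from a JMS queue and inserts them into a database table within a single transaction. The queue name, table, and OrderProcessor bean are assumptions:

from("jms:queue:orders")
    .transacted()
    // OrderProcessor is a hypothetical processor that sets the orderId and amount headers.
    .process(new OrderProcessor())
    .to("sql:insert into orders (order_id, amount) values (:#orderId, :#amount)");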
Transactions Rollback
If a route throws an exception, then Camel might not roll back the transaction.
RuntimeException exceptions are automatically rolled back by a route processing operation.
However, other exceptions are not subject to rollback. To roll back a transaction, it must be marked
as markRollbackOnly as part of the exception management.
onException(ConnectionException.class)
.handled(true)
.to("schema:failure-destination")
.markRollbackOnly();
Camel supports predefined and custom transaction propagation policies. The following list
contains the most commonly used propagation policies:
PROPAGATION_REQUIRED
All Camel processors from a route must use the same transaction.
PROPAGATION_REQUIRES_NEW
Each processor creates its own transaction.
PROPAGATION_NOT_SUPPORTED
Does not support a current transaction.
The following snippet defines a custom transaction policy as a Java bean in Spring Boot.
@Bean(name = "myTransactionPolicy")
public SpringTransactionPolicy propagationRequired(
    PlatformTransactionManager transactionManager
) {
    SpringTransactionPolicy policy = new SpringTransactionPolicy();
    policy.setTransactionManager(transactionManager);
    policy.setPropagationBehaviorName("PROPAGATION_REQUIRED");
    return policy;
}
The following snippet uses the custom transaction management policy called
myTransactionPolicy in the route transaction.
from("schema:origin")
.transacted("myTransactionPolicy")
.to("schema:destination");
In Camel, a route can have exactly one transaction policy. If you need to change the transaction propagation, for example for nested transactions, then you must use a new route.
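The following hedged sketch shows this pattern with two routes; the policy bean names are assumptions and must match SpringTransactionPolicy beans defined in your configuration:

from("direct:storeOrder")
    .transacted("requiredPolicy")
    .to("sql:insert into orders (order_id) values (:#orderId)")
    .to("direct:auditOrder");

from("direct:auditOrder")
    // A new, independent transaction wraps the audit insert.
    .transacted("requiresNewPolicy")
    .to("sql:insert into audit_log (order_id) values (:#orderId)");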
Camel provides a set of mechanisms to support these testing requirements, such as the Camel Test Kit (CTK), which mocks external systems so that you do not have to run integration tests against live systems, and extension capabilities to plug in external technologies as substitute services.
For testing purposes, tests must use a transaction manager to create a runtime environment as
accurate as possible to a real world environment. To test whether a transaction fails during a route
execution, the Camel Test Kit supports the capability to throw route exceptions, and evaluate the
results of the error as well as the success or failure of any transaction rollbacks.
@Before
public void setUp() throws Exception {
    context
        .getRouteDefinition("route-one")
        .adviceWith(context, new AdviceWithRouteBuilder() {
            @Override
            public void configure() {
                interceptSendToEndpoint("jpa:*")
                    .throwException(
                        new SQLException("Cannot connect to the database")
                    );
            }
        });
    context.start();
}
On rollbacks, the route execution must not affect resources, such as external databases. To verify
if a rollback was successful, tests must query the resources for changes. Alternatively, tests can
send invalid content to generate a transaction error.
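The following is a minimal sketch of such a verification test. It assumes a Spring Boot test with autowired ProducerTemplate and JdbcTemplate beans, a route that starts at direct:payments, and a payments table; all of these names are illustrative:

@Test
public void invalidPaymentDoesNotChangeDatabase() {
    int rowsBefore = jdbcTemplate.queryForObject(
            "SELECT COUNT(*) FROM payments", Integer.class);

    try {
        // Invalid content triggers an exception in the route.
        template.sendBody("direct:payments", "<payment><orderId>-1</orderId></payment>");
    } catch (CamelExecutionException expected) {
        // The route rejects the invalid payment and rolls back the transaction.
    }

    int rowsAfter = jdbcTemplate.queryForObject(
            "SELECT COUNT(*) FROM payments", Integer.class);

    // No new rows must be persisted after the rollback.
    assertEquals(rowsBefore, rowsAfter);
}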
In Camel, idempotent consumers prevent processing the same message multiple times. Camel
provides the idempotentConsumer processor, which implements the Idempotent Consumer EIP
to filter out duplicates.
from("direct:start")
.idempotentConsumer(
header("paymentId"),
MemoryIdempotentRepository.memoryIdempotentRepository()
)
.log("Unique message ${body}")
.to("direct:process_unique_messages");
The unique key is the paymentId header. The idempotent consumer verifies this header to
filter out duplicates.
References
Spring Transaction Management
https://docs.spring.io/spring-framework/docs/5.2.15.RELEASE/spring-framework-
reference/data-access.html#transaction
For more information, refer to the Transactional Client section in the Red Hat
Fuse 7.10 Apache Camel Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#MsgEnd-Transactional
For more information, refer to the Idempotent Consumer section in the Red Hat
Fuse 7.10 Apache Camel Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#MsgEnd-Idempotent
Guided Exercise
The e-commerce system stores the payments in separate XML files. The payments can have
a wrong order ID, or an invalid email.
You must develop a Camel integration to notify customers about processed payments.
The integration must retrieve payments from a folder, store them into a database, validate
the payment data, and finally send the payments to a queue for further processing. The
payments in the database and in the queue must be the ones with valid data.
Outcomes
You should be able to implement transactional routes in Camel.
The solution files for this exercise are in the AD221-apps repository, within the
transaction-routes/solutions directory.
From your workspace directory, use the lab command to prepare your system for this
exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/
transaction-routes/apps directory, into the ~/AD221/transaction-routes
directory. It also starts a containerized MySQL server, creates a database called payments,
and starts a containerized Red Hat AMQ broker.
Instructions
1. Navigate to the ~/AD221/transaction-routes directory, open the project with your
editor of choice, and examine the code.
3. Open a new terminal window, and execute the following command to verify the existence
of multiple records in the database for the invalid payments. Any payment with a negative
order ID, or with an empty email, is an invalid order.
propagationRequired.setTransactionManager(transactionManager);
propagationRequired.setPropagationBehaviorName("PROPAGATION_REQUIRED");
return propagationRequired;
}
9. Verify the correctness of the changes made to the route by executing the unit tests. Run
the ./mvnw clean test command, and verify that three unit tests pass.
10. Run the ./mvnw clean package spring-boot:run command to start the Spring
Boot application. Wait for the application to process the payments, and execute the
following command to verify the existence of two records in the database.
Finish
Return to your workspace directory, and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Quiz
Implementing Transactions
In this quiz, consider that you are developing Camel routes for data integration and
processing in a video streaming platform.
Your Camel routes communicate with a relational database and use Spring Boot as the
runtime.
1. The data science team has provided you with a complex and fine-tuned SQL query to
extract popularity statistics about each show from the database. They would like you
to extract the results of the query and post the data to the REST API of a data lake for
further processing. Which two of the following components are suitable components
to execute the query? (Choose two.)
a. The CSV component
b. The JDBC component
c. The SQL component
d. The JMS component
2. One of your Camel routes consumes the Documentary JPA entity from the database
for analysis. This route must read data without deleting records from the database
after consumption. Which one of the following options should you use? (Choose one.)
a. consumeDelete=false
b. consumer.delete=true
c. consumer.delete=false
d. consumeDelete=true
MemoryIdempotentRepository repository =
MemoryIdempotentRepository.memoryIdempotentRepository();
from("direct:paymentAnalysis")
.idempotentConsumer(
body("subscriptionId"),
repository
)
.process(new PaymentAnalyzer())
.to("jms:queue:analisys_results");
a. Replace the repository with a database repository, to ensure that uniqueness is persisted.
b. Use the header("paymentId") expression as the unique key.
c. Run the PaymentAnalyzer processor before the idempotentConsumer processor.
d. Use the header("subscriptionId") expression as the unique key.
4. One of the Camel integrations of the application requires the use of the same
transaction to control all Camel processors in route. Which one of the following options
should you choose as the transaction policy? (Choose one.)
a. PROPAGATION_REQUIRED
b. PROPAGATION_REQUIRES_NEW
c. PROPAGATION_NOT_SUPPORTED
d. PROPAGATION_SUPPORTED
Solution
Implementing Transactions
In this quiz, consider that you are developing Camel routes for data integration and
processing in a video streaming platform.
Your Camel routes communicate with a relational database and use Spring Boot as the
runtime.
1. The data science team has provided you with a complex and fine-tuned SQL query to
extract popularity statistics about each show from the database. They would like you
to extract the results of the query and post the data to the REST API of a data lake for
further processing. Which two of the following components are suitable components
to execute the query? (Choose two.)
a. The CSV component
b. The JDBC component
c. The SQL component
d. The JMS component
2. One of your Camel routes consumes the Documentary JPA entity from the database
for analysis. This route must read data without deleting records from the database
after consumption. Which one of the following options should you use? (Choose one.)
a. consumeDelete=false
b. consumer.delete=true
c. consumer.delete=false
d. consumeDelete=true
MemoryIdempotentRepository repository =
MemoryIdempotentRepository.memoryIdempotentRepository();
from("direct:paymentAnalysis")
.idempotentConsumer(
body("subscriptionId"),
repository
)
.process(new PaymentAnalyzer())
.to("jms:queue:analisys_results");
a. Replace the repository with a database repository, to ensure that uniqueness is persisted.
b. Use the header("paymentId") expression as the unique key.
c. Run the PaymentAnalyzer processor before the idempotentConsumer processor.
d. Use the header("subscriptionId") expression as the unique key.
4. One of the Camel integrations of the application requires the use of the same
transaction to control all Camel processors in route. Which one of the following options
should you choose as the transaction policy? (Choose one.)
a. PROPAGATION_REQUIRED
b. PROPAGATION_REQUIRES_NEW
c. PROPAGATION_NOT_SUPPORTED
d. PROPAGATION_SUPPORTED
Summary
• You can use the JDBC, SQL, and JPA components in Camel to integrate with databases.
• In Spring Boot, the data source of JDBC, SQL, and JPA components is configurable via
spring.datasource.* properties.
• Transactions alleviate common Camel integration problems by rolling back operations and
leaving the system in a consistent state.
Chapter 7
Building and Consuming REST Services
Objectives
• After completing this section, you should be able to create a route that hosts a REST service by
using the REST DSL, and customize a REST service to use various data bindings.
For example, a GET request to the /users/1 endpoint can either return a 404 - Not Found
status code if the user is not present in the system, or it can return a JSON representation of the
user if it exists.
Beginning in version 2.14, Camel offers a REST DSL that developers can use in route definitions to
build REST web services. You can use the REST DSL to define REST services in Camel routes by
using verbs that align with the REST HTTP protocol, such as GET, POST, DELETE, and so on.
The benefit of using this DSL is that it drastically reduces the amount of development time
necessary to build REST services into your Camel routes. This reduction comes from eliminating a lot of the boilerplate networking code, which enables you to focus on the business logic that supports the REST service.
The DSL builds REST endpoints as consumers for Camel routes. The REST DSL requires an
underlying REST implementation provided by components such as Restlet, Spark, and other
components that include REST integration.
rest("/speak")
.get("/hello")
.transform().constant("Hello World");
The REST DSL works as an extension to the existing Camel routing DSL, by using specialized
keywords to more closely resemble the underlying REST and HTTP technologies.
The REST DSL provides a simple syntax that extends Camel's existing DSL by mapping each
keyword to a method. This also means that all existing functionality of a Camel route is available
inside of a REST DSL defined route, enabling REST developers to leverage EIPs and other Camel
features to implement their service.
Some of the components that currently support the REST DSL include camel-servlet, camel-jetty, camel-restlet, camel-spark-rest, and camel-undertow.
To specify the REST implementation to use, the REST DSL provides the restConfiguration
method. By using this method, you can control the resulting REST service created by Camel, as
shown in the following example:
restConfiguration()
.component("servlet")
.contextPath("/restService")
.port(8080);
Because the REST DSL is not an implementation itself, only a subset of options is common to all implementations; most options are specific to the REST component that the DSL uses. The following table lists the common options across all components:
Option Description
component The Camel component to use as the HTTP server. Options include
servlet, jetty, restlet, spark-rest, and undertow.
You can also set options on the restConfiguration DSL method to configure the intended
Component, Endpoint, and Consumer. Because options vary from Component to Component, to
set them you must use a generic DSL method, as shown in the following table:
Level Generic DSL method
component componentProperty
endpoint endpointProperty
consumer consumerProperty
To use any of these properties, you must set a key and value, where the key corresponds to the
name of a property available for that component, endpoint, or consumer.
The following example sets the minThreads and maxThreads properties for the Jetty web
server:
restConfiguration()
.component("jetty")
.componentProperty("minThreads", "1")
.componentProperty("maxThreads", "8");
Note
Ensure that any component, endpoint, or consumer properties you set use key values that match the available options of that component, endpoint, or consumer. If
you try to set an option that does not exist, then the route compiles but throws a
runtime error.
You can then define individual services by using REST DSL methods such as get, post, put, and
delete. You can also define path parameters by using the {} syntax for each service.
The following example shows how to use the REST DSL to define multiple services:
restConfiguration()
.component("servlet")
.port(8080);
rest("/orders")
.get("{id}")
.to("bean:orderService?method=getOrder(${header.id})")
.post()
.to("bean:orderService?method=createOrder")
.put()
.to("bean:orderService?method=updateOrder")
.delete("{id}")
.to("bean:orderService?method=cancelOrder(${header.id})");
For example, a service that consumes new order records in JSON format can automatically
unmarshal that JSON into the Order model class for easier processing by subsequent components
in the Camel route.
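The following hedged sketch shows such a binding; the Order class and the orderService bean are assumptions:

restConfiguration()
    .component("servlet")
    .bindingMode(RestBindingMode.json);

rest("/orders")
    .post()
    // Incoming JSON is unmarshaled into an Order instance before the route runs.
    .type(Order.class)
    .to("bean:orderService?method=createOrder");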
The following table lists the supported binding modes in the REST DSL, which are defined in the
org.apache.camel.model.rest.RestBindingMode enumeration:
Mode Description
auto Binding is automatic, assuming class path contains the necessary data formats.
Typically based on the Content-Type header.
json Enables binding to and from JSON, requires camel-jackson on the class path.
xml Enables binding to and from XML, requires camel-jaxb on the class path.
json_xml Enables binding to and from JSON and XML. Requires class path containing both
data formats.
Similar to other configurations for REST DSL, the restConfiguration method sets the binding
mode, as shown in the following example:
restConfiguration()
.component("spark-rest").port(8080)
.bindingMode(RestBindingMode.json)
.dataFormatProperty("prettyPrint", "true");
Similar to component or endpoint properties, data format properties specific to the data format
you are using can be set generically by using dataFormatProperty. In the previous example,
Jackson's prettyPrint option is set to true by using a data format property that formats the
JSON output in a human-readable format.
References
For more information, refer to the Defining REST Services chapter in the Apache
Camel Development Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_development_guide/index#RestServices
Camel in Action, Second Edition (2018) by Claus Ibsen and Jonathan Anstey;
Manning. ISBN 978-1-617-29293-4.
Guided Exercise
The company requires you to create two REST endpoints. One to expose all customer
payments, and another to expose all the payments for a specific customer.
The internal database has all the customer payments in the payments table, and the
userId field identifies each customer. The company also requires you to use the JPA
component to interact with the database, so you can reuse code from a previous integration.
Outcomes
You should be able to implement a route that hosts a REST service that implements two use
cases:
The solution files for this exercise are in the AD221-apps repository, within the rest-dsl/
solutions directory.
From your workspace directory, use the lab command to prepare your system for this
exercise.
This command ensures that the MySQL database is already running before you proceed with
the exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/rest-dsl/
apps directory, into the ~/AD221/rest-dsl directory. The lab command also creates a
MySQL instance with the data.
Instructions
1. Navigate to the ~/AD221/rest-dsl directory and open the project with an editor, such as
VSCodium.
3. Add two GET REST endpoints by using REST DSL under the /payments path that fetches
data from the payments table.
3.1. The first one on the / subpath redirects to a Direct component that fetches all the
Payments.
3.2. The second one on the /{userId} subpath redirects to a Direct component that
fetches all the Payments that belong to the specified user ID.
4. Create the Direct routes to retrieve the data for the REST endpoints.
4.1. Create a direct route to retrieve all the Payments in the database.
4.2. Create a direct route to retrieve all the Payments of a specific user ID.
from("direct:getPayment")
.log("Retrieving payment with id ${header.userId}")
.toD("jpa:com.redhat.training.payments.Payment?query=select p from
com.redhat.training.rest.Payment p where p.userId = ${header.userId}");
5. Run the Route with ./mvnw spring-boot:run and use the curl command to verify
that the route is working. The URL to get the payments is localhost:8080/camel/
payments. If the application works successfully, then you should see a list of payments.
6. To verify that the Route is working as expected, there is a test in the project that you can
use. Use ./mvnw test to verify that the route matches the expected behavior. If you still
have the Route running, use Ctrl+C to terminate it before running the tests.
Finish
Return to your workspace directory, and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• After completing this section, you should be able to develop a Camel route that uses Camel's
HTTP component to enrich a message exchange.
http[s]4://hostname[:port][/resourceURI][?options]
By default, the http4 component uses port 80 for HTTP or port 443 for HTTPS.
To import the camel-http4 library, include the following configuration in the pom.xml file:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-http4</artifactId>
</dependency>
Camel always uses the InOut message exchange pattern, because the HTTP protocol is based on a request/response paradigm.
Note
You can only produce to endpoints generated by the http4 component. Therefore,
it should never be used as input into your Camel routes.
Camel uses the following algorithm to determine if either the GET or POST HTTP method should
be used:
5. GET, otherwise.
Therefore, by default, depending on the content contained in the body of the inMessage object
on the exchange, Camel either sends a GET or POST request as follows:
• If the body contains message content, Camel sends an HTTP POST request to the URL by using the exchange body as the body of the HTTP request, and returns the HTTP response as the outMessage object on the exchange.
• If the body is null, then Camel sends an HTTP GET request to the URL and returns the response as the outMessage object on the exchange.
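The following hedged sketch sends a POST request because the exchange body is not null; the URL and JSON payload are illustrative:

from("direct:start")
    .setBody(constant("{ \"order\": 123 }"))
    .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
    .to("http4://example.com/orders");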
Endpoint options and HTTP query parameters have the same syntax, so Camel would interpret query parameters placed in the endpoint URI as endpoint options. Therefore, you must use the Exchange.HTTP_QUERY header to set HTTP query parameters. For example, to make the http://example.com?order=123&detail=short GET request, use the following:
from("direct:start")
.setHeader(Exchange.HTTP_QUERY, constant("order=123&detail=short"))
.to("http4://example.com");
You can also use the connectTimeout endpoint option in the http4 endpoint, as the following
example shows:
from("direct:start")
.setHeader(Exchange.HTTP_QUERY, constant("order=123&detail=short"))
.to("http4://example.com?connectTimeout=2000");
from("direct:start")
.setHeader(Exchange.HTTP_QUERY, constant("order=123&detail=short"))
.to("http4://example.com?throwExceptionOnFailure=false");
In these situations, use the content enricher pattern in your Camel route to enrich or enhance your
message exchange with the required additional data. The http4 component is a common option
for acquiring the additional data.
Camel supports the enrich EIP by using the enrich DSL method to enrich the message.
The enrich DSL method has two parameters. The first is the URI of the producer Camel must
invoke to retrieve the enrichment data. The second parameter is an optional instance of the
AggregationStrategy implementation, which you must provide to Camel for use when
combining the original message exchange with the enrichment data. If you do not provide an
aggregation strategy, then Camel uses the body obtained from the resource as the enriched
message exchange.
The enrich method synchronously retrieves additional data from a resource endpoint to enrich
an incoming message (contained in the original exchange). Here is an example template for
implementing an aggregation strategy to use with the enrich DSL method:
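A minimal sketch of such a template follows; the class name and the way the two exchanges are combined are assumptions that you adapt to your use case:

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class ExampleAggregationStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange original, Exchange resource) {
        // Combine the enrichment data from the resource exchange with the
        // original exchange, then return the enriched original exchange.
        String enrichment = resource.getIn().getBody(String.class);
        original.getIn().setHeader("ENRICHMENT_DATA", enrichment);
        return original;
    }
}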
You can use the http4 component in conjunction with the content enricher pattern to update
your message exchanges with data from an external web resource. You could use this to retrieve
some relevant data from an external system exposed over HTTP. This approach is especially
helpful in a microservices-based environment. The following example implements this use case:
from("activemq:orders")
.enrich("direct:enrich",
new HttpAggregationStrategy())
.log("Order sent to fulfillment: ${body}")
.to("mock:fulfillmentSystem");
from("direct:enrich")
.setBody(constant(null))
.to("http4://webservice.example.com");
The URI for the producer that the enrich DSL element invokes to retrieve the resource
message.
The AggregationStrategy implementation that the enrich DSL element uses to combine
the original message exchange and the resource message.
The URI for the consumer that the enrich DSL element invokes.
Setting the body of the exchange to null causes the http4 component to send an HTTP
GET request to the resource.
The URI for the HTTP component producer is the address of the external web service.
@Override
public Exchange aggregate(Exchange original, Exchange resource) {
Order originalBody = original.getIn().getBody(Order.class);
String resourceResponse = resource.getIn().getBody(String.class);
originalBody.setFulfilledBy(resourceResponse);
return original;
}
}
Retrieve the original message exchange body as an instance of the Order model class.
While the http4 component could be used to query a SOAP application, additional tools are
required to build SOAP requests. The additional tools are provided by the camel-cxf library. The
camel-cxf library provides a cxf component that is a wrapper for Apache CXF, a Java library
for working with web services. To invoke a SOAP service in a Camel route, take the following three steps:
Use the cxf-codegen-plugin for Maven to create the Java classes from your WSDL file. To use
this feature, first include the following in your project's pom.xml:
<plugin>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-codegen-plugin</artifactId>
<version>${cxf.version}</version>
<executions>
<execution>
<id>generate-sources</id>
<phase>generate-sources</phase>
<configuration>
<wsdlOptions>
<wsdlOption>
<wsdl>src/main/resources/wsdl/Footprint.wsdl</wsdl>
</wsdlOption>
</wsdlOptions>
</configuration>
<goals>
<goal>wsdl2java</goal>
</goals>
</execution>
</executions>
</plugin>
The cxf-codegen-plugin creates and compiles Java classes from the WSDL file to the target/generated-sources/cxf directory.
Obtain the WSDL file from the web service and copy it to the src/main/resources/wsdl
directory.
To generate classes from the WSDL use the following Maven command:
mvn generate-sources
return request;
}
}
from("direct:start")
.setBody(constant("12"))
.bean(GetFootprintBuilder.class)
...
When the GetFootprintBuilder bean is invoked, Camel uses bean parameter binding to pass the value 12 as the argument to the getFootprint(String id) method. The bean replaces the exchange body with a GetFootprintRequest object with an id value of 12.
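A hedged sketch of such a builder bean follows; the request class and its setter are assumptions based on the classes that wsdl2java typically generates:

public class GetFootprintBuilder {

    public GetFootprintRequest getFootprint(String id) {
        // GetFootprintRequest is assumed to be a class generated from the WSDL.
        GetFootprintRequest request = new GetFootprintRequest();
        request.setId(id);
        return request;
    }
}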
Identify the generated serviceClass for the SOAP operation. The class is an interface
that includes the values that must match the cxf component configuration. Use the
OPERATION_NAME and OPERATION_NAMESPACE in the message header, as illustrated in the
following example:
from("direct:start")
.setBody(constant("12"))
.bean(GetFootprintBuilder.class)
.setHeader(CxfConstants.OPERATION_NAME, constant("GetFootprint"))
.setHeader(CxfConstants.OPERATION_NAMESPACE,
constant("http://training.redhat.com/FootprintService/"))
...
from("direct:start")
.setBody(constant("12"))
.bean(GetFootprintBuilder.class)
.setHeader(CxfConstants.OPERATION_NAME, constant("GetFootprint"))
.setHeader(CxfConstants.OPERATION_NAMESPACE,
constant("http://training.redhat.com/FootprintService/"))
.to("cxf://http://localhost:8423/ws"
+ "?serviceClass=com.redhattraining.service.FootprintServiceEndpoint")
Interface generated by the cxf-codegen-plugin plug-in that represents the SOAP service
endpoint.
References
Consuming a SOAP service with Apache Camel
https://tomd.xyz/camel-consume-soap-service/
For more information, refer to the Http4 Component chapter in the Apache Camel
Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#http4-component
For more information, refer to the CXF Component chapter in the Apache Camel
Component Reference at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
apache_camel_component_reference/index#cxf-component
Guided Exercise
The company has a SOAP service that calculates the carbon footprint of the customers,
based on their previous orders. The company requires you to enrich the received orders with
an additional header, which includes the carbon footprint value.
Outcomes
In this exercise you should be able to:
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/rest-http/
apps directory, into the ~/AD221/rest-http directory. The lab command also creates a Red Hat AMQ instance and a SOAP service.
Note
You can inspect the logs for the AMQ instance at any time with the following
command:
You can inspect the logs for the SOAP server instance at any time with the
following command:
Instructions
1. Navigate to the ~/AD221/rest-http directory, open the project with your editor of
choice, and examine the code.
2. View the WSDL file for the SOAP service located at src/main/resources/wsdl/
Footprint.wsdl. Subsequent steps use this file to generate source code for the project.
The SOAP service provides this file at http://localhost:8080/footprints.php?
wsdl.
3. Open the project's POM file to add the required CXF dependencies and plug-ins.
4. Execute the ./mvnw generate-sources command to generate client classes from the
WSDL.
Open the target/generated-sources/cxf/com/redhat/training/
carbonfootprintservice/ folder to view the generated files.
5. Create a Java bean called GetFootprintBuilder to create the SOAP request object.
package com.redhat.training.messaging;
import com.redhat.training.carbonfootprintservice.CarbonFootprintRequest;
return request;
}
}
6. Create a route that uses the CXF component to query the SOAP service.
Open the SoapRouteBuilder class, and edit the class based on the following
requirements:
SoapRouteBuilder requirements
Description Value
from("direct:soap")
.routeId(ROUTE_NAME)
.setBody(jsonpath("$.Name"))
.log("New body value: ${body}")
.bean(GetFootprintBuilder.class)
.setHeader(CxfConstants.OPERATION_NAME, constant("CarbonFootprint"))
.setHeader(CxfConstants.OPERATION_NAMESPACE,
constant("http://training.redhat.com/CarbonFootprintService/"))
.to("cxf://http://localhost:8080/footprints.php" + "?
serviceClass=com.redhat.training.carbonfootprintservice.CarbonFootprintEndpoint")
.log("From SoapRouteBuilder: ${body[0].carbonFootprint}")
.to("direct:log_orders");
package com.redhat.training.messaging;
import org.apache.camel.processor.aggregate.AggregationStrategy;
import org.apache.camel.Exchange;
import com.redhat.training.carbonfootprintservice.CarbonFootprintResponse;
return original;
}
}
9. Create a route to receive messages with orders, and enrich the message with data from the
SOAP service.
Open the EnrichRouteBuilder class, and use the enrich method to pull data from
the direct:soap endpoint. Use the HttpAggregationStrategy class to enrich the
original message with a new header value.
The route must look like the following:
from("direct:enrich")
.routeId(ROUTE_NAME)
.enrich("direct:soap", new HttpAggregationStrategy())
.log("Order sent to fulfillment: ${body}")
.log("New Header value: ${in.header.FOOT_PRINT}")
.to("mock:fulfillmentSystem");
10. Verify the correctness of the route by executing the unit tests.
Run the ./mvnw clean -Dtest=EnrichRouteBuilderTest test command, and
verify that one unit test passes.
11. Build and run the application with the ./mvnw clean spring-boot:run command.
Verify in the console output that the enrich-route route did not process any messages,
and stop the application.
12. Open the JmsRouteBuilder class, and edit the route to send messages to the
direct:enrich endpoint instead of the direct:soap endpoint.
The end of the modified route must look like the following:
...output omitted...
choice()
.when(jsonpath("$[?(@.Delivered == false)]"))
.to("direct:log_orders")
.when(jsonpath("$[?(@.Delivered == true)]"))
.to("direct:enrich");
13. Build and run the application with the ./mvnw clean spring-boot:run command.
Verify in the console output that the enrich-route route processed messages.
...output omitted...
... enrich-route : Order sent to fulfillment: {"ID":2 ... "customer-b"}
... enrich-route : New Header value: 16428.22
Finish
Stop the Spring Boot application, return to your workspace directory, and use the lab command
to complete this exercise. This is important to ensure that resources from previous exercises do
not impact upcoming exercises.
Quiz
The startup has an online platform that enriches the information available for the properties
with external data.
You must choose the correct answers to the following questions to complete the relevant
part of the integration services:
1. The company has a REST service that exposes the information available for the
properties. Regarding the following code, which sentence is true? (Choose one.)
restConfiguration()
.component("servlet")
.port(8080)
.bindingMode(RestBindingMode.json);
rest("/house")
.get("/{maxPrice}")
.route()
.choice()
.when().simple("${maxPrize} > 5000")
.to("direct:partnerHouses")
.otherwise()
.to("direct:companyHouses)
.get("/")
.to("direct:allHouses")
.get("/{id}/details")
.to("direct:houseDetails");
a. The service exposes all the available houses in the / GET REST endpoint.
b. The direct:partnerHouses endpoint is responsible for processing GET requests for
houses with a maxPrice greater than 5000.
c. A GET request to the /123/details endpoint returns details for the property with ID
123.
d. The code has a bug in the definition and usage of the REST path parameters.
2. The company has a service responsible for enriching the information available for
the properties with external data. Regarding the following integration code, which
sentence is true? (Choose one.)
from("direct:origin-a")
.enrich("direct:origin-b", aggregationStrategy)
.to("direct:result");
4. The startup enriches the data retrieved from the partners with statistical information
about the property location. A SOAP service available in the stats.example.com
host provides the statistical information. Regarding the following code, which two
sentences are true? (Choose two.)
from("direct:start")
.setBody(constant("Liverpool"))
.bean(GetStatsRequestBuilder.class)
.setHeader(CxfConstants.OPERATION_NAME, constant("GetStats"))
.setHeader(CxfConstants.OPERATION_NAMESPACE,
constant("http://stats.example.com/"))
.to("cxf://http://stats.example.com:8423"
+ "?serviceClass=com.example.StatsService"
+ "&wsdlURL=/stats.wsdl")
.log("The population in the area is : ${body[0].stats.population}");
Solution
The startup has an online platform that enriches the information available for the properties
with external data.
You must choose the correct answers to the following questions to complete the relevant
part of the integration services:
1. The company has a REST service that exposes the information available for the
properties. Regarding the following code, which sentence is true? (Choose one.)
restConfiguration()
.component("servlet")
.port(8080)
.bindingMode(RestBindingMode.json);
rest("/house")
.get("/{maxPrice}")
.route()
.choice()
.when().simple("${maxPrize} > 5000")
.to("direct:partnerHouses")
.otherwise()
.to("direct:companyHouses)
.get("/")
.to("direct:allHouses")
.get("/{id}/details")
.to("direct:houseDetails");
a. The service exposes all the available houses in the / GET REST endpoint.
b. The direct:partnerHouses endpoint is responsible for processing GET requests for
houses with a maxPrice greater than 5000.
c. A GET request to the /123/details endpoint returns details for the property with ID
123.
d. The code has a bug in the definition and usage of the REST path parameters.
2. The company has a service responsible for enriching the information available for
the properties with external data. Regarding the following integration code, which
sentence is true? (Choose one.)
from("direct:origin-a")
.enrich("direct:origin-b", aggregationStrategy)
.to("direct:result");
4. The startup enriches the data retrieved from the partners with statistical information
about the property location. A SOAP service available in the stats.example.com
host provides the statistical information. Regarding the following code, which two
sentences are true? (Choose two.)
from("direct:start")
.setBody(constant("Liverpool"))
.bean(GetStatsRequestBuilder.class)
.setHeader(CxfConstants.OPERATION_NAME, constant("GetStats"))
.setHeader(CxfConstants.OPERATION_NAMESPACE,
constant("http://stats.example.com/"))
.to("cxf://http://stats.example.com:8423"
+ "?serviceClass=com.example.StatsService"
+ "&wsdlURL=/stats.wsdl")
.log("The population in the area is : ${body[0].stats.population}");
Summary
• REST DSL is a wrapper layer that provides REST DSL methods such as get, post, put, and
delete.
• You can define and configure the REST implementation to use with the restConfiguration
method.
• The content enricher pattern enhances your message exchange with additional data.
• The HTTP4 component provides HTTP based endpoints for calling external HTTP resources.
• You can use the CXF component to communicate with a SOAP service.
Chapter 8
Integrating Cloud-native
Services
Goal Deploy cloud-native integration services based on
Camel Routes to OpenShift.
Objectives
• After completing this section, you should be able to deploy Spring Boot Camel applications to
OpenShift.
Alternatively, you can use the Red Hat OpenShift Source-to-Image (S2I) build process to offload
parts of the deployment process to Red Hat OpenShift. The following options are some common
deployment workflows:
• Do not use S2I. Instead, build the container image, push the image to a registry, and create the
deployment in Red Hat OpenShift.
This course focuses on S2I builds from the source code, triggered by JKube. JKube is a project of the Eclipse Foundation that provides components to simplify the deployment of cloud-native Java applications, relieving developers from repetitive deployment tasks. JKube uses S2I to generate the container image for you, and also creates the rest of the Red Hat OpenShift and Kubernetes resources required to deploy the application.
<profile>
<id>openshift</id>
<properties>
<jkube.generator.from>
registry.redhat.io/fuse7/fuse-java-openshift-rhel8:1.10
</jkube.generator.from>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.jboss.redhat-fuse</groupId>
<artifactId>openshift-maven-plugin</artifactId>
<version>7.10.0.fuse-sb2-7_10_0-00014-redhat-00001</version>
<executions>
<execution>
<goals>
<goal>resource</goal>
<goal>build</goal>
<goal>apply</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
The base image used to generate the container image where the application runs. The
registry.redhat.io/fuse7/fuse-java-openshift-rhel8 image provides all the
required dependencies to run Spring Boot Camel applications on Red Hat OpenShift.
The name of the plug-in to use in this profile: openshift-maven-plugin. The plug-in
provides build goals to generate Kubernetes and Red Hat OpenShift artifacts and resources.
The resource goal generates the Red Hat OpenShift resources required for your
application, based on the contents of your project's src/main/jkube directory.
The build goal builds the container image for your application.
The apply goal applies the resources produced by the resource goal to your Red Hat
OpenShift cluster.
Additionally, if you use Java 11 with Fuse 7.10, you might want to specify a Java 11 profile to simplify
the build process.
Note
Alternatively, you can use the spring-boot-2-camel-xml quick start, provided
by Red Hat Fuse. The archetype creates a Spring Boot Camel project ready to be
deployed to Red Hat OpenShift.
To deploy your application to Red Hat OpenShift, use the following command from the root of
your Spring Boot project:
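The specific command depends on your project setup; assuming the openshift profile shown earlier, a typical invocation is:
mvn clean oc:deploy -Popenshift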
Important
Before running mvn oc:deploy, you must use the oc CLI to log in to the cluster,
and select the project in which you want to deploy your application.
Readiness Probes
A readiness probe determines whether a container is ready to service requests. If the probe
fails, then Red Hat OpenShift stops sending traffic to the container.
Liveness Probes
A liveness probe determines whether a container is still running. If the probe fails, then
Red Hat OpenShift kills the container, which is subjected to its restart policy.
Startup probe
A startup probe verifies whether an application in a container has started. Startup probes
automatically disable readiness and liveness probes until the application starts. If the startup
probe fails, then Red Hat OpenShift kills the container, which is subjected to its restart policy.
To configure probes with JKube, specify the configuration of the probes in the src/main/
jkube/deployment.yml file of your project. The following snippet shows an example of this file
including probes configuration:
spec:
template:
spec:
containers:
- readinessProbe:
httpGet:
path: /health/ready
port: 80
scheme: HTTP
timeoutSeconds: 5
livenessProbe:
exec:
command:
- cat
- /tmp/health
failureThreshold: 4
...container spec omitted...
• The readiness probe makes requests to the /health/ready endpoint and port 80 of the
container host.
• The liveness probe executes the cat /tmp/health command in the container.
Probes can verify the health status by using checks such as HTTP endpoint checks, and Container
execution checks. You can also set up configuration options, such as failure thresholds and
timeouts. Refer to the Red Hat OpenShift health monitoring documentation for more details
about different types of checks and configurations.
Spring Boot Actuator is preconfigured with default health checks exposed through /actuator/
health/* endpoints. An endpoint reporting a healthy state returns a 200 OK HTTP code.
Otherwise, the endpoint returns a 503 Service Unavailable HTTP code.
In the Actuator, beans that implement health checks are called health indicators. Each indicator
implements the org.springframework.boot.actuate.health.HealthIndicator
interface. By implementing this interface, you can define custom health indicators. For example,
you can define an indicator to verify the health of a Camel route, as the following example
demonstrates:
@Component
public class MyCamelRouteHealthIndicator implements HealthIndicator {
...code omitted...
    @Override
    public Health health() {
        if (myRouteChecker.isDown()) {
            return Health
                .down()
                .withDetail(
                    "Route failed",
                    CamelRouteHealth.getErrorMessage())
                .build();
        }
        return Health.up().build();
    }
}
You can use a simple bean instance, such as myRouteChecker in the preceding example, to store
the health status of a route. From your route, you can use this bean to set the health status of the
route.
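A minimal sketch of such a status-holder bean follows; the bean name and its methods are assumptions that must match how your routes and indicator reference it:

import org.springframework.stereotype.Component;

@Component("myRouteChecker")
public class MyRouteChecker {

    private volatile boolean down = false;

    public void up() {
        this.down = false;
    }

    public void down() {
        this.down = true;
    }

    public boolean isDown() {
        return down;
    }
}

A route can then call the bean, for example with .bean("myRouteChecker", "down") in an error-handling branch, or with a Wire Tap that calls the up method after successful processing.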
Actuator Configuration
You can configure Actuator endpoints and health checks in your application.properties file.
For example:
management.endpoint.health.show-details = always
management.health.probes.enabled=true
Shows additional details in the /actuator/health/ endpoint for all requests. When show-details is activated, Actuator returns more detailed information, such as the uptime, and the result of each registered indicator.
Setting management.health.probes.enabled to true exposes the following additional endpoints:
• /actuator/health/liveness
• /actuator/health/readiness
These endpoints correspond to the liveness and readiness health groups. A health group is a way
to organize multiple indicators together.
By default, the readiness group only includes the readinessState indicator. The liveness
group only includes the livenessState indicator.
You can add additional indicators to a group with the corresponding configuration parameter, as
the following example shows:
management.endpoint.health.group.readiness.include=myCustomCheck,readinessState
The preceding example uses the myCustomCheck and readinessState indicators for the
readiness group. If any of these indicators report that the health status is down, then the
/actuator/health/readiness endpoint returns a 503 HTTP error, indicating that the
application is not ready.
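For example, you can verify the group status with curl; the port assumes the default Spring Boot server configuration:
curl -i http://localhost:8080/actuator/health/readiness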
To map the class name of an indicator to an indicator identifier, Actuator removes the trailing
HealthIndicator from the class name. Therefore, for the previous example, myCustomCheck
resolves to the MyCustomCheckHealthIndicator bean.
References
JKube openshift-maven-plugin
https://www.eclipse.org/jkube/docs/openshift-maven-plugin
For more information, refer to the Monitoring application health by using health
checks chapter in the Red Hat OpenShift Container Platform 4.12 Applications Guide
at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/building_applications/
index#application-health
For more information, refer to the Creating and deploying applications on Fuse on
OpenShift section in the Fuse on Red Hat OpenShift Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
fuse_on_openshift_guide/index#create-and-deploy-applications
For more information, refer to the Red Hat OpenShift Maven plugin appendix in the
Fuse on Red Hat OpenShift Guide at
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.10/html-single/
fuse_on_openshift_guide/index#openshift-maven-plugin
Guided Exercise
You must deploy a Spring Boot Camel application called temperatures-route, which
exposes a set of temperature measurements in the Fahrenheit scale. The measurements
originally come from a Node.js gateway service, called temperatures-celsius-
app, which serves Celsius values gathered from temperature sensor devices. The
temperatures-route application implements Camel routes that read the Celsius values
from temperatures-celsius-app, converts them to Fahrenheit values, and exposes the
values through a REST endpoint.
Outcomes
You should be able to deploy a Spring Boot Camel application to Red Hat OpenShift, and
configure health checks for a Camel application.
The solution files for this exercise are in the AD221-apps repository, within the cloud-
deploy/solutions directory.
From your workspace directory, use the lab command to prepare your system for this
exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/cloud-
deploy/apps directory, into the ~/AD221/cloud-deploy directory. This command also
logs you in to the Red Hat OpenShift cluster and deploys the temperatures-celsius-
app service.
Instructions
1. Verify that the temperatures-celsius-app service is deployed and returning
temperature values.
1.1. Send a GET request to the temperatures-celsius-app URL. Verify that the
temperatures-celsius-app service returns a list of temperature values. You can
use the curl command for this.
2.1. Scale down the deployment to zero replicas to simulate that the service is not ready.
Use the oc scale deployment temperatures-celsius-app --replicas=0
command for this.
2.2. Make another request to the service to verify that the application is not available.
3. Navigate to the ~/AD221/cloud-deploy directory, and open the project with your editor
of choice. Inspect the Camel routes in the TemperaturesRESTRouteBuilder class.
The builder class implements a route that reads Celsius temperature values from the
temperatures-celsius-app service, converts them to Fahrenheit, and exposes the
result through the /camel/temperatures/fahrenheit REST endpoint.
4. Configure the Red Hat OpenShift deployment in the POM file, by using the openshift-
maven-plugin plug-in.
Open the project's POM file and uncomment the openshift profile. Inspect the profile.
Note that the profile uses the openshift-maven-plugin plug-in from JKube, and the
fuse-java-openshift-jdk11-rhel8 image for the S2I deployment.
Red Hat OpenShift is sending traffic to the Spring Boot application, even though the
Camel route is not healthy.
7. Update the application code to expose the readiness status of the Camel route through an
HTTP endpoint.
7.1. Inspect the application.properties file. The readiness group includes the
camelRoute health check. Invoking the /actuator/health/readiness endpoint
means that the camelRoute health check is executed.
7.3. Inspect the RouteHealth class. This class implements the up and down methods.
You must use these methods from the Camel route to set the health state of the
route.
from("direct:onException")
.routeId("processException")
.process(exchange -> {
exchange
.getIn()
.setHeader("error", exchange.getProperty(Exchange.EXCEPTION_CAUGHT,
Exception.class));
})
// TODO: use the route-health bean to set health down
.bean("route-health", "down");
7.5. In the celsiusToFahrenheit route, use the Wire Tap EIP to call the up method of
the route-health bean.
from("direct:celsiusToFahrenheit")
.routeId("celsiusToFahrenheit")
...route details omitted...
// TODO: use the Wire Tap EIP with the route-health bean to set health up
.wireTap("bean:route-health?method=up");
readinessProbe:
httpGet:
# TODO: Change path to actuator readiness group
path: /actuator/health/readiness
port: 8080
9. Verify that the application pod is not ready. Run the oc describe pod -l
app=temperatures-route command to verify that the readiness probe failed.
Note that the probe generates an Unhealthy event and the pod conditions mark the pod
as not ready.
11. Verify that the temperatures-route application is ready. Send a request to the /
camel/temperatures/fahrenheit application REST endpoint, and verify that the
response contains a list of temperatures. You need to wait a few seconds until the readiness
probe succeeds and the Ready and ContainersReady pod conditions are set to True.
Finish
Return to your workspace directory and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• After completing this section, you should be able to develop Camel routes with Quarkus.
Camel Quarkus:
• Takes advantage of the performance benefits, developer joy, and the container-first ethos provided by Quarkus.
• Takes advantage of the performance improvements made in Camel 3, which result in a lower memory footprint, less reliance on reflection, and faster startup times.
import org.apache.camel.builder.RouteBuilder;

// The class name is illustrative; Camel Quarkus discovers any RouteBuilder class on the class path.
public class MyRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("schema:origin")
            .log("Hello World");
    }
}
The camel-quarkus-core artifact contains builder methods for all Camel components. You still
need to add the component's extension as a dependency for the route to work properly. You can
tune Camel Quarkus extensions by using quarkus.camel.* properties.
Camel Quarkus automatically configures and deploys a Camel Context bean. By default, the
lifecycle of this bean is tied to the Quarkus application lifecycle.
Note
To test routes in the context of Quarkus, the recommendation is to write integration
tests.
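For instance, assuming the quarkus-junit5 and rest-assured test dependencies are present, an integration test for a REST route might look like the following. The class name and endpoint path are illustrative.
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;

// Hypothetical test: calls a REST endpoint exposed by a Camel route.
@QuarkusTest
public class PipelineEditorRouteTest {

    @Test
    public void editorEndpointReturnsBooks() {
        given()
            .when().get("/pipeline/editor")
            .then()
            .statusCode(200);
    }
}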
Red Hat OpenShift supports the following mechanisms to build and deploy applications:
• Docker build
• Source to Image (S2I)
• S2I Binary
You can create custom health check implementations. Any checks provided by your application are automatically discovered and bound to the Camel registry. They are available in the /q/health/live and /q/health/ready Quarkus health endpoints.
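For example, assuming the camel-quarkus-microprofile-health extension is present, a minimal custom readiness check written with the MicroProfile Health API might look like the following. The class and check names are illustrative.
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

// Hypothetical sketch: reports readiness based on a simple flag.
@Readiness
@ApplicationScoped
public class BooksPipelineReadinessCheck implements HealthCheck {

    private volatile boolean ready = true;

    public void markReady(boolean ready) {
        this.ready = ready;
    }

    @Override
    public HealthCheckResponse call() {
        return ready
            ? HealthCheckResponse.up("books-pipeline")
            : HealthCheckResponse.down("books-pipeline");
    }
}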
Prior to the deployment, you are required to use the Red Hat OpenShift CLI to log in, and select
the project in which you want to deploy your application.
The following example packages and deploys your Quarkus application to the current Red Hat
OpenShift project:
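A typical invocation, assuming the quarkus-openshift extension is on the class path, might be the following. The exact property name can vary between Quarkus versions.
./mvnw clean package -Dquarkus.kubernetes.deploy=true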
References
Red Hat Build of Quarkus Online Project Generator
https://code.quarkus.redhat.com/
For more information, refer to the Extensions Overview chapter in the Camel
Extensions for Quarkus Reference at
https://access.redhat.com/documentation/en-us/red_hat_integration/2021.q4/
html-single/camel_extensions_for_quarkus_reference/index#camel-quarkus-
extensions-overview
Guided Exercise
The company stores the books as DocBook files, and uses a shared file system for the
publishing process. A team of editors and graphic designers review the book manuscripts
before they are ready for printing.
The company currently publishes technical books and novels. Editors review all types of books, and graphic designers review only the technical ones.
Selecting the books to review for each team is a repetitive, manual, and time-consuming task. You must use Red Hat Fuse to create a Camel route that sends the correct type of book to the correct team.
The company also requires you to expose the books assigned to each team through REST endpoints. You must deploy the Quarkus application to Red Hat OpenShift.
Outcomes
You should be able to create routes in Quarkus, add health checks, and deploy the
application to Red Hat OpenShift.
The solution files for this exercise are in the AD221-apps repository, within the cloud-
quarkus/solutions directory.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/cloud-
quarkus/apps directory, into the ~/AD221/cloud-quarkus directory. It also verifies
the connection to the Red Hat OpenShift cluster, and creates a project to deploy the
application.
Instructions
1. Navigate to the ~/AD221/cloud-quarkus directory, open the project with your editor of
choice, and examine the code.
2. Open the project's POM file, and add the following dependencies from the org.apache.camel.quarkus group (a sample declaration is shown after the list):
• camel-quarkus-bean
• camel-quarkus-file
• camel-quarkus-jackson
• camel-quarkus-jacksonxml
• camel-quarkus-rest
• camel-quarkus-xpath
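Each dependency follows the same pattern. For example, assuming the Camel Quarkus BOM manages the versions, the camel-quarkus-bean declaration might look like this:
<dependency>
    <groupId>org.apache.camel.quarkus</groupId>
    <artifactId>camel-quarkus-bean</artifactId>
</dependency>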
4. Create a route in the EditorRestRouteBuilder class to store all the books assigned to
the editors into an in-memory storage.
The route must collect all the books located in the file://data/pipeline/editor
endpoint, with the noop option activated. Next, unmarshal the data, and convert the
unmarshalled data to JSON with the jacksonxml processor. Process the JSON data,
and store it as an Object type in the inMemoryBooksForEditor variable. You must set
pipeline-editor as the route ID.
The route must look like the following:
// TODO: Add a route to store all editor books in the local variable
from("file://data/pipeline/editor?noop=true")
.routeId("pipeline-editor")
.log("Processing file: ${header.CamelFileName}")
.unmarshal().jacksonxml()
.process().body(Object.class, (Consumer<Object>) inMemoryBooksForEditor::add);
5. Create a Camel REST route in the EditorRestRouteBuilder class, and expose the
books assigned to the editors.
The route must define a GET endpoint in the /pipeline/editor path. Return the
content of the inMemoryBooksForEditor variable in the response body, and set rest-
pipeline-editor as the route ID.
The route must look like the following:
// TODO: Add a REST route to expose the books stored in the local variable
rest("/pipeline/editor")
.get()
.route()
.routeId("rest-pipeline-editor")
.setBody(exchange -> inMemoryBooksForEditor)
.endRest();
7. Run the Quarkus application by using the ./mvnw -DskipTests clean package
quarkus:dev command, and wait for the application to process the books stored in the
file://data/manuscripts endpoint.
Open a browser window, and navigate to http://localhost:8080/pipeline/editor. Notice that the application returns a JSON response with the three books assigned to the editors.
8. Open the project's POM file, and add the following dependencies to deploy the application
to Red Hat OpenShift.
Group ID Artifact ID
org.apache.camel.quarkus camel-quarkus-microprofile-health
io.quarkus quarkus-openshift
9. Wait for Quarkus to perform a live reload, and verify the status of the application in the
health check endpoint.
In a browser window, navigate to http://localhost:8080/q/health. Notice that the
application returns a JSON response with the status field set to UP, and stop the
application.
10. Verify the correctness of the complete application by executing the unit tests.
Run the ./mvnw clean test command, and verify that three unit tests pass.
12. By using the oc describe pod command, verify that the deployed application is running, and that the health checks are configured.
Finish
Return to your workspace directory, and use the lab command to complete this exercise. This is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• After completing this section, you should be able to create cloud-native integrations with Camel
K.
The Camel K runtime is a Java application based on Camel Quarkus. The main goal of the runtime
is to run a Camel Quarkus application, and configure the routes defined by the developer.
With Camel K, you write your integration code in a single file, and use the Camel K CLI to run the code immediately in the cloud. As a developer, you concentrate on developing only the integration source code, which minimizes the cost of maintaining a complete application and reduces deployment complexity.
Describing Kamelets
Kamelets are an alternative approach to application integration. They are reusable route components, implemented as Kubernetes resources, that you can use to connect to external systems, as the example binding after this list shows. A Kamelet has one of the following roles:
Source
Consumes data from an external system.
Sink
Sends data to an external system.
Action
Executes a specific action to manipulate data while it passes from a source to a sink Kamelet.
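For example, a source Kamelet can be connected to a sink Kamelet with a KameletBinding resource. A minimal, hypothetical binding, assuming the timer-source and log-sink Kamelets from the default catalog, might look like the following:
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timer-to-log
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: Hello
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: log-sink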
Camel K automatically handles the dependency management, and imports the required libraries
by using code inspection. With Camel K, you do not need to build, and package your integration.
Important
The automatic resolution of dependencies only works with dependencies from the
Camel catalog (camel-* artifacts), and with routes that do not specify dynamic
URIs.
import org.apache.camel.builder.RouteBuilder;

// The class name is illustrative.
public class MyIntegration extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("schema:origin")
            .log("Hello World");
    }
}
The following table is a non-comprehensive list of commands supported by the Camel K CLI.
Command Description
You can run a Camel K integration in developer mode by adding the --dev parameter to the run
command.
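For example:
kamel run MyIntegration.java --dev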
The preceding example runs the integration defined in the MyIntegration.java file in
developer mode. The developer mode watches the file for changes, and automatically refreshes
the integration deployed in the cloud. That means that you can make live updates to the
integration code, and view the results instantly.
Integration Configuration
There are two configuration phases in a Camel K integration lifecycle: build time, and runtime. You
can provide configuration values to the kamel run command to customize the different phases.
Build time
Use the --build-property option to provide the property values required in the build
process.
Runtime
Use the --property, --config, or --resource options to provide the property values
required when the integration is running.
Traits are high-level features of Camel K that you can configure to customize the behavior of your integration. You can define traits by using the --trait option on the kamel run command.
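For illustration, a hypothetical invocation that combines the two configuration phases and a trait might look like the following. The property names and values are placeholders.
kamel run MyIntegration.java \
    --build-property quarkus.banner.enabled=false \
    --property my.message=Hello \
    --trait container.limit-memory=256Mi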
Camel K Modeline
Camel K modeline is a feature that processes integration options defined in code comments. The
following table is a non-comprehensive list of modeline options.
Option Description
// camel-k: dependency=camel-quarkus-jacksonxml
// camel-k: resource=file:./path/to/file.txt

import org.apache.camel.builder.RouteBuilder;

public class MyIntegration extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        ...code omitted...
    }
}
In an initial phase, the kamel run command inspects the integration file to detect modeline
options. Then, it transforms the modeline options into arguments that are later executed by the
run command.
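For example, the modeline options in the preceding file are equivalent to passing the same options on the command line:
kamel run MyIntegration.java --dependency=camel-quarkus-jacksonxml --resource=file:./path/to/file.txt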
• Camel: You must write integrations in Java.
• Camel K: You can write the integration file in multiple languages, such as Java, Groovy, or JavaScript.
Based on the context, scope, and constraints of your project, you might want to use either Camel
or Camel K. The following is a non-comprehensive list of use cases and recommendations about
when to use one or the other.
• You might want to use Apache Camel when working with traditional Java development
workflows. This includes non-cloud-native projects and Java projects with a strong dependency
on a specific runtime, such as Spring Boot.
• If you do not want to deal with a Java runtime, the details of a specific Java framework, or if you
want to define integrations in a different language than Java, then use Camel K.
• Similarly, if you want to add Camel integration capabilities to other technology stacks, then
use Camel K. An example of this is the Rayvens Python project [https://github.com/project-
codeflare/rayvens].
• Camel K is specifically designed for serverless applications. If you are comfortable with serverless and Knative, then you might want to consider Camel K. Knative allows you to optimize how Camel K integrations use cluster resources, and to take advantage of other capabilities, such as eventing.
References
For more information, refer to the Kamelets Reference guide at
https://access.redhat.com/documentation/en-us/red_hat_integration/2021.q4/
html-single/kamelets_reference/index
For more information, refer to the Configuring Camel K Integrations chapter in the
Developing and Managing Integrations Using Camel K Guide at
https://access.redhat.com/documentation/en-us/red_hat_integration/2021.q4/
html-single/developing_and_managing_integrations_using_camel_k/
index#configuring-camel-k
For more information, refer to the Camel K trait configuration reference chapter in
the Developing and Managing Integrations Using Camel K Guide at
https://access.redhat.com/documentation/en-us/red_hat_integration/2021.q4/
html-single/developing_and_managing_integrations_using_camel_k/index#camel-
k-traits-reference
Guided Exercise
The company requires you to expose the book manuscripts in a REST endpoint. You must
deploy the integration application to Red Hat OpenShift.
Outcomes
You should be able to:
The solution files for this exercise are in the AD221-apps repository, within the cloud-
camelk/solutions directory.
From your workspace directory, use the lab command to start this exercise.
The lab command copies the exercise files from the ~/AD221/AD221-apps/cloud-
camelk/apps directory, into the ~/AD221/cloud-camelk directory. It also verifies
the connection to the Red Hat OpenShift cluster, and creates a project to deploy the
application.
Instructions
1. Navigate to the ~/AD221/cloud-camelk directory, open the project with your editor of
choice, and examine the code.
2. Run the Camel K application in developer mode by using the kamel run ManuscriptsApi.java --dev command. Wait for the deployment to finish.
...output omitted...
... [io.quarkus] (main) camel-k-integration 1.6.3 ... started in 2.707s. Listening
on: http://0.0.0.0:8080
...output omitted...
Important
The initial Camel K deployment might take a few minutes to finish. Subsequent
deployments should be faster.
3. Open a new terminal and use the oc get route command to get the route assigned to
the Camel K application.
Open your web browser, and navigate to the REST endpoint. Verify that the application
throws an internal server error because the integration has some missing parts.
• camel-quarkus-jackson
• camel-quarkus-jacksonxml
• ./data/manuscripts/book-01.xml
• ./data/manuscripts/book-02.xml
Camel K copies your local runtime resources into the /etc/camel/resources directory
of the integration pod container.
The resource declarations must look like the following:
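Based on the modeline syntax shown earlier in this chapter, the declarations might look like the following. The solution file is the authoritative reference.
// camel-k: resource=file:./data/manuscripts/book-01.xml
// camel-k: resource=file:./data/manuscripts/book-02.xml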
6. Create a route to store all the books assigned to the editors into an in-memory storage.
The route must collect all the books located in the file:/etc/camel/resources
endpoint, and unmarshal the data to convert it to the JSON format. Process the JSON
data, and store it as an Object type in the inMemoryBooks variable. You must set
manuscripts as the route ID, and use the noop option in the consumer endpoint.
Note
You can use the log method in the route to track the progress of the file
processing.
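A hedged sketch of such a route, modeled on the pipeline-editor route from the previous exercise, follows. The inMemoryBooks field and the surrounding class are assumed.
// Assumes a List<Object> inMemoryBooks field declared in the integration class.
from("file:/etc/camel/resources?noop=true")
    .routeId("manuscripts")
    .log("Processing file: ${header.CamelFileName}")
    .unmarshal().jacksonxml()
    .process().body(Object.class, (Consumer<Object>) inMemoryBooks::add);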
7. Save the changes to trigger a new deployment of the Camel K application, and wait for the
integration to process the book files.
...output omitted...
... Processing file: book-01.xml
... Processing file: book-02.xml
Important
The Camel K development mode command might hang if you introduce compilation
errors. If this happens, then stop and restart the command.
8. Return to the browser, and reload the Camel K application URL. Notice that the application
returns a JSON response with the information about two books.
Finish
Stop the Camel K application, return to your workspace directory, and use the lab command to
complete this exercise. This is important to ensure that resources from previous exercises do not
impact upcoming exercises.
Quiz
Consider that you have developed a Spring Boot Camel application to route weather
condition measurements from garden sensors to a REST API.
1. Which two of the following are valid strategies to deploy the application to Red Hat OpenShift? (Choose two.)
a. Run a Source-to-Image (S2I) build to deploy the application from the source code.
b. Run an S2I build to deploy the application from the source code. Next, build the container
image, and push the image to a registry.
c. Generate a JAR file. Next, run an S2I build to deploy the application from the JAR file.
d. Run an S2I build to deploy the application from the source code. Next, build the container
image, push the image to a registry, and create the deployment.
2. You are developing an additional integration in Camel K to route garden sensor data
to a relational database. While you develop the integration in the GardenToSql.java
file, you would like to run the route, show logs, and watch for changes to automatically
refresh the integration as you add more code. Which command of the following should
you use?
a. kamel dev GardenToSql.java
b. kamel start GardenToSql.java
c. kamel start GardenToSql.java --dev
d. kamel run GardenToSql.java --dev
3. The garden sensors lose network connection occasionally. When the sensors are
unreachable, the Camel route becomes unavailable and throws connection errors. How
can you configure the deployment so that, under these conditions, Red Hat OpenShift
marks the application as not Ready, without killing the application container?
a. Configure a startup probe.
b. Configure a readiness probe.
c. Configure a liveness probe.
d. Configure a container execution probe.
4. Your team is considering a switch to Camel Quarkus for the next version of the
application. Which two of the following statements are true regarding Camel Quarkus?
(Choose two.)
a. You do not need to add component dependencies to Quarkus. Quarkus handles all
dependencies for you.
b. You can tune Camel Quarkus extensions by using quarkus.camel.* properties.
c. You must add the camel-quarkus-core artifact as a dependency of your Quarkus
project.
d. Quarkus provides support for liveness probes only.
Solution
Consider that you have developed a Spring Boot Camel application to route weather
condition measurements from garden sensors to a REST API.
1. Which two of the following are valid strategies to deploy the application to Red Hat OpenShift? (Choose two.)
a. Run a Source-to-Image (S2I) build to deploy the application from the source code. (correct)
b. Run an S2I build to deploy the application from the source code. Next, build the container image, and push the image to a registry.
c. Generate a JAR file. Next, run an S2I build to deploy the application from the JAR file. (correct)
d. Run an S2I build to deploy the application from the source code. Next, build the container image, push the image to a registry, and create the deployment.
2. You are developing an additional integration in Camel K to route garden sensor data
to a relational database. While you develop the integration in the GardenToSql.java
file, you would like to run the route, show logs, and watch for changes to automatically
refresh the integration as you add more code. Which command of the following should
you use?
a. kamel dev GardenToSql.java
b. kamel start GardenToSql.java
c. kamel start GardenToSql.java --dev
d. kamel run GardenToSql.java --dev (correct)
3. The garden sensors lose network connection occasionally. When the sensors are
unreachable, the Camel route becomes unavailable and throws connection errors. How
can you configure the deployment so that, under these conditions, Red Hat OpenShift
marks the application as not Ready, without killing the application container?
a. Configure a startup probe.
b. Configure a readiness probe. (correct)
c. Configure a liveness probe.
d. Configure a container execution probe.
4. Your team is considering a switch to Camel Quarkus for the next version of the
application. Which two of the following statements are true regarding Camel Quarkus?
(Choose two.)
a. You do not need to add component dependencies to Quarkus. Quarkus handles all dependencies for you.
b. You can tune Camel Quarkus extensions by using quarkus.camel.* properties. (correct)
c. You must add the camel-quarkus-core artifact as a dependency of your Quarkus project. (correct)
d. Quarkus provides support for liveness probes only.
Summary
• You can deploy Spring Boot Camel applications to Red Hat OpenShift with the JKube
openshift-maven-plugin plug-in.
• You can deploy Camel Quarkus applications to Red Hat OpenShift with the quarkus-
openshift Maven extension.
• You can monitor the health of Camel routes and contexts both in Quarkus and Spring Boot
applications.