Message-Driven-Bean Performance Using WebSphere MQ V5.3 and WebSphere Application Server V5.0
Version 1.1
Marc Carter
WebSphere MQ Performance
IBM UK Laboratories
Hursley Park
Winchester
Hampshire
SO21 2JN
Property of IBM
Notices
This report is intended to help the reader understand the performance characteristics of
WebSphere MQ for Windows V5.3 in conjunction with WebSphere Application Server V5.0.
The information is not intended as the specification of any programming interfaces that are
provided by WebSphere MQ.
References in this report to IBM products or programs do not imply that IBM intends to make
these available in all countries in which it operates.
Information contained in this report has not been submitted to any formal IBM test and is
distributed “as-is”. The use of this information and the implementation of any of the
techniques is the responsibility of the customer. Much depends on the ability of the reader to
evaluate the information and project the results to their operational environment.
The performance measurements included in this report were measured in a controlled
environment and the results obtained in other environments may vary significantly.
Java, JMS, J2EE and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Preface
The contents of this SupportPac
This SupportPac is intended to:
• Show the performance impact on messaging throughput of using WebSphere
Application Server MDBs in place of basic JMS 1.0 applications.
• Demonstrate the advantages of using WebSphere MQ as an external JMS Resource
Provider to WebSphere Application Server.
• Provide the reader with some hints on how to configure their messaging scenario.
Acknowledgements
The author is very grateful to Anthony Tuel for help in producing this report.
1 Test scenario
1.1 Description
JMSPrim is a benchmark developed by the WebSphere Application Server (WAS)
performance team to evaluate an application server’s performance in the area of JMS. It is
designed to measure a wide range of primitive scenarios, to exercise the individual “building
blocks” that would make up a typical enterprise application. JMSPrim is made up of a set of
Message Driven Beans (MDBs) and contrasting standalone scenarios designed to
demonstrate a common customer upgrade scenario. The senders and receivers run concurrently, with appropriate warm-up intervals, to measure a typical system under load. In each case the receivers run in a tight loop, performing negligible processing per message. As is typical of primitives, this gives an indication of the upper bound on performance: the best case. This is a useful indicator for applications that are purely JMS oriented, but most applications use JMS to drive further work, such as updating a database, or include operations that do not involve JMS at all.
Each test herein is carried out with sender threads throttled to a particular rate. The number of senders is increased until the receivers can no longer remove messages from the queue as fast as they arrive. All applications connect to a single queue, and the messages used are simple 2048-byte text messages. Unless otherwise specified, all tests exercise WMQ as an External JMS Provider rather than the default Embedded JMS.
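As an illustration of the shape of a standalone sender, a minimal sketch follows. The JNDI names, the 50 msgs/sec rate and the lack of error handling are illustrative assumptions, not taken from the JMSPrim source.

import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ThrottledSender {
    public static void main(String[] args) throws Exception {
        // Administered objects looked up from JNDI; these names are placeholders.
        Context ctx = new InitialContext();
        QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/TestQCF");
        Queue queue = (Queue) ctx.lookup("jms/TestQueue");

        QueueConnection connection = qcf.createQueueConnection();
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueSender sender = session.createSender(queue);

        // 2048-byte text payload, matching the message size used in this report.
        char[] payload = new char[2048];
        java.util.Arrays.fill(payload, 'x');
        TextMessage message = session.createTextMessage(new String(payload));

        // Throttle the sender to a fixed rate (50 msgs/sec here, purely illustrative).
        long intervalMillis = 1000 / 50;
        while (true) {
            sender.send(message);
            Thread.sleep(intervalMillis);
        }
    }
}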
1.2 Architecture
[Diagram: a Server machine and a Driver machine connected by 100Mbps Ethernet. The Server hosts WAS and WMQ; the JMS senders run on the Driver. Two receiving scenarios are shown: (1) MDB receivers inside WAS and (2) standalone JMS receivers.]
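The MDB receivers in scenario (1) perform negligible work per message. A minimal sketch of such a bean follows; it is illustrative only, and the JMSPrim beans themselves are not reproduced here.

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class PrimitiveReceiverBean implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext ctx;

    public void setMessageDrivenContext(MessageDrivenContext ctx) {
        this.ctx = ctx;
    }

    public void ejbCreate() {
    }

    // Consume the message and perform negligible processing, as in the
    // receiver scenarios described above.
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                // Touch the payload and discard it.
                ((TextMessage) message).getText();
            }
        } catch (Exception e) {
            // A real bean would log the failure; kept minimal here.
        }
    }

    public void ejbRemove() {
    }
}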
2.1 Chart
[Chart: throughput in msgs/sec against the number of applications (Apps), for standalone p_tr, mdb p_tr, standalone np_tns and mdb np_tns.]
2.1.1 Results
Introducing the application server, and the flexible services it provides, has decreased the peak raw throughput of these comparable tests:
• Non-persistent throughput dropped 45% (from 1650 to 900 msgs/sec).
• Persistent throughput dropped 36% (from 780 to 500 msgs/sec), showing that less of its processing depends on raw CPU because of the additional locking and serialisation required to persist a message.
Particularly for the non-persistent tests, these results demonstrate that an increased CPU cost per message can be the controlling factor in the throughput of the system. They also show that the ability of WMQ, as an External JMS Provider, to run the queue manager on a separate physical server can be important where performance is a concern.
3.1 Chart
[Chart: throughput in msgs/sec against the number of applications (Apps).]
3.2 Results
Use of a tuned, external Queue Manager has improved throughput over the untuned embedded JMS provider. The CPU is the constraining resource for all tests, which masks some of the improvement from the external Queue Manager, especially in the persistent tests, where it showed a 4% increase in peak throughput (from 480 to 500 msgs/sec). This increase is expected to be much larger on a server with more CPU resource available. Non-persistent messaging is, by its nature, limited by CPU; external WMQ outperformed the embedded JMS provider by 20% in the non-persistent case (from 750 to 900 msgs/sec).
Notes
This test only made use of Client connections on the external WMQ. Use of Bindings will be
shown to improve throughput by another 50% in the next section.
Details on the tuning applied to WMQ are in section 5 of this report.
4 Increasing performance
There are many variables in the topology and workload you are planning for your system. Most of these do not lend themselves to generic recommendations, but the following areas may benefit WAS messaging performance.
4.1.1 Chart
[Chart: throughput in msgs/sec against the number of applications (Apps), for mdb np_tns Bindings, mdb np_tns Client, mdb p_tr Bindings and mdb p_tr Client.]
4.1.2 Results
Use of Bindings instead of Client connections has increased peak throughput by 50% for both
persistent and non-persistent results. This is caused by a reduction in CPU time spent
validating the interactions the MDB has with the Queue Manager.
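In WAS the transport type is part of the QueueConnectionFactory definition in the administrative console. For reference, the equivalent choice expressed directly against the WebSphere MQ JMS classes looks roughly like the sketch below; the queue manager name, host, port and channel are illustrative assumptions.

import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQQueueConnectionFactory;

public class TransportExample {
    public static MQQueueConnectionFactory clientFactory() throws Exception {
        // Client transport: the application talks to the queue manager over TCP/IP.
        MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
        qcf.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
        qcf.setQueueManager("PERF.QM");             // illustrative queue manager name
        qcf.setHostName("mqserver.example.com");    // illustrative host
        qcf.setPort(1414);
        qcf.setChannel("SYSTEM.DEF.SVRCONN");
        return qcf;
    }

    public static MQQueueConnectionFactory bindingsFactory() throws Exception {
        // Bindings transport: the application and queue manager share the same
        // machine and communicate without going through the TCP/IP stack.
        MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
        qcf.setTransportType(JMSC.MQJMS_TP_BINDINGS_MQ_QM);
        qcf.setQueueManager("PERF.QM");
        return qcf;
    }
}

Note that Bindings connections require the queue manager to run on the same machine as the application server; when the queue manager is remote, Client is the only option.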
4.2.1 Chart
[Chart: throughput in msgs/sec against the number of applications (Apps), comparing nonXA and XA.]
4.2.2 Results
Turning off XA co-ordination in the persistent, transacted test produced a 30% increase in maximum messaging throughput. This demonstrates that the default setting of enableXA is sub-optimal if your scenario does not require this additional logic.
The same effect, demonstrated here on QueueConnectionFactories, will also be seen with TopicConnectionFactories.
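The enableXA flag is set on the QueueConnectionFactory definition in the WAS administrative console. Outside the container the same distinction corresponds, roughly, to the choice between the non-XA and XA factory classes in the WebSphere MQ JMS classes; the sketch below is illustrative only, and the queue manager name is an assumption.

import javax.jms.QueueConnectionFactory;
import javax.jms.XAQueueConnectionFactory;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.mq.jms.MQXAQueueConnectionFactory;

public class XaChoiceExample {
    // Non-XA: single-phase commit only, avoiding the extra co-ordination logic.
    public static QueueConnectionFactory nonXaFactory() throws Exception {
        MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
        qcf.setQueueManager("PERF.QM"); // illustrative name
        return qcf;
    }

    // XA: connections can enlist in two-phase-commit (global) transactions,
    // which is only worth the cost if the MDB also updates other resources.
    public static XAQueueConnectionFactory xaFactory() throws Exception {
        MQXAQueueConnectionFactory qcf = new MQXAQueueConnectionFactory();
        qcf.setQueueManager("PERF.QM");
        return qcf;
    }
}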
4.3.1 Chart
[Chart: effect of ListenerPort.maxSessions, with MDB_NP_TNS rated at 50 msgs/sec; throughput in msgs/sec against the number of applications (Apps), for 6 sessions, 2 sessions and 1 session.]
4.3.2 Results
Message Driven Beans are asynchronous by nature and therefore suffer a significant penalty when forced to run serially (i.e. with maxSessions=1).
To make best use of the parallelism in your server, set realistic values for the maximum number of concurrent sessions. It is also sensible to set the minimum number of sessions (on the QueueConnectionFactory) to a value greater than the default if you expect to handle intermittent, heavy bursts of traffic.
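As a rough illustration (the service time here is an assumption, not a measurement from this report): if each message takes about 5ms to deliver and process, a single session cannot exceed roughly 200 msgs/sec no matter how many senders are active, whereas maxSessions=6 raises that ceiling towards 1200 msgs/sec, provided the server has the CPU to keep six sessions busy.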
7 Test Environment
7.1 Hardware
Server
• IBM Netfinity 5500 M20, 4 * 500MHz P3 Xeon
• Windows 2000 Server SP3
• 4GB RAM
• 5 * SCSI 7,200 RPM drives
• 100Mb Ethernet card
Driver
• IBM Netfinity 6000R, 4 * 700MHz P3 Xeon
• Windows 2000 Server SP3
• 0.5GB RAM
• 3 * SCSI 10,000 RPM drives
• 1Gb Ethernet card