10543 MQ Internals and Performance
March 2012
Session 10543
Agenda
What is distributed WebSphere MQ?
Design Objectives
Function Walkthroughs
What is Distributed MQ?
[Diagram: the Distributed code base spans MQ for Windows, MQ for UNIX (AIX, Solaris, HP-UX, Linux) and MQ for System i, sharing Common Services and (from V7) common Pub/Sub code, alongside MQ for z/OS. Approximate proportions of platform-specific code: z/OS 30%, Windows 10%, System i 7%, and 1% each for AIX, Solaris, HP-UX and Linux.]
The Distributed version is the code base for a range of systems. Its design point is a full function, high performance queue manager with a focus on highly portable code. These are the Distributed queue managers.
There is a very high percentage of common code between versions on different operating systems.
The code is mainly written in C with special attention to portability. Even the environment used to build the executables is portable.
Code differences relate mainly to differences in operating system facilities and packaging. On some platforms there is specific code to integrate the user interface with the operating system (e.g. Windows and System i).
The z/OS version of MQ is aimed at being a full function, high performance queue manager with maximum exploitation of MVS architecture. Some code, notably the MCAs and clustering, is shared between the implementations.
Percentages in the diagram represent the approximate proportion of code specific to each version.
System i is classed as a UNIX platform from the Distributed queue managers' point of view as it shares many similarities with other UNIX platforms. The platform-specific code here is mostly in dealing with configuration panels, and the use of native logging/journalling features.
The source code of the Distributed version of MQ is licensed to several partners who use it as the basis of ports of MQ to other operating systems; it is also used by other parts of IBM to further extend the operating system support.
With V7, there is now truly common code between z/OS and Distributed: previously the "common" code was ported between environments.
Performance Bottlenecks
Modern systems are complex and there are many factors which can influence the performance of a system. The hardware resources available to the application, as well as the way that application is written, all affect the behavior.
Tuning these environments for maximum performance can be fairly tricky and requires good knowledge of both the application and the underlying system. One of the key points to make is that the simple trial-and-error approach of changing a value and then measuring may not yield good results. For example, a user could just measure the throughput of messages on the previous foil. They could then double the speed of the disk and re-measure. They would see practically no increase in speed and could wrongly deduce that disk I/O is not a bottleneck.
Of course throughput is not the only metric that people want to tune for. Sometimes it is more important that a system is scalable or that it uses as little network bandwidth as possible. Make sure you understand what your key goal is before tuning a system. Often increasing one metric will have a detrimental effect on another.
The Structure of the Queue Manager
This section describes the components which make up the queue manager. It also describes the way in which the processing for an MQI call is separated across operating system processes and among the components within the processes.
Queue Manager: Functional View
[Diagram: applications and the Command Server use the Application Interface (MQI); the Message Channel Agents use the lower-level SPI and the Communications Interface. Within the queue manager sit the Kernel, the Object Authority Manager, Data Abstraction and Persistence, the Log Manager and Common Services.]
Application Interface provides the environment and mechanism for execution of MQI calls.
Queue Manager Kernel provides most of the function of the MQI. For example, triggering is implemented here along with message location.
Object Authority Manager provides access control for the queue manager and its resources. It allows specification of which users and groups are permitted to perform which operations against which resources.
Data Abstraction and Persistence provides storage and recovery of the data held by the queue manager.
Log Manager maintains a sequential record of all persistent changes made to the queue manager. This record is used for recovery after a machine crash and for remembering which operations were in which transaction.
Message Channel Agents are special applications using the SPI for the majority of their operations. They are concerned with reliable transmission of messages between queue managers.
SPI is a lower-level API, similar to the MQI but available only to queue manager processes, offering greater performance and functionality.
Command Server is a special application. It is concerned with processing messages containing commands to manage the queue manager.
Common Services provides a common set of operating system-like services such as storage management, NLS, serialisation and process management. This isolates the queue manager from platform differences.
Queue Manager Process Tree
[Diagram, V5.3: the Execution Controller (amqzxma0) is the parent of the agent processes (amqzlaa0). Also shown: runmqchi, runmqlsr, amqzdmaa, amqrmppa and amqpcsea.]
These are the processes you see when a queue manager is running. The example is taken from AIX although Windows and other Unix platforms are similar. The iSeries process structure is slightly different, but still contains many of the same blocks of code and processes.
The Execution Controller is program amqzxma0. This is the root process of the queue manager and the parent of all of the other processes. It can be thought of as the owner of all of the queue manager's shared resources. It is concerned with managing and monitoring the other queue manager processes and the applications that connect.
The LQM agents are program amqzlaa0 or amqzlsa0. Agents perform the operations required to process MQI calls on behalf of applications. Nearly all of the code beneath the MQI is actually executed by the agents. The separation of application programs from the queue manager's critical resources protects the queue manager from rogue or malicious applications. The number of agent processes depends on the workload. By default, agents each handle about 60 concurrent connections.
The log manager is program amqhasmx. All log reading and writing requests go through this process. Communication with the agents is achieved through a set of shared memory buffers.
If a queue manager is running with linear logging enabled, there will be a log formatter running. This program is amqharmx. Its task is to pre-format log files in time for the log manager to use them. This process is not needed when a queue manager is running with a circular log, as all the log files are created and initialised when the queue manager is created.
The checkpoint processor is program amqzllp0. It is concerned with minimising the amount of recovery processing when the queue manager is started.
Queue Manager Process Tree
[Diagram, V6: the Execution Controller (amqzxma0) is the parent of the LQM agents (amqzlaa0), the Critical Services process (amqzmuc0), the Restartable Services process (amqzmur0), the External Processes controller (amqzmgr0), the Repository Manager (amqrrmfa) and the Object Authority Manager (amqzfuma). Logging, statistics, expiry and error tasks run within the service processes. Also shown: runmqchi, runmqlsr, amqzdmaa, amqrmppa and amqpcsea.]
amqzmgr0
Controls the traditional external processes such as the command server and listener
Also controls processes defined as SERVICES
amqzmuc0
Hosts internal services which are fundamental to the health of the queue manager
Failures in this process result in queue manager termination
Logger, checkpoint and formatter tasks
New function: message expiry scanner, which runs approximately every 5 minutes
amqzmur0
Hosts internal services considered not fundamental to queue manager health
This process can be restarted in the event of a failure
Error logging task and the statistics task
Queue Manager Process Tree
[Diagram, V7: as V6, with the addition of publish/subscribe. Subscriptions, browse/mark and pub/sub processing run within the LQM agents (amqzlaa0); amqfcxba and amqfqpub provide V6-compatible queued pub/sub and streams; the Pub/Sub Utility container (amqzmuf0) handles cache management and inter-queue-manager pub/sub. Also shown: runmqchi, runmqlsr, amqzdmaa, amqrmppa and amqpcsea.]
amqfcxba, amqfqpub
Provide compatibility with V6 queued pub/sub processing for streams
amqzmuf0
Pub/Sub Utility container
Cache management
Inter-queue manager pub/sub daemon
Queue Manager: Process Model
The application communicates with the Execution Controller when it needs an agent to talk to. The EC is responsible for managing the agent processes. It monitors the agents and their associated applications.
The application communicates with its agent process via the IPCC. The agent process performs the MQI calls on the application's behalf. The IPCC exchanges between the application and agent are synchronous request-reply exchanges.
The processes within the queue manager share information using shared memory. The other queue manager tasks such as the log manager and the checkpoint process also share queue manager information in this way.
The IPCC is implemented with several different options: the normal mechanism uses shared memory, which provides reasonable isolation with reasonable performance. Isolated bindings use Unix-domain sockets, giving greater isolation but slower operations. Applications using shared bindings can inhibit restart of a queue manager if they are not terminated. Trusted bindings give the best performance (particularly for non-persistent operations) but can lead to internal corruption if the application runs rogue.
Function Walkthroughs
This section shows how the various components interact to provide the MQI functions.
MQCONN
Application (MQI Stub):
  Verify parameters and handles
  Construct a Connect message
  Call API crossing exit
  Send a message to the EC
Execution Controller:
  Choose an agent or start a new one
  Construct a reply IPCC message
  Return reply to application
Application (MQI Stub):
  Receive the reply
  Construct an IPCC message
  Send the message to the agent
Agent:
  Check the application's permission to connect
  Allocate and assign agent resources
  Send IPCC reply back to application
Application (MQI Stub):
  Receive reply and call API crossing exit
  Return HCONN
[Diagram: the application's MQI stub communicates over the IPCC first with the Execution Controller and then with its agent, where the Kernel and DAP layers do the work.]
MQCONN
MQCONN is different to most calls in that the application communicates directly with the Execution Controller. The Execution Controller owns and manages the agent processes. When an application tries to make a connection, the EC decides whether to start a new agent, to start a thread in an existing agent or to reuse an existing agent which has just been released by another application. It will also create an IPCC link for the application and agent to use to communicate if a new agent/thread is to be created.
When the application issues MQCONN (not a client connect) the application stub which is bound to the application does basic parameter checking. This is limited to checks which can be performed without access to protected queue manager resources. For example, the stub can check whether the application is already connected and whether the requested queue manager exists on the machine.
The parameters to MQCONN are bundled up into a Connect message. This is then sent across to the EC using the IPCC. The EC selects or starts a new agent and returns the details to the application stub.
If a new thread is to be created in the agent (the EC tells the application if it is), the application stub sends a Start Thread message to the agent using the IPCC. The agent receives the message and associates itself with the application. A thread will be started if the agent is running on an operating system where multiple threads can be used in the agent. The application stub then sends a connection message to this thread.
Otherwise, an existing agent thread is to be used and the application stub sends a connection message directly to that thread.
The Kernel checks that the application is authorised to connect. It creates a connection handle which the application will use on all future calls.
When the IPCC reply message is received in the application stub, it is unpacked and the output parameters are returned to the application.
FastPath applications bypass most of the IPCC processing at the expense of integrity.
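As a minimal sketch (not part of the original notes), the whole connect sequence above is driven by one MQCONN call in the application; the queue manager name 'QM1' is only an example:

    #include <stdio.h>
    #include <cmqc.h>                          /* MQI definitions              */

    int main(void)
    {
        MQHCONN  hConn = MQHC_UNUSABLE_HCONN;  /* connection handle            */
        MQLONG   compCode, reason;
        MQCHAR48 qmName = "QM1";               /* example queue manager name   */

        /* The stub, Execution Controller and agent exchanges described above
           all happen beneath this one call.                                   */
        MQCONN(qmName, &hConn, &compCode, &reason);
        if (compCode == MQCC_FAILED)
        {
            printf("MQCONN failed, reason %d\n", (int)reason);
            return 1;
        }

        /* ... MQOPEN / MQPUT / MQGET using hConn ... */

        MQDISC(&hConn, &compCode, &reason);    /* release the agent resources  */
        return 0;
    }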
Performance Implications: Connection Binding
[Diagram: with the Standard binding the application talks through the IPC component to a separate agent process, which accesses MQ shared memory and the log. With the FASTPATH (trusted) binding the agent code runs inside the application process, removing the IPC hop.]
For non-persistent messages, the I/O subsystem is rarely used. Therefore there is substantial benefit to be gained from bypassing the IPC component. This is what the Trusted Binding provides.
Depending upon the efficiency of the IPC component for a particular platform, the use of a Trusted Binding will provide anything up to a 3 times reduction in the pathlength for non-persistent message processing.
There is a price to pay for this improvement in pathlength. The Standard Binding for applications provides separation of user code and MQ code (via the IPC component). The actual queue manager code runs in a separate process from the application, known as an agent process (amqzlaa0). Using the standard binding it is not possible for a user application to corrupt queue manager internal control blocks or queue data. This will NOT be the case when a Trusted Binding is used, and this implies that ONLY applications which are fully tested and are known to be reliable should use the Trusted Binding.
The Trusted Binding applies to the application process and will also apply to persistent message processing. However, the performance improvements are not so great as the major bottleneck for persistent messages is the I/O subsystem.
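As a hedged sketch (not shown in the original notes) of how an application might request the trusted binding, MQCONNX with MQCNO_FASTPATH_BINDING asks for the fastpath connection; the same effect can usually also be obtained administratively, for example via the MQ_CONNECT_TYPE environment variable. The queue manager name is an example.

    #include <stdio.h>
    #include <cmqc.h>

    /* Connect using the trusted (fastpath) binding: the agent code runs
       inside the application process, so only fully tested, reliable
       applications should do this.                                           */
    MQHCONN connectFastpath(void)
    {
        MQCNO    cno   = {MQCNO_DEFAULT};      /* connect options             */
        MQHCONN  hConn = MQHC_UNUSABLE_HCONN;
        MQLONG   compCode, reason;
        MQCHAR48 qmName = "QM1";

        cno.Options = MQCNO_FASTPATH_BINDING;

        MQCONNX(qmName, &cno, &hConn, &compCode, &reason);
        if (compCode == MQCC_FAILED)
            printf("MQCONNX failed, reason %d\n", (int)reason);
        return hConn;
    }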
MQOPEN of a queue
Application Interface:
  Verify open parameters
Kernel:
  Verify operation validity
  Resolve target - including cluster lookup
  Check permissions on the queue
DAP:
  Load the queue ready for gets and puts if required
  (this is the part that can use system resources)
Kernel:
  Generate handle to object for application
  Generate responses and event messages
  Send reply back to application
[Diagram: the MQI stub sends the Open request over the IPCC to the agent, where the Application Interface, Kernel and DAP layers process it before the reply is returned over the IPCC.]
MQOPEN of a queue
The MQI application stub first does basic parameter checking. This is limited to checks which can be performed without access to protected queue manager resources.
The parameters to MQOPEN are bundled up into an Open message. This is then sent across to the agent using the IPCC.
The agent thread dedicated to this connection has in the meantime been waiting for a message. Periodically, it checks that the application is still alive so that cleanup can be performed if it ends without disconnecting.
The kernel sorts this lot out and opens the appropriate underlying queue. Whilst doing this, the kernel also checks that the requester of the operation is actually authorised to perform it. It calls the OAM to perform these checks.
The DAP performs the operations needed to make the physical (local) queue available. This is termed loading the queue. It involves opening the file containing the underlying message data and allocating the shared memory buffers and other shared resources necessary for the queue to be used. Of course, if the queue is already loaded, this work can be avoided.
Finally, the Kernel creates the 'handle' which the application will use to access the queue.
When the IPCC reply message is received in the application stub, it is unpacked, the API crossing exit is called again, and the output parameters are returned to the application.
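For reference only (not in the original notes), the application side of that walkthrough is a single MQOPEN; the queue name and helper function below are illustrative:

    #include <string.h>
    #include <cmqc.h>

    /* Open an example queue for output, assuming hConn came from MQCONN.    */
    MQHOBJ openQueueForPut(MQHCONN hConn, MQLONG *pReason)
    {
        MQOD   od   = {MQOD_DEFAULT};          /* object descriptor           */
        MQHOBJ hObj = MQHO_UNUSABLE_HOBJ;
        MQLONG openOptions, compCode;

        strncpy(od.ObjectName, "APP.REQUEST.QUEUE", MQ_Q_NAME_LENGTH);
        openOptions = MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING;

        /* One IPCC round trip: parameter checks, name resolution, the OAM
           check, queue load (if needed) and handle creation all happen in
           the agent.                                                         */
        MQOPEN(hConn, &od, openOptions, &hObj, &compCode, pReason);
        return (compCode == MQCC_FAILED) ? MQHO_UNUSABLE_HOBJ : hObj;
    }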
Performance Implications: Heavyweight MQI Calls
MQCONN is a “heavy” operation
Don’t let your application do lots of them
Wrappers and OO interfaces can sometimes hide what’s really happening
Lots of MQCONNs can drop throughput from 1000s Msgs/Sec to 10s Msgs/Sec
MQPUT1
If just putting a single message to a queue, MQPUT1 is going to be cheaper than MQOPEN, MQPUT and MQCLOSE. This is because it is only necessary to cross over from the application address space into the queue manager address space once, rather than the three times required for the separate calls. Under the covers inside the queue manager the MQPUT1 is implemented as an MQOPEN followed by MQPUT and finally the MQCLOSE. The cost saving for a single put to a queue is the switching from application to queue manager address space. Of course, if multiple messages need to be put to the queue then the queue should be opened first and then MQPUT used. It is a relatively expensive operation to open a queue.
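A rough sketch of that comparison (the queue name and payload are examples, not from the presentation): the single MQPUT1 call below replaces an MQOPEN, MQPUT and MQCLOSE for a one-off message.

    #include <string.h>
    #include <cmqc.h>

    /* Put one message with a single call instead of MQOPEN/MQPUT/MQCLOSE.   */
    void putOneMessage(MQHCONN hConn)
    {
        MQOD  od  = {MQOD_DEFAULT};
        MQMD  md  = {MQMD_DEFAULT};
        MQPMO pmo = {MQPMO_DEFAULT};
        char  msg[] = "single request";
        MQLONG compCode, reason;

        strncpy(od.ObjectName, "APP.REQUEST.QUEUE", MQ_Q_NAME_LENGTH);
        pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_FAIL_IF_QUIESCING;

        /* One address-space crossing: open, put and close run in the agent. */
        MQPUT1(hConn, &od, &md, &pmo, (MQLONG)strlen(msg), msg,
               &compCode, &reason);
    }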
In the Depths of a Queue
[Diagram: message chains of space handles point at message data held in the non-persistent queue buffer, the persistent queue buffer and, on overflow, the queue file via the file system buffer. A space map records the state of each 512-byte block, a message detail cache holds attributes of recently used messages, and persistent updates also go to the log.]
Message Chains: Each message on a queue has an entry in a message chain. All messages, persistent and non-persistent, committed and uncommitted, appear in one of the message chains. There are actually 10 message chains - one for each message priority. The message chain is a linked list of 32 byte "space handles", each made up of a hash of the message id and correl id, the message expiry time, flags, and the location of the message head in the queue. If the message is fragmented, the handle links to space handles for the other parts of the message.
The width of the hash for msgid/correlid was doubled in MQ V6 on 64-bit queue managers; those systems also build better indexes for searching by correlid.
Message Details Cache: This is a table of selected message attributes for the 512 most recently used messages. This optimises access for messages which don't stay on the queue very long. It contains details of the message ID, Correl ID etc.
Space Map: A map is kept to manage the space in the queue buffers and queue file. The queue is split up into blocks of 512 bytes which contain messages or parts of a message. 2 bits are used to represent each block, with different values to indicate if a block is Free (10), Free and allocated (11), contains NP data (00), or contains persistent data (01).
Non-Persistent and Persistent Queue Buffers, and the Queue File: Messages are stored in shared memory buffers by preference. If the buffers overflow, they are written to the file system buffer, but we never perform a synchronous disk write for an NP message. The buffers default to 64 KB and 128 KB for the NP and P buffers respectively, doubled on the 64-bit queue managers.
Log: Whenever a persistent message is put or got, at least one log record is written. If the message is put or got outside syncpoint, the log record will be written synchronously. If it is done inside syncpoint, a synchronous write is not required until commit or rollback.
For a transaction containing only non-persistent message operations, we don't write any log records at all.
There is an exception to the rule about writing log records for persistent messages - one scenario allows us to pass messages directly between applications without any I/O, provided the messages are not part of a transaction (outside syncpoint).
Additional tables are maintained so that segmented and grouped messages can be recognised and retrieved when flags such as MQGMO_COMPLETE_MSG are issued.
Views of the Queue
[Diagram: message 1 occupies two 512-byte blocks, referenced by two space handles in the message chain; the space map and the persistent and non-persistent queue buffers record where each block lives.]
The queue file is split into 512 byte blocks. If a message is larger than 512 bytes it will be fragmented across different 512 byte blocks. A chain of space handles is created in the message chains to indicate where all parts of the message are held. These space handles may not be adjacent to each other in the queue buffer - the next available free block will be allocated. On the slide, message 1 takes up 2 blocks, indicated by the two space handles marked '1'.
When an MQGET of a persistent message is performed, the space handle of the message just got will be placed into the log. There is no need to describe the data removed. We only keep track of which parts of storage became free. The queue buffer must therefore remain in a consistent state with the logs, otherwise if we came to undo the MQGET we might overwrite data. Therefore before we can undo any operations we must ensure the log and the queue buffer are synchronized.
Queues are unloaded in two phases: the first phase occurs at the first checkpoint after the last open handle to a queue is closed; the second phase then occurs if the queue remains unreferenced at the next checkpoint. Shrinking of the queue file occurs during the first of these two phases of unload. Checkpoints are typically taken every 10,000 recoverable (i.e. persistent) operations. If all message operations are for non-persistent messages then checkpoints could be very infrequent.
Older versions of MQ were VERY conservative about releasing queue space back to the OS; V6.0 is more aggressive in releasing unused queue space. Originally MQ would release unused queue space if the queue was idle (no puts or gets to the queue) for 5 consecutive checkpoints, or if the queue was empty at the time of the checkpoint for 20 consecutive checkpoints.
From MQ V6.0, we compare the actual size of the queue file with the required size of the queue file every time that queue is checkpointed, and truncate the queue file if it is oversized by both 1% and 16KB (regardless of how many open handles reference the queue, the current qdepth, or the number of puts and gets since the last checkpoint).
A queue is checkpointed when a checkpoint occurs AND a recoverable (i.e. persistent) update to the queue has been made since the last checkpoint. MQ will always truncate the queue to a minimal size when a CLEAR QL command is issued.
Tuning Queue Buffers
Increasing buffers can improve performance
More information can be kept in memory, without flushing to disk
File c:\mqm\qmgrs\QMA\queues\SYSTEM!DEFAULT!LOCAL!QUEUE\q
Stored npBuff = 64 kB
Stored pBuff = QMgr default
Stored maxQSize = 2,097,151 MB
The queue manager must be stopped when changing these buffers, but you can display current values without stopping it.
Remember that the buffers take up memory while the queue is open, so do not over-size every queue on the queue manager.
Performance Implications: Recovery Requirement
[Diagram: failure types arranged on an arrow, from None through Application, Power and Disk Media up to Disaster.]
The higher up this arrow, the lower the likelihood of the error occurring but the higher the cost of protection.
Is it important that the database and/or message queues have 'atomic' changes in the event of application failure? If so, then syncpoint coordination and possibly an XA coordinator are needed.
Is it important that disaster recovery results in message recovery? Then consider systems - in particular z/OS - with remote site dual logging, since distributed platforms depend on the operating system's 'mirrored' disks.
Performance Implications: Persistence
Log bandwidth is going to restrict throughput
Put the log files on the fastest disks you have
Persistent messages are the main things requiring recovery after an outage
Can significantly affect restart times
If your application (or operational procedures) can detect and deal with lost messages, then you do not need to use persistent messages.
Consider:
A bank may have sophisticated cross checking in its applications and in its procedures to ensure no transactions are lost or repeated.
An airline might assume (reasonably) that a passenger who does not get a response to a reservation request within a few seconds will check what happened and if necessary repeat or cancel the reservation.
In both cases the messages are important but the justification for persistence may be weak.
MQPUT Walkthrough
Kernel:
  Verify operation validity (also check for "if waiting getter")
  (Resolve cluster queue destination)
DAP (serialised):
  Reserve space for the message data
  If (persistent message)
    Write log records for the update
    (Wait for log records to reach the disk if outside syncpoint)
    Write the message to the queue file
  Else (non-persistent)
    If (space available in queue buffer)
      Copy the message data into the buffer
    Else
      Write the message to the queue file without logging
  Maintain queue statistics such as queue depth
Kernel:
  Generate responses and events, wake up getters / drive async consumers
MQPUT Walkthrough
The mechanism for reaching the kernel layer for MQPUT is the same as MQOPEN.
The Kernel verifies the operation for validity. Many aspects will already have been verified, but some can only be checked at this stage. For example, it has to check that puts have not been inhibited for the queue.
If the message is being put to a cluster queue, resolution of the target may be done here before the message is put to the cluster transmission queue.
The DAP allocates space for the new message using the space map. If there is space, the message will be allocated in one of the queue buffers, otherwise it will be allocated in the queue file.
The operation will normally result in at least one log record being written if the message is persistent. If the message is non-persistent but spilled to the queue file, we still do not write a log record.
If the space was allocated in one of the queue buffers, the message data is copied into the buffer. If the space was allocated in the queue file, the data will be written to the queue file via the file system buffer. If a log record is needed to record the update, it will be written before the message data is written to the queue file. If the message is put under syncpoint, neither write will be synchronous. A synchronous write to the log will be required when the transaction commits or rolls back.
The DAP maintains queue statistics, such as the number of uncommitted messages and the depth of the queue. It also keeps track of which queues are used as initiation queues to speed up checking of the rules for trigger message generation.
All of the updates which the DAP performs are done atomically. If there is a failure at any point, the partial operation will either be completed or removed completely. There's a lot of code to ensure that even complete failures of the agent process do not destroy message integrity - updates to control structures are done in a defined order, and marked as they occur so that another agent process can complete or back out the changes if necessary.
On return to the Kernel, the final responses for the application are generated, as are any event or trigger messages required.
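As an illustrative sketch (not part of the original notes), a persistent put under syncpoint defers the synchronous log write until the transaction ends; the handles and payload below are assumed to come from earlier calls:

    #include <string.h>
    #include <cmqc.h>

    /* Put a persistent message under syncpoint: no synchronous log write
       happens here, only when the transaction later commits or rolls back.  */
    void putUnderSyncpoint(MQHCONN hConn, MQHOBJ hObj)
    {
        MQMD  md  = {MQMD_DEFAULT};
        MQPMO pmo = {MQPMO_DEFAULT};
        char  msg[] = "order 42: 100 widgets";
        MQLONG compCode, reason;

        md.Persistence = MQPER_PERSISTENT;
        pmo.Options = MQPMO_SYNCPOINT | MQPMO_FAIL_IF_QUIESCING;

        MQPUT(hConn, hObj, &md, &pmo, (MQLONG)strlen(msg), msg,
              &compCode, &reason);
        /* ... more puts/gets in the same unit of work, then MQCMIT ...      */
    }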
Put to a waiting getter
MQPUT is most efficient if there is a getting application already waiting
Having multiple applications processing the queue increases the chance that one is waiting
No queuing required
Removes a lot of the processing of placing the message onto the queue
[Diagram: one MQPUT delivered directly to one of several waiting MQGETs.]
You should not expect to see "even" workload distribution between applications when they are all getting from the same queue.
MQGET Walkthrough
Kernel:
  Verify operation validity
  Check message expiry
  Wait for message if not available
DAP (serialised):
  Locate a message meeting the requested criteria, including
    current browse cursor position
    priority
    message id, correlation id, segment or group conditions
    properties
  Copy data into the message buffer
  If (persistent)
    Write log record
    (Wait for log record to reach the disk if outside syncpoint)
  Move the browse cursor if required
  Maintain queue statistics such as queue depth
Kernel:
  Generate responses and events
MQGET Walkthrough
The Kernel verifies the operation for validity.
If the application specified a browse or tried to get the message under its browse cursor, the scope of the message search is reduced.
The operation will result in a log record being written if the message is persistent.
As for MQPUT, all of the updates which the DAP performs are done atomically. If there is a failure at any point, the partial operation will either be completed or removed completely.
On return to the Kernel, the final responses for the application are generated, as are any event or report messages required.
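A small, hypothetical example of the application side (not from the original notes): get the next message under syncpoint, waiting up to five seconds; hConn and hObj are assumed to come from earlier MQCONN and MQOPEN calls.

    #include <cmqc.h>

    /* Get the next available message, waiting up to 5 seconds, under
       syncpoint. Returns the data length, or -1 on failure.                 */
    MQLONG getNextMessage(MQHCONN hConn, MQHOBJ hObj,
                          char *buffer, MQLONG bufLen)
    {
        MQMD   md  = {MQMD_DEFAULT};
        MQGMO  gmo = {MQGMO_DEFAULT};
        MQLONG dataLen, compCode, reason;

        gmo.Options      = MQGMO_WAIT | MQGMO_SYNCPOINT | MQGMO_FAIL_IF_QUIESCING;
        gmo.WaitInterval = 5000;               /* milliseconds               */

        MQGET(hConn, hObj, &md, &gmo, bufLen, buffer, &dataLen,
              &compCode, &reason);
        return (compCode == MQCC_FAILED) ? -1 : dataLen;
    }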
MQCMIT Walkthrough
Kernel:
  Verify operation validity
  Generate responses and events
  Wake up getters / drive async consumers
MQCMIT Walkthrough
The Kernel verifies the operation for validity.
Once the log records have been written to the disk, all changes under this transaction are made visible to the rest of the queue manager.
On return to the Kernel, the final responses for the application are generated, as are any event, report or trigger messages required.
Performance Implications: Syncpoint
Do you need it?
Yes, when a set of work needs to either all be performed, or all not performed
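As a sketch of what that means in practice (not from the original notes; the queue handles and buffer are assumed to have been set up elsewhere), a get and a put inside one unit of work are committed or backed out together:

    #include <cmqc.h>

    /* Move one message from an input queue to an output queue atomically.
       Both operations are under syncpoint; MQCMIT makes them visible
       together, MQBACK undoes them both.                                    */
    void moveOneMessage(MQHCONN hConn, MQHOBJ hIn, MQHOBJ hOut,
                        char *buffer, MQLONG bufLen)
    {
        MQMD   md  = {MQMD_DEFAULT};
        MQGMO  gmo = {MQGMO_DEFAULT};
        MQPMO  pmo = {MQPMO_DEFAULT};
        MQLONG dataLen, compCode, reason;

        gmo.Options      = MQGMO_WAIT | MQGMO_SYNCPOINT;
        gmo.WaitInterval = 5000;
        pmo.Options      = MQPMO_SYNCPOINT;

        MQGET(hConn, hIn, &md, &gmo, bufLen, buffer, &dataLen,
              &compCode, &reason);
        if (compCode == MQCC_FAILED)
            return;

        MQPUT(hConn, hOut, &md, &pmo, dataLen, buffer, &compCode, &reason);
        if (compCode == MQCC_FAILED)
            MQBACK(hConn, &compCode, &reason); /* undo the get as well       */
        else
            MQCMIT(hConn, &compCode, &reason); /* both become visible        */
    }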
Publish/Subscribe Implementation in V7
MQOPEN, MQPUT, MQGET very similar to point-to-point
Includes cluster resolution
Need to find closest admin topic node
Internal subscribers may forward publication to another queue manager
Managed destinations
Agent creates queue in MQSUB - trace shows internal MQOPEN (kqiOpenModel)
Publish/Subscribe Implementation in V7
There are many new capabilities inside the queue manager to handle publish/subscribe. But wherever possible, existing techniques have been reused. For example, subscription information is stored in a similar way to cluster repository data, as messages on queues. The number of messages does not correspond to the number of subscriptions, as many records can be consolidated into a single message. These messages are read at startup, and a memory-based view is built. That memory view is used for the lifetime of the queue manager, and the on-disk messages are only updated when new durable (persistent) subscriptions are made or consolidation is needed.
Because an application's subscription is directly connected to its hConn, we now know when the application dies and can automatically remove resources such as managed queues or non-durable subscriptions. Previously, the queue manager did not really understand non-durable subscriptions and these were emulated with mechanisms that sometimes required manual cleanup.
Match-space (topic tree + subscription list) is held in shared memory for all agents to use. This permits parallel access for multiple applications publishing on the same topic. A lock is held during the subscribe/unsubscribe processing, to ensure a consistent view of this match-space. While the lock is held, publication to that topic will be delayed.
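As an illustrative sketch (not in the original notes), an MQSUB call with MQSO_MANAGED asks the agent to create the managed destination described above; the topic string and function name are examples.

    #include <string.h>
    #include <cmqc.h>

    /* Create a non-durable, managed subscription: the agent builds the
       managed queue and returns a handle to get publications from.          */
    void subscribeManaged(MQHCONN hConn, MQHOBJ *phSub, MQHOBJ *phManagedQ)
    {
        MQSD   sd = {MQSD_DEFAULT};            /* subscription descriptor    */
        MQLONG compCode, reason;
        char   topic[] = "Price/Fruit/Apples";

        sd.Options = MQSO_CREATE | MQSO_NON_DURABLE | MQSO_MANAGED |
                     MQSO_FAIL_IF_QUIESCING;
        sd.ObjectString.VSPtr    = topic;
        sd.ObjectString.VSLength = (MQLONG)strlen(topic);

        *phManagedQ = MQHO_NONE;               /* ask MQSUB to create it     */
        MQSUB(hConn, &sd, phManagedQ, phSub, &compCode, &reason);
        /* publications are then retrieved with MQGET on *phManagedQ         */
    }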
Message Processing in V7
Persistent pubs switch to non-persistent-ish for non-durable subscriptions
Does not change the reliability level
Messages are not logged, but they keep the “persistent” flag
Improves performance
Message Processing in V7
A non-durable subscriber is permitted to miss messages if it abends, so there is no point in doing a full hardening of those messages. But subscribers do need to know if the message was originally a persistent message, in case they want to forward it unchanged.
Channels
How they work
Internal Resources
Channel synchronisation uses ScratchPads
The SYNCQ holds channel status across restarts
A small area of data which can be part of 2-phase commit processing
Channel sync also uses file AMQRSYNA.DAT as an index into the scratchpads
Messages in an in-doubt batch cannot be reallocated by clustering algorithm
[Diagram: messages flow from the transmission queue across the network to the application queues; at the end of the batch a confirm flow is exchanged while the sending channel is indoubt.]
Internal Resources
Here we can see the two ends of the channel transferring a batch of messages. Note how the MCAs write data to disk at the end of the batch. This is only done for recoverable batches. The data written contains the transaction id of the transaction used at the sending end to retrieve all the messages. Once the sender end has issued a 'confirm' flow to its partner it is 'indoubt' until it receives a response. In other words, the sender channel is not sure whether the messages have been delivered successfully or not. If there is a communications failure during this phase then the channel will end indoubt. When it reconnects to its partner it will notice from the data store that it was indoubt with its partner and so will ask the other channel whether the last batch of messages was delivered or not. Using the answer the partner sends, the channel can decide whether to commit or roll back the messages on the transmission queue.
This synchronisation data is viewable by issuing a DIS CHSTATUS(*) SAVED command. The values displayed should be the same at both ends of the channel.
Note that if the channel is restarted when it is indoubt it will automatically resolve the indoubt. However, it can only do this if it is talking to the same partner. If the channel attributes are changed, or a different queue manager takes over the IP address, or a different channel serving the same transmission queue is started, then the channel will end immediately with a message saying that it is still indoubt with a different queue manager. The user must start the channel directing it at the correct queue manager or resolve the indoubt manually by issuing the RESOLVE CHANNEL command. Note that in this case the user should use the output from DIS CHS(*) SAVED to ensure that the correct action, COMMIT or BACKOUT, is chosen.
Channel Protocol
PipelineLength=2 provides an additional thread that will start processing the next batch after 'End-of-Batch' is sent to the remote MCA
Channel Protocol
The channel operation conforms to a quite simple model:
Do until (batchsize/batchlimit reached) or (no more messages and batchint expired)
  Local MCA gets a message from the transmission queue
  A header is put on the data and it is sent using APPC, TCP etc.
End
Harden the message ids / indoubt flag
Send "End of batch" flag
Remote end commits
Remote end sends "OK" flag back
Local end updates the synchronisation record to the non-indoubt state and commits
If there is any failure in the communications link or the MCA processes, then the protocol allows for re-synchronisation to take place and messages to be appropriately recovered.
Probably the most misunderstood part of the message exchange protocol is Batchsize. Batchsize controls the frequency of commit flows used by the sending MCA. This, in turn, controls how often the communications line is turned around and - perhaps more importantly - how quickly messages at the receiving side are committed on the target application queues. The value for Batchsize that is negotiated at channel start-up is the maximum Batchsize only - if the transmission queue becomes empty then a batch of messages is automatically committed.
Each batch containing persistent messages uses the scratchpad. The larger the effective batch size, the smaller the resource cost per message on the channel. Batchint can increase the effective batch size and can reduce the cost per message in the server.
PipelineLength=2 enables overlap of putting messages onto TCP while waiting for acknowledgment of the previous batch. This enables overlap of sending messages while waiting for batch synchronisation at the remote system.
Performance Implications: Channel Batch Size
Batchsize parameter determines maximum batch size
Actual batchsize depends on
Arrival rate
Processing speed
Because the batch size is so greatly influenced by the message arrival rate on the transmission queue, it is generally recommended to set the Batchsize quite high (i.e. leave it at the default of 50) - unless there are contrary factors, which are:
Data visibility - due to (outstanding) commit processing.
Unreliable, slow or costly communication links, making frequent commit processing a necessity.
Large messages. Upon restart it may be necessary to resend the last batch.
Entirely non-persistent message batches do not use disk for hardening batch information (NPMSPEED(FAST)) but still cause a line turnaround.
With MQ V7.1 you can also use BatchLimit as an additional control on the amount of data transferred in each batch. This can be helpful when the size of messages on a transmission queue varies significantly.
Performance Implications: One or Multiple Channels
[Diagram: message flows between pairs of MCAs, either sharing one channel or using separate channels.]
However, where there are different classes of message being moved around the MQ network, it may be appropriate to have multiple channels to handle the different classes.
Messages deemed to be 'more important' may be processed by a separate channel. While this may not be the most efficient method of transfer, it may be the most appropriate. Note that a similar effect may be achieved by making the transmission queue a priority order queue and placing these messages at the head of the transmission queue.
Very large messages may hold up smaller messages with a corresponding deterioration in message throughput. In this instance, providing a priority transmission queue will not solve the problem, as a large message must be completely transferred before a high priority message is handled. In this case, a separate channel for large messages will enable other messages to be transferred faster.
If it is appropriate to route message traffic through different parts of the underlying network, multiple channels will enable those different network routes to be utilized.
Clients
Section on MQ Clients
MQ Client Architectures
Large systems have been built with clients
50,000 clients per server
Scalability considerations
Large number of processes on server
Trusted bindings (Channel programs)
Overheads of per-client ReplyToQ
Share queues, using CorrelId/MsgId to select correct response
Recovery processing
If the queue manager fails, all clients reconnect at the same time
MQ Client Architectures
MQ clients offer lightweight, low overhead, low cost and low administration access to MQ services. Clients reduce the requirements for machine resources on the client machine, but there are tradeoffs: resources on the server are required for the MCAs to handle the client connections - 1 per client connection (MQCONN).
Application architectures built around thin clients often feature large numbers of connections. MQ has been proven with large configurations of up to 50,000 clients concurrently attached to a single AIX server. However, there are some points to consider to achieve the best performance with thin clients:
Large configurations (i.e. many client attachments) result in a large number of MQ processes: each client connection requires a channel, and each channel requires a receiver and an agent. The number of processes can be reduced by using trusted bindings for the receiver, eliminating the agent processes.
Since each queue requires control structures in memory, having a ReplyToQ for each client will result in a large number of queues and high memory usage. You can reduce the number of queues, and therefore memory requirements, by sharing a ReplyToQ between some (or all) of the clients, and referencing reply messages using MsgId and/or CorrelId.
Each API call is transferred (without batching) to the server, where the call is executed and the results returned to the client. The MQMD has to be passed on the input and output flows. Similarly the MQGMO/MQPMO.
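A hedged sketch of the shared ReplyToQ pattern (not from the original notes; the queue handle and helper name are illustrative): each client retrieves only the reply whose CorrelId matches the MsgId of its own request.

    #include <string.h>
    #include <cmqc.h>

    /* Retrieve the reply that matches a specific request, from a ReplyToQ
       shared by many clients. Only the message whose CorrelId equals the
       MsgId of the earlier request is returned.                             */
    MQLONG getMyReply(MQHCONN hConn, MQHOBJ hReplyQ, MQBYTE24 requestMsgId,
                      char *buffer, MQLONG bufLen)
    {
        MQMD   md  = {MQMD_DEFAULT};
        MQGMO  gmo = {MQGMO_DEFAULT};
        MQLONG dataLen, compCode, reason;

        memcpy(md.CorrelId, requestMsgId, sizeof(MQBYTE24));

        gmo.Options      = MQGMO_WAIT | MQGMO_NO_SYNCPOINT;
        gmo.WaitInterval = 10000;              /* wait up to 10 seconds      */
        gmo.Version      = MQGMO_VERSION_2;
        gmo.MatchOptions = MQMO_MATCH_CORREL_ID;

        MQGET(hConn, hReplyQ, &md, &gmo, bufLen, buffer, &dataLen,
              &compCode, &reason);
        return (compCode == MQCC_FAILED) ? -1 : dataLen;
    }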
Asynchronous Put Response
[Diagram: the client issues MQCONN, MQOPEN and a sequence of MQPUTs to the server without waiting for an individual response to each put; the MQCMIT at the end collects the outcome.]
Once the application has completed its put sequence it will issue MQCMIT or MQDISC etc., which will flush out any MQPUT calls which have not yet completed.
Because this mechanism is designed to remove the network delay, it currently only has a benefit for client applications.
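As a rough illustration (not from the original notes), a client asks for asynchronous put responses with MQPMO_ASYNC_RESPONSE; any errors surface later, for example on the MQCMIT:

    #include <string.h>
    #include <cmqc.h>

    /* Fire-and-forget puts from a client: request an asynchronous put
       response so the client does not wait for a reply to every MQPUT.      */
    void asyncPut(MQHCONN hConn, MQHOBJ hObj, const char *payload)
    {
        MQMD  md  = {MQMD_DEFAULT};
        MQPMO pmo = {MQPMO_DEFAULT};
        MQLONG compCode, reason;

        pmo.Options = MQPMO_SYNCPOINT | MQPMO_ASYNC_RESPONSE;

        MQPUT(hConn, hObj, &md, &pmo, (MQLONG)strlen(payload),
              (PMQVOID)payload, &compCode, &reason);
        /* ... many more puts ...; MQCMIT flushes and reports any failures   */
    }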
Read-ahead of messages
[Diagram: the client issues MQCONN and repeated MQGETs, which are satisfied from a buffer of messages streamed ahead by the server.]
Read-ahead of messages
Read Ahead (also known as 'streaming') is a recognition of the fact that a large proportion of the cost of an MQGET from a client is the line turnaround of the network connection. When using Read Ahead, the MQ client code makes a request for more than one message from the server. The server will send as many non-persistent messages matching the criteria (such as MsgId) as it can, up to the limit set by the client. The largest speed benefit will be seen where there are a number of similar non-persistent messages to be delivered and where the network is slow.
Read Ahead is useful for applications which want to get large numbers of non-persistent messages outside of syncpoint, where they are not changing the selection criteria on a regular basis. For example, getting responses from a command server or a query such as a list of airline flights.
If an application requests read ahead but the messages are not suitable, for example they are all persistent, then only one message will be sent to the client at any one time. Read ahead is effectively turned off until a sequence of non-persistent messages is on the queue again.
The message buffer is purely an 'in memory' queue of messages. If the application ends or the machine crashes these messages will be lost.
Because this mechanism is designed to remove the network delay, it currently only has a benefit for client applications.
Logging and Recovery
WebSphere MQ Objects
Recoverable entities known by the LQM
Queue, Process, Queue Manager, Channel etc definitions
Scratch Pads
WebSphere MQ Objects
The term "MQ Object" has a special meaning inside the queue manager. It refers to entities which are
recoverably updated. As well as queue (message) data, system configuration information is stored in these
objects. Before V6, channels were not considered Objects in this sense.
N There is one object catalog file which lists all the objects (QMQMOBJCAT) and each object then has its own file
containing its attributes. The QFiles contain the message data AND the queue definitions.
"Scratchpad" objects are used by the channel programs for faster updates of synchronisation information. These
O scratchpads are not exposed by any API or command, so they cannot be used by user programs. These objects
replace the SYSTEM.CHANNEL.SYNCQ in the majority of cases. The SYNCQ is still used to hold the status of
channels (disabled, retry) across system restarts.
The channel programs still have a separate file (AMQRSYNA) containing what is now an index into the
T scratchpad objects. This file is not an MQ object, although it is recognised by the rcdmqimg command. It is
updated only when channels are created or removed; it does not take part in batch commit processing.
What's the point of logging?
A log record is written for each persistent update
The log record describes the update
Write-Ahead Logging
The log is always more up-to-date than the actual data
Non-persistent messages, even those that are spilled to disk, do not cause log records to be written.
The log record describes the update in enough detail for the update to be recreated.
The log records are written using a protocol called Write-Ahead Logging. The log record describing an operation is guaranteed to arrive on disk before the data being updated. The log is never less up-to-date than the actual data. The contents of the log records can be used to perform the updates on the real data.
Every now and again the log and data are brought into line. This point of consistency is called a checkpoint. At the end of a checkpoint, the queue files can be brought as up to date as the log was at the start of the checkpoint if the queue manager has to be recovered.
During normal running, checkpoints are taken every 30 minutes provided there are at least 100 log records to process, and are also driven when 10,000 log records have been written.
The log and data are reconciled during strmqm. This is called restart recovery. There are messages displayed as the queue manager goes through the phases of reconciliation.
DO, REDO and UNDO
[Diagram: DO and REDO each combine the old state with the log record to produce the new state.]
REDO
During recovery operations, resource managers may need to reapply changes which were originally made. The contents of the log record and the old copy of the resources affected by the operation can be used to recreate the updated state of the resources from the last checkpoint.
UNDO
After the REDO phase there may be certain operations which need to be undone, such as partial transactions. The contents of the log record and the updated copy of the resources affected by the operation can be used to recreate the state of the resources as it was before the operations were performed.
The DO-REDO-UNDO protocol is commonly used by resource and transaction managers. It relies on the correct information being logged. It also relies on the availability of programs which can perform the operations independently of the original application.
The important point is that during restart, the log must contain all the information necessary to allow the resource to be recovered without the intervention of any code other than the resource manager. The applications do not have to be restarted.
Phases of Restart Recovery
$ strmqm QMC
WebSphere MQ queue manager 'QMC' starting.
9 log records accessed on queue manager 'QMC' during log replay phase.
Log replay for queue manager 'QMC' complete.
Transaction manager state recovered for queue manager 'QMC'.
WebSphere MQ queue manager 'QMC' started.
("Backup" queue managers only perform the REDO (log replay) phase; a normal restart continues with the Indoubt and UNDO phases.)
Indoubt Phase
We have a list of transactions in-flight at the time that the queue manager ended. For each transaction, we scan backwards through the log from the last record the transaction wrote, following the links between records in the same transaction, building up a picture of what the transaction was actually doing at the time the queue manager stopped.
Undo Phase
Any transactions in the list which are not prepared are rolled back. This involves writing special undo log records called Compensation Log Records (CLRs). A CLR contains an after image only, which corresponds to the before image of the log record it is undoing. Once the CLR has been written, the operation which it describes will be ignored by the indoubt phase if we get into a situation where restart is interrupted and restarted a second time.
At the end of restart, we may have some prepared transactions. The indoubt phase will have reconstructed the list of operations making up the transaction so we can subsequently commit or roll back the transactions when called by the transaction manager.
Summary
Common code for multi-platform delivery