SIOS Protection Suite for Linux WebSphere MQ / MQSeries Recovery Kit v8.4.1
Administration Guide
Jun 2015
This document and the information herein is the property of SIOS Technology Corp. (previously known as
SteelEye® Technology, Inc.) and all unauthorized use and reproduction is prohibited. SIOS Technology
Corp. makes no warranties with respect to the contents of this document and reserves the right to revise this
publication and make changes to the products described herein without prior notification. It is the policy of
SIOS Technology Corp. to improve products as new technology, components and software become
available. SIOS Technology Corp., therefore, reserves the right to change specifications without prior notice.
LifeKeeper, SteelEye and SteelEye DataKeeper are registered trademarks of SIOS Technology Corp.
Other brand and product names used herein are for identification purposes only and may be trademarks of
their respective companies.
To maintain the quality of our publications, we welcome your comments on the accuracy, clarity,
organization, and value of this document.
Copyright © 2015
By SIOS Technology Corp.
San Mateo, CA U.S.A.
All rights reserved
Table of Contents
Chapter 1: Introduction
MQ Recovery Kit Technical Documentation
Document Contents
SPS Documentation
Reference Documents
Abbreviations
Chapter 2: Requirements
Hardware Requirements
Software Requirements
Configuration Notes
GUI
Command Line
Error Messages
Create
Extend
Remove
Resource Monitoring
Warning Messages
Document Contents
This guide contains the following topics:
• SIOS Protection Suite Documentation. Provides a list of SPS for Linux documentation and where to find it.
• Abbreviations. Contains a list of abbreviations that are used throughout this document along with their meaning.
• Requirements. Describes the hardware and software necessary to properly set up, install and operate the WebSphere MQ Recovery Kit. Refer to the SIOS Protection Suite Installation Guide for specific instructions on how to install or remove SPS for Linux software.
• WebSphere MQ Recovery Kit Overview. Provides a brief description of the WebSphere MQ Recovery Kit’s features and functionality as well as lists the versions of the WebSphere MQ software supported by this Recovery Kit.
• Configuring WebSphere MQ for Use with LifeKeeper. Provides a step-by-step guide of how to install and configure WebSphere MQ for use with LifeKeeper.
• Configuration Changes Post Resource Creation. Provides information on how WebSphere MQ configuration changes affect LifeKeeper WebSphere MQ resource hierarchies.
• LifeKeeper Configuration Tasks. Describes the tasks for creating and managing your WebSphere MQ resource hierarchies using the LifeKeeper GUI.
• WebSphere MQ Troubleshooting. Provides a list of informational and error messages with recommended solutions.
• Appendices. Provide sample configuration files for WebSphere MQ and a configuration sheet that can be used to plan your WebSphere MQ installation.
SPS Documentation
The following is a list of SPS related information available from SIOS Technology Corp.:
This documentation, along with documentation associated with other SPS Recovery Kits, is available online
at:
http://docs.us.sios.com/
Reference Documents
The following are documents associated with WebSphere MQ referenced throughout this guide:
http://www.ibm.com/software/integration/wmq/library/
Abbreviations
The following abbreviations are used throughout this document:
HA: Highly Available, High Availability.

QMDIR: WebSphere MQ queue manager directory. This directory holds the queue manager persistent queue data and is typically located in /var/mqm/qmgrs with the name of the queue manager as subdirectory name. The exact location of this directory is specified in the global mqs.ini configuration file.

QMLOGDIR: WebSphere MQ queue manager log directory. This directory holds the queue manager log data and is typically located in /var/mqm/log with the queue manager name as subdirectory. The exact location of this directory is specified in the queue manager configuration file (QMDIR/qm.ini).

MQUSER: The operating system user running all WebSphere MQ commands. This user is the owner of the QMDIR. The user must be a member of the MQGROUP administrative group mqm (see below).

MQGROUP: The operating system user group that the MQUSER must be part of. This group must be named mqm.

UID: Numeric user id of an operating system user.

GID: Numeric group id of an operating system user group.
Hardware Requirements
• Servers. The Recovery Kit requires two or more servers configured in accordance with the requirements described in the SIOS Protection Suite Installation Guide. See the Linux Configuration Table for supported Linux distributions.
• Data Storage. The WebSphere MQ Recovery Kit can be used in conjunction both with shared storage and with replicated storage provided by the DataKeeper product. It can also be used with network-attached storage (NAS).
Software Requirements
• SPS Software. You must install the same version of SPS software and any patches on each server.
• LifeKeeper WebSphere MQ Recovery Kit. Version 7.5.1 or later of the WebSphere MQ Recovery Kit is required for systems running WebSphere MQ v7.1 or later.
• LifeKeeper IP Recovery Kit. You must have the same version of the LifeKeeper IP Recovery Kit on each server.
• IP Network Interface. Each server requires at least one Ethernet TCP/IP-supported network interface. In order for IP switchover to work properly, user systems connected to the local network should conform to standard TCP/IP specifications.
Note: Even though each server requires only a single network interface, you should use multiple interfaces for a number of reasons: heterogeneous media requirements, throughput requirements, elimination of single points of failure, network segmentation and so forth.
• WebSphere MQ Software. IBM WebSphere MQ must be ordered separately from IBM. See the SPS Release Notes for supported WebSphere MQ versions. The WebSphere MQ software, along with the WebSphere MQ packages required by the Recovery Kit, must be installed on each server of the cluster prior to installing the WebSphere MQ Recovery Kit.
Note: Beginning with IBM WebSphere MQ Version 7.0.1 Fix Pack 6, a new feature was introduced allowing multiple versions of WebSphere MQ to be installed and run on the same server (e.g. MQ Versions 7.0.1 Fix Pack 6 and 7.1). This feature, known as multi-instance support, is not currently supported by the WebSphere MQ Recovery Kit. However, protecting multiple queue managers within a single IBM WebSphere MQ installation version, as well as using the DataPath parameter in mqs.ini (introduced as part of the multi-instance feature set), is supported in this version of the recovery kit.
• Optional C Compiler. The WebSphere MQ Recovery Kit contains a modified amqsget0.c sample program from the WebSphere MQ samples package. This program has been modified to work with a timeout of 0 seconds instead of the default 15 seconds. It is used to perform PUT/GET tests for the queue manager. This program is compiled during RPM installation, so a C compiler must be installed and located in the PATH of the “root” user.
• Syslog.pm. If you want to use syslog logging for WebSphere MQ resources, the Syslog.pm Perl module must be installed. This module is part of the standard Perl distribution and does not need to be installed separately.
2. Unextend each IBM WebSphere MQ resource hierarchy from all its standby nodes in the cluster
(nodes where the Queue Manager is not currently running). This step will leave each IBM WebSphere
MQ resource running on only its primary node (there will be no LifeKeeper protection from failures at
this point until completing Step 5).
3. Upgrade IBM WebSphere MQ software on each node in the cluster using the following steps:
a. If one or more LifeKeeper IBM WebSphere MQ resource hierarchies are in service on the
node, they must be taken out of service before the upgrade of the IBM WebSphere MQ
software.
b. Follow the IBM WebSphere MQ V7 upgrade instructions. At a minimum, this includes the following steps:
iii. Uninstall all IBM WebSphere MQ v6 base packages using the rpm "--nodeps" option to bypass the LifeKeeper MQ Recovery Kit package dependency.
4. Once the IBM WebSphere MQ V7 software has been installed on each node in the cluster, bring the
LifeKeeper IBM WebSphere MQ resource hierarchies in service (restore) and verify operation of each
Queue Manager.
The WebSphere MQ Recovery Kit enables LifeKeeper to protect WebSphere MQ queue managers including
the command server, the listener and the persistent queue manager data. Protection of the queue manager
listener can be optionally disabled on a per queue manager basis to support configurations that do not handle
client connects or to enable the administrator to shut down the listener without causing LifeKeeper recovery.
The WebSphere MQ Recovery Kit provides a mechanism to recover protected WebSphere MQ queue managers from a failed primary server onto a backup server. LifeKeeper can detect failures either at the server level (via a heartbeat) or at the resource level (by monitoring the WebSphere MQ daemons) so that control of the protected WebSphere MQ services is transferred to a backup server.
• Supports end-to-end application health checks via server connect and client connect
• Supports optional PUT/GET tests (with a definable test queue via GUI and command line)
For instructions on installing WebSphere MQ on Linux distributions supported by SPS, please see
“WebSphere MQ for Linux VX Quick Beginnings”, with X reflecting your version of WebSphere MQ (6.0, 7.0 or
7.1).
Configuration Requirements
The section Configuring WebSphere MQ for Use with LifeKeeper contains a process for protecting a queue
manager with LifeKeeper. In general, the following requirements must be met to successfully configure a
WebSphere MQ queue manager with LifeKeeper:
1. Configure Kernel Parameters. Please refer to the WebSphere MQ documentation for information on how Linux kernel parameters such as shared memory and other kernel resources should be configured.
2. MQUSER and MQGROUP. The MQGROUP and the MQUSER must exist on all servers of the cluster. Use
the operating system commands adduser and groupadd to create the MQUSER and the MQGROUP.
Additionally, the MQUSER profile must be updated to append the MQ install location to the PATH environment variable. It must include the location of the WebSphere MQ executables, which is typically /opt/mqm/bin, and must be placed before /usr/bin. This is necessary for LifeKeeper to be able to run WebSphere MQ commands while running as the MQUSER.
3. MQUSER UID and MQGROUP GID. Each WebSphere MQ queue manager must run as MQUSER and
the MQUSER UID and MQGROUP GID must be the same on all servers of the cluster (e.g., username:
mqm, UID 10000). The recovery kit tests if the MQUSER has the same UID on all servers and that the
MQUSER is part of the MQGROUP group.
4. Manual command server startup. If you want to have LifeKeeper start the command server, disable
the automatic command server startup using the following command on the primary server. Otherwise,
the startup of the command server will be performed automatically when the Queue Manager is started:
runmqsc QUEUE.MANAGER.NAME
ALTER QMGR SCMDSERV(MANUAL)
5. QMDIR and QMLOGDIR must be located on shared storage. The queue manager directory QMDIR and the queue manager log directory QMLOGDIR must be located on LifeKeeper-supported shared storage so that WebSphere MQ on the backup server can access the data. See “Supported File System Layouts” for further details.
6. QMDIR and QMLOGDIR permissions. The QMDIR and QMLOGDIR directories must be owned by MQUSER and the group MQGROUP. The ARK dynamically determines the MQUSER by looking at the owner of this directory. It also detects symbolic links and follows them to the final targets. Use the system command chown to change the owner of these directories if required.
7. Disable Automatic Startup of Queue Manager. If you are using an init script to start and stop
WebSphere MQ, disable it for the queue manager(s) protected by LifeKeeper. To disable the init
script, use the operating system provided functions like insserv on SuSE or chkconfig on Red
Hat.
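For example, assuming the init script is named "mqm" (the script name is an assumption; use whatever init script your installation provides), disabling it might look like this:
# Red Hat: remove the script from automatic startup
chkconfig mqm off
# SuSE: remove the script from the runlevels
insserv -r mqm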
8. Create Server Connection Channel. Beginning with MQ Version 7.1, changes in MQ's Channel
Authentication require that a channel other than the defaults SYSTEM.DEF.SVRCONN and
SYSTEM.AUTO.SVRCONN be used and that the MQADMIN user be enabled for the specified channel.
See the WebSphere MQ documentation for details on how to create channels.
10. Optional C Compiler. For the optional PUT/GET tests to take place, a C compiler must be installed
on the machine. If not, a warning is issued during the installation.
11. LifeKeeper Test Queue. The WebSphere MQ Recovery Kit optionally performs a PUT/GET test to verify queue manager operation. A dedicated test queue has to be created because the recovery kit retrieves all messages from this queue and discards them. This queue should have its default persistence setting set to “yes” (DEFPSIST=yes). When you protect a queue manager in LifeKeeper, a test queue named “LIFEKEEPER.TESTQUEUE” will be created automatically. You can also use the following commands to create the test queue manually before protecting the queue manager:
su - MQUSER
runmqsc QUEUE.MANAGER.NAME
define qlocal(LIFEKEEPER.TESTQUEUE) defpsist(yes) descr('LifeKeeper test queue')
end
Note: If you want to use a name for the LifeKeeper test queue other than the default
“LIFEKEEPER.TESTQUEUE”, the name of this test queue must be configured. See “Editing
Configuration Resource Properties” for details.
12. TCP Port for Listener Object. For WebSphere MQ v6 or later, alter the Listener object via runmqsc
to reflect the TCP port in use. Use the following command to change the TCP port of the default
Listener:
su - MQUSER
runmqsc QUEUE.MANAGER.NAME
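A sketch of the runmqsc command to alter the listener, assuming MQ's default listener object name SYSTEM.DEFAULT.LISTENER.TCP and the default port 1414 (adjust both to your installation):
ALTER LISTENER(SYSTEM.DEFAULT.LISTENER.TCP) TRPTYPE(TCP) PORT(1414)
end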
Note: The listener object must be altered even if using the default MQ listener TCP port 1414,
but it is not necessary to set a specific IP address (IPADDR). If you skip the IPADDR setting,
the listener will bind to all interfaces on the server. If you do set IPADDR, it is strongly
recommended that a virtual IP resource be created in LifeKeeper using the IPADDR defined
address. This ensures the IP address is available when the MQ listener is started.
13. TCP Port Number. Each WebSphere MQ listener must use a different port (default 1414) or bind to a
different virtual IP with no listener binding to all interfaces. This includes protected and unprotected
queue managers within the cluster.
14. Queue Manager configured in mqs.ini. In Active/Active configurations, each server holds its own copy of the global queue manager configuration file mqs.ini. In order to run the protected queue manager on all servers in the cluster, the queue manager must be configured in the mqs.ini configuration file of all servers in the cluster. Copy the appropriate QueueManager: stanza from the primary server and add it to the mqs.ini configuration files on all backup servers.
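For illustration, a QueueManager: stanza in mqs.ini typically looks like the following sketch (values are assumptions for a queue manager named TEST.QM; note that MQ replaces the “.” in the queue manager name with “!” in the directory name):
QueueManager:
   Name=TEST.QM
   Prefix=/var/mqm
   Directory=TEST!QM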
Before installing WebSphere MQ, you must plan your installation. This includes choosing an
MQUSER, MQUSER UID and MQGROUP GID. You must also decide which file system layout
you want to use (see “Supported File System Layouts”). To ease this process, SIOS
Technology Corp. provides a form that contains fields for all required information. See
“Appendix C – WebSphere MQ Configuration Sheet”. Fill out this form to be prepared for the
installation process.
WebSphere MQ may require special Linux kernel parameter settings like shared memory. See
the “WebSphere MQ Quick Beginnings Guide” for your release of WebSphere MQ for the
minimum requirements to run WebSphere MQ. To make kernel parameter changes persistent
across reboots, you can use the /etc/sysctl.conf configuration file. It may be necessary
to add the command sysctl -p to your startup scripts (boot.local). On SuSE, you can run
insserv boot.sysctl to enable the automatic setting of the parameters in the
sysctl.conf file.
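As a sketch, persisting a shared memory setting could look like this (the value shown is illustrative only; take the actual minimums from the IBM documentation for your MQ release):
# append the kernel parameter and reload the settings
echo "kernel.shmmax = 268435456" >> /etc/sysctl.conf
sysctl -p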
Use the operating system commands groupadd and adduser to create the MQUSER and
MQGROUP with the UID and GID from the “WebSphere MQ Configuration Sheet” you used in
Step 1.
If the MQUSER you have chosen is named mqm and has UID 1002 and the MQGROUP GID is
1000, you can run the following command on each server of the cluster (change the MQUSER,
UID and GID values to reflect your settings):
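A sketch of the implied commands, using the example values above (on many distributions adduser is a wrapper around useradd; adjust to your platform):
# create the MQGROUP with GID 1000, then the MQUSER with UID 1002
groupadd -g 1000 mqm
useradd -u 1002 -g mqm -m mqm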
Note: If you are running NIS or LDAP, create the user and group only once. You may need to
create home directories if you have no central home directory server.
Set the PATH environment variable to include the WebSphere MQ binary directory. This is
necessary for LifeKeeper to be able to run WebSphere MQ commands while running as the
MQUSER.
export PATH=/opt/mqm/bin:$PATH
MQSeries installation requires the installation of X11 libraries and Java for license activation
(mqlicense_lnx.sh). Install the required software packages.
Follow the steps described in the "WebSphere MQ for Linux Quick Beginnings Guide" for your
release of WebSphere MQ.
7. Create a Server Connection Channel via the MQ GUI. If using MQ Version 7.1 or later, the default
server connection channels (SYSTEM.DEF.SVRCONN and SYSTEM.AUTO.SVRCONN) can no longer
be used. See the WebSphere MQ documentation for details on how to create channels.
8. If using MQ Version 7.1 or later, enable the MQADMIN user for the specified channel within MQ.
See the SIOS Protection Suite Installation Guide for details on how to install SPS.
10. Prepare the shared storage and mount the shared storage.
See section “Supported File System Layouts” for the supported file system layouts. Depending on the file system layout and the storage type, this involves creating volume groups, logical volumes and file systems, or mounting NFS shares.
11. Set the owner and group of QMDIR and QMLOGDIR to MQUSER and MQGROUP.
The QMDIR and QMLOGDIR must be owned by MQUSER and MQGROUP. Use the following
commands to set the file system rights accordingly:
The values of MQUSER, QMDIR and QMLOGDIR depend on your file system layout and the user name of your MQUSER. Use the sheet from Step 1 to determine the correct values for these fields. Here is an example for MQUSER mqm and queue manager TEST.QM with the default QMDIR and QMLOGDIR destinations:
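A sketch of the implied chown commands (note that WebSphere MQ transforms certain characters in queue manager names when naming directories, e.g. TEST.QM becomes TEST!QM):
chown -R mqm:mqm '/var/mqm/qmgrs/TEST!QM'
chown -R mqm:mqm '/var/mqm/log/TEST!QM'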
Follow the steps described in the “WebSphere MQ System Administration Guide” and
"WebSphere MQ for Linux VX Quick Beginnings" documents for how to create a queue
manager, X being 6.0 or later, depending on your version of WebSphere MQ.
node1:/var/mqm/qmgrs # su - mqm
mqm@node1:~> crtmqm TEST.QM
WebSphere MQ queue manager created.
Creating or replacing default objects for TEST.QM.
Default objects statistics : 31 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
Note: If you want to protect an already existing queue manager, use the following steps to move
the queue manager data to the shared storage:
b. Copy the content of the queue manager directory and the queue manager log directory to the
shared storage created in Step 9.
c. Change the global configuration file (mqs.ini) and queue manager configuration file (qm.ini)
as required to reflect the new location of the QMDIR and the QMLOGDIR.
Follow the steps and guidelines described in the SPS for Linux IP Recovery Kit Administration
Guide and the SIOS Protection Suite Installation Guide.
Note: If your queue manager is only accessed by server connects, you do not have to configure
the LifeKeeper virtual IP.
12. For WebSphere MQ v6 or later: Modify the listener object to reflect your TCP port:
su - MQUSER
runmqsc QUEUE.MANAGER.NAME
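A sketch of the runmqsc command implied here, assuming MQ's default listener object name and the example port and virtual IP used throughout this guide:
ALTER LISTENER(SYSTEM.DEFAULT.LISTENER.TCP) TRPTYPE(TCP) PORT(1414) IPADDR(192.168.1.100)
end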
Note: Use the same IP address used in Step 13 to set the value for IPADDR. Do not leave IPADDR unset; otherwise WebSphere MQ will bind to all addresses.
On the primary server, start the queue manager, the command server if it is configured to be
started manually and the listener:
su - MQUSER
strmqm QUEUE.MANAGER.NAME
strmqcsv QUEUE.MANAGER.NAME
runmqlsr -m QUEUE.MANAGER.NAME -t TCP &
14. Verify that the queue manager has been started successfully:
su - MQUSER
echo 'display qlocal(*)' | runmqsc QUEUE.MANAGER.NAME
15. Add the queue manager stanza to the global queue manager configuration file mqs.ini on the backup
server.
16. Optional: Create the LifeKeeper test queue on the primary server.
runmqsc TEST.QM
5724-B41 (C) Copyright IBM Corp. 1994, 2002. ALL RIGHTS RESERVED.
Starting MQSC for queue manager TEST.QM.
define qlocal(LIFEKEEPER.TESTQUEUE) defpsist(yes) descr('LifeKeeper test queue')
1 : define qlocal(LIFEKEEPER.TESTQUEUE) defpsist(yes) descr('LifeKeeper test queue')
AMQ8006: WebSphere MQ queue created.
17. If you want to have LifeKeeper start the command server, disable the automatic command server startup using the following command on the primary server. Otherwise, the startup of the command server will be performed automatically when the Queue Manager is started:
su - MQUSER
runmqsc TEST.QM
ALTER QMGR SCMDSERV(MANUAL)
This involves deletion of the queue manager hierarchy and creation of the queue manager
hierarchy. See sections “Deleting a WebSphere MQ Hierarchy” and “Creating a WebSphere MQ
Resource Hierarchy” for details.
2. Create the new file system hierarchies manually and add the new file system hierarchies to the
WebSphere MQ hierarchy. Remove the old file system hierarchies from the WebSphere MQ hierarchy
and remove the old file system hierarchies. See the SIOS Protection Suite Installation Guide for details
on how to create and remove file system hierarchies.
Alter the listener object in runmqsc, then stop and start the listener:
su - MQUSER
runmqsc QUEUE.MANAGER.NAME
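A sketch of the runmqsc commands implied here, assuming MQ's default listener object name (replace the port with your new value):
ALTER LISTENER(SYSTEM.DEFAULT.LISTENER.TCP) TRPTYPE(TCP) PORT(1415)
STOP LISTENER(SYSTEM.DEFAULT.LISTENER.TCP)
START LISTENER(SYSTEM.DEFAULT.LISTENER.TCP)
end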
5. If needed, modify your listener object in runmqsc and restart the listener:
su - MQUSER
runmqsc QUEUE.MANAGER.NAME
As an alternative, you can use the LifeKeeper lk_chg_value facility to change the IP. See
the lk_chg_value(8) man page for details.
Configuration Notes
• The clients connect to the WebSphere MQ servers using the LifeKeeper-protected IP 192.168.1.100, which is designated to float between the servers in the cluster.
• The listener object of each queue manager has been modified to use a unique port number.
Configuration Notes
• The clients connect to the WebSphere MQ servers using the LifeKeeper-protected IP 192.168.1.100, which is designated to float between the servers in the cluster.
• The active server mounts the directory /var/mqm from the NAS server with IP 10.0.0.100 using a dedicated network interface.
• The listener object of each queue manager has been modified to use a unique port number.
Configuration Notes
• The clients connect to the queue manager QMGR1 using the LifeKeeper floating IP 192.168.1.100.
• The clients connect to the queue manager QMGR2 using the LifeKeeper floating IP 192.168.1.101.
• The listener object of each queue manager has been modified to use a unique port number.
• QMGR1 data is located on a volume group on the shared storage with two logical volumes configured. Each logical volume contains a file system that is mounted on QMDIR or QMLOGDIR.
• QMGR2 data is located on a secondary volume group on the shared storage with two logical volumes configured. Each logical volume contains a file system that is mounted on QMDIR or QMLOGDIR.
Configuration Notes
• The clients connect to the queue manager QMGR1 using the LifeKeeper floating IP 192.168.1.100.
• The clients connect to the queue manager QMGR2 using the LifeKeeper floating IP 192.168.1.101.
• Each server has a dedicated network interface to access the NAS server.
• The listener object of each queue manager has been modified to use a unique port number.
• QMGR1 data is located on two NFS exports on the NAS server. The exports are mounted on QMDIR or QMLOGDIR. The NAS server IP is 10.0.0.100.
• QMGR2 data is located on two NFS exports on the NAS server. The exports are mounted on QMDIR or QMLOGDIR. The NAS server IP is 10.0.0.100.
Overview
The following tasks are described in this guide, as they are unique to a WebSphere MQ resource instance and
different for each Recovery Kit.
• Extend a Resource Hierarchy - Extends a WebSphere MQ resource hierarchy from the primary server to a backup server.
The following tasks are described in the Administration section within the SPS for Linux Technical
Documentation because they are common tasks with steps that are identical across all Recovery Kits.
• Delete a Resource Dependency. Deletes a resource dependency and propagates the dependency changes to all applicable servers in the cluster.
• View/Edit Properties. View or edit the properties of a resource hierarchy on a specific server.
Note: Throughout the rest of this section, configuration tasks are performed using the Edit menu. You can also perform most of these tasks using the right-click method, which allows you to avoid entering information that is required when using the Edit menu.
1. From the LifeKeeper GUI menu, select Edit, then Server. From here, select Create Resource Hierarchy.
The Create Resource Wizard dialog box will appear with a drop-down list box displaying all
recognized Recovery Kits installed within the cluster.
3. You will be prompted to enter the following information. When the Back button is active in any of the
dialog boxes, you can go back to the previous dialog box. This is helpful should you encounter an error
requiring you to correct previously entered information. You may click Cancel at any time to cancel the
entire creation process.
Field: Switchback Type
Tips: Choose either Intelligent or Automatic. This dictates how the WebSphere MQ instance will be switched back to this server when the server comes back up after a failover. The switchback type can be changed later from the General tab of the Resource Properties dialog box.
Note: The switchback strategy should match that of the IP or File System resource to be used by the WebSphere MQ resource. If they do not match, the WebSphere MQ resource creation will attempt to reset them to match the setting selected for the WebSphere MQ resource.

Field: Server
Tips: Select the Server on which you want to create the hierarchy.

Field: Queue Manager Name
Tips: Select the WebSphere MQ queue manager you want to protect. The queue manager must be created prior to creating the resource hierarchy. Queue managers already under LifeKeeper protection are excluded from this list. The queue managers are taken from the global mqs.ini configuration file.

Field: Manage Listener
Tips: Select “YES” to protect and manage the WebSphere MQ queue manager listener. Select “NO” if LifeKeeper should not manage the WebSphere MQ listener.
Note: You can change this setting later. See “Editing Configuration Resource Properties” for details.

Field: Server Connection Channel
Tips: Select the server connection channel to use for connection tests. By default, the channel SYSTEM.DEF.SVRCONN will be used; however, beginning with MQ Version 7.1, changes in MQ's Channel Authentication require that a channel other than the default be used and that the MQADMIN user be enabled for the specified channel.
Note: Make sure the Server Connection Channel has been created PRIOR to creating your resource. For more information, see Configuring WebSphere MQ for Use with LifeKeeper.

Note: The virtual IP must be ISP (active) on the primary node to appear in the selection list.

Field: IBM WebSphere MQ Resource Tag
Tips: Either select the default root tag offered by LifeKeeper, or enter a unique name for the resource instance on this server. The default is the queue manager name. Letters, numbers and the following special characters may be used: - _ . /
4. Click Create. The Create Resource Wizard will then create your WebSphere MQ resource hierarchy.
LifeKeeper will validate the data entered. If LifeKeeper detects a problem, an error message will appear
in the information box.
5. An information box will appear indicating that you have successfully created a WebSphere MQ
resource hierarchy and that hierarchy must be extended to another server in your cluster in order to
achieve failover protection. Click Next.
6. Click Continue. LifeKeeper will then launch the Pre-Extend Wizard. Refer to Step 2 under Extending
a WebSphere MQ Hierarchy for details on how to extend your resource hierarchy to another server.
1. On the Edit menu, select Resource, then Extend Resource Hierarchy. The Pre-Extend Wizard appears. If you are unfamiliar with the Extend operation, click Next. If you are familiar with the LifeKeeper Extend Resource Hierarchy defaults and want to bypass the prompts for input/confirmation, click Accept Defaults.
2. The Pre-Extend Wizard will prompt you to enter the following information.
Note: The first two fields appear only if you initiated the Extend from the Edit menu.
Field: Template Server
Tips: Enter the server where your WebSphere MQ resource is currently in service.

Field: Tag to Extend
Tips: Select the WebSphere MQ resource you wish to extend.

Field: Target Server
Tips: Enter or select the server you are extending to.

Field: Switchback Type
Tips: Select either Intelligent or Automatic. The switchback type can be changed later, if desired, from the General tab of the Resource Properties dialog box.
Note: Remember that the switchback strategy must match that of the dependent resources to be used by the WebSphere MQ resource.

Field: Template Priority
Tips: Select or enter a priority for the template hierarchy. Any unused priority value from 1 to 999 is valid, where a lower number means a higher priority (the number 1 indicates the highest priority). The extend process will reject any priority for this hierarchy that is already in use by another system. The default value is recommended.
Note: This selection will appear only for the initial extend of the hierarchy.

Field: Target Priority
Tips: Either select or enter the priority of the hierarchy for the target server.

Field: Queue Manager Name
Tips: This informational field shows the queue manager name you are about to extend. You cannot change this value.

Field: Root Tag
Tips: LifeKeeper will provide a default tag name for the new WebSphere MQ resource instance on the target server. The default tag name is the same as the tag name for this resource on the template server. If you enter a new name, be sure it is unique on the target server. Letters, numbers and the following special characters may be used: - _ . /
Note: All configurable queue manager parameters like listener management, the name of the
LifeKeeper test queue and the shutdown timeout values are taken from the template server.
3. After receiving the message that the pre-extend checks were successful, click Next.
4. Depending upon the hierarchy being extended, LifeKeeper will display a series of information boxes
showing the Resource Tags to be extended which cannot be edited. Click Extend.
5. After receiving the message "Hierarchy extend operations completed", click Next Server to extend
the hierarchy to another server or click Finish if there are no other extend operations to perform.
2. Select the Target Server where you want to unextend the WebSphere MQ resource. It cannot be the
server where the WebSphere MQ resource is currently in service. (This dialog box will not appear if you
selected the Unextend task by right-clicking on a resource instance in the right pane.) Click Next.
3. Select the WebSphere MQ hierarchy to unextend and click Next. (This dialog will not appear if you
selected the Unextend task by right-clicking on a resource instance in either pane.)
4. An information box appears confirming the target server and the WebSphere MQ resource hierarchy
you have chosen to unextend. Click Unextend.
5. Another information box appears confirming that the WebSphere MQ resource was unextended suc-
cessfully. Click Done to exit the Unextend Resource Hierarchy menu selection.
• Dependencies: Before removing a resource hierarchy, you may wish to remove the dependencies. Dependent file systems will be removed. Dependent non-file system resources like IP or Generic Application will not be removed as long as the delete is done via the LifeKeeper GUI or the WebSphere MQ delete script. To prevent LifeKeeper from deleting the dependent file systems of the WebSphere MQ queue manager, manually remove the dependencies prior to deleting the WebSphere MQ hierarchy.
• Protected Services: If the WebSphere MQ resource hierarchy is taken out of service before being deleted, the WebSphere MQ daemons for this queue manager will be stopped. If a hierarchy is deleted while it is in service, the WebSphere MQ daemons will continue running and offering services (without LifeKeeper protection) after the hierarchy is deleted.
To delete a resource hierarchy from all the servers in your LifeKeeper environment, complete the following
steps:
2. Select the Target Server where you will be deleting your WebSphere MQ resource hierarchy and click
Next. (This dialog will not appear if you selected the Delete Resource task by right-clicking on a
resource instance in either pane.)
3. Select the Hierarchy to Delete. (This dialog will not appear if you selected the Delete Resource task
by right-clicking on a resource instance in the left or right pane.) Click Next.
4. An information box appears confirming your selection of the target server and the hierarchy you have
selected to delete. Click Delete.
5. Another information box appears confirming that the WebSphere MQ resource was deleted successfully.
On the Edit menu, select Resource, then In Service. For example, an In Service request executed on a
backup server causes the application hierarchy to be taken out of service on the primary server and placed in
service on the backup server. At this point, the original backup server is now the primary server and original
primary server has now become the backup server.
If you execute the Out of Service request, the application is taken out of service without bringing it in service
on the other server.
1. Create a temporary test queue on the primary server with the default persistence of “yes”:
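A sketch of the runmqsc commands for this step (the queue name TEST matches the delete command at the end of this walkthrough; the descriptive text is an assumption):
su - mqm
runmqsc TEST.QM
define qlocal(TEST) defpsist(yes) descr('temporary test queue')
end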
2 : end
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
2. Put a message into the test queue created on the primary node:
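A sketch using the amqsput sample program (the sample location /opt/mqm/samp/bin is assumed, matching the client test later in this guide); type the message text and finish with an empty line:
su - mqm
/opt/mqm/samp/bin/amqsput TEST TEST.QM
HELLO WORLD on NODE1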
3. Browse the test queue to see if the message has been stored:
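A sketch using the amqsbcg browse sample (location assumed as above):
su - mqm
/opt/mqm/samp/bin/amqsbcg TEST TEST.QM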
You should see a message with the content “HELLO WORLD on NODE1” and some additional
output. Look for the following line and verify that the persistency is 1:
[...]
Priority : 0 Persistence : 1
[...]
5. On the standby server where the queue manager is now active, repeat Step 3. The message should be
accessible on the standby server. If not, check your storage configuration.
6. On the standby server where the queue manager is now active, get the message from the test queue:
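A sketch using the amqsget sample (location assumed as above):
su - mqm
/opt/mqm/samp/bin/amqsget TEST TEST.QM
Afterwards, delete the temporary test queue via runmqsc; the transcript below shows the delete: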
delete qlocal(TEST)
1 : delete qlocal(TEST)
AMQ8007: WebSphere MQ queue deleted.
end
2 : end
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
1. On the primary server, use the amqsbcgc command to connect to the queue manager:
export MQSERVER='SYSTEM.DEF.SVRCONN/TCP/192.168.1.90(1414)'
Note: Replace the IP 192.168.1.90 with the LifeKeeper-protected virtual IP of the queue manager. If your queue manager uses a port other than 1414, replace the port number 1414 with the one being used. If the server connection channel being used is not the default SYSTEM.DEF.SVRCONN channel, replace the server connection channel SYSTEM.DEF.SVRCONN with the one being used.
mqm@node1:/opt/mqm/samp/bin> ./amqsbcgc LIFEKEEPER.TESTQUEUE TEST.QM
MQOPEN - 'LIFEKEEPER.TESTQUEUE'
No more messages
MQCLOSE
If you get a message like the following, then the test queue LIFEKEEPER.TESTQUEUE is not
configured. Create the test queue as described in section “Configuring WebSphere MQ for Use
with LifeKeeper” and repeat the test.
MQOPEN - 'LIFEKEEPER.TESTQUEUE'
MQOPEN failed with CompCode:2, Reason:2085
3. Repeat Step 1 on the same server as before, which is now the standby server after the switchover.
2. Increase the logging level of the queue manager as described in “Changing the Log Level” to “FINE”.
3. Open the log dialog on the machine where the queue manager is active and wait for the next check to
happen (max. two minutes).
4. Analyze the log and verify that all checks are performed and none of the tests is skipped. The
PUT/GET could be skipped for the following reasons:
a. No LifeKeeper test queue is configured (in this case, configure the test queue as described in
“Changing the LifeKeeper Test Queue Name”).
b. LifeKeeper test queue does not exist (in this case, create the test queue as described in “Con-
figuring WebSphere MQ for Use with LifeKeeper”).
c. The modified amqsget(c) executables are not available (in this case, install a C compiler and rerun the script /opt/LifeKeeper/lkadm/subsys/appsuite/mqseries/bin/compilesamples).
Resource properties will be displayed in the properties panel if it is enabled. You can also right-click on the
icon for the global resource for which you want to view the properties. When the Resource Context Menu
appears, click Properties. When the dialog comes up, select the server for which you want to view that
resource from the Server list.
To edit configuration details via the WebSphere MQ Configuration Properties page from the LifeKeeper GUI
Properties Panel, you must first ensure the GUI Properties Panel is enabled. To enable the GUI Properties
Panel, select View, then Properties Panel (must have a check mark to indicate it is enabled). Once enabled,
left-click on the WebSphere MQ resource to display its configuration details in the LifeKeeper GUI Properties
Panel.
Below is an example of the properties page that will appear in the LifeKeeper GUI Properties Panel for a
WebSphere MQ resource.
The properties page contains four tabs. The first tab, labeled IBM WebSphere MQ Recovery Kit
Configuration, contains configuration information that is specific to WebSphere MQ resources and allows
modification via the resource specific icons. The remaining three tabs are available for all LifeKeeper resource
types and their content is described in the topic Resource Properties Dialog in the SPS for Linux Technical
Documentation.
The following table describes the WebSphere MQ resource specific icons and the configuration component that can be modified by clicking on each icon.

Logging Level Configuration: Allows you to modify the log level that the IBM WebSphere MQ Recovery Kit will use for the queue manager being protected.

Shutdown Timeout Configuration: Allows you to modify the timeout in seconds for the immediate shutdown and preemptive shutdown timers for the IBM WebSphere MQ queue manager being protected.

Server Connection Channel Configuration: Allows you to modify the server connection channel that is used for client connection and the PUT/GET testing for the IBM WebSphere MQ queue manager being protected.

Command Server Protection Configuration: Allows you to specify the protection/recovery level for the command server component of the IBM WebSphere MQ queue manager being protected.
Listener Management: Specifies whether you want LifeKeeper to protect the listener for the queue manager or not. If listener management is disabled (value of NO), LifeKeeper will not monitor the listener and you can stop the listener without causing LifeKeeper recovery actions. If listener management is enabled (value of YES), LifeKeeper will monitor the listener and restart the listener if it is not running. If the recovery fails, a failover of the WebSphere MQ hierarchy to the backup server is initiated.

LifeKeeper Test Queue: LifeKeeper performs a PUT/GET test to monitor queue manager operations. The WebSphere MQ Recovery Kit uses a dedicated test queue to put messages in and retrieve messages again. In case a failure is detected, no recovery or failover is performed. Instead, the Recovery Kit sends an event that you can register to receive. The events are called putgetfail and putgetcfail. You can add a notification script to the directories /opt/LifeKeeper/events/mqseries/putgetfail and /opt/LifeKeeper/events/mqseries/putgetcfail to react to those events.
Note 1: If the LifeKeeper test queue is not configured in the queue manager, the PUT/GET test is skipped. No recovery or failover takes place.
Note 2: If the listener is protected, a second client connect check will be done. If this check fails, a recovery or failover of the queue manager is attempted.
Logging Level: You can set the logging level of the WebSphere MQ Recovery Kit to one of four presets:
• ERROR: In this log level, only errors are logged. No informational messages are logged.
• INFORMATIONAL (default): In this log level, LifeKeeper informational messages about start, stop and recovery of resources are logged.
• DEBUG: In this log level, the informational LifeKeeper messages and the command outputs from all WebSphere MQ commands in the restore, remove and recovery scripts are logged.
• FINE: It is recommended to set this level only for debugging purposes. As quickCheck actions are also logged, this fills up the log files each time a quickCheck for the WebSphere MQ queue manager runs.
Note: Independent of the logging level setting, WebSphere MQ errors during start, stop, recovery or during the check routine are always logged with the complete command output of the last command run.
Shutdown Timeout: The WebSphere MQ Recovery Kit stops the queue manager in three steps:
1. immediate stop
2. preemptive stop
3. termination of any remaining queue manager processes
The timeout values specified determine the time the Recovery Kit waits in Steps 1 and 2 for successful completion. If a timeout is reached, the next step in the shutdown process is issued. The default for the immediate and preemptive shutdown timeouts is 20 seconds.
Server Connection Channel: The WebSphere MQ Recovery Kit allows the specification of the server connection channel. By default, the kit will use the channel SYSTEM.DEF.SVRCONN, but an alternate channel can be specified during resource creation or at any time after resource creation.
Command Server: The WebSphere MQ Recovery Kit allows two levels of protection and recovery for the command server component of the protected queue manager. The levels are Full and Minimal.
With Full protection, the command server will be started, stopped, monitored and recovered, or failed over if recovery is unsuccessful. The recovery steps with Full protection are:
• Attempt a restart of the command server process.
• If that fails, attempt a full restart of the queue manager including the command server process.
• If both attempts are unsuccessful at restarting the command server, then initiate a failover to the standby node.
With Minimal protection, the command server will only be started during restore or stopped during remove. No monitoring or recovery of the command server will be performed.
NOTE: Starting the command server will only be performed by the Recovery Kit during restore if the queue manager SCMDSERV parameter is set for manual startup. During a recovery, a restart of a failed command server will always be attempted regardless of the SCMDSERV setting unless the Command Server Protection Level is set to Minimal.
As previously noted, these WebSphere MQ resource configuration components can be modified using the
resource specific icons in the properties panel or via the Resource Context Menu.
The parameters above can be set for each queue manager separately either via the LifeKeeper GUI or via a
command line utility.
To set the parameters via the command line, use the script:
$LKROOT/lkadm/subsys/appsuite/mqseries/bin/mq_modqmgrparam
GUI
First navigate to the WebSphere MQ Resource Properties Panel or the Resource Context Menu described above. The resource must be in service to modify the Listener Protection value. Then click the Listener Protection Configuration icon or menu item. The following dialog will appear:
Now select YES if you want LifeKeeper to start, stop and monitor the WebSphere MQ listener. Select NO if
LifeKeeper should not start, stop and monitor the WebSphere MQ listener. Click Next. You will be asked if
you want to enable or disable listener protection; click Continue. If you have chosen to enable listener
management, the LifeKeeper GUI checks if the listener is already running. If it is not already running, it will try
to start the listener. If the listener start was successful, the LifeKeeper GUI will enable listener management
on each server in the cluster. If the listener is not running and could not be started, the LifeKeeper GUI will not
enable listener management on the servers in the cluster.
Command Line
To set the LifeKeeper listener management via command line, use the following command:
/opt/LifeKeeper/lkadm/subsys/appsuite/mqseries/bin/mq_modqmgrparam -c -s -i TEST.QM -p LISTENERPROTECTION -v YES
This will set (-s) the LifeKeeper listener management (-p) on each node of the cluster (-c) to YES (-v) (enable
listener management) for queue manager TEST.QM (-i).
Note: You can either use the queue manager name (-i) or the LifeKeeper TAG (-t) name.
GUI
First navigate to the WebSphere MQ Resource Properties Panel or the Resource Context Menu described above. Then click the PUT/GET TESTQUEUE Configuration icon or menu item. The following dialog will appear:
Now enter the name of the LifeKeeper test queue and click Next. You will be asked if you want to set the new
LifeKeeper test queue; click Continue. Next, the LifeKeeper GUI will set the LifeKeeper test queue on each
server in the cluster. If you set the test queue to an empty value, no PUT/GET tests are performed.
Command Line
To set the LifeKeeper test queue via command line, use the following command:
/opt/LifeKeeper/lkadm/subsys/appsuite/mqseries/bin/mq_modqmgrparam -c -s -i TEST.QM -p TESTQUEUE -v "LIFEKEEPER.TESTQUEUE"
This will set (-s) the LifeKeeper test queue (-p) on each node of the cluster (-c) to LIFEKEEPER.TESTQUEUE
(-v) for queue manager TEST.QM (-i).
Note: You can either use the queue manager name (-i) or the LifeKeeper TAG (-t) name.
GUI
First navigate to the WebSphere MQ Resource Properties Panel or the Resource Context Menu described above. Then click the Logging Level Configuration icon or menu item. The following dialog will appear:
Now select the Logging Level and click Next. You will be asked if you want to set the new LifeKeeper
logging level; click Continue. Next, the LifeKeeper GUI will set the LifeKeeper logging level for the selected
queue manager on each server in the cluster.
Command Line
To set the LifeKeeper logging level via command line, use the following command:
/opt/LifeKeeper/lkadm/subsys/appsuite/mqseries/bin/mq_modqmgrparam -c -s -i TEST.QM -p DEBUG -v DEBUG
This will set (-s) the LifeKeeper logging level (-p) on each node of the cluster (-c) to DEBUG (-v) for queue
manager TEST.QM (-i).
Note: You can either use the queue manager name (-i) or the LifeKeeper TAG (-t) name.
GUI
First, navigate to the WebSphere MQ resource properties panel or the resource context menu described above. Then click the Shutdown Timeout Configuration icon or menu item. The following dialog will appear:
Now enter the immediate shutdown timeout value in seconds and click Next. If you want to disable the immediate shutdown timeout, enter 0. Now the following dialog will appear:
Now enter the preemptive shutdown timeout value in seconds and click Next. If you want to disable the preemptive shutdown timeout, enter 0. You will be asked if you want to set the new LifeKeeper timeout parameters; click Continue. Next, the LifeKeeper GUI will set the LifeKeeper immediate and preemptive timeout values on each server in the cluster.
Command Line
To set the preemptive shutdown timeout values via command line, use the following command:
/opt/LifeKeeper/lkadm/subsys/appsuite/mqseries/bin/mq_modqmgrparam -c -s -i TEST.QM -p PREEMPTIVE_TIMEOUT -v 20
This will set (-s) the LifeKeeper preemptive shutdown timeout (-p) on each node of the cluster (-c) to 20
seconds (-v) for queue manager TEST.QM (-i).
To set the immediate shutdown timeout values via command line, use the following command:
/opt/LifeKeeper/lkadm/subsys/appsuite/mqseries/bin/mq_modqmgrparam -c -s -i TEST.QM -p IMMEDIATE_TIMEOUT -v 20
This will set (-s) the LifeKeeper immediate shutdown timeout (-p) on each node of the cluster (-c) to 20
seconds (-v) for queue manager TEST.QM (-i).
Note: You can either use the queue manager name (-i) or the LifeKeeper TAG (-t) name.
GUI
First navigate to the WebSphere MQ resource properties panel or the resource context menu described above. The resource must be in service to modify the Server Connection Channel value. Then click the Server Connection Channel Configuration icon or menu item. The following dialog will appear:
Now select the Server Connection Channel to use and click Next. You will be asked if you want to change to the new Server Connection Channel; click Continue. Next, the LifeKeeper GUI will set the Server Connection Channel for the selected queue manager on each server in the cluster.
Command Line
To set the Server Connection Channel via command line, use the following command:
/opt/LifeKeeper/lkadm/subsys/appsuite/mqseries/bin/mq_modqmgrparam -c -s -i TEST.QM -p CHANNEL -v LK.TEST.SVRCONN
This will set (-s) the Server Connection Channel (-p) on each node of the cluster (-c) to LK.TEST.SVRCONN
(-v) for queue manager TEST.QM (-i).
Note: You can either use the queue manager name (-i) or the LifeKeeper TAG (-t) name.
GUI
First navigate to the WebSphere MQ Resource Properties Panel or the Resource Context Menu described above. Then click the Command Server Protection Configuration icon or menu item. The following dialog will appear:
Select Full Control of the command server component of the WebSphere MQ queue manager to have
LifeKeeper start, stop, monitor and attempt to recover and to then fail over if the recovery attempt is
unsuccessful.
Select Minimal Control of the command server component of the WebSphere MQ queue manager to have
LifeKeeper only start and stop but not monitor or attempt any recovery.
See the table above for more details. Once the protection control is selected, click Next. You will be asked if you want to change the setting of the command server protection from its current setting to the new setting; click Continue to make the change on all nodes in the cluster.
Command Line
To set the LifeKeeper Command Server Protection Configuration via the command line, use the
following command:
/opt/LifeKeeper/lkadm/subsys/appsuite/mqseries/bin/mq_modqmgrparam -c -s -i TEST.QM -p CMDSERVERPROTECTION -v LEVEL
This will set (-s) the LifeKeeper Command Server Protection Configuration (-p) on each node in the cluster (-c)
to LEVEL (-v) for queue manager TEST.QM (-i).
Note: You can use either the queue manager name (-i) or the LifeKeeper TAG (-t) name.
To change the parameters, add the appropriate variable from the table above to /etc/default/LifeKeeper. The line should have the following syntax:
[...]
MQS_CHECK_TIMEOUT_ACTION=sendevent
[...]
To disable a custom setting and fall back to the default value, just remove the line from
/etc/default/LifeKeeper or comment out the corresponding line.
If your application gets a return code indicating that a Message Queue Interface (MQI) call has failed, refer to
the WebSphere MQ Application Programming Reference Manual for a description of that return code.
Error Messages
This section provides a list of messages that you may encounter with the use of the SPS MQ Recovery Kit.
Where appropriate, it provides an additional explanation of the cause of an error and necessary action to
resolve the error condition.
Because the MQ Recovery Kit relies on other SPS components to drive the creation and extension of
hierarchies, messages from these other components are also possible. In these cases, please refer to the
Message Catalog (located on our Technical Documentation site under “Search for an Error Code”) which
provides a listing of all error codes, including operational, administrative and GUI, that may be encountered
while using SIOS Protection Suite for Linux and, where appropriate, provides additional explanation of the
cause of the error code and necessary action to resolve the issue. This full listing may be searched for any
error code received, or you may go directly to one of the individual Message Catalogs for the appropriate SPS
component.
Error Number: 119001
Error Message: Queue manager with TAG "TAG" failed to start on server "SERVER" with return code "Code".
Action: The start command was successful, but the check after the start failed. Check the IBM WebSphere MQ alert log on SERVER for possible errors and correct them.

Error Number: 119002
Error Message: Queue manager with TAG "TAG" start command failed on server "SERVER" with return code "Code".
Action: The start command for the queue manager TAG returned a non-zero value. Check the IBM WebSphere MQ alert log on SERVER for possible errors and correct them.

Error Number: 119018
Error Message: Invalid parameters specified.
Action: Run the script with the correct options.

Error Number: 119019
Error Message: Too few parameters specified.
Action: Run the script with the correct options.

Error Number: 119021
Error Message: Failed to set “VALUE” for resource instance “TAG” on server “SERVER”.
Action: Check the LifeKeeper log for possible errors setting the value.

Error Number: 119025
Error Message: Failed to update instance info for queue manager with TAG “TAG” on server “SERVER”.
Action: When the server is up and running again, retry the operation to synchronize the settings.

Error Number: 119026
Error Message: The following required program does not exist or is not executable: "EXECUTABLE". Check failed.
Action: The program EXECUTABLE cannot be found. Verify all installation requirements are met and install all required packages. See section “Configuration Requirements” for details.

Error Number: 119032
Error Message: Script: usage error (error message)
Action: Start the script Script with the correct arguments.

Error Number: 119033
Error Message: Script: error parsing config file "ConfigFile".
Action: Make sure ConfigFile exists and is readable.

Error Number: 119034
Error Message: CHECKTYPE check for queue manager with TAG "TAG" failed on server "SERVER" because the MQUSER could not be determined. This is probably because of a removed configuration file - ignoring.
Action: The CHECKTYPE check for the queue manager with tag TAG failed. Make sure the global configuration file (mqs.ini) exists and is readable. If it has been removed, recreate the mqs.ini configuration file.

Error Number: 119035
Error Message: CHECKTYPE check for queue manager with TAG "TAG" failed on server "SERVER" because no TCP PORT directive was found in config file "CONFIGFILE" - ignoring.
Action: Make sure the queue manager configuration file (qm.ini) exists and contains a TCP section as required during installation. Add the TCP section to the queue manager configuration file.

Error Number: 119042
Error Message: “CHECKTYPE” check for queue manager with TAG “TAG” failed on server “SERVER” because no TCP PORT information was found via runmqsc.
Action: Verify that the port information for the listener objects has been defined and is accessible via runmqsc.

Error Number: 119043
Error Message: TCP Listener configuration could not be read, reason: “REASON”.
Action: Verify that MQ is running and that the port information for the listener objects has been defined and is accessible via runmqsc.

Error Number: 119044
Error Message: No TCP Listener configured, no TCP PORT information was found via runmqsc: “MESSAGE”.
Action: Verify that the port information for the listener objects has been defined and is accessible via runmqsc.
Create

Error 001022
Message: END failed hierarchy "CREATE" of resource "TAG" on server "SERVER" with return value of "VALUE".
Action: Check the LifeKeeper log on server "SERVER" for possible errors creating the resource hierarchy. The failure is probably associated with the queue manager not starting.

Error 119020
Message: Create MQSeries queue manager resource with TAG "TAG" for queue manager "QMGR" failed.
Action: Check the LifeKeeper log for possible errors creating the resource. The failure is probably associated with the queue manager not starting.

Error 119022
Message: Failed to create dependency between "PARENT" and "CHILD".
Action: Check the LifeKeeper log for possible errors creating the dependency.

Error 119023
Message: Creating the filesystem hierarchies for queue manager with TAG "TAG" failed. File systems: "Filesystems".
Action: Check the LifeKeeper log for possible errors creating the filesystem hierarchies.

Error 119029
Message: No TCP section configured in "CONFIGFILE" on server "SERVER".
Action: Add the TCP section to the queue manager configuration file on server SERVER. See section "Configuration Requirements" for details.

Error 119031
Message: Queue manager "DIRTYPE" directory ("DIRECTORY") not on shared storage.
Action: Move the directory DIRECTORY to shared storage and retry the operation.

Error 119038
Message: Creation of queue manager resource with TAG "TAG" failed on server "SERVER".
Action: Check the LifeKeeper log on server SERVER for possible errors, correct them and retry the operation.

Error 119039
Message: TCP section in configuration file "FILE" on line "LINE1" is located before LOG section on line "LINE2" on server "SERVER".
Action: It is recommended that the TCP section be located after the LOG: section in the queue manager configuration file. Move the TCP section to the end of the queue manager configuration file and retry the operation.

Error 119040
Message: Creation of MQSeries queue manager resource by create_ins was successful but no resource with TAG "TAG" exists on server "SERVER". Sanity check failed.
Action: Check the LifeKeeper log for possible errors during resource creation.

Error 119041
Message: Creation of MQSeries queue manager resource was successful but no resource with TAG "TAG" exists on server "SERVER". Final sanity check failed.
Action: Check the LifeKeeper log for possible errors during resource creation.
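Errors 119029 and 119039 both concern the TCP stanza in the queue manager configuration file (qm.ini). The fragment below is only an illustrative sketch of the expected layout, with the TCP stanza placed after the log stanza; the path and the port number 1414 are example values, not requirements:

   Log:
      LogPath=/var/mqm/log/TEST!QM/
      LogType=CIRCULAR
   TCP:
      Port=1414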
Extend

Error 119024
Message: Instance "TAG" can not be extended from "TEMPLATESYS" to "TARGETSYS". Reason: REASON
Action: Correct the failure described in REASON and retry the operation.

Error 119027
Message: The user "USER" does not exist on server "SERVER".
Action: Create the user USER on SERVER with the same UID as on the primary server and retry the operation.

Error 119028
Message: The user "USER" has a different numeric UID on server "SERVER1" (SERVER1UID) than it should be (SERVER2UID).
Action: Change the UID so that USER has the same UID on all servers, reinstall WebSphere MQ on the server where you changed the UID and retry the operation.

Error 119029
Message: No TCP section configured in "CONFIGFILE" on server "SERVER".
Action: Add the TCP section to the queue manager configuration file on server SERVER. See section "Configuration Requirements" for details.

Error 119030
Message: Queue manager "QMGR" not configured in "CONFIGFILE" on server "SERVER".
Action: The queue manager QMGR you are trying to extend is not configured in the global configuration file on the target server SERVER. Add the queue manager stanza to the config file CONFIGFILE on server SERVER and retry the operation.

Error 119036
Message: Link "LINK" points to "LINKTARGET" but should point to "REALTARGET" on server "SERVER".
Action: For file system layout 3, symbolic links must point to the same location on the template and target server SERVER. Correct the link LINK on server SERVER to point to REALTARGET and retry the operation.

Error 119037
Message: Link "LINK" that should point to "REALTARGET" does not exist on system "SERVER".
Action: For file system layout 3, symbolic links must also exist on the target server. Create the required link LINK to REALTARGET on server SERVER and retry the operation.
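For the UID errors 119027 and 119028, it can help to compare the numeric IDs of the MQ user on the template and target servers before retrying the extend; for the link errors 119036 and 119037, recreate the expected symbolic link. A sketch, assuming the user mqm, a target host named node2 and an example layout 3 link (all placeholders):

   # Compare the UID/GID of the mqm user on both servers
   id mqm                    # on the template server
   ssh node2 id mqm          # on the target server

   # Recreate a missing or wrong symbolic link for file system layout 3
   ln -sfn '/haqm/qmgrs/TEST!QM' '/var/mqm/qmgrs/TEST!QM'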
Remove

Error 119003
Message: Failed to stop queue manager with TAG "TAG" on server "SERVER".
Action: The queue manager "TAG" on server "SERVER" could not be stopped through the Recovery Kit. For further information and investigation, change the logging level to DEBUG. Depending on the machine load, the shutdown timeout values may have to be increased.

Error 119004
Message: Some orphans of queue manager with TAG "TAG" could not be stopped on server "SERVER". Tried it "tries" times.
Action: Try killing the orphans manually and restart the queue manager again. For further information, change the logging level to DEBUG.

Error 119010
Message: Listener for queue manager with TAG "TAG" failed to stop on server "SERVER".
Action: This message will only appear if monitoring for the listener is enabled. For further information, change the logging level to DEBUG.
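If error 119004 persists, the orphaned processes can be located and stopped manually before restarting the queue manager. A cautious sketch, assuming the queue manager name TEST.QM (a placeholder); review the process list carefully before terminating anything:

   # List MQ processes still associated with the queue manager
   ps -ef | grep amq | grep 'TEST.QM' | grep -v grep

   # Terminate them gracefully first; escalate to -9 only as a last resort
   kill <PID>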
Resource Monitoring

Error 119005
Message: Queue manager with TAG "TAG" on server "SERVER" failed.
Action: Check the IBM WebSphere MQ alert log on SERVER for possible errors. This message indicates a queue manager crash.

Error 119009
Message: Listener for queue manager with TAG "TAG" failed on server "SERVER".
Action: This message will only appear if monitoring of the listener is enabled. For further information, change the logging level to FINE.

Error 119011
Message: "CHECKTYPE" PUT/GET test for queue manager with TAG "TAG" failed on server "SERVER" with return code "Code".
Action: This message will only appear if the PUT/GET test is enabled and the test queue exists. For further information, change the logging level to FINE and check the IBM WebSphere MQ queue manager error log (/var/mqm/errors) on SERVER for possible errors and correct them. Verify that the file systems are not full.

Error 119012
Message: Client connect test for queue manager with TAG "TAG" on server "SERVER" failed with return code "Code".
Action: This message will only appear if listener management is enabled. It indicates a problem with the listener or the queue manager. Check the log for possible errors and correct them. The return code Code is the return code of the amqscnxc command.
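The client connect test behind error 119012 can be reproduced by hand with the IBM sample program amqscnxc (typically installed under /opt/mqm/samp/bin). A sketch; the host, port, channel and queue manager names shown are placeholders:

   # Connect as an MQ client through the listener; exit code 0 means success
   su - mqm -c "amqscnxc -x 'node1(1414)' -c SYSTEM.DEF.SVRCONN TEST.QM"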
Warning Messages

Error 119201
Message: Listener for queue manager with TAG "TAG" is NOT monitored on server "SERVER".
Action: This is a warning that listener management is not enabled.

Error 119202
Message: Queue manager with TAG "TAG" is not running on server "SERVER" but some orphans are still active. This is attempt number "ATTEMPT" at stopping all orphan processes.
Action: This is a warning that MQ was not stopped properly.

Error 119203
Message: Another instance of recover is running, exiting "EXITCODE".
Action: Recovery was started, but another recovery process was already running, so this process will not continue.

Error 119204
Message: Queue manager server connect check for queue manager with TAG "TAG" timed out after "SECONDS" seconds on server "SERVER".
Action: If you see this message regularly, increase the value of MQS_QUICKCHECK_TIMEOUT_SC in /etc/default/LifeKeeper. See section "Changing the Server Connection Channel" for details.

Error 119205
Message: Queue manager client connect check for queue manager with TAG "TAG" timed out after "SECONDS" seconds on server "SERVER".
Action: If you see this message regularly, increase the value of MQS_QUICKCHECK_TIMEOUT_CC in /etc/default/LifeKeeper. See section "Changing the Server Connection Channel" for details.

Error 119206
Message: Server "SERVER" is not available, skipping.
Action: A server was not online while updating a queue manager configuration setting. Wait for the server to be online again and repeat the configuration step.

Error 119207
Message: "CHECKTYPE" PUT/GET test for queue manager with TAG "TAG" failed because test queue "QUEUE" does not exist (reason code "REASONCODE") - ignoring.
Action: Create the configured test queue QUEUE or reconfigure the test queue to point to an existing queue. See section "Configuration Requirements" for details on creating the test queue.

Error 119209
Message: PUT/GET test for queue manager with TAG "TAG" skipped because no test queue is defined.
Action: Configure a LifeKeeper test queue for queue manager TAG.

Error 119211
Message: Queue manager "CHECKTYPE" PUT/GET test for queue manager with TAG "TAG" timed out after "SECONDS" seconds on server "SERVER".
Action: If you see this message regularly, increase the value of MQS_QUICKCHECK_TIMEOUT_PUTGET in /etc/default/LifeKeeper. See section "Changing the Server Connection Channel" for details.

Error 119212
Message: QuickCheck for queue manager with TAG "TAG" timed out after SECONDS seconds on server "SERVER".
Action: If you get this message regularly, increase the value of MQS_QUICKCHECK_TIMEOUT in /etc/default/LifeKeeper.

Error 119213
Message: mqseriesQueueManager::getMQVersion:: ERROR unexpected dspmqver output (OUTPUT) - reading cached value instead (Queue QUEUE, Queuemanager QMGR).
Action: Reading the MQ version via dspmqver failed. If you get this message regularly, increase the value of MQS_DSPMQVER_TIMEOUT in /etc/default/LifeKeeper.

Error 119214
Message: mqseriesQueueManager::getMQVersion:: ERROR reading cache file output (OUTPUT). Unable to determine WebSphere MQ Version (Queue QUEUE, Queuemanager QMGR).
Action: Check whether the following command yields output when run as the mqm user: dspmqver -b -p1 -f2
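The timeout warnings above (119204, 119205, 119211, 119212 and 119213) are addressed by raising the corresponding tunable in /etc/default/LifeKeeper. A sketch; the value of 20 seconds is only an example, not a recommendation:

   # /etc/default/LifeKeeper (excerpt) - raise the quick check timeouts
   MQS_QUICKCHECK_TIMEOUT_SC=20
   MQS_QUICKCHECK_TIMEOUT_CC=20
   MQS_QUICKCHECK_TIMEOUT_PUTGET=20

   # Verify that the MQ version can be read (relevant to 119213 and 119214)
   su - mqm -c "dspmqver -b -p1 -f2"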
Sample mqs.ini Configuration File

ClientExitPath:
   ExitsDefaultPath=/var/mqm/exits
LogDefaults:
   LogPrimaryFiles=3
   LogSecondaryFiles=2
   LogFilePages=1024
   LogType=CIRCULAR
   LogBufferPages=0
   LogDefaultPath=/var/mqm/log
QueueManager:
   Name=TEST.DE.QM
   Prefix=/haqm
   Directory=TEST!DE!QM
QueueManager:
   Name=TEST.QM
   Prefix=/var/mqm
   Directory=TEST!QM
DefaultQueueManager:
   Name=TEST.QM
QueueManager:
   Name=TEST.QM.NEW
   Prefix=/var/mqm
   Directory=TEST!QM!NEW
QueueManager:
   Name=TEST.QM2
   Prefix=/var/mqm
   Directory=TEST!QM2
QueueManager:
   Name=MULTIINS_1
   Prefix=/var/mqm
   Directory=MULTIINS_1
   DataPath=/opt/webmq/MULTIINS_1/data
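After editing mqs.ini, you can confirm that the queue manager stanzas are parsed as expected by listing the configured queue managers and their states. A sketch, run as the MQ administrative user:

   # List all queue managers defined in mqs.ini together with their status
   su - mqm -c dspmq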
WebSphere MQ Configuration Sheet

Cluster name: _______________________________
Contact information: _______________________________
Operating system: _______________________________
MQ user (name/UID, e.g. mqm/1002): _______________________________
MQ group (name/GID, e.g. mqm/200): _______________________________
Virtual IP (e.g. 192.168.1.1/24/eth0): _______________________________

Filesystem layout:
   __ Configuration 1 - /var/mqm on Shared Storage
   __ Configuration 2 - Direct Mounts
   __ Configuration 3 - Symbolic Links
   __ Configuration 4 - Multi-Instance Queue Managers
   __ other

Shared storage type:
   __ NAS (IP: ____________________________)
   __ SCSI/FC (Type: ____________________________)
   __ SDR