
Session: G06

Getting the most out of your BMC DB2 Utilities


Steve Thomas, BMC Software

6th November 2007, 14:00-15:00. Platform: DB2 for z/OS

This session will provide a set of tuning and usage recommendations which customers can adopt in order to optimize ease of use, performance, throughput and availability when using BMC DB2 for z/OS Utilities. It is aimed at users at an intermediate to experienced level who have used the utilities for some time and are familiar with the general concepts, but who may not be aware of all the fine-tuning options available. Subjects covered include controlling multitasking, minimizing I/O by efficient use of memory, analyzing and tuning SORT processes, sizing workfile datasets, and optimizing Dynamic Dataset Allocation.

Steve Thomas is a Principal Consultant at BMC Software, supporting customers in the UK, Northern Europe, the Middle East and Africa. He has been a database specialist since 1985 and has worked with DB2 since 1989. Steve has presented on a wide range of topics at events across Europe and represents BMC on the European IDUG Conference Planning and UK DB2 User Group Committees.

Agenda

- Introduction
- Managing Dynamic Dataset Allocation
- Controlling Multitasking
- Analyzing and Tuning the SORT process
- Minimizing I/O and/or CPU
- Hints and Tips for COPY PLUS, RECOVER PLUS and REORG PLUS
- How to get more out of each utility

Due to time constraints, a follow-on presentation will cover UNLOAD PLUS and LOADPLUS.

This presentation will explain how to get the most out of your DB2 Utilities. The first half of the presentation covers general topics relevant to all BMC utilities, including Dynamic Allocation and BMCSORT. The second half consists of hints and tips for using COPY PLUS, RECOVER PLUS and REORG PLUS. I had originally hoped to cover UNLOAD PLUS for DB2 and LOADPLUS for DB2 as well, but there was far too much material for an hour, so I will discuss tips for these utilities in a future presentation. I am also assuming that the reader has access to either SNAPSHOT UPGRADE FACILITY for DB2 (SUF) or Extended Buffer Manager (XBM), either of which enables the online capabilities of BMC utilities. SUF provides a subset of XBM functionality, but for the purposes of this presentation they can be regarded as identical. For simplicity I shall refer to either product as XBM during the remainder of this presentation, since that is usually the name of the Started Task associated with both of these products.

Executing BMC Utilities


- ADUUMAIN - UNLOAD
- AMUUMAIN - LOAD
- ARUUMAIN - REORG
- AFRMAIN - RECOVER
- ACPMAIN - COPY

If REGION=0M is not permitted and you are on DB2 V8 or 9, ensure the utility is allowed to use Storage above the Bar by coding MEMLIMIT, using the IEFUSI exit or setting SMFPRMxx in PARMLIB.

//Step EXEC PGM=AMUUMAIN,REGION=0M,
// PARM='ssid,utilid,restart_parm,,MSGLEVEL(n),DOPT'

PARM syntax varies slightly between the utilities; check the manuals.


- Can use an individual DB2 Subsystem or Group Name
- Default utilid is Userid.Jobname
- Recommend using NEW/RESTART
- Use MSGLEVEL(1) if you can spare the Spool
- DOPT = Default Options Module

BMC utilities are executed directly rather than running under DSNUTILB as with IBM utilities. Use REGION=0M if allowed; otherwise, when using DB2 V8 or DB2 9, you need to ensure the system allows the utility to allocate storage above the Bar by using one of the methods listed. The exact syntax of the PARM option varies between the utilities; see the chapter titled "Building and Executing Jobs" in the appropriate Reference manual for details (usually Chapter 4). However, many of the keywords are common. SSID is the DB2 subsystem; when using data sharing all our utilities support the Group attach name, and after a failure you can restart any utility using a different member of the Group than was used for the original execution. UTILID is the utility identifier, similar to the IBM utilities, and must be a unique entry in the BMCUTIL table; the default is Userid.Jobname. We recommend using NEW/RESTART for the restart parm: this will restart an existing utility if one exists or else start a new one, and allows most failed utilities to be resubmitted with no JCL changes, particularly if using Dynamic Allocation. MSGLEVEL(0) returns minimal output. MSGLEVEL(1) provides more information, including the maintenance applied and the Default Options used, and can be very useful for tuning or debugging purposes in the event of a failure. It will almost always be needed by our Support teams, so it's usually a good idea to use this if you can spare the space on your JES Spool. Note that RECOVER PLUS also supports MSGLEVEL(2), which provides even more data. The Default Options module, or DOPT, provides a user-customisable set of default values for the utility parameters and will be discussed next.
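As an illustration, a COPY PLUS step following these recommendations might look like the sketch below. The subsystem name, utility ID and library name are placeholders, and ACP$OPTS is simply the default options module name discussed on the next slide:

//COPYSTEP EXEC PGM=ACPMAIN,REGION=0M,
// PARM='DB2P,PRODCPY.DAILY,NEW/RESTART,,MSGLEVEL(1),ACP$OPTS'
//STEPLIB  DD DISP=SHR,DSN=HLQ.DBLINK
//SYSPRINT DD SYSOUT=*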

Default Options Modules


- Customized Defaults for most keywords
- Macro Assembled and Linked into a Load Module
- Create using Install JCL library member xxx$OPTS
- Displayed in Output if MSGLEVEL > 0
- Documented in a Reference Manual Appendix
- Update to suit your own environment
  - Saves coding Control Cards
  - Makes Utilities much easier to use
- New features sometimes disabled by default to maintain consistency across new releases
- Definitely worth reviewing DOPTs for your site

The defaults for many BMC utility syntax keywords can be customized for your own site by setting up a Default Options Module, or DOPT. During the installation we provide a Macro for each utility, together with a job to Assemble and Link this into your Load Library (HLQ.DBLINK). The default name is xxx$OPTS, where xxx is the product code; you can see what these are by looking at the execution Load Module names on the previous slide, so for instance COPY PLUS uses ACP$OPTS. You can override most defaults using command syntax, although this is not allowed where it makes no sense to do so, for example Plan Names. The DOPT settings used by each utility are displayed in the job output whenever you have a MSGLEVEL setting > 0, which is a good reason for using MSGLEVEL(1). It's well worth the effort reviewing the DOPT settings for each utility to ensure they match the requirements of your organization. If they're correct, then the utility syntax can often be reduced to very few statements saying what utility you want to run and on what objects, leaving the rest of the settings to default. This makes setting up and maintaining utility JCL much simpler. Another point worth mentioning is that we sometimes disable new features by default, and you need to explicitly switch them on either in the DOPT or by using utility syntax. Where this has been done it's usually to maintain consistency of operation between releases - an example was FASTSWITCH support in REORG PLUS, where the default remains NO.
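The customization job itself is ordinary assemble-and-link JCL. The sketch below shows the general shape, assuming COPY PLUS; the dataset names are placeholders, and the xxx$OPTS job shipped in your Install JCL library is the real model to follow:

//ASMOPTS  EXEC PGM=ASMA90,PARM='OBJECT,NODECK'
//SYSLIB   DD DISP=SHR,DSN=HLQ.DBMAC          assumed macro library name
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(5,5))
//SYSLIN   DD DSN=&&OBJ,DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(TRK,(5,5))
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DISP=SHR,DSN=HLQ.OPTSRC(ACP$OPTS)
//LKEDOPTS EXEC PGM=IEWL,PARM='LIST,XREF,RENT'
//SYSLIN   DD DSN=&&OBJ,DISP=(OLD,DELETE)
//SYSLMOD  DD DISP=SHR,DSN=HLQ.DBLINK(ACP$OPTS)
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(5,5))
//SYSPRINT DD SYSOUT=*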

Some useful DOPT settings

DOPT Setting                    Description
BMCHIST or HISTORY              Whether to save data in BMCHIST
DRNDELAY, DRNRETRY & DRNWAIT    Control Drain processing
FASTSWITCH                      Fastswitch or Rename for Online Utilities
INLINECP                        Whether to take Inline Copies
KEEPDICTIONARY                  Whether to keep the compression dictionary
MAXTAPE or MAXDRIVE             Limit how many tape units can be used
SMAX                            How many concurrent Sorts can we run?
COPYLVL                         Take Full or Partition level copies
XBMID                           Which XBM subsystem to use

This slide shows some of the more useful DOPT settings which can be defined for different utilities. Most of these are only relevant to certain utilities, although where the same parameter needs to be set, we try to maintain consistency across the different products. There are a few instances where the keyword does vary: for instance, most utilities use HISTORY to define whether to save historical execution information, but REORG PLUS uses BMCHISTORY instead. Another example is the MAXDRIVE and MAXTAPE options. There are very few of these differences, and where they do exist it's usually for historical reasons - if we tried to change them now we would create problems for our existing customers, so we're stuck with what we have.

Common Utility Database


- Default name BMCUTIL
- Contains critical data on activities
- Used by all our utilities

Recommendations:
- Treat with the same importance as the DB2 Catalog & Directory
  - Backup at the same time
  - Recover as soon as DB2 is back up, before any User data
- Use only a single set of tables
  - Share between all BMC products and versions
- Do not update Catalog statistics for these tables
  - HLQ.DBSAMP(xxxRESET) will reset the stats if needed
- Do not run BMC utilities against these objects
  - Use IBM utilities instead

BMC utilities use a DB2 Database in which to store data such as what utilities are executing and what objects are being processed. These tables are as vital to us as the Catalog and Directory are to DB2 itself, so if you are using our utilities you should treat this database with the same degree of respect. For example, it's normal to back up the database at the same time as the DB2 Catalog and Directory, and to restore it as soon as DB2 is back up during a Disaster Recovery, before any real User data is processed. You only need a single copy of the Common Utility Database regardless of how many BMC products you run and which versions are in use - the tables are designed to be backward version compatible and are updated automatically by our installation process. The most important point to consider for the purposes of this presentation is that you should never update the Catalog Statistics for these tables. Our products are designed to operate with the default Statistics, and if you update these then performance may well be degraded. Should you ever accidentally update the statistics, they can be reset by executing the SQL member xxxRESET, which will be found in the HLQ.DBSAMP library (or HLQ.CNTL in older releases before we used SMP/E for maintenance), where xxx is the respective product code; for example, COPY PLUS uses ACPRESET. We also recommend that you do not attempt to run our BMC utilities against these tables as this can cause contention problems. Use the native IBM utilities against them instead - they should not be particularly large, with the possible exception of the BMCHIST and BMCXCOPY tables.
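To show the idea behind such a reset: default statistics in DB2 are simply the "not collected" values of -1 in the catalog. The SQL below is a minimal sketch of that concept, not a copy of the shipped xxxRESET member - the database name and column list are illustrative, so use the real member for any actual reset:

-- Hypothetical sketch only: put the optimizer statistics for the
-- common utility tables back to their 'no statistics' defaults.
UPDATE SYSIBM.SYSTABLES
   SET CARDF = -1, NPAGES = -1
 WHERE DBNAME = 'BMCUTIL';

UPDATE SYSIBM.SYSINDEXES
   SET CLUSTERRATIOF = -1, FIRSTKEYCARDF = -1,
       FULLKEYCARDF = -1, NLEAF = -1, NLEVELS = -1
 WHERE DBNAME = 'BMCUTIL';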

Tables in the BMCUTIL database

Table Name    Used to Store
BMCUTIL       What utilities are executing
BMCSYNC       What Objects are being processed
BMCHIST       History of past executions
BMCDICT       Compression dictionaries (REORG and LOAD)
BMCXCOPY      BMC-specific data similar to the Catalog table SYSIBM.SYSCOPY
BMCLGRNX      Log Ranges where an object was open for update

This table shows the tables in the BMCUTIL database used by the utilities and describes the type of data we store in them. All the tables are important, but the ones to take particular care with are BMCUTIL, BMCSYNC and BMCXCOPY. The first two are our equivalents to the DB2 Directory object SYSUTILX and if you lost this data you would not be able to restart any in-flight BMC utilities. We use BMCXCOPY to store information about any non-standard Imagecopies which we may take, and losing this would mean that you may not be able to recover using these copies. Examples are Instant Snapshots and copies of Indexes where the Index was defined with the COPY NO attribute.

Why use Dynamic Allocation?


- Supports wildcarding
- Simplifies JCL setup and maintenance
  - No need for DD statements
  - Can save hundreds of lines of JCL
- Disk datasets sized automatically
- Tape datasets can be stacked
- Automatic creation of a GDG base if none exists
- Size-based criteria can change allocation details
- Simplifies restart after any failure

There are a number of reasons why you might choose to use Dynamic Allocation. As with IBM utilities using LISTDEFs, the first and most obvious of these is that it supports wild cards, which preclude static dataset allocation in JCL. However, dynamic allocation can provide a number of other useful benefits, including: Avoiding the need to code a DD card for each data set, which can save hundreds or even thousands of lines of JCL. Disk based datasets will be automatically sized correctly. Tape based datasets can be automatically stacked if desired. If a dataset is a GDG (generation data group) and the base does not exist, for example if you are processing a new object, we will create it automatically, based on a GDG base template which you provide to determine the number of cycles and other appropriate parameters. Many of the utilities provide options to change the dataset allocation based on run-time criteria such as the expected dataset size or the type of space being copied; for example, you can automatically direct larger objects to virtual tape instead of disk. This capability provides the basis for our Hybrid Copy, which is discussed later. Restarting after a failure is much simplified because there is no need to adjust the disposition, GDG numbers or VOL=REF statements, as can happen when using DD statements. You can simply re-submit the job as-is.

Specifying Dynamic Allocation

We use 2 methods of specifying Dynamic Allocation:
- COPY, RECOVER & UNLOAD use OUTPUT Descriptors
- LOAD and REORG use DDTYPE syntax

Both are used in place of coding DD statements:
- Specify allocation options using keywords, similar to a DD card
- Default values are specified in the utility DOPT module
- Symbolic variables can be used for data set names:
  &DB, &TS, &DATE, &TIME, &PART, &OBNOD, &TSIX, &JOBNAME, &STEPNAME, &UID, &TASK etc.
- Option to use a DD instead if present in the JCL
  - Can also ignore the DD and still allocate datasets dynamically

Largely for historical reasons, BMC utilities use two different methods of controlling dynamic allocation. COPY PLUS, RECOVER PLUS and UNLOAD PLUS use Output Descriptors, while LOADPLUS and REORG PLUS use DDTYPE syntax. However, both methods use largely the same keywords and achieve the same purpose. They are used in place of a DD statement and specify the allocation options for output data sets. They support all the keywords found in a DD statement and more, for example UNIT, DSNAME, SPACE and RETPD. As with other utility keywords, these options have defaults which are defined in the relevant DOPT module. If the DOPT is coded correctly then you often don't need any keywords in your syntax to obtain all the benefits of Dynamic Allocation; all you need to do is code any keywords that differ from the DOPT in your utility syntax. Data set names can include a number of symbolic variables which are substituted at execution time. This enables wild carding and using a single OUTPUT Descriptor or DDTYPE for many objects. Some of the more common symbolic variables are &DB, &TS, &DATE, &TIME, &TYPE and &PART - a full list can be found in the relevant Reference Manual. You can specify that a DD card in the JCL is used if it is coded, or it can be ignored and Dynamic Allocation will still take place.

A sample using each method


OUTPUT Descriptor:

OUTPUT LOCALP DSNAME &UID.&OBNOD.&TYPE(+1) UNIT SYSDA
COPY TABLESPACE DBSRT.* INDEXES NO
     COPYDDN(LOCALP) RESETMOD NO
     SHRLEVEL CHANGE QUIESCE AFTER GROUP YES

DDTYPE:

REORG TABLESPACE DBSRT.TS1
      COPY YES INLINE YES COPYLVL PART
      SHRLEVEL REFERENCE UNLOAD RELOAD
      DDTYPE LOCPFCPY ACTIVE YES IFALLOC USE
             UNIT (3390,VTAPE) THRSHLD 720000
             DSNPAT &UID.&DB.&TSIX..P&PART.(+1)
             GDGLIMIT 5

This slide shows an example of each type of dynamic output allocation. In the left hand example, we are taking a SHRLEVEL CHANGE backup of all the Tablespaces in Database DBSRT and generating a common consistency point after the backup has been completed. The Image copies are using an Output Descriptor called LOCALP, which is highlighted in red. This shows that the image copy datasets are going to be disk based GDG datasets. The &OBNOD variable in the dataset name expands to either database.tablespace or database.indexspace depending on the context in which it's used, which in this case would be the former, as we're processing tablespaces. If the relevant GDG base does not exist when the job is submitted, we will dynamically create one using the parameters found in the ACPGDG DD name in the JCL. In the right hand example we are reorganizing a single tablespace in the same database using a Single Phase process, as specified by the UNLOAD RELOAD parameter. The utility is taking an Inline copy at the partition level while the object is being reorganized. The DDTYPE LOCPFCPY handles the local primary imagecopy, and as you can see we are switching on dynamic allocation by using ACTIVE YES. By coding IFALLOC USE we are stating that if we have coded the correct DD cards in our JCL, the dynamic allocation request should be ignored and the datasets provided in the JCL used for the copies. The UNIT parameter, together with the threshold, tells us that if the copy dataset is expected to be below 1,000 cylinders it will be allocated to Disk, whereas if it's larger it will be sent to Virtual Tape. This decision will be taken dynamically at execution time using the high Allocated RBA of the underlying dataset. Again, any GDG base that does not exist will be created, in this instance with a limit of 5 generations.

BMCSORT
We use our own Sort package - BMCSORT:
- Highly tuned Sort designed for use with BMC Utilities
- IBM do the same with DFSORT from DB2 V8 onwards

Uses its own DOPT module:
- Primarily controls Dynamic Allocation
- Only change settings under direction from BMC Support

Recommend using recent versions of the Utilities:
- Particularly REORG PLUS 8.1 and LOADPLUS 8.3 or later
- Internal improvements in how BMCSORT is called

2 primary factors to consider for the end user:
- How are Sortwork datasets allocated?
- Ensuring Sorts have sufficient memory

All BMC utilities include a licence to use a customized Sort package called BMCSORT, which has been tuned to provide optimal utility performance. BMCSORT is automatically included during product installation. It is neither intended for nor capable of replacing whatever system sort package your site uses, but you must use it when running our utilities. It's worth noting that IBM have adopted a similar approach with their own utilities, which always use DFSORT from DB2 V8 onwards. BMCSORT uses a DOPT module similar to the utility DOPTs mentioned earlier. Its contents are primarily concerned with how Sortwork datasets are allocated. Most of the values can be overridden by individual utilities, and as a result there is usually no need to change the defaults provided unless you are either very experienced or are directed to do so by the BMC Support team. Some internal changes made to the utilities and released in mid-2006 make BMCSORT more efficient and easier to tune when processing large or complex objects. In particular, we have improved how memory is used when running large numbers of concurrent Sorts, something which is fairly common, particularly when running LOAD and REORG. As a result, I recommend you use the versions quoted or later so that you benefit from these changes. The next few slides will explain how to optimize BMCSORT, with a focus on Sortwork datasets and memory usage.

SORTWORK datasets
SORTWORK datasets can be:
- Hard coded in your JCL
- Allocated dynamically by the Utility
- Allocated dynamically by BMCSORT

Highly recommend that BMCSORT does this:
- Makes the JCL simpler
- Ensures we can run the optimal number of parallel Sorts
- BMCSORT knows exactly how much data requires sorting
- It will allocate more itself anyway if it's needed

All you need do is code the SORTNUM option:
- Set the relevant Utility DOPT parameter
- Default is 32, but can be up to 64 or 255 in recent versions
- Don't forget to turn off utility SORTWORK Dynamic Allocation

There are three basic methods of providing SORTWORK datasets. Hand coding them in your JCL is usually the least efficient method. The only real reason to do this is when your site is struggling for work space, in which case pre-allocating the datasets can help ensure space is available, but you should only do this in exceptional circumstances. The next method is to get the utility to allocate the datasets itself by switching on Dynamic Allocation for the SORTWORK file type. While this method usually works well, you need to remember that the utility estimates how much space is required during the Analysis phase, before the real work commences. While our estimates are usually pretty good, at the end of the day they are only estimates and they can always be wrong, resulting in either wasted space or degraded performance. Getting BMCSORT itself to allocate your SORTWORK datasets is by far the best choice. When a Sort is needed, the utility usually knows exactly how much data needs processing and passes this information to BMCSORT internally, which allows it to allocate the optimal amount of space. In most cases BMCSORT will allocate any extra SORTWORK datasets it needs itself anyway, so you may as well let it do the whole job. The easiest method of getting BMCSORT to allocate Sortwork datasets is simply to code the SORTNUM option in your utility, which will override the setting provided in the BMCSORT DOPT module. The usual default for this option is 32, although it may be lower at your site if you've been using BMC utilities for some time and have never reviewed your DOPT settings. We won't always allocate 32 datasets, but this provides an upper limit. SORTNUM can go up to 64, or even 255 in recent versions, if needed. Don't forget to turn off Dynamic Allocation for the SORTWORK files in your utility options and syntax.
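In a REORG PLUS job, for example, that combination might look like the sketch below. The object name is a placeholder, and the DDTYPE line follows the pattern shown earlier for switching dataset types on and off - confirm the exact keywords for your utility and release in the Reference Manual:

REORG TABLESPACE DBSRT.TS1
      SHRLEVEL REFERENCE
      SORTNUM 64
      DDTYPE SORTWORK ACTIVE NO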

Tuning BMCSORT Memory


BMCSORT can degrade to the Standard Path:
- Usual cause is insufficient memory
- Causes increased elapsed time, CPU and EXCPs

Key is to allocate enough storage:
- Use REGION=0M or ensure a suitable MEMLIMIT
- 1Mb of memory per 1Gb of data is a good rule of thumb
- Get the values from the largest SORT used by your utility

Watch out for these messages:
- WER164B 264K BYTES OF VIRTUAL STORAGE AVAILABLE..
- IHJ000I CHECKPOINT job, step.step (????????) NOT TAKEN (11) MODULE = IHJACP00
- Another hint is >20K EXCPs to SORTWORK files

All indicate we may have degraded to the Standard path:
- If you see these, call BMC Support for advice

As we have discussed, BMCSORT usually provides a highly optimized and efficient Sort mechanism tuned to improve utility performance. However, if there is insufficient memory available, particularly above the Bar, then it can degrade to a Standard Path rather than failing completely. In many ways this is good news as it prevents unnecessary job failures, but the performance implications mean that it is better to avoid the situation by providing enough Storage in the first place. BMCSORT is a 64-bit application and uses storage above the 2Gb bar. It will only use as much storage as needed, so running with REGION=0M is by far the best option. If this is not possible then ensure you use a large REGION size and make sure that the system allows you to use above the Bar memory (see slide 3 for options). A good rule of thumb for how much memory is needed is 1Mb per Gigabyte of data sorted by your largest SORT. You can get these figures from the SORT messages provided in your utility job output. If the path does degrade then you will almost always see one or both of the messages in red on the slide. You always get a WER164B message in the SORT output, so it's the 264K bytes figure which is the critical factor. The IHJ000I message will be in the main job log, possibly more than once, and almost inevitably indicates problems. Another possible indicator to watch for is the number of EXCPs performed to your SORTWORK datasets by each individual SORT. If one shows more than 20K EXCPs for anything other than the largest sorts then it may warrant further investigation. A first step you can take on your own would be to check that you have a decent region size, but failing this please call BMC Support for further advice.
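Applying the rule of thumb: if the largest SORT in a job processes roughly 50Gb of data, budget at least around 50Mb of above-the-bar storage for BMCSORT. Where REGION=0M is not permitted, the standard JCL MEMLIMIT keyword can set the above-the-bar ceiling explicitly; the step below is a sketch with illustrative values and placeholder names:

//REORGBIG EXEC PGM=ARUUMAIN,REGION=512M,MEMLIMIT=4G,
// PARM='DB2P,PRODRRG.BIGTS,NEW/RESTART,,MSGLEVEL(1)'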

Optimizing COPY PLUS for DB2


To save elapsed time:
- Exploit your Disk Hardware
- Use Multi-tasking
- Use Cabinet Copies
- Use RESETMOD NO
- Increase the number of Read/Write Buffers (NBRUFS)

To save CPU:
- Use CHECKLVL 0, SQUEEZE NO & COMPRESS NO
- Do not collect Statistics during the Copy

To reduce use of Output Media:
- STACK YES and SMARTSTACK for Tape media
- COMPRESS YES and SQUEEZE YES

Incremental Copies can improve all categories:
- But think about recovery times...

If you're primarily concerned with reducing elapsed times, the most obvious options are to exploit your disk hardware and to ensure we process objects in parallel by invoking multi-tasking, both of which will be covered in more detail shortly. If you have the Recovery Management Solution, using Cabinet Copies usually generates big savings. Other options to consider include using RESETMOD NO (you should always use this if you only ever take Full Copies), and increasing the number of Read/Write Buffers in the DOPT, although this will increase memory utilization. If your target is to save CPU time then you should minimize Page checking and avoid compressing the copies using SQUEEZE and COMPRESS. Avoiding collecting Statistics during the COPY can also help, although if you currently run a separate Statistics collection job with the same frequency as your copies then the decision isn't quite so straightforward. Running with a minimal number of Buffers will also reduce CPU but will increase your elapsed time. Reducing the use of output media is not usually such a big problem provided you Stack your backups when using Tape. SMARTSTACK will assist recovery times when using Incremental Copies, but it won't save on media use by the backup process. Squeezing or Compressing the backups will save media but will increase both CPU and elapsed time. Finally, taking Incremental Copies can save resources in all three categories, but it may well have a significant impact on Recovery times, so you need to be careful. It can be a very good choice if the circumstances are right. One of my larger customers uses Incremental Copies very successfully, but they have a large database of around 40 Terabytes, most of which has a very low update rate relative to its size, so using Incremental copies was a natural choice for them.
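As an example of the CPU-lean end of these trade-offs, a full copy using only the keywords listed above might look like this sketch (the object name and Output Descriptor are placeholders, and anything already set in your DOPT can be omitted):

COPY TABLESPACE DBSRT.*
     FULL YES COPYDDN(LOCALP)
     RESETMOD NO CHECKLVL 0
     SQUEEZE NO COMPRESS NO
     GROUP YES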

Exploiting Disk Hardware


Option 1 - Snapshot Copies:
- Specify SHRLEVEL CONCURRENT
- Creates a Standard and Consistent imagecopy
  - May be used by any Recovery utility
- Brief outage (Quiesce) to create a Consistency Point
- Uses Volume/Dataset Snaps or Volume Mirrors
  - Hardware independent, may require Disk Vendor software
- Software Snapshot also supported
  - XBM will determine which is used, not Utility Syntax
- Optional fallback to a SHRLEVEL CHANGE backup
  - Use the REQUIRED or PREFERRED keywords
- STARTMSG used for Automation

Snapshot Copies are standard Consistent imagecopies registered in SYSCOPY. They can use any type of media and may be used by any Recovery utility. Using Snapshot may not shorten the elapsed time of the copy job, but the objects will be available for Read/Write processing while the copy is being taken, which effectively achieves the same end result. COPY PLUS uses XBM services to exploit the capabilities of your Disk Subsystem. When taking a Snapshot Copy, a consistency point is established using an IBM Quiesce. While this has exclusive access to the objects, XBM establishes a source of consistent data which is then processed by the utility to create the Copy. In the case of a Hardware Snapshot this usually involves a Flashcopy or similar Snap operation, either at the dataset or Volume level. If permitted, we are also able to suspend a mirror so that the utility processes the suspended copy, although you naturally need to ensure we don't suspend any mirrors being used for Disaster Recovery purposes. COPY PLUS can achieve the same end result using XBM Software capabilities, where a Dataspace is used to cache pre-updated page images. You can optionally fall back to a Software Snapshot if a Hardware request fails. The keyword specified after SHRLEVEL CONCURRENT determines what action to take if a Snapshot request fails. The default, PREFERRED, indicates that should a Snapshot request fail for whatever reason, the utility will revert to creating a SHRLEVEL CHANGE copy. The alternative is REQUIRED, which terminates the COPY job if the Snapshot request is unsuccessful. It is always the XBM Configuration parameters which determine what type of Snapshot is taken - there is no syntax or option provided within the utility itself to influence this. The STARTMSG keyword inserts a message in the job output and the system log once the Snapshot operation is complete and the backup has started. This can be used to automate the submission of other jobs which can run alongside the COPY once the backup process is underway.
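A sketch of a Snapshot Copy request with software fallback, assuming XBMP as the XBM subsystem name and LOCALP as an Output Descriptor defined elsewhere (both placeholders):

OPTIONS XBMID(XBMP)
COPY TABLESPACE DBSRT.*
     COPYDDN(LOCALP)
     SHRLEVEL CONCURRENT PREFERRED
     GROUP YES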

Exploiting Disk Hardware


Option 2 - Instant Snapshots:
- Specify DSSNAP YES or AUTO in the Output Descriptor
- Supports any SHRLEVEL, including CONCURRENT
  - SHRLEVEL CHANGE Instant Snapshots are outage free
- Invokes Flashcopy or equivalent at the dataset level
  - Hardware independent, may require Disk Vendor software
- Creates a non-standard Imagecopy
  - Registered in the BMCXCOPY table rather than SYSCOPY
  - Can only be processed by BMC Utilities
  - Use COPY IMAGECOPY to create a standard backup
- Multi-task to improve throughput
- As soon as the Flashcopy is taken, the Copy is complete
  - Backup is a physical copy of the DB2 VSAM LDS

While a Snapshot Copy achieves savings by allowing other work to run alongside the copy, an Instant Snapshot reduces the actual elapsed time of the backup itself. We support any SHRLEVEL option, including SHRLEVEL CONCURRENT. A SHRLEVEL CHANGE Instant Snapshot will fully exploit your Disk Hardware with no outage whatsoever to your applications. Instant Snapshot uses a Flashcopy or other Disk based Snap operation on the underlying datasets. The Flashed copy is registered in BMCXCOPY. This type of backup is non-standard and can only be used by a BMC utility such as RECOVER PLUS. If you don't own this, you can use the COPY IMAGECOPY facility of COPY PLUS to create a standard Imagecopy which can then be processed by any utility of your choice. Instant Snapshots can be multi-tasked to improve throughput, in which case more than one dataset will be processed concurrently. The limiting factor to throughput then becomes the capability of the Disk Subsystem rather than the utility itself. As soon as the Flashcopy operation has been completed, the backup is registered and the utility completes. The process takes only a second or two per dataset, regardless of size. The backup itself is always Disk based and is a physical copy of the DB2 VSAM Linear Dataset. Recovery simply involves flashing the dataset back, so Restore times can be improved just as dramatically as the backups. Most customers I see copy their Instant Snapshots offline at a convenient time to create offsite backups and to free up the disk space for the next backup to be taken.
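A minimal sketch of an Instant Snapshot request, with placeholder names (note RESETMOD NO, which is required for Instant Snapshots, as mentioned with the Hybrid Copy example later):

OUTPUT SNAPCOPY DSNAME &UID.&OBNOD.IC(+1) UNIT 3390 DSSNAP YES
COPY TABLESPACE DBSRT.*
     COPYDDN(SNAPCOPY)
     RESETMOD NO
     SHRLEVEL CHANGE GROUP YES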

Multi-tasking in COPY PLUS


- GROUP YES is required for multi-tasking
- Quiesce for SHRLEVEL REFERENCE & CONCURRENT
- Specify MAXTASKS in the DOPT or PARALLEL in syntax
  - Highest value used if both specified
- Each subtask can perform tape stacking
- ACPPRTnn DDs dynamically allocated for messages
- TASK n can be used to direct objects to specific tasks

OPTIONS MAXTASKS 3
OUTPUT OUTCPY UNIT CART STACK YES
COPY TABLESPACE DBSRT1.* COPYDDN(OUTCPY) TASK 1
     TABLESPACE DBSRT2.* COPYDDN(OUTCPY)
     GROUP YES RESETMOD NO FULL NO READTYPE AUTO

One of the most useful features of COPY PLUS is multi-tasking. GROUP YES is a pre-requisite if you wish to use this feature. Remember that if you're running SHRLEVEL REFERENCE or CONCURRENT copies and using GROUP YES, then we will use a Quiesce in order to obtain a consistency point. To invoke multi-tasking, specify either the MAXTASKS Default Option or the PARALLEL keyword in your syntax. I find that most customers use MAXTASKS, as in the example, perhaps because it's more descriptive. If both parameters are specified COPY PLUS will use the highest value, but it will only ever start as many subtasks as it requires. Each subtask can perform tape stacking independently. COPY PLUS dynamically allocates ACPPRTnn output DD cards as necessary to store the output messages from each subtask. If your data is significantly skewed, you can use the TASK n syntax to direct individual objects or groups of objects to a specific task. This can help ensure that the largest objects are processed first, so that the elapsed time of the utility is minimized. You can also use TASK syntax to stack the backups of objects belonging to a database together, as can be seen in the example. When using multi-tasking, remember that COPY PLUS is likely to have more than one open thread with DB2, so you need to ensure your CTHREAD, IDFORE and IDBACK DSNZPARM settings are large enough.

Dynamic Allocation in COPY PLUS


Different Output Descriptors based on the Type of Copy:
- Incremental Copies use COPYDDN & RECOVERYDDN
- Full copies use FULLDDN & FULLRECDDN if coded
- Remember the SMARTSTACK option for Incremental Copies

Different Output Descriptors based on the size of the Copy:
- OUTSIZE sets the threshold
- Large objects use BIGDDN & BIGRECDDN for Full copies
- These have priority over COPYDDN and FULLDDN when the OUTSIZE threshold is exceeded

Copy Indexes based on size:
- INDEXES YES invokes Index backups along with Tablespaces
- IXSIZE governs whether indexes are copied

There are a number of very useful parameters that can be used to manage the devices that Imagecopies use, as well as to control whether indexes are backed up or not. The first set of options allows Incremental Copies to be placed onto different devices and to use different naming standards than Full copies. If FULLDDN and FULLRECDDN are not coded, then both Full and Incremental Copies will use the standard COPYDDN and RECOVERYDDN Output Descriptors. Remember that we support the SMARTSTACK keyword to automatically stack incremental copies in the same order as their respective full copies. We also allow Full copies for large objects to be placed into a different set of datasets than those for smaller objects. The threshold is defined using OUTSIZE, and if this is exceeded we will use the BIGDDN Descriptor for the Copy datasets in place of COPYDDN or FULLDDN. If you're backing up Tablespaces, INDEXES YES can be used along with IXSIZE to determine whether or not to back up Indexes based on size. Remember that we support backing up indexes defined in DB2 as COPY NO. Taken together, these options provide a huge amount of flexibility when it comes to defining where your backups are stored for optimal space utilization and Recovery performance. They allow you to automatically place larger backups on Tape, or to use Instant Snapshot for these while using Disk (or even a Cabinet Copy) for smaller objects. This type of Copy is known within BMC as a Hybrid Copy, and has the benefit of being self managing: if an object increases or decreases in size, it will automatically be placed into the correct category of dataset the next time the backup runs. An example of this type of backup can be found on the next slide.

Sample Hybrid Copy Job


OPTIONS MAXTASKS 5 XBMID(XBMP)
        OUTSIZE 100M IXSIZE 10M
OUTPUT CABCOPY DSNAME ... UNIT VTAPE STACK CABINET
OUTPUT INSTCOPY DSNAME ... UNIT 3390 DSSNAP YES
COPY TABLESPACE DBSRT.* INDEXES YES
     COPYDDN(CABCOPY) BIGDDN(INSTCOPY)
     RESETMOD NO SHRLEVEL CHANGE GROUP YES

Here is a sample Hybrid Copy job. It uses two Output Descriptors: CABCOPY, which will be a Cabinet Copy used for any smaller datasets and is going to use a virtual tape device; and INSTCOPY, an Instant Snapshot which will be taken for any larger objects, in this case those over 100Mb. These will use Flashcopy technology and the backups will remain on disk, although we may choose to process these offline afterwards to free up space for the next day's copies. The OPTIONS command tells us how many subtasks to use (5 in this instance), so we will be processing 5 objects at a time and may end up with up to 5 Cabinet Copy datasets, one for each subtask. It also defines the name of the XBM subsystem to be used for the Instant Snapshots, and the size thresholds both for larger image copies and for whether Indexes will be processed. The COPY step itself merely pulls together these components, as well as specifying RESETMOD NO (which is required for Instant Snapshots) and the SHRLEVEL to be used. GROUP YES is required in this instance because we are multi-tasking.

Other uses for COPY PLUS


Don't forget COPY PLUS can also be used for:
- COPY IMAGECOPY
- QUIESCE
- MODIFY RECOVERY

All support our Wildcards and special keywords.

MODIFY is particularly useful:
- DELETE based on a maximum number of copies, or using an SQL-like WHERE clause
- Deletes and Uncatalogs old image copies
- Tidies up BMCXCOPY as well as SYSCOPY
- Verifies Object Recoverability
  - Checks the time and log data volumes since the last Copy
  - Can generate a Copy of any unrecoverable or alerted objects

Don't forget that COPY PLUS can be used for a number of purposes other than simply taking backups. It also supports the COPY IMAGECOPY (equivalent to COPYTOCOPY), QUIESCE and MODIFY RECOVERY utility functions. All of these support our normal range of wildcards and special keywords, which can be useful. For example, although we invoke the IBM Quiesce utility to perform the actual Quiesce, you can use COPY PLUS as an easy to use front end if you prefer our wildcarding syntax to IBM's LISTDEF functionality, or to generate a Quiesce of the objects in a Recovery Manager Group. Of the capabilities listed, MODIFY RECOVERY is probably the most useful. We provide additional keywords which allow features such as removing older intermediate (Daily) copies from SYSIBM.SYSCOPY while retaining Weekly or Monthly copies. COPY PLUS will also verify the recoverability of objects and can be used to check the elapsed time and the number of Log datasets created since the last Full Copy. These checks can either generate warnings, or the product can automatically execute an Imagecopy to back up any such objects if required.
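As a rough sketch of the shape of such a statement - the DELETE clause here mirrors the IBM-style MODIFY RECOVERY AGE form, and BMC's additional max-copies and WHERE-clause forms are documented in the COPY PLUS Reference Manual, so treat the exact keywords as illustrative:

MODIFY RECOVERY TABLESPACE DBSRT.*
       DELETE AGE(60)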

Optimizing RECOVER PLUS for DB2


- Recovery Plan created during UTILINIT/ANALYZE
  - ANALYZE ONLY describes the plan and resources to be used
- Backout Recovery
  - Point-in-time Recovery without Image Copies
  - OK for Indexes even if there is no Image Copy, or defined COPY NO
- Consider INDEXLOG AUTO
  - Recovers indexes if possible, otherwise Rebuilds
  - You may have to change some recovery JCL if you use this
- NOWORKDDN strategy
  - Eliminates SYSUT1 for Index Keys - piped straight to SORT
- Multiple Log readers
- Consider the UNLOADKEYS/BUILDINDEX strategy
  - For large partitioned objects with non-clustering indexes

Recovery is something most people don't practice a lot, so running one needs to be as simple as possible. RECOVER PLUS develops a Recovery Plan during the first phase of execution. A number of reports on what we intend to do, together with a summary of the objects affected and the recovery resources that will be used, are generated. How much detail is provided in your job output can be adjusted using the MSGLEVEL parameter. The following are some options to consider to improve recovery times. Most are either DOPTs or are the default behaviour, so you don't need to specify them explicitly each time. Backout Recovery will be covered shortly, but provides the ability to recover to a PIT without Image Copies by reading backwards through the log. It is available for indexes you have never backed up, as well as for indexes defined using the COPY NO attribute. When using INDEXLOG AUTO we attempt to recover indexes using backups and logs, and automatically convert the request to a REBUILD INDEX if this is not possible. The default for this option setting is INDEXLOG NO, so that all existing recovery jobs will work, as some types are not eligible for conversion (see the manual for details). The NOWORKDDN strategy avoids the need for SYSUT1 datasets to store index keys by piping them directly into the SORT process. This can affect restart processing but saves resources and improves recovery times. To invoke this, either specify NOWORKDDN or avoid coding a WORKDDN in your job (this is the default). Using multiple Log readers and the UNLOADKEYS/BUILDINDEX strategy are covered later in the presentation.
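A sketch combining two of these recommendations - INDEXLOG AUTO plus the NOWORKDDN strategy, invoked here simply by not coding a WORKDDN. The object names are placeholders:

OPTIONS INDEXLOG AUTO
RECOVER TABLESPACE PAYROLL.EMPLOYEE TOLOGPOINT LASTQUIESCE
RECOVER INDEX (ALL) TABLESPACE PAYROLL.EMPLOYEE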

Point-in-Time Recovery - BACKOUT


RECOVER TABLESPACE EMP.PAYROLL
        TOLOGPOINT X'000000000900' BACKOUT

[Timeline diagram: an Image copy is taken at RBA 000000000100, a Quiesce at 000000000900, a bad update occurs at 000000001000, and the current log point is 000000001200. BACKOUT reads the log backwards from 000000001200 to the Quiesce point at 000000000900, instead of restoring the image copy and applying the log forwards through the PIT range.]

This diagram shows how you might choose to use BACKOUT to avoid having to mount an imagecopy, apply logs and rebuild indexes.

Why choose BACKOUT?


Main benefit is the speed of recovery:
- No mounts for Image Copies required
- No key sort for index rebuilds

Also saves resources in most cases.

OPTIONS BACKOUT INDEXLOG YES
RECOVER TABLESPACE PAYROLL.EMPLOYEE
        TABLESPACE PAYROLL.RULES
        TOLOGPOINT LASTQUIESCE
RECOVER INDEX (ALL) TABLESPACE PAYROLL.EMPLOYEE
RECOVER INDEX (ALL) TABLESPACE PAYROLL.RULES

The main advantage of using BACKOUT processing is the speed of recovery. It's almost always going to be quicker to process the logs backwards than to restore an imagecopy, apply the logs to a known consistency point and then rebuild the associated indexes. In the example job, I am recovering two objects to a known Consistency Point. I have also specified that all the indexes should be recovered as well, and since their associated tablespaces are being processed in the same recovery job using LOGPOINT syntax, the indexes will automatically be recovered to the same point. This would also happen if I had used TORBA or TOCOPY in my recovery syntax.

But remember...
- The Space must be physically undamaged
- The whole log is processed for COPY NO indexes
- You cannot Backout:
  - Through the range of a LOAD, REORG or REBUILD utility
  - If the object is in a restricted status such as RECP or LPL
  - A segmented TS through a DROP TABLE or Mass Delete, unless Data Capture Changes is ON or the segments affected have not been reused by later inserts
  - LOBs, Not Logged Tablespaces, an index defined using an expression, or Compressed indexes
  - Using keywords such as OBIDXLAT and OUTCOPY ONLY
- An index cannot be forward recovered through a Backout
  - Use OUTCOPY YES, run a copy afterwards (SHRLEVEL CHANGE), or rebuild the index if further recovery is needed

One of the requirements of BACKOUT is that the space must be physically undamaged and in a state which reflects all updates done to the current time (i.e., the space cannot have been restored with DSN1COPY or some other process outside DB2). If COPY NO indexes are being processed, all the log between the target LRSN and the current point will be read, because SYSLGRNX does not record update ranges for these objects. There are some restrictions listed, but if you plan to use this feature in your system you should review the Reference Manual for your release for exact details. Most of the restrictions are SYSCOPY events. RECOVER PLUS detects these during analysis and fails before performing any processing. Examples are any REORG or LOAD utility (even if LOG YES), or a REBUILD INDEX. Also detected at analysis time are statuses which are unacceptable, like DEFER, LPL, GRECP, REFP or RECP, as well as any attempt to BACKOUT into the middle of an existing PIT. RECOVER PLUS will fail during the execution if one of a number of conditions is encountered on a page image being recovered. The most common is that a mass delete or DROP TABLE has been done on a table which is not defined as DATA CAPTURE CHANGES, where the segment logically deleted has been reused (so the data page images are not available). If you encounter such a case, a normal forward recovery must be performed. An index cannot be forward recovered later through a BACKOUT. You can specify OUTCOPY YES during the index BACKOUT (but this causes all the pages to be processed and so will impact performance), you can start a SHRLEVEL CHANGE copy immediately after completion of the BACKOUT, or you can continue with processing and take the risk that you will have to REBUILD if a recovery of the index becomes necessary before it is next image copied.

Using Multiple Log Readers


Concurrent Log Reading is Always a Good Thing:
- BMC recommends setting MAXLOGS to 6
- Check the MAXDRIVE setting
- Experiment with the OPTION statement first
- Then update the AFR$OPTS Installation Options
  - MAXLOGS (default is 1)
  - MAXDRIVE (default is 0 => unlimited tape drives)

[Chart: total recovery elapsed time by number of concurrent log readers (MAXLOGS 1 through 6): 1:18:27, 55:34, 51:06, 45:15, 42:04 and 40:56.]
BMC Software recommends a value of 6 for MAXLOGS; this seems to achieve optimal results in many shops. The default value is 3. Unless you want to potentially allocate 6 tape drives during a recovery, ensure that MAXDRIVE is set to a number lower than 6. The default value of 0 for MAXDRIVE specifies that tape drive usage is unlimited - it does not mean that no tapes are used! You may want to experiment with different values of MAXLOGS and MAXDRIVE on the OPTION statement to find the best combination of settings for your environment. Once you are satisfied with these values, you can define them as defaults in the AFR$OPTS macro. The chart shows that on a lightly loaded system with large archive log files on tape, a value of 6 for MAXLOGS reduces elapsed time by 50% compared to a value of 1. In practice, the value of MAXLOGS is usually limited by the fact that the effect on elapsed time decreases as the value of MAXLOGS is increased. In this benchmark, which is a few releases old now, 15.8 billion log records were read from 29 archive logs and 2 active logs; 4 million of these records (520 million bytes) were sorted and applied to spaces. The elapsed time values in the chart are for the total recovery execution time.
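For per-job experimentation before changing AFR$OPTS, the values can be supplied on the OPTION statement; the sketch below uses the recommended reader count with a deliberately lower tape drive cap (the object name reuses the example from the next slides, and the specific values are illustrative):

OPTION MAXLOGS 6 MAXDRIVE 4
RECOVER TABLESPACE DMB16.TSB16 TOLOGPOINT LASTQUIESCE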

Sorted UNLOADKEYS - Step 1


[Diagram: three UNLOADKEYS jobs run in parallel, each writing its own key dataset - PARTS 1-4 to SKEYDDN1, PARTS 5-8 to SKEYDDN2 and PARTS 9-12 to SKEYDDN3.]

Benefits:
- Greater Concurrency
- Smaller Sorts
- Saves the Unloaded Keys

If you need to REBUILD a large non-clustering Index on a partitioned object, the UNLOADKEYS/BUILDINDEX strategy allows you to extract the keys concurrently, saving time during the unload phase. Each job uses its own sort task, which will be smaller and more efficient. A side benefit is that this allows you to save the files with the unloaded keys for future builds. We usually recommend using the sorted UNLOADKEYS strategy if your sort would need more than 1 GB of storage. You must first set up your jobs to unload the keys. In this example, we have a 12-part space that will be unloaded in 3 separate jobs. Each job produces a key file for the partitions being processed. Here is the first of the jobs, for partitions 1-4:
//BMCRCVR1 EXEC PGM=AFRMAIN,REGION=0M,PARM='DHN1,DMBRCVR1,NEW'
//STEPLIB  DD DSN=SYS2.DB2V81M.DSNLOAD,DISP=SHR
//         DD DSN=AFR.RUNLIB.LOAD,DISP=SHR
//SKEYDDN1 DD DSN=HLQ.SKEYDDN1,DISP=(NEW,CATLG),
//         UNIT=SYSDA,SPACE=(CYL,(800,100),RLSE),VOL=SER=DMB004
//SYSOUT   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*

RECOVER UNLOADKEYS(ALL) TABLESPACE DMB16.TSB16 PART 1
RECOVER UNLOADKEYS(ALL) TABLESPACE DMB16.TSB16 PART 2
RECOVER UNLOADKEYS(ALL) TABLESPACE DMB16.TSB16 PART 3
RECOVER UNLOADKEYS(ALL) TABLESPACE DMB16.TSB16 PART 4

Sorted UNLOADKEYS - Step 2

[Diagram: a single BUILDINDEX job reads the three key datasets (SKEYDDN1, SKEYDDN2 and SKEYDDN3), merges the keys and builds the index across PARTS 1-12.]

After the unloads have completed, you then submit another job to merge the keys from the Key files and build the index:
//BMCRCVR2 EXEC PGM=AFRMAIN,REGION=0M,PARM='DHN1,DMBRCVRB,NEW'
//STEPLIB  DD DSN=SYS2.DB2V81M.DSNLOAD,DISP=SHR
//         DD DSN=AFR.RUNLIB.LOAD,DISP=SHR
//SKEYDDN1 DD DSN=HLQ.SKEYDDN1,DISP=OLD
//SKEYDDN2 DD DSN=HLQ.SKEYDDN2,DISP=OLD
//SKEYDDN3 DD DSN=HLQ.SKEYDDN3,DISP=OLD
//SYSOUT   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*

RECOVER BUILDINDEX(ALL) TABLESPACE DMB16.TSB16

The merge referenced here is RECOVER PLUS code; do not confuse it with the merge from your sort package.

Optimizing REORG PLUS for DB2


Use SHRLEVEL REFERENCE or CHANGE:
- Uses more space but avoids the need to recover after a failure

Use Dynamic Allocation:
- ... but not for SORTWORK files
- Use SORTNUM 32 (the default) to get BMCSORT to allocate these

COPY YES INLINE YES:
- Set COPYLVL PART (default DOPT is FULL)
  - Also ensures the best use of multi-tasking
  - Maybe not if you're processing a large number of partitions
- Also use ICTYPE UPDATE when using Tape

Consider a Single Phase Reorg:
- UNLOAD RELOAD - especially for Online Reorgs
- Utility terminates before the UTILTERM phase anyway

This first slide on REORG PLUS focuses on all types of reorganization; a later section covers Online Reorgs in more detail. My first recommendation is to always use one of the non-disruptive techniques: in other words, try to use SHRLEVEL REFERENCE or CHANGE rather than NONE. The reason is that the operational impact of any failure is much reduced, because the shadow objects can simply be discarded, whereas with an Offline Reorg the original objects are placed into Recover Pending if the utility is terminated. The obvious downside is the increased disk space used by the shadow datasets, but you only need this for a short time while the Reorg is running, and the benefits obtained usually far outweigh this additional requirement. As with all our utilities, we recommend using Dynamic Allocation to maximize multi-tasking opportunities and to simplify the JCL. The exception to this is the SORTWORK files, which were covered in the section on BMCSORT. When using REORG you almost always want to use an Inline copy. Provided you can stand backing up to Disk (or have enough tape units available), then using Partition level copies also helps maximize multi-tasking opportunities, but you may want to reconsider this if you're processing large numbers of partitions together. One quick tip here: if you are using Tape backups during an Online Reorg, specify ICTYPE UPDATE rather than the normal AUTO. We will then append the pages updated during the Log apply to the end of the copy, rather than running an Incremental copy during the LOGFINAL phase when some objects are in a restricted state. Using a Single Phase Reorg by specifying UNLOAD RELOAD will improve performance, although it may impact restart, particularly for SHRLEVEL NONE. It also makes the use of the SYSREC and SYSUT1 datasets optional, so you can turn off Dynamic Allocation for these two dataset types if you wish to save disk space. A Single Phase Reorg should be an almost automatic choice for SHRLEVEL CHANGE as it does not affect restartability.
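Pulling these recommendations together, an Online Reorg following this advice might look like the sketch below (the object name is a placeholder, and anything already set in your DOPT module can be omitted):

REORG TABLESPACE DBSRT.TS1
      SHRLEVEL CHANGE
      UNLOAD RELOAD
      COPY YES INLINE YES COPYLVL PART
      ICTYPE UPDATE
      SORTNUM 32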

Some DOPTS to review


Recommended values are shown, with the shipped defaults in parentheses:
- COPYDDN - use 4-character names (BMCCPY,BMCCPZ)
  - Allows Dynamic Copy datasets for >999 partitions
- DELFILES=YES (NO)
- FASTSWITCH=YES (NO)
- KEEPDICTIONARY=YES (NO)
  - Provided your data is relatively static
  - May be worth rebuilding the Dictionary occasionally
- REDEFINE=NO (YES)
  - Saves CPU but may not achieve optimal allocation
- STAGEDSN=DSN (BMC)
  - Avoids unnecessary messages
- Don't change the Multi-tasking Options (names ending in -MAX)
  - Except if asked to do so by BMC Support

This slide lists some Installation Default Options for REORG PLUS which you might want to review. The first of these is COPYDDN, which usually defaults to (BMCCPY,BMCCPZ). If you have more than 99 partitions and you're using Partition level copies this will result in an invalid name, so you may want to reduce this to a 4 character string to allow for >999 parts. Don't forget the name still needs to be unique within the job when the partition number is added to it! Most of the remaining options are fairly self explanatory, so I will only cover them briefly. Using DELFILES=YES is safe - we only try to delete the files if we know they're not going to be needed again. Using this can save you a lot of manual effort afterwards. If the nature of your data does not change rapidly then using KEEPDICTIONARY=YES will save CPU. We ignore this if we're performing partition rebalancing during the Reorg, and we will also build a compression dictionary if one does not exist, even if you specify KEEPDICTIONARY=YES. If the main purpose of your Reorg is to reorganize the data rather than save space, then using REDEFINE=NO may well make sense and again saves CPU cycles, especially if there are a large number of datasets involved in the Reorg. It's unlikely to have much effect during an Online Reorg as the staging datasets are likely to be redefined anyway. For largely historical reasons we still use the old default of BMC for the STAGEDSN option. This option is ignored anyway for Online or Reference Reorgs using Fastswitch, and you can save a warning message by changing the default back to the IBM standard of DSN. As long as you're using Dynamic Allocation, REORG PLUS multi-tasking is controlled by a set of DOPTs whose names end in MAX, notably TASKMAX and SMAX. These normally work best using the defaults, so don't change them unless requested to do so by BMC Support.

Online Reorganizations

[Diagram: Online Reorg processing phases - Init, Analyze, Unload, Reload/Build/Copy, Log Apply, then Log Final & Copy Update and the Switch. The original objects (TS and IX) remain in RW status through most of the Reorg, move to DW (drain writers) and then DA (drain all) around the Switch, and access then resumes in RW against the new objects. A RID Map and the log records for the objects drive the Log Apply. (*) NPIs are copied to staging datasets for a Partial Reorg only.]

This diagram runs through the processing phases undertaken by our Online Reorganization. I will discuss tuning this process shortly, but this is an opportunity to briefly mention two other features you may wish to use. First is SIXSNAP, or Snapshot Index Copy. If you're performing partial Reorgs of partitioned objects with NPIs, we create a shadow copy of the NPI in order to avoid the need for a BUILD2 phase. If you have the appropriate hardware this can be achieved using the SIXSNAP feature, which is invoked by setting the SIXSNAP DOPT to either YES or AUTO in place of the default value of NO. See the manual for more details, as I won't have time to go into this during the session. The second point is how to send commands to the utility, either to switch it into LOGFINAL if you're using MAXRO DEFER or simply to see how the work is progressing. You can do this in two ways: by using the XBM online interface, or by sending MVS commands to the utility job itself via the Console or Automated Operations. Again, this is too complex an area for the hour we have available, so please see the REORG PLUS Reference Manual or the XBM User Guide for more details.

Providing Highest Availability


DOPT keywords provide all the control you need. Each can be overridden in the syntax using the keyword shown in brackets, along with its default:
- DRNWAIT=n, UTIL, SQL or NONE (DRAIN_WAIT,UTIL)
  - Using NONE provides the best Availability option
  - Using UTIL gives REORG the best chance of completing
- DRNRETRY=n (RETRY,10)
- DRNDELAY=n (RETRY_DELAY,3)
- DRAINTYP=WRITERS or ALL (DRAIN,WRITERS)
  - Use ALL if heavy SQL Update or long-running Read UOWs mean you're getting many -911s
  - Slightly longer outage, but can be less disruptive in the end
- DSPLOCKS=NONE, RETRY or DRNFAIL (DSPLOCKS,NONE)
  - Provides information on active URIDs after Drain failures
- MAXRO=n or DEFER (MAXRO,300)
  - 300 seconds is an awfully long time!

These are the primary keywords we provide to allow you to completely control how REORG PLUS will impact your applications during an Online Reorg. Most of the control concerns the time we take to obtain the Drains we need: at the start of the utility when we invoke XBM services, and at the end when we start LOGFINAL and subsequently the SWITCH phase. The main keywords to consider changing are DRNWAIT, DRAINTYP and MAXRO. DRNWAIT tells us how long a Drain request should wait for the access it needs before the request is cancelled and the utility waits to try again. The default for this is UTIL, which is the utility timeout from your DB2 DSNZPARMs (IRLMRWT x UTIMOUT); this is usually over 4 minutes, far too long to prevent SQL failures. If your applications are more important than the Reorg, then consider setting this to NONE, so that a Drain times out immediately if it is not successful. You could increase the Retry count and Delay to compensate, but at most customers we find this is the preferred setting. DRAINTYP is relevant at the end of the Log Apply phase, when the utility is about to go into LOGFINAL. It tells us whether to Drain just the Writers before the LOGFINAL, or all SQL. In a system with heavy update activity or long running Read UOWs, you may find it less disruptive to change this to DRAINTYP ALL. Contact me if you'd like to know more. MAXRO is the time we estimate it will take us to complete the final Log apply process; dropping below this triggers the start of LOGFINAL. You can use DEFER here and trigger the switch yourself at a convenient time, but whatever you do, 300 seconds sounds too long to me - Read UOWs will time out if you leave it there. Something below your SQL timeout figure sounds more reasonable.
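As a sketch, an availability-first Online Reorg using the syntax keywords shown in brackets above might look like this (the object name and the specific values are illustrative):

REORG TABLESPACE DBSRT.TS1 SHRLEVEL CHANGE
      DRAIN_WAIT NONE RETRY 10 RETRY_DELAY 3
      DRAIN ALL
      MAXRO 30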

Other uses for REORG PLUS


All of these are available using any SHRLEVEL.

Rebalancing Partitions:
- Provide limit keys manually using a DDLIN dataset
- Or use the new REBALANCE keyword

Archiving, Updating or Deleting rows:
- Rows can be deleted using SELECT or DELETE syntax
  - Deleted rows can be saved in the SYSARC dataset
- Rows can be updated using UPDATE syntax

Resize Datasets during the REORG:
- Change the Primary and Secondary quantities
- Reorder or use only a subset of the volumes in a STOGROUP
  - Cannot add new volumes
- REDEFINE YES can be changed to REDEFINE NO at the object level

I don't have time to go into the details of these features within the hour we have available, but if you are interested in any of them then please hunt me out during the remainder of the Conference, or feel free to send me an email afterwards - my address is on the last slide of the presentation material. The first option is to rebalance the partition boundaries of a partitioned tablespace. This works for both Table controlled and Index controlled partitions. You provide the new limits via a DD card called DDLIN, or you can ask us to rebalance up to 255 ranges of logically contiguous partitions. The next feature allows you to Delete (or archive) rows during a Reorg. You specify which ones using relatively simple SQL-like syntax contained in a SELECT or DELETE statement (depending on which is easier to code). Rows that have not been reloaded can be placed into an Archive dataset for subsequent processing, such as loading into a long term history table or an archive database. We can also update columns in a table during a REORG, again using fairly simple syntax. It's worth noting that these options do not check any referential constraints before processing the data, nor do they set the Check Pending flag, so they do need to be used with a degree of caution. However, they can be very useful. Finally, we are able to resize an object during a Reorg, as well as make a number of other changes to the way the dataset is reallocated. The simplest use of this feature is to change the primary and secondary quantities, but we can also reorder the volumes in a STOGROUP or even limit the dataset to a subset of the volumes. A Reorg that runs using REDEFINE YES can also be dynamically changed to REDEFINE NO at the object level using the same mechanism.
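As a heavily hedged sketch of the archiving idea - the table name and predicate are hypothetical, and the exact placement and spelling of the DELETE clause should be taken from the REORG PLUS Reference Manual rather than from this example:

REORG TABLESPACE DBSRT.TS1 SHRLEVEL REFERENCE
      DELETE FROM OWNER.ORDER_HIST
             WHERE ORDER_DATE < '2006-01-01'

Rows removed this way can be saved in the SYSARC dataset for later loading into an archive or history table.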

Session G06: Getting the most out of your BMC DB2 Utilities

Steve Thomas
BMC Software
Steve_Thomas@bmc.com
