IBM Tivoli Storage Manager
for AIX
Version 6.3.4
Administrator's Guide
SC23-9769-05
Note:
Before using this information and the product it supports, read the information in Notices on page 1131.
This edition applies to Version 6.3.4 of IBM Tivoli Storage Manager (product numbers 5608-E01, 5608-E02,
5608-E03), and to all subsequent releases and modifications until otherwise indicated in new editions or technical
newsletters. This edition replaces SC23-9769-04.
Copyright IBM Corporation 1993, 2013.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Preface  xv
Who should read this guide  xv
Publications  xv
Tivoli Storage Manager publications  xvi
Tivoli Storage FlashCopy Manager publications  xviii
Related hardware publications  xviii
Support information  xviii
Getting technical training  xix
Searching knowledge bases  xix
Contacting IBM Software Support  xxi
Conventions used in this guide  xxiii

New for IBM Tivoli Storage Manager Version 6.3  xxv
Server updates  xxv
New for the server in Version 6.3.4  xxv
New for the server in Version 6.3.3  xxvi
New for the server in Version 6.3.1  xxviii
New for the server in Version 6.3.0  xxviii

Part 1. Tivoli Storage Manager basics  1

Chapter 1. Tivoli Storage Manager overview  3
How client data is stored  5
Data-protection options  8
Data movement to server storage  14
Consolidation of backed-up client data  14
How the server manages storage  15
Device support  15
Data migration through the storage hierarchy  16
Removal of expired data  16

Chapter 2. Tivoli Storage Manager concepts  19
Interfaces to Tivoli Storage Manager  19
Server options  20
Storage configuration and management  20
Disk devices  21
Removable media devices  21
Defined volumes and scratch volumes  22
Migrating data from disk to tape  23
Storage pools and volumes  24
IBM PowerHA SystemMirror for AIX and Tivoli System Automation  25
Management of client operations  26
Managing client nodes  26
Managing client data with policies  29
Schedules for client operations  31
Server maintenance  34
Server-operation management  34
Server script automation  35
Database and recovery-log management  35
Sources of information about the server  36
Tivoli Storage Manager server networks  37
Exporting and importing data  38
Protecting Tivoli Storage Manager and client data  38
Protecting the server  38

Part 2. Configuring and managing storage devices  41

Chapter 3. Storage device concepts  43
Road map for key device-related task information  43
Tivoli Storage Manager storage devices  44
Tivoli Storage Manager storage objects  44
Libraries  44
Drives  47
Device class  48
Library, drive, and device-class objects  50
Storage pools and storage-pool volumes  51
Data movers  52
Paths  53
Server objects  53
Tivoli Storage Manager volumes  53
Volume inventory for an automated library  54
Device configurations  55
Devices on local area networks  55
Devices on storage area networks  55
LAN-free data movement  57
Network-attached storage  58
Mixed device types in libraries  60
Library sharing  62
Removable media mounts and dismounts  62
How Tivoli Storage Manager uses and reuses removable media  63
Required definitions for storage devices  66
Example: Mapping devices to device classes  67
Example: Mapping storage pools to device classes and devices  67
Planning for server storage  68
Server options that affect storage operations  70

Chapter 4. Magnetic disk devices  71
Requirements for disk systems  71
Comparison of random access and sequential access disk devices  73
File systems and raw logical volumes for random access storage  77
Configuring random access volumes on disk devices  78
Configuring FILE sequential volumes on disk devices  79
Varying disk volumes online or offline  80
Cache copies for files stored on disk  80
Freeing space on disk  81
Scratch FILE volumes  81
Preparing volumes for random-access storage pools  265
Preparing volumes for sequential-access storage pools  265
Updating storage pool volumes  267
Access modes for storage pool volumes  268
Storage pool hierarchies  270
Setting up a storage pool hierarchy  270
How the server groups files before storing  272
Where the server stores files  273
Example: How the server determines where to store files in a hierarchy  273
Backing up the data in a storage hierarchy  275
Staging client data from disk to tape  280
Migrating files in a storage pool hierarchy  281
Migrating disk storage pools  282
Migrating sequential-access storage pools  287
The effect of migration on copy storage pools and active-data pools  292
Caching in disk storage pools  292
How the server removes cached files  293
Effect of caching on storage pool statistics  293
Deduplicating data  293
Data deduplication overview  294
Data deduplication limitations  297
Planning guidelines for data deduplication  299
Detecting possible security attacks on the server during client-side deduplication  311
Evaluating data deduplication in a test environment  312
Managing deduplication-enabled storage pools  314
Controlling data deduplication  318
Displaying statistics about server-side data deduplication  325
Displaying statistics about client-side data deduplication  326
Querying about data deduplication in file spaces  329
Scenarios for data deduplication  330
Data deduplication and data compatibility  335
Data deduplication and disaster recovery management  336
Writing data simultaneously to primary, copy, and active-data pools  337
Guidelines for using the simultaneous-write function  338
Limitations that apply to simultaneous-write operations  339
Controlling the simultaneous-write function  341
Simultaneous-write operations: Examples  344
Planning simultaneous-write operations  358
Simultaneous-write function as part of a backup strategy: Example  362
Keeping client files together using collocation  363
The effects of collocation on operations  364
How the server selects volumes with collocation enabled  366
How the server selects volumes with collocation disabled  368
Collocation on or off settings  368
Collocation of copy storage pools and active-data pools  369
Planning for and enabling collocation  370
Reclaiming space in sequential-access storage pools  372
How Tivoli Storage Manager reclamation works  372
Reclamation thresholds  374
Reclaiming volumes with the most reclaimable space  374
Starting reclamation manually or in a schedule  375
Optimizing drive usage using multiple concurrent reclamation processes  375
Reclaiming volumes in a storage pool with one drive  376
Reducing the time to reclaim tape volumes with high capacity  377
Reclamation of write-once, read-many (WORM) media  377
Controlling reclamation of virtual volumes  378
Reclaiming copy storage pools and active-data pools  378
How collocation affects reclamation  382
Estimating space needs for storage pools  383
Estimating space requirements in random-access storage pools  383
Estimating space needs in sequential-access storage pools  385
Monitoring storage-pool and volume usage  385
Monitoring space available in a storage pool  386
Monitoring the use of storage pool volumes  388
Monitoring migration processes  396
Monitoring the use of cache space on disk storage  398
Obtaining information about the use of storage space  399
Moving data from one volume to another volume  403
Data movement within the same storage pool  404
Data movement to a different storage pool  404
Data movement from off-site volumes in copy storage pools or active-data pools  405
Moving data  405
Moving data belonging to a client node  408
Moving data in all file spaces belonging to one or more nodes  408
Moving data in selected file spaces belonging to a single node  409
Obtaining information about data-movement processes  410
Troubleshooting incomplete data-movement operations  410
Renaming storage pools  411
Defining copy storage pools and active-data pools  411
Example: Defining a copy storage pool  413
Properties of primary, copy, and active-data pools  413
Deleting storage pools  415
Deleting storage pool volumes  415
Deleting empty storage pool volumes  416
Deleting storage pool volumes that contain data  417

Part 3. Managing client operations  419
Configuring policy for NDMP operations  527
Configuring policy for LAN-free data movement  528
Policy for Tivoli Storage Manager servers as clients  530
Setting policy to enable point-in-time restore for clients  530
Distributing policy using enterprise configuration  531
Querying policy  531
Querying copy groups  532
Querying management classes  532
Querying policy sets  533
Querying policy domains  533
Deleting policy  534
Deleting copy groups  534
Deleting management classes  535
Deleting policy sets  535
Deleting policy domains  535

Chapter 14. Managing data for client nodes  537
Validating a node's data  537
Performance considerations for data validation  538
Validating a node's data during a client session  538
Encrypting data on tape  538
Choosing an encryption method  539
Changing your encryption method and hardware configuration  540
Securing sensitive client data  541
Setting up shredding  542
Ensuring that shredding is enforced  543
Creating and using client backup sets  545
Generating client backup sets on the server  546
Restoring backup sets from a backup-archive client  550
Moving backup sets to other servers  550
Managing client backup sets  551
Enabling clients to use subfile backup  554
Setting up clients to use subfile backup  555
Managing subfile backups  555
Optimizing restore operations for clients  556
Environment considerations  557
Restoring entire file systems  558
Restoring parts of file systems  559
Restoring databases for applications  560
Restoring files to a point-in-time  560
Concepts for client restore operations  560
Archiving data  563
Archive operations overview  563
Managing storage usage for archives  564

Chapter 15. Scheduling operations for client nodes  567
Prerequisites to scheduling operations  567
Scheduling a client operation  568
Defining client schedules  568
Associating client nodes with schedules  569
Starting the scheduler on the clients  569
Displaying schedule information  570
Checking the status of scheduled operations  570
Creating schedules for running command files  571
Updating the client options file to automatically generate a new password  572

Chapter 16. Managing schedules for client nodes  573
Managing IBM Tivoli Storage Manager schedules  573
Adding new schedules  573
Copying existing schedules  574
Modifying schedules  574
Deleting schedules  574
Displaying information about schedules  574
Managing node associations with schedules  575
Adding new nodes to existing schedules  575
Moving nodes from one schedule to another  576
Displaying nodes associated with schedules  576
Removing nodes from schedules  576
Managing event records  576
Displaying information about scheduled events  577
Managing event records in the server database  578
Managing the throughput of scheduled operations  579
Modifying the default scheduling mode  579
Specifying the schedule period for incremental backup operations  582
Balancing the scheduled workload for the server  582
Controlling how often client nodes contact the server  584
Specifying one-time actions for client nodes  585
Determining how long the one-time schedule remains active  586

Part 4. Maintaining the server  587

| Chapter 17. Managing servers with the Operations Center  589
| Opening the Operations Center  589
| Getting started with your tasks  590
| Viewing the Operations Center on a mobile device  591
| Administrator IDs and passwords  591
| Hub and spoke servers  592
| Adding spoke servers  593
| Restarting the initial configuration wizard  594
| Stopping and starting the web server  595

Chapter 18. Managing servers with the Administration Center  597
Using the Administration Center  597
Starting and stopping the Administration Center  600
Functions in the Administration Center supported only by command line  600
Protecting the Administration Center  603
Backing up the Administration Center  603
Restoring the Administration Center  604

Chapter 19. Managing server operations  605
Licensing IBM Tivoli Storage Manager  605
Registering licensed features  606
Monitoring licenses  607
Enterprise configuration scenario  710
Creating the default profile on a configuration manager  714
Creating and changing configuration profiles  714
Getting information about profiles  722
Subscribing to a profile  724
Refreshing configuration information  728
Managing problems with configuration refresh  728
Returning managed objects to local control  729
Setting up administrators for the servers  729
Managing problems with synchronization of profiles  730
Switching a managed server to a different configuration manager  730
Deleting subscribers from a configuration manager  731
Renaming a managed server  731
Completing tasks on multiple servers  731
Working with multiple servers by using a web interface  732
Routing commands  732
Setting up server groups  735
Querying server availability  737
Using virtual volumes to store data on another server  737
Setting up source and target servers for virtual volumes  739
Performance limitations for virtual volume operations  740
Performing operations at the source server  741
Reconciling virtual volumes and archive files  743

Chapter 23. Exporting and importing data  745
Reviewing data that can be exported and imported  745
Exporting restrictions  746
Deciding what information to export  746
Deciding when to export  747
Exporting data directly to another server  748
Options to consider before exporting  748
Preparing to export to another server for immediate import  752
Monitoring the server-to-server export process  754
Exporting administrator information to another server  754
Exporting client node information to another server  755
Exporting policy information to another server  756
Exporting server data to another server  756
Exporting and importing data using sequential media volumes  756
Using preview before exporting or importing data  756
Planning for sequential media used to export data  757
Exporting tasks  758
Importing data from sequential media volumes  761
Monitoring export and import processes  772
Exporting and importing data from virtual volumes  775

Part 5. Monitoring operations  777

Chapter 24. Daily monitoring tasks  779
Monitoring operations using the command line  780
Monitoring your server processes daily  780
Monitoring your database daily  781
Monitoring disk storage pools daily  784
Monitoring sequential access storage pools daily  785
Monitoring scheduled operations daily  788
Monitoring operations daily with Tivoli Monitoring for Tivoli Storage Manager  789
| Monitoring operations daily using the Operations Center  791

Chapter 25. Basic monitoring methods  793
Using IBM Tivoli Storage Manager queries to display information  793
Requesting information about IBM Tivoli Storage Manager definitions  793
Requesting information about client sessions  794
Requesting information about server processes  795
Requesting information about server settings  796
Querying server options  796
Querying the system  796
Using SQL to query the IBM Tivoli Storage Manager database  798
Using SELECT commands  798
Using SELECT commands in Tivoli Storage Manager scripts  801
Querying the SQL activity summary table  802
Creating output for use by another application  803
Using the Tivoli Storage Manager activity log  803
Requesting information from the activity log  804
Setting a retention period for the activity log  805
Setting a size limit for the activity log  805

| Chapter 26. Alert monitoring  807

| Chapter 27. Sending alerts by email  809

Chapter 28. Monitoring Tivoli Storage Manager accounting records  811

Chapter 29. Reporting and monitoring with Tivoli Monitoring for Tivoli Storage Manager  813
Types of information to monitor with Tivoli Enterprise Portal workspaces  815
Monitoring Tivoli Storage Manager real-time data  818
Viewing historical data and running reports  819
Cognos Business Intelligence  820
Cognos status and trend reports  820
Opening the Cognos Report Studio portal  827
Creating a custom Cognos report  828
Opening or modifying an existing Cognos report  829
Running a Cognos report  829
Scheduling Cognos reports to be emailed  829

Chapter 32. Managing Tivoli Storage Manager security  885
Securing communications  885
Setting up TLS  886
Securing the server console  896
Administrative authority and privilege classes  896
Managing Tivoli Storage Manager administrator IDs  898
Managing access to the server and clients  903
Restricting a non-root user ID from performing backups as root  904
Managing passwords and logon procedures  904
Configuring a directory server for password authentication  906

Chapter 34. Replicating client node data  963
Source and target node-replication servers  964
Replication server configurations  964
Policy management for node replication  965
Node replication processing  966
Replication rules  966
Replication state  970
Replication mode  973
Replication of deduplicated data  974
Client node attributes that are updated during replication  975
Node replication restrictions  976
Task tips for node replication  978
Change replication rules  978
Add and remove client nodes for replication  978
Manage replication servers  979
Validate a configuration and preview results  979
Manage replication processing  980
Monitor replication processing and verify results  981
Planning for node replication  981
Determining server database requirements for node replication  983
Estimating the total amount of data to be replicated  983
Estimating network bandwidth required for replication  984
Calculating the time that is required for replication  984
Selecting a method for the initial replication  985
Scheduling incremental replication after the initial replication  987
Setting up the default replication configuration  988
Step 1: Setting up server-to-server communications  990
Step 2: Specifying a target replication server  992
Step 3: Configuring client nodes for replication  992
Customizing a node replication configuration  994
Changing replication rules  994
Scenario: Converting to node replication from import and export operations  1002
Adding and removing client nodes for replication  1003
Managing source and target replication servers  1006
Verifying a node replication setup before processing  1008
Validating a replication configuration  1008
Previewing node replication results  1009
Managing data replication  1009
Replicating data by command  1010
Controlling throughput for node replication  1014
Disabling and enabling node replication  1016
Purging replicated data in a file space  1020
Replicating client node data after a database restore  1021
Canceling replication processes  1022
Monitoring node replication processing and verifying results  1022
Displaying information about node replication settings  1022
Displaying information about node replication processes  1023
Measuring the effectiveness of a replication configuration  1024
Measuring the effects of data deduplication on node replication processing  1025
Retaining replication records  1025
Recovering and storing client data after a disaster  1026
Restoring, retrieving, and recalling data from a target replication server  1026
Converting client nodes for store operations on a target replication server  1026
Removing a node replication configuration  1027

Chapter 35. Disaster recovery manager  1029
Querying defaults for the disaster recovery plan file  1030
Specifying defaults for the disaster recovery plan file  1030
Specifying defaults for offsite recovery media management  1033
Specifying recovery instructions for your site  1035
Specifying information about your server and client node machines  1036
Specifying recovery media for client machines  1039
Creating and storing the disaster recovery plan  1039
Storing the disaster recovery plan locally  1041
Storing the disaster recovery plan on a target server  1041
Managing disaster recovery plan files stored on target servers  1041
Displaying information about recovery plan files  1042
Displaying the contents of a recovery plan file  1042
Restoring a recovery plan file  1042
Expiring recovery plan files automatically  1043
Deleting recovery plan files manually  1043
Moving backup media  1044
Moving copy storage pool and active-data pool volumes offsite  1046
Moving copy storage pool and active-data pool volumes on-site  1048
Managing the Disaster Recovery Manager tasks  1049
Preparing for disaster recovery  1051
Recovering from a disaster  1053
Server recovery scenario  1054
Client recovery scenario  1057
Recovering with different hardware at the recovery site  1060
Automated SCSI library at the original and recovery sites  1060
Automated SCSI library at the original site and a manual SCSI library at the recovery site  1061
Managing copy storage pool volumes and active-data pool volumes at the recovery site  1062
Disaster recovery manager checklist  1063
The disaster recovery plan file  1068
Breaking out a disaster recovery plan file  1068
Structure of the disaster recovery plan file  1068
Example disaster recovery plan file  1071

Chapter 36. Integrating disaster recovery manager and node replication into your disaster recovery strategy  1091
| Plan for a disaster recovery strategy  1092
Tier 0: No disaster recovery capability  1093
Tier 1: Offsite vaulting from a single production site  1093
Tier 2: Offsite vaulting with a recovery site  1094
Tier 3: Electronic vaulting of critical data  1094
Tier 4: Active data management at peer sites  1095
Tier 5: Synchronous replication  1096
Preface
IBM Tivoli Storage Manager is a client/server program that provides storage
management solutions to customers in a multi-vendor computer environment. IBM
Tivoli Storage Manager provides an automated, centrally scheduled,
policy-managed backup, archive, and space-management facility for file servers
and workstations.
You should be familiar with the operating system on which the server resides and
the communication protocols required for the client/server environment. You also
need to understand the storage management practices of your organization, such
as how you are currently backing up workstation files and how you are using
storage devices.
Publications
Publications for the IBM Tivoli Storage Manager family of products are available
online. The Tivoli Storage Manager product family includes IBM Tivoli Storage
FlashCopy Manager, IBM Tivoli Storage Manager for Space Management, IBM
Tivoli Storage Manager for Databases, and several other storage management
products from IBM Tivoli.
To search all publications, search across the appropriate Tivoli Storage Manager
information center:
• Version 6.3 information center: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3
• Version 6.4 information center: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4
You can download PDF versions of publications from the Tivoli Storage Manager
information center or from the IBM Publications Center at http://www.ibm.com/
shop/publications/order/.
You can also order some related publications from the IBM Publications Center
website at http://www.ibm.com/shop/publications/order/. The website provides
information about ordering publications from countries other than the United
States. In the United States, you can order publications by calling 1-800-879-2755.
Table 5. IBM Tivoli Storage Manager troubleshooting and tuning publications (continued)

Publication title: IBM Tivoli Storage Manager for Enterprise Resource Planning: Data Protection for SAP Messages
Order number: SC27-4016
Note: You can find information about IBM System Storage Archive Manager at
the Tivoli Storage Manager v6.3.0 information center.
For additional information on hardware, see the resource library for tape products
at http://www.ibm.com/systems/storage/tape/library.html.
Support information
You can find support information for IBM products from various sources.
Go to the following websites to sign up for training, ask questions, and interact
with others who use IBM storage products.
Tivoli software training and certification
Choose from instructor-led, online classroom training, self-paced Web
classes, Tivoli certification preparation, and other training options at
http://www.ibm.com/software/tivoli/education/
Tivoli Support Technical Exchange
Technical experts share their knowledge and answer your questions in
webcasts at http://www.ibm.com/software/sysmgmt/products/support/
supp_tech_exch.html.
Storage Management community
Interact with others who use IBM storage management products at
http://www.ibm.com/developerworks/servicemanagement/sm/
index.html
Global Tivoli User Community
Share information and learn from other Tivoli users throughout the world
at http://www.tivoli-ug.org/.
IBM Education Assistant
View short "how to" recordings designed to help you use IBM software
products more effectively at http://publib.boulder.ibm.com/infocenter/
ieduasst/tivv1r0/index.jsp
You can search for information without signing in. Sign in using your IBM ID and
password if you want to customize the site based on your product usage and
information needs. If you do not already have an IBM ID and password, click Sign
in at the top of the page and follow the instructions to register.
From the support website, you can search various resources including:
• IBM technotes.
• IBM downloads.
• IBM Redbooks publications.
• IBM Authorized Program Analysis Reports (APARs). Select the product and click Downloads to search the APAR list.
If you still cannot find a solution to the problem, you can search forums and
newsgroups on the Internet for the latest information that might help you find
problem resolution.
An independent user discussion list, ADSM-L, is hosted by Marist College. You can
subscribe by sending an email to listserv@vm.marist.edu. The body of the message
must contain the following text: SUBSCRIBE ADSM-L your_first_name
your_family_name.
To share your experiences and learn from others in the Tivoli Storage Manager and
Tivoli Storage FlashCopy Manager user communities, go to Service Management
Connect (http://www.ibm.com/developerworks/servicemanagement/sm/
index.html). From there you can find links to product wikis and user communities.
To learn about which products are supported, go to the IBM Support Assistant
download web page at http://www.ibm.com/software/support/isa/
download.html.
IBM Support Assistant helps you gather support information when you must open
a problem management record (PMR), which you can then use to track the
problem. The product-specific plug-in modules provide you with the following
resources:
• Support links
• Education links
• Ability to submit problem management reports
You can find more information at the IBM Support Assistant website:
http://www.ibm.com/software/support/isa/
You can also install the stand-alone IBM Support Assistant application on any
workstation. You can then enhance the application by installing product-specific
plug-in modules for the IBM products that you use. Find add-ons for specific
products at http://www.ibm.com/support/docview.wss?uid=swg27012689.
You can determine what fixes are available by checking the IBM software support
website at http://www.ibm.com/support/entry/portal/.
• If you previously customized the site based on your product usage:
  1. Click the link for your product, or a component for which you want to find a fix.
  2. Click Downloads, and then click Fixes by version.
• If you have not customized the site based on your product usage, click Downloads and search for your product.
To obtain help from IBM Software Support, complete the following steps:
1. Ensure that you have completed the following prerequisites:
a. Set up a subscription and support contract.
b. Determine the business impact of your problem.
c. Describe your problem and gather background information.
2. Follow the instructions in Submitting the problem to IBM Software Support
on page xxii.
For IBM distributed software products (including, but not limited to, IBM Tivoli,
Lotus, and Rational products, as well as IBM DB2 and IBM WebSphere
products that run on Microsoft Windows or on operating systems such as AIX or
Linux), enroll in IBM Passport Advantage in one of the following ways:
• Online: Go to the Passport Advantage website at http://www.ibm.com/software/lotus/passportadvantage/, click How to enroll, and follow the instructions.
• By telephone: You can call 1-800-IBMSERV (1-800-426-7378) in the United States. For the telephone number to call in your country, go to the IBM Software Support Handbook web page at http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html and click Contacts.
Determining the business impact
When you report a problem to IBM, you are asked to supply a severity level.
Therefore, you must understand and assess the business impact of the problem
you are reporting.
Severity 1
    Critical business impact: You are unable to use the program, resulting in a critical impact on operations. This condition requires an immediate solution.
Severity 2
    Significant business impact: The program is usable but is severely limited.
Severity 3
    Some business impact: The program is usable with less significant features (not critical to operations) unavailable.
Severity 4
    Minimal business impact: The problem causes little impact on operations, or a reasonable circumvention to the problem has been implemented.
In the usage and descriptions for administrative commands, the term characters
corresponds to the number of bytes available to store an item. For languages in
which it takes a single byte to represent a displayable character, the character to
byte ratio is 1 to 1. However, for DBCS and other multi-byte languages, the
reference to characters refers only to the number of bytes available for the item and
may represent fewer actual characters.
New for IBM Tivoli Storage Manager Version 6.3
Many features in the Tivoli Storage Manager Version 6.3 server are new for
previous Tivoli Storage Manager users.
Server updates
New features and other changes are available in the IBM Tivoli Storage Manager
V6.3 server. Technical updates since the previous edition are marked with a vertical
bar ( | ) in the left margin.
The server that is included with the Tivoli Storage Manager and IBM Tivoli Storage
Manager Extended Edition V6.4 products is at the V6.3.4 level. The V6.3.4 server is
also available for download separately, as a fix pack for current users of V6.3.
| The V6.4.1 Operations Center includes an Overview page that shows the
| interaction of Tivoli Storage Manager servers and clients. You can use the
| Operations Center to identify potential issues at a glance, manage alerts, and
| access the Tivoli Storage Manager command line. The Administration Center
| interface is also available, but the Operations Center is the preferred monitoring
| interface.
| Related tasks:
| Chapter 17, Managing servers with the Operations Center, on page 589
| The Agent Log workspace is enhanced to display whether the monitored servers
| are up and running.
| Pruning values are now automatically configured during new installations. If you
| upgraded the application, you must manually configure the pruning settings to
| periodically remove data from the WAREHOUS database.
The server that is included with the Tivoli Storage Manager and IBM Tivoli Storage
Manager Extended Edition V6.4 products is at the V6.3.3 level. The V6.3.3 server is
also available for download separately, as a fix pack for current users of V6.3.
LDAP-authenticated passwords
IBM Tivoli Storage Manager server V6.3.3 can use an LDAP directory server to
authenticate passwords. LDAP-authenticated passwords give you an extra level of
security: they are case-sensitive, they support advanced password-rule enforcement,
and they are authenticated on a centralized server.
The two methods of authentication are LDAP and LOCAL. LOCAL means that the
password is authenticated with the Tivoli Storage Manager server.
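For example, an LDAP configuration might look similar to the following sketch. The directory server URL, distinguished names, node name, and passwords are placeholders, not recommendations; see the LDAPURL option and the SET LDAPUSER, SET LDAPPASSWORD, and REGISTER NODE commands in the Administrator's Reference for the exact syntax that applies to your directory server.
   * Sample entry in the dsmserv.opt server options file
   LDAPURL ldaps://ldapserver.example.com:636/ou=tsm,dc=example,dc=com
After the server is restarted with the option, commands like the following identify the LDAP bind user and register a node whose password is authenticated with the directory server:
   set ldapuser "cn=tsmbind,ou=tsm,dc=example,dc=com"
   set ldappassword LdApB1ndPw
   register node payroll n0deP@ssw0rd authentication=ldap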
Passwords that are authenticated with the Tivoli Storage Manager server are not
case-sensitive. All passwords can be composed of characters from the following
list:
a b c d e f g h i j k l m n o p q r s t u v w x y z
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
0 1 2 3 4 5 6 7 8 9
~ ! @ # $ % ^ & * _ - + = ` | ( ) { } [ ] : ; < > , . ? /
You can use logical block protection only with the following types of drives and
media:
• IBM LTO5 drives and later
• IBM 3592 Generation 3 drives and later, with 3592 Generation 2 media and later
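For example, a device class definition similar to the following enables logical block protection for both read and write operations. The device class and library names are examples; the LBPROTECT parameter and its supported values are described with the DEFINE DEVCLASS command in the Administrator's Reference.
   define devclass lto5class devtype=lto library=lto5lib format=drive lbprotect=readwrite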
Node replication
Node replication is the process of incrementally copying, or replicating, client node
data from one Tivoli Storage Manager server to another Tivoli Storage Manager
server for the purpose of disaster recovery.
The server from which client node data is replicated is called a source replication
server. The server to which client node data is replicated is called a target replication
server.
Node replication avoids the logistics and security exposure of physically moving
tape media to a remote location. If a disaster occurs and the source replication
server is unavailable, clients can recover their data from the target replication server.
If you use the export and import functions of Tivoli Storage Manager to store client
node data on a disaster-recovery server, you can convert the nodes to replicating
nodes. When replicating data, you can also use data deduplication to reduce
bandwidth and storage requirements.
Tivoli Storage Manager V6.3 servers can be used for node replication. However,
you can replicate data for client nodes that are at V6.3 or earlier. You can also
replicate data that was stored on a Tivoli Storage Manager V6.2 or earlier server
before you upgraded it to V6.3.
You cannot replicate nodes from a Tivoli Storage Manager V6.3.3 server to a server
that is running on an earlier level of Tivoli Storage Manager.
Related tasks:
Chapter 34, Replicating client node data, on page 963
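As a minimal sketch of the setup, assuming that server-to-server communication is already defined in both directions and that the server and node names shown are examples, the source replication server might issue commands like the following to identify the target server, enable a node for replication, and start replication:
   define server drserver serverpassword=secret hladdress=drserver.example.com lladdress=1500
   set replserver drserver
   update node payroll replstate=enabled
   replicate node payroll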
You can deploy backup-archive clients on operating systems other than Windows
from any release at V5.5 or later. These backup-archive clients can be updated to any
later version, release, modification, or fix level. You can coordinate the updates to
each backup-archive client from the Administration Center.
During restore operations, the Tivoli Storage Manager server attempts to use the
same number of data streams that you specified for the backup operation. For
example, suppose that you specify four data streams for a database backup
operation. During a restore operation, the server attempts to use four drives. If one
drive is offline and unavailable, the server uses three drives for the restore
operation.
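For example, assuming that a device class named DBBACK is defined for database backups, a command like the following requests four data streams for a full database backup; during a subsequent restore, the server attempts to use the same number of drives:
   backup db devclass=dbback type=full numstreams=4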
Updates to Tivoli Monitoring for Tivoli Storage Manager include the following
items:
• Cognos Business Intelligence V8 is an integrated business intelligence suite that is provided as part of Tivoli Common Reporting. Tivoli Common Reporting is included in the Administration Center installation when you select the Tivoli Common Reporting component. See Customizing reports with Cognos Business Intelligence, in the Monitoring operations section of the Administrator's Guide, for details. All of the information regarding client and server reports can also be found in that section.
• The installation process has been improved to include a prerequisite checker, and now performs all installation configuration tasks automatically.
• A customizable dashboard workspace has been added to display many commonly viewed items in a single view. With the default setting, the dashboard displays data about the storage space used by node; unsuccessful client and server schedules; and details about storage pools, drives, and activity log error messages.
• You can include multiple servers in a single report. Reports have been enhanced to refine the accuracy of the data being displayed.
• New Tivoli Enterprise Portal workspaces are: activity log, agent log, updates to client node status, drives, libraries, occupancy, PVU details, and replication status and details.
• New client reports are available: storage pool media details, storage summary details, replication details, replication growth, and replication summary.
• New server reports are available: activity log details, server throughput, and an updated server throughput report for data collected by agents earlier than version 6.3.
By using the new QUERY PVUESTIMATE command, you can generate reports that
estimate the number of server devices and client devices managed by the Tivoli
Storage Manager server. You can also view PVU information on a per-node basis.
These reports are not legally binding, but provide a starting point for determining
license requirements. Alternatively, you can view PVU information in the
Administration Center. The Administration Center provides summaries of client
devices, server devices, and estimated PVUs, and more detailed information.
For a detailed report, issue the SQL SELECT * FROM PVUESTIMATE_DETAILS command.
This command extracts information at the node level. This data can be exported to
a spreadsheet and modified to more accurately represent the system environment.
For more information about PVU calculations and their use for licensing purposes,
see the topic describing the role of PVUs in the Administrator's Guide.
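For example, the following commands are one way to produce the summary and the node-level detail. The administrator ID, password, and output file name are examples; the -comma and -outfile options of the dsmadmc administrative client write comma-separated output to a file that can be opened in a spreadsheet.
   query pvuestimate
   dsmadmc -id=admin -password=secret -comma -outfile=pvudetail.csv "select * from pvuestimate_details"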
Prerequisite checker
Tivoli Storage Manager Version 6.3 includes a prerequisite checker, a tool that can
be run before starting the Tivoli Storage Manager installation.
The prerequisite checker verifies requirements for the Tivoli Storage Manager
server, the Administration Center, and Tivoli Monitoring for Tivoli Storage
Manager. The prerequisite checker verifies the operating system, the amount of free
disk space, the required memory for the server, and other prerequisites. The tool
presents a summary of results, informs you about changes that are required in
your environment before installation, and creates required directories. In this way,
the prerequisite checker can help simplify the installation process.
For more information, see the section about running the prerequisite checker in the
Installation Guide.
With enhancements available in Version 6.3, you can define a library as a virtual
tape library (VTL) to Tivoli Storage Manager.
VTLs primarily use disk subsystems to internally store data. Because they do not
use tape media, you can exceed the capabilities of a physical tape library when
using VTL storage. Using a VTL, you can define many volumes and drives, which
provides greater flexibility in the storage environment and increases productivity by
allowing more simultaneous mounts and tape I/O.
On some Linux and UNIX file systems, including ext4 on Linux and JFS2 on AIX,
IBM Tivoli Storage Manager formats random-access volumes and preallocated
sequential-access disk volumes nearly instantaneously. For example, Tivoli Storage
Manager can format a FILE or DISK volume on JFS2 in less than one second.
The new method for formatting volumes on some Linux and UNIX systems is
similar to the method that Tivoli Storage Manager has been using to format
volumes on Windows operating systems.
Before using the new volume formatting method, Tivoli Storage Manager checks
the file system to determine whether the file system supports the method. If the
file system does not, Tivoli Storage Manager uses the old method.
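For example, a command like the following preallocates and formats a 10 GB random-access volume in a disk storage pool; the pool name and file path are examples. On a JFS2 file system, the formatting typically completes almost immediately.
   define volume backuppool /tsmstg/bkvol01.dsm formatsize=10240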
The Tivoli Storage Manager V6.3 server accesses client data by using a storage
device attached to z/OS. The storage device is made available by IBM Tivoli
Storage Manager for z/OS Media.
In addition, Tivoli Storage Manager for z/OS Media facilitates access to Virtual
Storage Access Method (VSAM) linear data sets on z/OS by using an enhanced
sequential FILE storage method.
For more information, see the section about migrating Tivoli Storage Manager V5
servers on z/OS systems to V6 in the Tivoli Storage Manager Upgrade and Migration
Guide for V5 Servers.
The CHECKTAPEPOS server option allows the Tivoli Storage Manager server to check
the validity and consistency of data block positions on tape.
Enhancements to this option enable a drive to check for data overwrite problems
before each WRITE operation and allow Tivoli Storage Manager to reposition tapes
to the correct location and continue to write data. Use the CHECKTAPEPOS option
with IBM LTO Generation 5 drives.
Note: You can enable append-only mode for IBM LTO Generation 5 and later
drives, and for any drives that support this feature.
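For example, a line similar to the following in the dsmserv.opt server options file enables position checking; values that restrict checking to the server or to the drive are described with the CHECKTAPEPOS option in the Administrator's Reference.
   * dsmserv.opt
   CHECKTAPEPOS yes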
In Tivoli Storage Manager Version 6.3, persistent reserve is enabled for drives and
driver levels that support the feature.
The Tivoli Storage Manager Administration Center uses Tivoli Integrated Portal for
its graphical user interface (GUI). With Tivoli Integrated Portal V2.1, you can now
monitor the Administration Center with Internet Explorer 8 and Mozilla Firefox
3.5. All browsers that you used with Tivoli Integrated Portal V1.1.1 and later can
be used with this latest version.
When you install Tivoli Integrated Portal V2.1, installing Tivoli Common Reporting,
the embedded security service, or the time scheduling service is optional. These
features can be added and registered with Tivoli Integrated Portal V2.1 at a later
time.
Related concepts:
Chapter 18, Managing servers with the Administration Center, on page 597
With enhancements to the Administration Center, you can now specify server
event-based archive settings using the Policy Domain and Management Class
wizards.
If you set an archive retention period for an object through the server, you can
update these settings using the Administration Center Management Class
notebook.
Setting an archive retention period ensures that objects are not deleted from the
Tivoli Storage Manager server until policy-based retention requirements for that
object are satisfied.
With the new client performance monitor function, you have the capability to
gather and analyze performance data about backup and restore operations for an
IBM Tivoli Storage Manager client.
The client performance monitor function is accessed from the Tivoli Storage
Manager Administration Center and uses data that is collected by the API. You can
view performance information about processor, disk, and network utilization, and
performance data that relates to data transfer rates and data compression. You can
analyze data throughput rates at any time during a backup or restore operation.
Also, you can use the performance information to analyze processor, disk, or network bottlenecks.
This feature is useful, for example, if you have a planned network outage that
might affect communication between a source and a target replication server. To
prevent replication failures, you can disable outbound sessions from the source
replication server before the outage. After communications have been reestablished,
you can resume replication by enabling outbound sessions.
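For example, commands like the following disable and later re-enable outbound server sessions on the source replication server; see the DISABLE SESSIONS and ENABLE SESSIONS commands in the Administrator's Reference for the complete syntax.
   disable sessions server direction=outbound
   enable sessions server direction=outbound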
To display help for the DEFINE DEVCLASS command for 3570 device classes, type:
help 3.13.10.1
As in previous releases, you can use this method to display help for commands
that have unique names, such as REGISTER NODE:
3.46 REGISTER
3.46.1 REGISTER ADMIN (Register an administrator)
3.46.2 REGISTER LICENSE (Register a new license)
3.46.3 REGISTER NODE (Register a node)
To display help for the REGISTER NODE command, you can type:
help 3.46.1
You can also type help commandName, where commandName is the name of the
server command for which you want information:
help register node
For Tivoli Storage Manager V6.3 and later, to use SSL with self-signed certificates,
use the SSLTLS12 option after you distribute new self-signed certificates to all V6.3
backup-archive clients. You can use certificates from previous server versions, but
you then cannot use TLS 1.2.
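As an illustration only, with port numbers and stanza details as placeholders, the server options might include an SSL port and the TLS 1.2 requirement, and a backup-archive client that uses the distributed certificates might point at that port with SSL enabled:
   * dsmserv.opt server options
   SSLTCPPORT 1543
   SSLTLS12 yes
   * dsm.sys client system-options stanza
   SSL yes
   TCPPORT 1543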
Client programs such as the backup-archive client and the HSM client (space
manager) are installed on systems that are connected through a LAN and are
registered as client nodes. From these client nodes, users can back up, archive, or
migrate files to the server.
The following sections present key concepts and information about IBM Tivoli
Storage Manager. The sections describe how Tivoli Storage Manager manages client
files based on information provided in administrator-defined policies, and manages
devices and media based on information provided in administrator-defined Tivoli
Storage Manager storage objects.
The final section gives an overview of tasks for the administrator of the server,
including options for configuring the server and how to maintain the server.
You can have multiple policies and assign the different policies as needed to
specific clients, or even to specific files. Policy assigns a location in server storage
where data is initially stored. Server storage is divided into storage pools that are
groups of storage volumes.
When you install Tivoli Storage Manager, you have a default policy that you can
use. For details about this default policy, see Reviewing the standard policy on
page 479. You can modify this policy and define additional policies.
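For example, commands like the following sketch how an additional policy might be defined and activated. The domain, policy set, management class, and storage pool names are examples, and the copy group values are placeholders rather than recommendations; see Chapter 13, Implementing policies for client data, on page 477.
   define domain engdomain
   define policyset engdomain engpolicy
   define mgmtclass engdomain engpolicy engmc
   define copygroup engdomain engpolicy engmc type=backup destination=backuppool verexists=3 retextra=60
   assign defmgmtclass engdomain engpolicy engmc
   activate policyset engdomain engpolicy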
Clients use Tivoli Storage Manager to store data for any of the following purposes:
Backup and restore
The backup process copies data from client workstations to server storage
to ensure against loss of data that is regularly changed. The server retains
versions of a file according to policy, and replaces older versions of the file
with newer versions. Policy includes the number of versions and the
retention time for versions.
A client can restore the most recent version of a file, or can restore earlier
versions.
Archive and retrieve
The archive process copies data from client workstations to server storage
for long-term storage. The process can optionally delete the archived files
from the client workstations. The server retains archive copies according to
the policy for archive retention time. A client can retrieve an archived copy
of a file.
Instant archive and rapid recovery
Instant archive is the creation of a complete set of backed-up files for a
client. The set of files is called a backup set. A backup set is created on the
server from the most recently backed-up files that are already stored in server storage.
Figure 1 on page 7 shows how policy is part of the Tivoli Storage Manager process
for storing client data.
Figure 1. How IBM Tivoli Storage Manager Controls Backup, Archive, and Migration Processes (figure labels: Backup or Archive, Migration, Database, Policy Domain, Policy Set, Management Class, Copy Group)
Files remain in server storage until they expire and expiration processing occurs, or
until they are deleted from server storage. A file expires because of criteria that are
set in policy. For example, the criteria include the number of versions allowed for a
file and the number of days that have elapsed since a file was deleted from the
client's file system. If data retention protection is activated, an archive object cannot
be inadvertently deleted.
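Expiration processing can be scheduled or started manually. For example, the following command starts expiration processing from an administrative command line and waits for it to complete:
   expire inventory wait=yes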
For information on managing the database, see Chapter 21, Managing the
database and recovery log, on page 655.
For information about storage pools and storage pool volumes, see Chapter 10,
Managing storage pools and volumes, on page 249.
For information about event-based policy, deletion hold, and data retention
protection, see Chapter 13, Implementing policies for client data, on page 477.
Data-protection options
Tivoli Storage Manager provides a variety of backup and archive operations,
allowing you to select the right protection for the situation.
Schedule the backups of client data to help enforce the data management policy
that you establish. If you schedule the backups, rather than rely on the clients to
perform the backups, the policy that you establish is followed more consistently.
See Chapter 15, Scheduling operations for client nodes, on page 567.
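For example, commands like the following define a nightly incremental backup schedule and associate two client nodes with it; the domain, schedule, and node names are examples.
   define schedule standard nightly_incr action=incremental starttime=22:00 duration=2 durunits=hours period=1 perunits=days
   define association standard nightly_incr node1,node2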
The standard backup method that Tivoli Storage Manager uses is called progressive
incremental backup. It is a unique and efficient method for backup. See Progressive
incremental backups on page 13.
Table 8 summarizes the client operations that are available. In all cases, the server
tracks the location of the backup data in its database. Policy that you set
determines how the backup data is managed.
Table 8. Summary of client operations

Progressive incremental backup
    Description: The standard method of backup used by Tivoli Storage Manager. After the first, full backup of a client system, incremental backups are done. Incremental backup by date is also available. No additional full backups of a client are required after the first backup.
    Usage: Helps ensure complete, effective, policy-based backup of data. Eliminates the need to retransmit backup data that has not been changed during successive backup operations.
    Restore options: The user can restore just the version of the file that is needed. Tivoli Storage Manager does not need to restore a base file followed by incremental backups. This means reduced time and fewer tape mounts, as well as less data transmitted over the network.
    For more information: See Incremental backup on page 494 and the Backup-Archive Clients Installation and User's Guide.

Selective backup
    Description: Backup of files that are selected by the user, regardless of whether the files have changed since the last backup.
    Usage: Allows users to protect a subset of their data independent of the normal incremental backup process. Applicable to clients on Windows systems.
    Restore options: The user can restore just the version of the file that is needed. Tivoli Storage Manager does not need to restore a base file followed by incremental backups. This means reduced time and fewer tape mounts, as well as less data transmitted over the network.
    For more information: See Selective backup on page 496 and the Backup-Archive Clients Installation and User's Guide.

Journal-based backup
    Description: Aids all types of backups (progressive incremental backup, selective backup, adaptive subfile backup) by basing the backups on a list of changed files. The list is maintained on the client by the journal engine service of IBM Tivoli Storage Manager.
    Usage: Reduces the amount of time required for backup. The files eligible for backup are known before the backup operation begins. Applicable to clients on AIX and Windows systems, except Windows 2003 64-bit IA64.
    Restore options: Journal-based backup has no effect on how files are restored; this depends on the type of backup performed.
    For more information: See the Backup-Archive Clients Installation and User's Guide.

Image backup
    Description: Full volume backup. Nondisruptive, on-line backup is possible for Windows clients by using the Tivoli Storage Manager snapshot function.
    Usage: Allows backup of an entire file system or raw volume as a single object. Can be selected by backup-archive clients on UNIX, Linux, and Windows systems.
    Restore options: The entire image is restored.
    For more information: See Policy for logical volume backups on page 525 and the Backup-Archive Clients Installation and User's Guide.

Image backup with differential backups
    Description: Full volume backup, which can be followed by subsequent differential backups.
    Usage: Used only for the image backups of NAS file servers, performed by the server using NDMP operations.
    Restore options: The full image backup plus a maximum of one differential backup are restored.
    For more information: See Chapter 9, Using NDMP for operations with NAS file servers, on page 215.

Backup using hardware snapshot capabilities
    Description: A method of backup that exploits the capabilities of IBM Enterprise Storage Server FlashCopy and EMC TimeFinder to make copies of volumes used by database servers. The Tivoli Storage FlashCopy Manager product then uses the volume copies to back up the database volumes.
    Usage: Implements high-efficiency backup and recovery of business-critical applications while virtually eliminating backup-related downtime or user disruption on the database server.
    Restore options: Details depend on the hardware.
    For more information: See the documentation for Tivoli Storage FlashCopy Manager.
Tivoli Storage Manager takes incremental backup one step further. After the initial
full backup of a client, no additional full backups are necessary because the server,
using its database, keeps track of whether files need to be backed up. Only files
that change are backed up, and those files are backed up in their entirety, so that
the server does not need to reference base versions of the files. This means savings
in resources, including the network and storage.
If you choose, you can force full backup by using the selective backup function of
a client in addition to the incremental backup function. You can also choose to use
adaptive subfile backup, in which the server stores the base file (the complete
initial backup of the file) and subsequent subfiles (the changed parts) that depend
on the base file.
You can back up client backup, archive, and space-managed data in primary
storage pools to copy storage pools. You can also copy active versions of client
backup data from primary storage pools to active-data pools. The server can
automatically access copy storage pools and active-data pools to retrieve data. See
Protecting client data on page 929.
You can also back up the server's database. The database is key to the server's
ability to track client data in server storage. See Protecting the database and
infrastructure setup files on page 918.
These backups can become part of a disaster recovery plan, created automatically
by the disaster recovery manager. See Chapter 35, Disaster recovery manager, on
page 1029.
In many configurations, the Tivoli Storage Manager client sends its data to the
server over the LAN. The server then transfers the data to a device that is attached
to the server. You can also use storage agents that are installed on client nodes to
send data over a SAN. This minimizes use of the LAN and the use of the
computing resources of both the client and the server. For details, see LAN-free
data movement on page 57.
For network-attached storage, use NDMP operations to avoid data movement over
the LAN. For details, see NDMP backup operations on page 59.
Device support
With Tivoli Storage Manager, you can use a variety of devices for server storage.
See the current list on the Tivoli Storage Manager website at http://
www.ibm.com/support/entry/portal/Overview/Software/Tivoli/
Tivoli_Storage_Manager.
Tivoli Storage Manager represents physical storage devices and media with the
following administrator-defined objects:
Library
A library is one or more drives (and possibly robotic devices) with similar
media mounting requirements.
Drive
Each drive represents a drive mechanism in a tape or optical device.
For details about device concepts, see Chapter 3, Storage device concepts, on
page 43.
For example, you have a backup policy that specifies that three versions of a file be
kept. File A is created on the client, and backed up. Over time, the user changes
file A, and three versions of the file are backed up to the server. Then the user
changes file A again. When the next incremental backup occurs, a fourth version of
file A is stored, and the oldest of the four versions is eligible for expiration.
To remove data that is eligible for expiration, a server expiration process marks
data as expired and deletes metadata for the expired data from the database. The
space occupied by the expired data is then available for new data.
You control the frequency of the expiration process by using a server option, or
you can start the expiration processing by command or scheduled command.
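For example, you might set the EXPINTERVAL server option to control how often
automatic expiration runs, or start expiration manually from an administrative
command line (an illustrative command):
expire inventory wait=yes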
Your changing storage needs and client requirements can mean ongoing
configuration changes and monitoring. The server's capabilities are described in the
following topics.
Server options
Server options let you customize the server and its operations.
Server options are in the server options file. Some options can be changed and
made active immediately by using the command, SETOPT. Most server options are
changed by editing the server options file and then halting and restarting the
server to make the changes active. See the Administrator's Reference for details
about the server options file and reference information for all server options.
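For example (the option and value are only an illustration), you can increase the
maximum number of simultaneous client sessions without restarting the server:
setopt maxsessions 50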
The server uses its storage for the data it manages for clients. The storage can be a
combination of devices.
v Disk
v Tape drives that are either manually operated or automated
v Optical drives
v Other drives that use removable media
Disk devices
Disk devices can be used with Tivoli Storage Manager for storing the database and
recovery log or client data that is backed up, archived, or migrated from client
nodes.
The server can store data on disk by using random-access volumes (device type of
DISK) or sequential-access volumes (device type of FILE).
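The following commands are an illustrative sketch of both approaches; the pool,
volume, device class, and directory names are examples only:
define volume backuppool /tsm/disk/bkvol01.dsm formatsize=2048
define devclass filedev devtype=file directory=/tsm/filevols maxcapacity=10g mountlimit=20
define stgpool filepool filedev maxscratch=100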
The Tivoli Storage Manager product allows you to exploit disk storage in ways
that other products do not. You can have multiple client nodes back up to the
same disk storage pool at the same time, and still keep the data for the different
client nodes separate. Other products also allow you to back up different systems
at the same time, but only by interleaving the data for the systems, leading to
slower restore processes.
Data can remain on disk permanently or temporarily, depending on the amount of
disk storage space that you have. Restore performance from disk can be very fast
compared to restoring from tape.
You can have the server later move the data from disk to tape; this is called
migration through the storage hierarchy. Other advantages to this later move to
tape include:
v Ability to collocate data for clients as the data is moved to tape
v Streaming operation of tape drives, leading to better tape drive performance
v More efficient use of tape drives by spreading out the times when the drives are
in use
For information about storage hierarchy and setting up storage pools on disk
devices, see:
Chapter 10, Managing storage pools and volumes, on page 249
The following topics provide an overview of how to use removable media devices
with Tivoli Storage Manager.
You must define device classes for the drives available to the Tivoli Storage
Manager server. You specify a device class when you define a storage pool so that
the storage pool is associated with drives.
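For example (the library, device class, and pool names are illustrative), you might
define an LTO device class for drives in a library named AUTOLIB, and then
define a storage pool that uses it:
define devclass ltoclass devtype=lto library=autolib format=drive
define stgpool tapepool ltoclass maxscratch=200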
For more information about defining device classes, see Defining device classes
on page 190.
See Chapter 5, Attaching devices for the server, on page 83 for more information.
You can use tapes as scratch volumes, up to the number of scratch volumes you
specified for the storage pool. Using scratch volumes allows Tivoli Storage
Manager to acquire volumes as needed. A storage pool can request available
scratch volumes up to the number specified for that storage pool.
You must define private volumes to Tivoli Storage Manager, assigning each to a
specific storage pool. However, if a storage pool contains only private volumes and
runs out of them, storage operations to that pool stop until more volumes are
defined.
All tape volumes must have standard tape labels before Tivoli Storage Manager
can use them.
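For example (the library, storage pool, and volume names are illustrative), you
might label and check in volumes as scratch volumes, or define a labeled volume
as a private volume in a storage pool:
label libvolume autolib search=yes labelsource=barcode checkin=scratch
define volume tapepool vol001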
Migration requires tape mounts. The mount messages are directed to the console
message queue and to any administrative client that has been started with either
the mount mode or console mode option. To have the server migrate data from
BACKUPPOOL to AUTOPOOL and from ARCHIVEPOOL to TAPEPOOL, issue the
following commands:
update stgpool backuppool nextstgpool=autopool
update stgpool archivepool nextstgpool=tapepool
The server can perform migration as needed, based on migration thresholds that
you set for the storage pools. Because migration from a disk to a tape storage pool
uses resources such as drives and operators, you might want to control when
migration occurs. To do so, you can use the MIGRATE STGPOOL command:
migrate stgpool backuppool
To migrate from a disk storage pool to a tape storage pool, devices must be
allocated and tapes must be mounted. For these reasons, you may want to ensure
that migration occurs at a time that is best for your situation. You can control
when migration occurs by using migration thresholds.
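For example (the threshold values are illustrative), migration from BACKUPPOOL
can start when the pool is 80% full and continue until utilization drops to 20%:
update stgpool backuppool highmig=80 lowmig=20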
See Migrating disk storage pools on page 282 and the Administrator's Reference
for more information.
The following are other examples of what you can control for a storage pool (sample commands follow the list):
Collocation
The server can keep each client's files on a minimal number of volumes
within a storage pool. Because client files are consolidated, restoring
collocated files requires fewer media mounts. However, backing up files
from different clients requires more mounts.
Reclamation
Files on sequential access volumes might expire, move, or be deleted. The
reclamation process consolidates the active, unexpired data on many
volumes onto fewer volumes. The original volumes can then be reused for
new data, making more efficient use of media.
Storage pool backup
Client backup, archive, and space-managed data in primary storage pools
can be backed up to copy storage pools for disaster recovery purposes. As
client data is written to the primary storage pools, it can also be
simultaneously written to copy storage pools.
Copy active data
The active versions of client backup data can be copied to active-data
pools. Active-data pools provide a number of benefits. For example, if the
device type associated with an active-data pool is sequential-access disk
(FILE), you can eliminate the need for disk staging pools. Restoring client
data is faster because FILE volumes are not physically mounted, and the
server does not have to position past inactive files that do not have to be
restored.
An active-data pool that uses removable media, such as tape or optical,
reduces the number of volumes for onsite and offsite storage. (Like
volumes in copy storage pools, volumes in active-data pools can be moved
offsite for protection in case of disaster.) If you vault data electronically to
a remote location, a SERVER-type active-data pool saves bandwidth by
copying and restoring only active data.
As backup client data is written to primary storage pools, the active
versions can be simultaneously written to active-data pools.
Cache When the server migrates files from disk storage pools, duplicate copies of
the files can remain in cache (disk storage) for faster retrieval. Cached files
are deleted only when space is needed. However, client backup operations
that use the disk storage pool can have poorer performance.
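As a sketch of some of these controls (the pool and device class names are
illustrative and carried over from earlier examples), you might enable collocation
and set a reclamation threshold on a tape pool, define a copy storage pool, and
back up a primary pool to it:
update stgpool tapepool collocate=node reclaim=60
define stgpool copypool ltoclass pooltype=copy maxscratch=50
backup stgpool backuppool copypool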
You manage storage volumes by defining, updating, and deleting volumes, and by
monitoring the use of server storage. You can also move files within and across
storage pools to optimize the use of server storage.
For more information about storage pools and volumes and taking advantage of
storage pool features, see Chapter 10, Managing storage pools and volumes, on
page 249.
In both failover and fallback, it appears that the Tivoli Storage Manager server has
crashed or halted and was then restarted. Any transactions that were in progress at
the time of the failover or fallback are rolled back, and all completed transactions
are still complete. Tivoli Storage Manager clients see this as a communications
failure and try to reestablish their connections.
This support is also available for Windows storage agents backing up to a Tivoli
Storage Manager Version 5.3 AIX server in a cluster LAN-free environment. For
details, see:
v Requirements for a PowerHA cluster on page 1102
v IBM Tivoli Storage Manager in a Clustered Environment (IBM Redbooks)
After you have created schedules, you manage and coordinate those schedules.
Your tasks include the following:
v Verify that the schedules ran successfully.
v Determine how long Tivoli Storage Manager retains information about schedule
results (event records) in the database.
v Balance the workload on the server so that all scheduled operations complete.
For more information about client operations, see the following sections:
v For setting up an include-exclude list for clients, see Getting users started on
page 480.
v For automating client operations, see Chapter 15, Scheduling operations for
client nodes, on page 567.
v For running the scheduler on a client system, see the user's guide for the client.
v For setting up policy domains and management classes, see Chapter 13,
Implementing policies for client data, on page 477.
For more information about these tasks, see Chapter 16, Managing schedules for
client nodes, on page 573.
The Tivoli Storage Manager server supports a variety of client nodes. You can
register the following types of clients and servers as client nodes:
v Tivoli Storage Manager backup-archive client
v Application clients that provide data protection through one of the following
products: Tivoli Storage Manager for Application Servers, Tivoli Storage
Manager for Databases, Tivoli Storage Manager for Enterprise Resource
Planning, or Tivoli Storage Manager for Mail.
v Tivoli Storage Manager for Space Management client (called space manager
client or HSM client)
v A NAS file server for which the Tivoli Storage Manager server uses NDMP for
backup and restore operations
v Tivoli Storage Manager source server (registered as a node on a target server)
When you register clients, you have choices to make about the following:
v Whether the client should compress files before sending them to the server for
backup
For more information on managing client nodes, see the Backup-Archive Clients
Installation and User's Guide.
Registration for clients can be closed or open. With closed registration, a user with
administrator authority must register all clients. With open registration, clients can
register themselves at first contact with the server. See Registering nodes with the
server on page 422.
You can ensure that only authorized administrators and client nodes are
communicating with the server by requiring passwords. Passwords can
authenticate with an LDAP directory server or the Tivoli Storage Manager server.
Most password-related commands work for both kinds of servers. The PASSEXP and
RESET PASSEXP commands do not work for passwords that authenticate with an
LDAP directory server. You can use the LDAP directory server to give more
options to your passwords, independent of the Tivoli Storage Manager server.
Whether you store your passwords on an LDAP directory server, or on the Tivoli
Storage Manager server, you can set the following requirements for passwords:
v Minimum number of characters in a password.
v Expiration time.
v A limit on the number of consecutive, invalid password attempts. When the
client exceeds the limit, Tivoli Storage Manager stops the client node from
accessing the server. The limit can be set on the Tivoli Storage Manager server,
and on the LDAP directory server.
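For example, you can set these requirements on the Tivoli Storage Manager server
with commands like the following (the values are illustrative):
set minpwlength 8
set passexp 90
set invalidpwlimit 5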
Important: The invalid password limit is for passwords that authenticate with the
Tivoli Storage Manager server and any LDAP directory servers. Invalid password
attempts can be configured on an LDAP directory server, outside of the Tivoli
Storage Manager server. But the consequence of setting the number of invalid
attempts on the LDAP directory server might pose some problems. For example,
when the REGISTER NODE command is issued, the default behavior is to name the
node administrator the same name as the node. The LDAP server does not
recognize the difference between the node NODE_Q and the administrator
NODE_Q. The node and the administrator can authenticate to the LDAP server
if they have the same password. If the node and the administrator have different
passwords, authentication fails for either the node or the administrator. If the node
or the administrator repeatedly fails to log on, the ID is locked. You can avoid
this situation by issuing the REGISTER NODE command with USERID=userid or
USERID=NONE.
You can control the authority of administrators. An organization can name a single
administrator or distribute the workload among a number of administrators and
grant them different levels of authority. For details, see Managing Tivoli Storage
Manager administrator IDs on page 898.
For better security when clients connect across a firewall, you can control whether
clients can initiate contact with the server for scheduled operations. See Managing
client nodes across a firewall on page 432 for details.
For additional ways to manage security, see Chapter 32, Managing Tivoli Storage
Manager security, on page 885.
Adding administrators
If you have installed any additional administrative clients, you should register
them and grant an authority level to each.
See Managing Tivoli Storage Manager administrator IDs on page 898 and
Chapter 12, Managing client nodes, on page 431 for more information.
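To register an administrator and grant an authority level, you might issue
commands like the following (the administrator name and password are
illustrative):
register admin sara n3wpassword
grant authority sara classes=system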
For example, register a node named MERCEDES with the password MONTANA:
register node mercedes montana userid=none
In Tivoli Storage Manager, you define policies by defining policy domains, policy
sets, management classes, and backup and archive copy groups. When you install
Tivoli Storage Manager, you have a default policy that consists of a single policy
domain named STANDARD.
The default policy provides basic backup protection for end-user workstations. To
provide different levels of service for different clients, you can add to the default
policy or create new policy. For example, because of business needs, file servers are
likely to require a policy different from policy for users' workstations. Protecting
data for applications such as Lotus Domino also may require a unique policy.
For more information about the default policy and establishing and managing new
policies, see Chapter 13, Implementing policies for client data, on page 477.
For example, it specifies that Tivoli Storage Manager retains up to two backup
versions of any file that exists on the client (see Chapter 13, Implementing policies
for client data, on page 477 for details). Two versions may be enough for most
clients. However, if some clients need the last ten versions to be kept, you can do
either of the following:
v Create a new policy domain and assign these clients to that domain (described
in this section).
v Create a new management class within the default policy domain. The
include-exclude lists for all the affected clients must now be updated.
Remember: Under the default policy, client files are stored directly to disk. You
can also define policies for storing client files directly to tape. In a copy group,
simply name a tape pool as the destination. However, if you store directly to tape,
the number of available tape drives limits the number of client nodes that can store
data at the same time.
To create a new policy, you can start by copying the policy domain, STANDARD.
This operation also copies the associated policy set, management class, and copy
groups. You then assign clients to the new domain.
1. Copy the default policy domain, STANDARD, to the new policy domain,
NEWDOMAIN:
copy domain standard newdomain
This operation copies the policy domain, and all associated policy sets,
management classes, and copy groups. Within the policy domain named
NEWDOMAIN and the policy set named STANDARD, you have:
v Management class named STANDARD
v Backup copy group named STANDARD
v Archive copy group named STANDARD
In this example, you update only the backup copy group.
2. Update the backup copy group by specifying that ten versions of backed up
files are to be kept:
update copygroup newdomain standard standard standard -
type=backup verexists=10
3. Validate and activate the STANDARD policy set in NEWDOMAIN:
validate policyset newdomain standard
activate policyset newdomain standard
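4. Assign client nodes to the new domain. For example (the node name is
illustrative), you can reassign an existing node:
update node mercedes domain=newdomain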
For more information about the default policy and establishing and managing new
policies, see Chapter 13, Implementing policies for client data, on page 477.
Scheduling also can mean better utilization of resources such as the network.
Client backups that are scheduled at times of lower usage can minimize the impact
on user operations on a network.
You can automate operations for clients by using schedules. Tivoli Storage
Manager provides a central scheduling facility. You can also use operating system
utilities or other scheduling tools to schedule Tivoli Storage Manager operations.
With Tivoli Storage Manager schedules, you can perform the operations for a client
immediately or schedule the operations to occur at regular intervals.
For a schedule to work on a particular client, the client machine must be turned
on. The client either must be running the client scheduler or must allow the client
acceptor daemon to start the scheduler when needed.
This is done with statements in an include-exclude list or, on UNIX and Linux
clients, in an include-exclude file. For example, an include-exclude file should
exclude system files that, if recovered, could corrupt the operating system. Tivoli
Storage Manager server and client directories should also be excluded. See the
appropriate Tivoli Storage Manager client user's guide for details.
You can define include-exclude statements for your installation. Users can add
these statements in their client options file (dsm.sys). You can also enter the
statements in a set of options and assign that set to client nodes when you register
or update the nodes. For details about the DEFINE CLOPTSET and DEFINE
CLIENTOPT commands, see Chapter 12, Managing client nodes, on page 431
and the Administrator's Reference.
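For example, a client options file might contain include-exclude statements like the
following (the statements and file names are illustrative):
exclude /eng/spec/*.obj
include /eng/spec/*.cpp
include /eng/spec/*.h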
Tivoli Storage Manager reads the statements from the bottom up until a match is
found. In the preceding example, no match would be found on the include
statements for the file /eng/spec/proto.obj. Tivoli Storage Manager reads the
exclude statement, finds a match, and excludes the file.
v For a file or group of files, the user can also override the default management
class:
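For example, statements like the following (the patterns and the management class
name are illustrative) bind groups of files to different management classes:
exclude /eng/spec/*.*
include /eng/spec/*.drw monthly
include /eng/spec/*.sct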
In this example,
*.sct files are bound to the default management class.
*.drw files are bound to the management class monthly.
All other files in the spec directory are excluded from backup or archive.
The following steps guide you through the tasks to schedule client backups for
three registered client nodes that are assigned to the STANDARD policy domain:
bill, mark, and mercedes.
1. Schedule an incremental backup and associate the schedule with the clients.
define schedule standard daily_incr action=incremental -
starttime=23:00
The schedule, named DAILY_INCR, is for the Tivoli Storage Manager default
policy domain, named STANDARD. The default specifies backup to the disk
storage pool BACKUPPOOL. This schedule calls for a schedule window with
the following characteristics:
v Begins on the date the schedule is defined (the default) at 11:00 p.m.
v Lasts for 1 hour (the default)
v Is repeated daily (the default)
v Stays in effect indefinitely (the default)
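To associate the three client nodes with the schedule, you might issue a command
like the following (based on the node names given above):
define association standard daily_incr bill,mark,mercedes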
2. Start the client scheduler. For the schedules to become active for a workstation,
a user must start the scheduler from the node.
dsmc schedule
To help ensure that the scheduler is running on the clients, start the client
acceptor daemon (CAD) or client acceptor service.
The include-exclude list (file on UNIX and Linux clients) on each client also
affects which files are backed up or archived by the schedule defined in the
preceding steps. For example, if a file is excluded from backup with an
EXCLUDE statement, the file is not backed up when the DAILY_INCR schedule
runs.
3. Because the DAILY_INCR schedule is to run daily, you can verify that it is
working as it should on the day after you define the schedule and associate it
with clients. If the schedule runs successfully, the status is Completed.
query event standard daily_incr begindate=today-1
Server maintenance
If you manage more than one server, you can ensure that the multiple servers are
consistently managed by using the enterprise management functions of Tivoli
Storage Manager.
You can set up one server as the configuration manager and have other servers
obtain configuration information from it.
To keep the server running well, you can perform these tasks:
v Managing server operations, such as controlling client access to the server
v Automating repetitive administrative tasks
v Monitoring and adjusting space for the database and the recovery log
v Monitoring the status of the server, server storage, and clients
Server-operation management
When managing your server operations, you can choose from a variety of
associated tasks.
Some of the more common tasks that you can perform to manage your server
operations are shown in the following list:
v Start and stop the server.
v Allow and suspend client sessions with the server.
v Query, cancel, and preempt server processes such as backing up the server
database.
v Customize server options.
See Licensing IBM Tivoli Storage Manager on page 605. For suggestions about
the day-to-day tasks required to administer the server, see Chapter 19, Managing
server operations, on page 605.
You can define schedules for the automatic processing of most administrative
commands. For example, a schedule can run the command to back up the server's
database every day.
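For example (the schedule name, device class, and times are illustrative), an
administrative schedule might run a full database backup each night:
define schedule nightly_dbb type=administrative -
cmd="backup db devclass=ltoclass type=full" active=yes starttime=21:00 -
period=1 perunits=days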
For more information about automating Tivoli Storage Manager operations, see
Chapter 20, Automating server operations, on page 633.
If you have a predefined maintenance script, you can add or subtract commands
using the maintenance script wizard. You can add, subtract, or reposition
commands if you have a custom maintenance script. Both methods can be accessed
through the same process. If you want to convert your predefined maintenance
script to a custom maintenance script, select a server with the predefined script,
click Select Action > Convert to Custom Maintenance Script.
The information about the client data, also called metadata, includes the file name,
file size, file owner, management class, copy group, and location of the file in
server storage. The server records changes made to the database (database
transactions) in its recovery log. The recovery log is used to maintain the database
in a transactionally consistent state, and to maintain consistency across server
startup operations.
For more information about the Tivoli Storage Manager database and recovery log
and about the tasks associated with them, see Chapter 21, Managing the database
and recovery log, on page 655.
You can increase the size of the database by creating new directories and adding
them to the database space.
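For example (the directory path is illustrative):
extend dbspace /tsmdb/dir04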
The Administration Center includes a health monitor, which presents a view of the
overall status of multiple servers and their storage devices. From the health
monitor, you can link to details for a server, including a summary of the results of
client schedules and a summary of the availability of storage devices. See
Chapter 18, Managing servers with the Administration Center, on page 597.
Tivoli Monitoring for Tivoli Storage Manager can also be used to monitor client
and server operations. It brings together multiple components to provide historical
reporting and real-time monitoring. Tivoli Monitoring for Tivoli Storage Manager
can help you determine if there are any issues that require attention. You can
monitor server status, database size, agent status, client node status, scheduled
events, server IDs, and so on, using the workspaces within the Tivoli Enterprise
Portal. See Chapter 29, Reporting and monitoring with Tivoli Monitoring for
Tivoli Storage Manager, on page 813.
You can use Tivoli Storage Manager queries and SQL queries to get information
about the server. You can also set up automatic logging of information about Tivoli
Storage Manager clients and server events. Daily checks of some indicators are
suggested.
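For example (illustrative queries), you can check the state of the database and
summarize the amount of stored data by client node:
query db format=detailed
select node_name, sum(logical_mb) from occupancy group by node_name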
See the following sections for more information about these tasks:
v Part 5, Monitoring operations, on page 777
v Using SQL to query the IBM Tivoli Storage Manager database on page 798
v Chapter 31, Logging IBM Tivoli Storage Manager events to receivers, on page
861
v Chapter 24, Daily monitoring tasks, on page 779
When you have a network of Tivoli Storage Manager servers, you can simplify
configuration and management of the servers by using enterprise administration
functions. You can do the following:
v Designate one server as a configuration manager that distributes configuration
information such as policy to other servers. See Setting up enterprise
configurations on page 709.
v Route commands to multiple servers while logged on to one server. See
Routing commands on page 732.
v Log events such as error messages to one server. This allows you to monitor
many servers and clients from a single server. See Enterprise event logging:
logging events to another server on page 875.
v Store data for one Tivoli Storage Manager server in the storage of another Tivoli
Storage Manager server. The storage is called server-to-server virtual volumes.
See Using virtual volumes to store data on another server on page 737 for
details.
v Share an automated library among Tivoli Storage Manager servers. See Devices
on storage area networks on page 55.
v Store a recovery plan file for one server on another server, when using disaster
recovery manager. You can also back up the server database and storage pools to
another server. See Chapter 35, Disaster recovery manager, on page 1029 for
details.
v Back up the server database and storage pools to another server. See Using
virtual volumes to store data on another server on page 737 for details.
v To simplify password management, have client nodes and administrators
authenticate their passwords on multiple servers using an LDAP directory
server. See Managing passwords and logon procedures on page 904.
For example, you may need to balance workload among servers by moving client
nodes from one server to another. The following methods are available:
v You can export part or all of a server's data to sequential media, such as tape or
a file on hard disk. You can then take the media to another server and import
the data to that server.
v You can export part or all of a server's data and import the data directly to
another server, if server-to-server communications are set up.
For more information about moving data between servers, see Chapter 23,
Exporting and importing data, on page 745.
Attention: If the database is unusable, the entire Tivoli Storage Manager server is
unavailable. If a database is lost and cannot be recovered, it might be difficult or
impossible to recover data that is managed by that server. Therefore, it is critically
important to back up the database. However, even without the database, fragments
of data or complete files might easily be read from storage pool volumes that are
not encrypted. Even if data is not completely recovered, security can be
compromised. For this reason, always encrypt sensitive data by using the Tivoli
Storage Manager client or the storage device, unless the storage media is physically
secured. See Part 6, Protecting the server, on page 883 for steps that you can take
to protect your database.
IBM Tivoli Storage Manager provides a number of ways to protect your data,
including backing up your storage pools and database. For example, you can
define schedules so that the following operations occur:
v After the initial full backup of your storage pools, incremental storage pool
backups are done nightly.
v Full database backups are done weekly.
v Incremental database backups are done nightly.
You can also create a maintenance script to perform database and storage pool
backups through the Server Maintenance work item in the Administration Center.
See Chapter 18, Managing servers with the Administration Center, on page 597
for details.
In addition to taking these actions, you can prepare a disaster recovery plan to
guide you through the recovery process by using the disaster recovery manager,
which is available with Tivoli Storage Manager Extended Edition. The disaster
recovery manager (DRM) assists you in the automatic preparation of a disaster
recovery plan. You can use the disaster recovery plan as a guide for disaster
recovery as well as for audit purposes to certify the recoverability of the Tivoli
Storage Manager server.
The disaster recovery methods of DRM are based on taking the following
measures:
v Sending server backup volumes offsite or to another Tivoli Storage Manager
server
v Creating the disaster recovery plan file for the Tivoli Storage Manager server
v Storing client machine information
v Defining and tracking client recovery media
For more information about protecting your server and for details about recovering
from a disaster, see Chapter 33, Protecting and recovering the server infrastructure
and client data, on page 917.
The examples in these topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see the
Administrator's Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
Use the following table to identify key tasks and the topics that describe how to
perform those tasks.
Task: Configure and manage magnetic disk devices, which Tivoli Storage Manager uses to store client data, the database, database backups, recovery log, and export data.
Topic: Chapter 4, Magnetic disk devices, on page 71

Task: Physically attach storage devices to your system. Install and configure the required device drivers.
Topic: Chapter 5, Attaching devices for the server, on page 83

Task: Configure devices to use with Tivoli Storage Manager, using detailed scenarios of representative device configurations.
Topic: Chapter 6, Configuring storage devices, on page 95

Task: Plan, configure, and manage an environment for NDMP operations.
Topic: Chapter 9, Using NDMP for operations with NAS file servers, on page 215

Task: Perform routine operations such as labeling volumes, checking volumes into automated libraries, and maintaining storage volumes and devices.
Topic: Chapter 7, Managing removable media operations, on page 145

Task: Define and manage device classes.
Topic: Defining device classes on page 190
For a summary of supported devices, see Table 9 on page 66. For details and
updates, see the Tivoli Storage Manager device support Web site:
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
Libraries
A physical library is a collection of one or more drives that share similar
media-mounting requirements. That is, media can be mounted in the drives either
by an operator or by an automated mounting mechanism.
A library object definition specifies the library type, for example, SCSI or 349X, and
other characteristics associated with the library type, for example, the category
numbers that an IBM TotalStorage 3494 Tape Library uses for private volumes,
scratch volumes, and scratch write-once, read-many (WORM) volumes.
Manual libraries
In manual libraries, operators mount the volumes in response to mount-request
messages issued by the server.
The server sends these messages to the server console and to administrative clients
that were started by using the special MOUNTMODE or CONSOLEMODE parameter.
You can also use manual libraries as logical entities for sharing sequential-access
disk (FILE) volumes with other servers.
You cannot combine drives of different types or formats, such as Digital Linear
Tape (DLT) and 8MM, in a single manual library. Instead, you must create a
separate manual library for each device type.
The drives in a SCSI library can be of different types. A SCSI library can contain
drives of mixed technologies, for example LTO Ultrium and DLT drives. Some
examples of this library type are:
v The Oracle StorageTek L700 library
v The IBM 3590 tape device, with its Automatic Cartridge Facility (ACF)
Remember: Although it has a SCSI interface, the IBM 3494 Tape Library
Dataserver is defined as a 349X library type.
Using a VTL, you can create variable numbers of drives and volumes because they
are only logical entities within the VTL. The ability to create more drives and
volumes increases the capability for parallelism, giving you more simultaneous
mounts and tape I/O.
VTLs use SCSI and Fibre Channel interfaces to interact with applications. Because
VTLs emulate tape drives, libraries, and volumes, an application such as Tivoli
Storage Manager cannot distinguish a VTL from real tape hardware unless the
library is identified as a VTL.
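For example (the library name is illustrative), you can identify a library as a VTL
when you define it:
define library vtl1 libtype=vtl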
For information about configuring a VTL library, see Managing virtual tape
libraries on page 105.
349X libraries
A 349X library is a collection of drives in an IBM 3494. Volume mounts and
demounts are handled automatically by the library. A 349X library has one or more
library management control points (LMCP) that the server uses to mount and
dismount volumes in a drive. Each LMCP provides an independent interface to the
robot mechanism in the library.
The drives in a 3494 library must be of one type only (either IBM 3490, 3590, or
3592).
The external media manager selects the appropriate drive for media-access
operations. You do not define the drives, check in media, or label the volumes in
an external library.
An external library allows flexibility in grouping drives into libraries and storage
pools. The library can have one drive, a collection of drives, or even a part of an
automated library.
For a definition of the interface that Tivoli Storage Manager provides to the
external media management system, see Appendix B, External media management
interface description, on page 1111.
Zosmedia libraries
A zosmedia library represents a tape or disk storage resource that is attached with
a Fibre Channel connection (FICON) and is managed by Tivoli Storage Manager
for z/OS Media.
A zosmedia library does not require drive definitions. Paths are defined for the
Tivoli Storage Manager server and any storage agents that need access to the
zosmedia library resource.
For information about configuring a zosmedia library, see Configuring the Tivoli
Storage Manager server to use z/OS media server storage on page 136.
Drives
A drive object represents a drive mechanism within a library that uses removable
media. For devices with multiple drives, including automated libraries, you must
define each drive separately and associate it with a library.
Drive definitions can include such information as the element address for drives in
SCSI or virtual tape libraries (VTLs), how often a tape drive is cleaned, and
whether the drive is online.
Tivoli Storage Manager drives include tape and optical drives that can stand alone
or that can be part of an automated library. Supported removable media drives
also include removable file devices such as rewritable CDs.
A device class for a tape or optical drive must also specify a library.
Disk devices
Using Tivoli Storage Manager, you can define random-access disk (DISK device
type) volumes using a single command. You can also use space triggers to
automatically create preassigned private volumes when predetermined
space-utilization thresholds are exceeded.
Removable media
Tivoli Storage Manager provides a set of specified removable-media device types,
such as 8MM for 8 mm tape devices, or REMOVABLEFILE for Jaz or DVD-RAM
drives.
The GENERICTAPE device type is provided to support certain devices that are not
supported by the Tivoli Storage Manager server.
For more information about supported removable media device types, see
Defining device classes on page 190 and the Administrator's Reference.
FILE volumes are a convenient way to use sequential-access disk storage for the
following reasons:
v You do not need to explicitly define scratch volumes. The server can
automatically acquire and define scratch FILE volumes as needed.
v You can create and format FILE volumes using a single command. The
advantage of private FILE volumes is that they can reduce disk fragmentation
and maintenance overhead.
v Using a single device class definition that specifies two or more directories, you
can create large, FILE-type storage pools. Volumes are created in the directories
you specify in the device class definition. For optimal performance, volumes
should be associated with file systems.
v When predetermined space-utilization thresholds have been exceeded, space
trigger functionality can automatically allocate space for private volumes in
FILE-type storage pools.
Unless sharing with storage agents is specified, the FILE device type does not
require you to define library or drive objects. The only required object is a device
class.
The Centera storage device can also be configured with the Tivoli Storage Manager
server to form a specialized storage system that protects you from inadvertent
deletion of mission-critical data such as e-mails, trade settlements, legal documents,
and so on.
The CENTERA device class creates logical sequential volumes for use with Centera
storage pools. These volumes share many of the same characteristics as FILE type
volumes. With the CENTERA device type, you are not required to define library or
drive objects. CENTERA volumes are created as needed and end in the suffix
"CNT."
Multiple client retrieve sessions, restore sessions, or server processes can read a
volume concurrently in a storage pool that is associated with the CENTERA device
type. In addition, one client session or one server process can write to the volume.
The following server processes can share read access to Centera volumes:
v EXPORT NODE
v EXPORT SERVER
v GENERATE BACKUPSET
The following server processes cannot share read access to Centera volumes:
v AUDIT VOLUME
v DELETE VOLUME
For more information about the Centera device class, see Defining device classes
for CENTERA devices on page 208. For details about Centera-related commands,
refer to the Administrator's Reference.
Figure 2. Removable media devices are represented by a library, drive, and device class
You can control the characteristics of storage pools, such as whether scratch
volumes are used.
Figure 3 shows storage pool volumes grouped into a storage pool. Each storage
pool represents only one type of media. For example, a storage pool for 8-mm
devices represents collections of only 8-mm tapes.
One or more device classes are associated with one library, which can contain
multiple drives. When you define a storage pool, you associate the pool with a
device class. Volumes are associated with pools. Figure 4 shows these relationships.
For information about defining storage pool and volume objects, see Chapter 10,
Managing storage pools and volumes, on page 249.
Data movers
Data movers are devices that accept requests from Tivoli Storage Manager to
transfer data on behalf of the server. Data movers transfer data between storage
devices without using significant server, client, or network resources.
For NDMP operations, data movers are NAS file servers. The definition for a NAS
data mover contains the network address, authorization, and data formats required
for NDMP operations. A data mover enables communication and ensures authority
for NDMP operations between the Tivoli Storage Manager server and the NAS file
server.
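For example (the node name, addresses, credentials, and data format are
illustrative; the NAS file server must already be registered as a node of type NAS),
a data mover definition might look like this:
define datamover nas1 type=nas hladdress=nas1.example.com lladdress=10000 -
userid=ndmpadmin password=secret dataformat=netappdump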
Paths
Paths allow access to drives, disks, and libraries. A path definition specifies a
source and a destination. The source accesses the destination, but data can flow in
either direction between the source and destination.
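For example (the server, drive, library, and device names are illustrative), a path
from the server to a tape drive might be defined as follows:
define path server1 drive1 srctype=server desttype=drive library=autolib device=/dev/rmt0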
Server objects
Server objects are defined to use a library that is on a SAN and that is managed by
another Tivoli Storage Manager server, to use LAN-free data movement, or to store
data in virtual volumes on a remote server.
Among other characteristics, you must specify the server TCP/IP address.
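For example (the server name, address, and password are illustrative):
define server srvb hladdress=srvb.example.com lladdress=1500 serverpassword=secret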
For each storage pool, you must decide whether to use scratch volumes. If you do
not use scratch volumes, you must define private volumes, or you can use
space-triggers if the volume is assigned to a storage pool with a FILE device type.
Tivoli Storage Manager keeps an inventory of volumes in each automated library it
manages and tracks whether the volumes are in scratch or private status. When a
volume mount is requested, Tivoli Storage Manager selects a scratch volume only
if scratch volumes are allowed in the storage pool. The server can choose any
scratch volume that has been checked into the library.
You do not need to allocate volumes to different storage pools associated with the
same automated library. Each storage pool associated with the library can
dynamically acquire volumes from the library's inventory of scratch volumes. Even
if only one storage pool is associated with a library, you do not need to explicitly
define all the volumes for the storage pool. The server automatically adds volumes
to and deletes volumes from the storage pool.
This inventory is not necessarily identical to the list of volumes in the storage
pools associated with the library. For example:
v A volume can be checked into the library but not be in a storage pool (a scratch
volume, a database backup volume, or a backup set volume).
v A volume can be defined to a storage pool associated with the library (a private
volume), but not checked into the library.
Device configurations
You can configure devices on a local area network, on a storage area network, for
LAN-free data movement, and as network-attached storage. Tivoli Storage
Manager provides methods for configuring storage devices.
For information about supported devices and Fibre Channel hardware and
configurations, see http://www.ibm.com/support/entry/portal/Overview/
Software/Tivoli/Tivoli_Storage_Manager
In a SAN you can share tape drives, optical drives, and libraries that are supported
by the Tivoli Storage Manager server, including most SCSI devices.
This does not include devices that use the GENERICTAPE device type.
For device driver setup information, see Chapter 5, Attaching devices for the
server, on page 83.
Figure 5. Library sharing in a storage area network (SAN) configuration. The servers
communicate over the LAN. The library manager controls the library over the SAN. The
library client stores data to the library devices over the SAN.
When Tivoli Storage Manager servers share a library, one server, the library
manager, controls device operations. These operations include mount, dismount,
volume ownership, and library inventory. Other Tivoli Storage Manager servers,
library clients, use server-to-server communications to contact the library manager
and request device service. Data moves over the SAN between each server and the
storage device.
Tivoli Storage Manager servers use the following features when sharing an
automated library:
Partitioning of the Volume Inventory
The inventory of media volumes in the shared library is partitioned among
servers. Either one server owns a particular volume, or the volume is in
the global scratch pool. No server owns the scratch pool at any given time.
Serialized Drive Access
Only one server accesses each tape drive at a time. Drive access is
serialized and controlled so that servers do not dismount other servers'
volumes or write to drives where other servers mount their volumes.
Serialized Mount Access
The library autochanger performs a single mount or dismount operation at
a time. A single server (library manager) performs all mount operations to
provide this serialization.
Figure 6. LAN-free data movement. Client and server communicate over the LAN. The
server controls the device on the SAN. Client data moves over the SAN to the device.
LAN-free data movement requires the installation of a storage agent on the client
machine. The server maintains the database and recovery log, and acts as the
library manager to control device operations. The storage agent on the client
handles the data transfer to the device on the SAN. This implementation frees up
bandwidth on the LAN that would otherwise be used for client data movement.
The following outlines a typical backup scenario for a client that uses LAN-free
data movement:
1. The client begins a backup operation. The client and the server exchange policy
information over the LAN to determine the destination of the backed up data.
For a client using LAN-free data movement, the destination is a storage pool
that uses a device on the SAN.
2. Because the destination is on the SAN, the client contacts the storage agent,
which will handle the data transfer. The storage agent sends a request for a
volume mount to the server.
3. The server contacts the storage device and, in the case of a tape library, mounts
the appropriate media.
4. The server notifies the client of the location of the mounted media.
5. The client, through the storage agent, writes the backup data directly to the
device over the SAN.
6. The storage agent sends file attribute information to the server, and the server
stores the information in its database.
Remember:
v Centera storage devices and optical devices cannot be targets for LAN-free
operations.
v For the latest information about clients that support the feature, see the IBM
Tivoli Storage Manager support page at http://www.ibm.com/support/entry/
portal/Overview/Software/Tivoli/Tivoli_Storage_Manager.
Network-attached storage
Network-attached storage (NAS) file servers are dedicated storage machines whose
operating systems are optimized for file-serving functions. NAS file servers
typically do not run software acquired from another vendor. Instead, they interact
with programs like Tivoli Storage Manager through industry-standard network
protocols, such as network data management protocol (NDMP).
Tivoli Storage Manager provides two basic types of configurations that use NDMP
for backing up and managing NAS file servers. In one type of configuration, Tivoli
Storage Manager uses NDMP to back up a NAS file server to a library device
directly attached to the NAS file server. (See Figure 7.) The NAS file server, which
can be distant from the Tivoli Storage Manager server, transfers backup data
directly to a drive in a SCSI-attached tape library. Data is stored in special,
NDMP-formatted storage pools, which can be backed up to storage media that can
be moved offsite for protection in case of an on-site disaster.
Figure 7. NDMP backup of a NAS file server to a tape library attached directly to the NAS file server
Figure 8. NDMP backup in which the NAS file server sends data to storage attached to the Tivoli Storage Manager server
Note:
v A Centera storage device cannot be a target for NDMP operations.
v Support for filer-to-server data transfer is only available for NAS devices that
support NDMP version 4.
v For a comparison of NAS backup methods, including using a backup-archive
client to back up a NAS file server, see Determining the location of NAS
backup on page 224.
The image backups are different from traditional Tivoli Storage Manager backups
because the NAS file server transfers the data to the drives in the library or
directly to the Tivoli Storage Manager server. NAS file system image backups can
be either full or differential image backups. The first backup of a file system on a
NAS file server is always a full image backup. By default, subsequent backups are
differential image backups containing only data that has changed in the file system
since the last full image backup. If a full image backup does not already exist, a
full image backup is performed.
Using the Web backup-archive client, users can then browse the TOC and select
the files that they want to restore. If you do not create a TOC, users must be able
to specify the name of the backup image that contains the file to be restored and
the fully qualified name of the file.
By defining virtual file spaces, a file system backup can be partitioned among
several NDMP backup operations and multiple tape drives. You can also use
different backup schedules to back up sub-trees of a file system.
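For example (the NAS node name, virtual file space name, file system, and path
are illustrative), you can map a directory to a virtual file space:
define virtualfsmapping nas1 /eng_dir /vol/vol1 /eng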
The virtual file space name cannot be identical to any file system on the NAS
node. If a file system is created on the NAS device with the same name as a virtual
file system, a name conflict will occur on the Tivoli Storage Manager server when
the new file space is backed up. See the Administrator's Reference for more
information about virtual file space mapping commands.
Remember: Virtual file space mappings are only supported for NAS nodes.
Libraries with this capability are those models supplied from the manufacturer
already containing mixed drives, or capable of supporting the addition of mixed
drives. Check with the manufacturer, and also check the Tivoli Storage Manager
Web site for specific libraries that have been tested on Tivoli Storage Manager with
mixed device types.
For example, you can have Quantum SuperDLT drives, LTO Ultrium drives, and
StorageTek 9940 drives in a single library defined to the Tivoli Storage Manager
server. For examples of how to set this up, see:
Configuration with multiple drive device types on page 99
Configuring a 3494 library with multiple drive device types on page 112
If the new drive technology cannot write to media formatted by older generation
drives, the older media must be marked read-only to avoid problems for server
operations. Also, the older drives must be removed from the library. Some
examples of combinations that the Tivoli Storage Manager server does not support
in a single library are:
v SDLT 220 drives with SDLT 320 drives
v DLT 7000 drives with DLT 8000 drives
v StorageTek 9940A drives with 9940B drives
v UDO1 drives with UDO2 drives
There are exceptions to the rule against mixing generations of LTO Ultrium drives
and media. The Tivoli Storage Manager server does support mixtures of the
following types:
v LTO Ultrium Generation 1 (LTO1) and LTO Ultrium Generation 2 (LTO2)
v LTO Ultrium Generation 2 (LTO2) with LTO Ultrium Generation 3 (LTO3)
v LTO Ultrium Generation 3 (LTO3) with LTO Ultrium Generation 4 (LTO4)
v LTO Ultrium Generation 4 (LTO4) with LTO Ultrium Generation 5 (LTO5)
v LTO Ultrium Generation 5 (LTO5) with LTO Ultrium Generation 6 (LTO6)
The server supports these mixtures because the different drives can read and write
to the different media. If you plan to upgrade all drives to Generation 2 (or
Generation 3, Generation 4, or Generation 5), first delete all existing Ultrium drive
definitions and the paths associated with them. Then you can define the new
Generation 2 (or Generation 3, Generation 4, or Generation 5) drives and paths.
Note:
1. LTO Ultrium Generation 3 drives can read, but cannot write to, Generation 1
media. If you are mixing Ultrium Generation 1 with Ultrium Generation 3 drives
and media in a single library, you must mark the Generation 1 media as read-only,
and all Generation 1 scratch volumes must be checked out.
2. LTO Ultrium Generation 4 drives can read, but cannot write to, Generation 2
media. If you are mixing Ultrium Generation 2 with Ultrium Generation 4 drives
and media in a single library, you must mark the Generation 2 media as read-only,
and all Generation 2 scratch volumes must be checked out.
3. LTO Ultrium Generation 5 drives can read, but cannot write to, Generation 3
media. If you are mixing Ultrium Generation 3 with Ultrium Generation 5 drives
and media in a single library, you must mark the Generation 3 media as read-only,
and all Generation 3 scratch volumes must be checked out.
4. LTO Ultrium Generation 6 drives can read, but cannot write to, Generation 4
media. If you are mixing Ultrium Generation 4 with Ultrium Generation 6 drives
and media in a single library, you must mark the Generation 4 media as read-only,
and all Generation 4 scratch volumes must be checked out.
If you plan to encrypt volumes in a library, do not mix media generations in the
library.
This includes LTO formats. Multiple storage pools and their device classes of
different types can point to the same library, if the library can support them, as
explained in Different media generations in a library on page 61.
You can migrate to a new generation of a media type within the same storage pool
by following these steps:
1. Replace all older drives within the library with the newer generation drives
(the generations cannot be mixed).
2. Mark the existing volumes with the older formats read-only if the new drives
cannot append to those tapes in the old format (see the sketch after these steps).
If the new drives can write to the existing media in their old format, this step is
not necessary, but step 1 is still required. If you must keep drive generations
that are read-compatible but not write-compatible within the same library, use
separate storage pools for each generation.
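The following sketch shows one way to do step 2 for an entire pool, assuming the
older-format volumes all belong to a storage pool named LTOPOOL (the pool name
is an illustrative assumption):
update volume * access=readonly wherestgpool=ltopool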
Library sharing
Library sharing, or tape resource sharing, allows multiple Tivoli Storage Manager
servers to use the same tape library and drives on a storage area network (SAN),
improving backup and recovery performance and tape hardware asset utilization.
When Tivoli Storage Manager servers share a library, one server is set up as the
library manager and controls library operations such as mount and dismount. The
library manager also controls volume ownership and the library inventory. Other
servers are set up as library clients and use server-to-server communications to
contact the library manager and request resources.
Library clients must be at the same or a lower version than the library manager
server. A library manager cannot support library clients that are at a higher
version. For example, a version 6.2 library manager can support a version 6.1
library client but cannot support a version 6.3 library client.
When data is to be stored in or retrieved from a storage pool, the server does the
following:
1. The server selects a volume from the storage pool. The selection is based on the
type of operation:
Tivoli Storage Manager manages the data on the media, but you manage the media
itself, or you can use a removable media manager. Regardless of the method used,
managing media involves creating a policy to expire data after a certain period of
time or under certain conditions, moving valid data onto new media, and reusing
the empty media.
Figure: Ongoing tape processing. You label and check in media (tape inventory),
the server selects a tape, data expires or moves, and the tape is reclaimed.
1. You label and check in the media. Checking media into a manual library
simply means storing them (for example, on shelves). Checking media into an
automated library involves adding them to the library volume inventory.
See Labeling removable media volumes on page 146.
2. If you plan to define volumes to a storage pool associated with a device, you
should check in the volume with its status specified as private. Use of scratch
volumes is more convenient in most cases.
3. A client sends data to the server for backup, archive, or space management.
The server stores the client data on the volume. Which volume the server
selects depends on:
v The policy domain to which the client is assigned.
v The management class for the data (either the default management class for
the policy set, or the class specified by the client in the client's
include/exclude list or file).
v The storage pool specified as the destination in either the management class
(for space-managed data) or copy group (for backup or archive data). The
storage pool is associated with a device class, which determines which
device and which type of media is used.
Table 9 summarizes the definitions that are required for different device types.
Table 9. Required definitions for storage devices

                                                Required Definitions
Device            Device Types                  Library    Drive   Path   Device Class
Magnetic disk     DISK                          --         --      --     Yes (see note)
                  FILE                          See note   --      --     Yes
                  CENTERA                       --         --      --     Yes
Tape              3590, 3592, 4MM, 8MM,         Yes        Yes     Yes    Yes
                  DLT, LTO, NAS, QIC,
                  VOLSAFE, 3570, DTF,
                  GENERICTAPE,
                  CARTRIDGE (see note),
                  ECARTRIDGE (see note)
Optical           OPTICAL, WORM                 Yes        Yes     Yes    Yes
Removable media   REMOVABLEFILE                 Yes        Yes     Yes    Yes
(file system)

Notes:
v The DISK device class exists at installation and cannot be changed.
v FILE libraries, drives, and paths are required for sharing with storage agents.
v Support for the CARTRIDGE device type: IBM 3480, 3490, and 3490E tape drives.
v The ECARTRIDGE device type is for StorageTek's cartridge tape drives such as
the SD-3, 9480, 9890, and 9940 drives.
To map storage devices to device classes, use the information shown in Table 10.
Table 10. Mapping storage devices to device classes
Device Class Description
DISK Storage volumes that reside on the internal disk drive
You must define any device classes that you need for your removable media
devices such as tape drives. See Defining device classes on page 190 for
information on defining device classes to support your physical storage
environment.
For example, you determine that users in the business department have three
requirements:
v Immediate access to certain backed-up files, such as accounts receivable and
payroll accounts.
To match user requirements to storage devices, you define storage pools, device
classes, and, for device types that require them, libraries and drives. For example,
to set up the storage hierarchy so that data migrates from the BACKUPPOOL to 8
mm tapes, you specify BACKTAPE1 as the next storage pool for BACKUPPOOL.
See Table 11.
Table 11. Mapping storage pools to device classes, libraries, and drives

Storage Pool   Device Class   Library (Hardware)     Drives     Volume Type         Storage Destination
BACKUPPOOL     DISK           --                     --         Storage volumes     For a backup copy group for
                                                                 on the internal     files requiring immediate
                                                                 disk drive          access
BACKTAPE1      8MM_CLASS      AUTO_8MM               DRIVE01,   8-mm tapes          For overflow from the
                              (Exabyte EXB-210)      DRIVE02                         BACKUPPOOL and for
                                                                                     archived data that is
                                                                                     periodically accessed
BACKTAPE2      DLT_CLASS      MANUAL_LIB             DRIVE03    DLT tapes           For backup copy groups for
                              (Manually mounted)                                     files that are occasionally
                                                                                     accessed
Note: Tivoli Storage Manager has the following default disk storage pools:
v BACKUPPOOL
v ARCHIVEPOOL
v SPACEMGPOOL
For more information, see
Configuring random access volumes on disk devices on page 78
Tip: For sequential access devices, you can categorize the type of removable
media based on their capacity.
For example, standard length cartridge tapes and longer length cartridge tapes
require different device classes.
8. Determine how the mounting of volumes is accomplished for the devices:
v Devices that require operators to load volumes must be part of a defined
MANUAL library.
v Devices that are automatically loaded must be part of a defined SCSI, 349X,
or VTL library. Each automated library device is a separate library.
v Devices that are controlled by Oracle StorageTek Automated Cartridge
System Library Software (ACSLS) must be part of a defined ACSLS library.
v Devices that are managed by an external media management system must
be part of a defined EXTERNAL library.
9. If you are considering storing data for one Tivoli Storage Manager server by
using the storage of another Tivoli Storage Manager server, consider network
bandwidth and network traffic. If your network resources constrain your
environment, you might have problems with using the SERVER device type
efficiently.
Also, consider the storage resources available on the target server. Ensure that
the target server has enough storage space and drives to handle the load from
the source server.
10. Determine the storage pools to set up, based on the devices you have and on
user requirements. Gather users' requirements for data availability. Determine
which data needs quick access and which does not.
11. Be prepared to label removable media. You might want to create a new
labeling convention for media so that you can distinguish them from media
that are used for other purposes.
Tivoli Storage Manager stores data on magnetic disk in two ways: in random
access volumes, as data is normally stored on disk, and in files on the disk that
are treated as sequential access volumes.
You can store the following types of data on magnetic disk devices:
v The database and recovery log
v Backups of the database
v Export and import data
v Client data that is backed up, archived, or migrated from client nodes. The client
data is stored in storage pools.
Tasks:
Configuring random access volumes on disk devices on page 78
Configuring FILE sequential volumes on disk devices on page 79
Varying disk volumes online or offline on page 80
Cache copies for files stored on disk on page 80
Freeing space on disk on page 81
Scratch FILE volumes on page 81
Volume history file and volume reuse on page 81
Review the following Tivoli Storage Manager requirements for disk devices and
compare them with information from your disk system vendor. A list of supported
disk storage devices is not available. Contact the vendor for your disk system if
you have questions or concerns about whether Tivoli Storage Manager
requirements are supported. The vendor should be able to provide the
configuration settings to meet these requirements.
I/O operation results must be reported synchronously and accurately. For the
database and the active and archive logs, unreported or asynchronously reported
write errors that result in data not being permanently committed to the storage
system can cause data-integrity problems.
Data in Tivoli Storage Manager storage pools, database volumes, and log volumes
is interdependent. Tivoli Storage Manager requires that the data written to
these entities can be retrieved exactly as it was written. Also, data in these entities
must be consistent with one another. There cannot be timing windows in which
data that is being retrieved varies depending on the way that an I/O system
manages the writing of data. Generally, this means that replicated Tivoli Storage
Manager environments must use features such as maintenance of write-order
between the source and replication targets. It also requires that the database, log,
and disk storage pool volumes be part of a consistency group in which any I/O to
the members of the target consistency group are written in the same order as the
source and maintain the same volatility characteristics. Requirements for I/O to
disk storage systems at the remote site must also be met.
Database write operations must be nonvolatile for active and archive logs and
DISK device class storage pool volumes. Data must be permanently committed to
storage that is known to Tivoli Storage Manager. Tivoli Storage Manager has many
of the attributes of a database system, and data relationships that are maintained
require that data written as a group be permanently resident as a group or not
resident as a group. Intermediate states produce data integrity issues. Data must be
permanently resident after each operating-system write API invocation.
For FILE device type storage pool volumes, data must be permanently resident
following an operating system flush API invocation. This API is used at key
processing points in the Tivoli Storage Manager application. The API is used when
data is to be permanently committed to storage and synchronized with database
and log records that have already been permanently committed to disk storage.
For systems that use caches of various types, the data must be permanently
committed by the write APIs for the database, the active and archive logs, and
DISK device class storage pool volumes and by the flush API (for FILE device class
storage pool volumes). Tivoli Storage Manager uses write-through flags internally
when using storage for the database, the active and archive logs, and DISK device
class storage pool volumes. Data for an I/O operation can be lost even if
nonvolatile, battery-protected cache is used to safeguard I/O writes to a device:
if there is a power loss and power is not restored before the battery is exhausted,
the data is lost. This is the same as having uncommitted storage and results in
data integrity issues.
To write properly to the Tivoli Storage Manager database, to active and archive
logs, and to DISK device class storage pool volumes, the operating system API
write invocation must synchronously and accurately report the operation results.
Similarly, the operating system API flush invocation for FILE device type storage
pool volumes must also synchronously and accurately report the operation results.
A successful result from the API for either write or flush must guarantee that the
data is permanently committed to the storage system.
These requirements extend to replicated environments such that the remote site
must maintain consistency with the source site in terms of the order of writes; I/O
must be committed to storage at the remote site in the same order that it was
written at the source site. The ordering applies to the set of files that Tivoli Storage
Manager is writing, whether the files belong to the database, the recovery log, or
the storage pools.
To keep the Tivoli Storage Manager servers at the local and remote sites from
losing synchronization, do not start the server at the remote site except in a
failover situation. If there is a possibility that data at the source and target
locations can lose synchronization, there must be a mechanism to recognize this
situation. If synchronization is lost, the Tivoli Storage Manager server at the remote
location must be restored by conventional means by using Tivoli Storage Manager
database and storage pool restores.
Tivoli Storage Manager supports the use of remote file systems or drives for
reading and writing storage pool data, database backups, and other data
operations. Remote file systems in particular might report successful writes, even
after being configured for synchronous operations. This mode of operation causes
data integrity issues if the file system can fail after reporting a successful write.
Check with the vendor of your file system to ensure that flushes are performed to
nonvolatile storage in a synchronous manner.
File systems and raw logical volumes for random access storage
You can choose to use either files in a file system or raw logical volumes when
defining random access storage pool volumes.
Random access storage pool volumes defined as raw logical volumes have the
following advantages:
v The formatting of volumes is nearly instantaneous because the creation of a file
is not needed.
v Many layers of the operating system can be bypassed, providing faster
performance and lower CPU utilization.
v Fewer RAM resources are consumed because file system cache is not used.
Note:
1. Using JFS2 file systems for the storage pool volumes can provide many of the
benefits of raw logical volumes.
Define storage pool volumes on disk drives that reside on the server system, not
on remotely mounted file systems. Network attached drives can compromise the
integrity of the data that you are writing.
Complete the following steps to use random access volumes on a disk device:
1. Define a storage pool that is associated with the DISK device class, or use one
of the default storage pools that Tivoli Storage Manager provides:
ARCHIVEPOOL, BACKUPPOOL, and SPACEMGPOOL.
For example, enter the following command on the command line of an
administrative client:
define stgpool engback1 disk maxsize=5G highmig=85 lowmig=40
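For FILE sequential volumes on disk, the first step is to define a device class with
a device type of FILE. A minimal sketch of such a command follows; the device
class name FILECLASS matches the text below, and the directory, maximum
capacity, and mount limit values are illustrative assumptions:
define devclass fileclass devtype=file directory=/tsm/filevols maxcapacity=5G mountlimit=2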
This command defines device class FILECLASS with a device type of FILE.
To store database backups or exports on FILE volumes, this step is all you need
to do to prepare the volumes. You can use FILE sequential volumes to transfer
data for purposes such as electronic vaulting. For example, you can send the
results of an export operation or a database backup operation to another
location. At the receiving site, the files can be placed on tape or disk. You can
define a device class with a device type of FILE.
2. Define a storage pool that is associated with the new FILE device class.
For example, enter the following command on the command line of an
administrative client:
define stgpool engback2 fileclass maxscratch=100
This command defines storage pool ENGBACK2 with device class FILECLASS.
To allow Tivoli Storage Manager to use scratch volumes for this storage pool,
specify a value greater than zero for the maximum number of scratch volumes
when you define the storage pool. If you set MAXSCRATCH=0 to disallow scratch
volumes, you must explicitly define each volume to be used in this storage pool.
3. Do one of the following:
v Specify the new storage pool as the destination for client files that are backed
up, archived, or migrated, by modifying existing policy or creating new
policy. See Chapter 13, Implementing policies for client data, on page 477
for details.
v Place the new storage pool in the storage pool migration hierarchy by
updating an already defined storage pool. See Example: Updating storage
pools on page 260.
You can also set up predefined sequential volumes with the DEFINE VOLUME
command:
define volume poolname prefix numberofvolumes=x
where x specifies the number of volumes that are created at once, each with a size
taken from the device class's maximum capacity. The advantage of this method
is that the space is preallocated and is not subject to the additional file-system
fragmentation that scratch volumes are.
For storage pools associated with the FILE device class, you can also use the
DEFINE SPACETRIGGER and UPDATE SPACETRIGGER commands to create volumes
and assign them to a specified storage pool when predetermined
space-utilization thresholds are exceeded.
For more information, see the Administrator's Reference.
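A minimal sketch of a space trigger for the ENGBACK2 storage pool, with the
utilization threshold and expansion percentage as illustrative assumptions:
define spacetrigger stg fullpct=80 spaceexpansion=25 stgpool=engback2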
1. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
For example, to vary the disk volume named /storage/pool001 offline, enter:
vary offline /storage/pool001
You can make the disk volume available to the server again by varying the volume
online. For example:
vary online /storage/pool001
Using cache can improve how fast a frequently accessed file is retrieved. Faster
retrieval can be important for clients that are storing space-managed files. If the file
needs to be accessed, the copy in cache can be used rather than the copy on tape.
However, using cache can degrade the performance of client backup operations
and increase the space needed for the database.
Related tasks:
Caching in disk storage pools on page 292
Expiration processing deletes information from the database about any client files
that are no longer valid according to the policies you have set. For example,
suppose that four backup versions of a file exist in server storage, and only three
versions are allowed in the backup policy (the management class) for the file.
Expiration processing deletes information about the oldest of the four versions of
the file. The space that the file occupied in the storage pool becomes available for
reuse.
You can run expiration processing by using one or both of the following methods:
v Use the EXPIRE INVENTORY command.
v Set the EXPINTERVAL server option and specify the interval so that expiration
processing runs periodically.
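For example, you can run expiration immediately from an administrative client, or
set the server option in the dsmserv.opt file so that expiration runs automatically;
the 24-hour interval is an illustrative assumption:
expire inventory wait=yes
expinterval 24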
Shredding occurs only after a data deletion commits, but it is not necessarily
completed immediately after the deletion. The space occupied by the data to be
shredded remains occupied while the shredding takes place, and is not available as
free space for new data until the shredding is complete. When sensitive data is
written to server storage and the write operation fails, the data that was already
written is shredded.
Related concepts:
Securing sensitive client data on page 541
Related reference:
Running expiration processing to delete expired files on page 514
You can specify a maximum number of scratch volumes for a storage pool that has
a FILE device type.
When scratch volumes used in storage pools become empty, the files are deleted.
Scratch volumes can be located in multiple directories on multiple file systems.
To reuse volumes that were previously used for database backup or export, use the
DELETE VOLHISTORY command.
Note: With Tivoli Storage Manager Extended Edition, the disaster recovery
manager (DRM) function automatically deletes volume information during
processing of the MOVE DRMEDIA command.
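A minimal sketch of deleting volume history entries for database backups,
assuming an illustrative cutoff of 30 days:
delete volhistory type=dbbackup todate=today-30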
Attached devices should be on their own host bus adapter (HBA) and should not
share the HBA with other device types (disk, CD-ROM, and so on). IBM tape
drives have some special requirements for HBAs and associated drivers.
Tasks:
Attaching a manual drive to your system
Attaching an automated library device to your system on page 84
Selecting a device driver on page 85
Installing and configuring device drivers on page 88
For more information about selecting a device driver, see Selecting a device
driver on page 85.
Before you attach an automated library device, consider the following restrictions:
v Attached devices must be on their own Host Bus Adapter (HBA).
v An HBA must not be shared with other device types (disk, CD-ROM, and so
on).
v For multiport Fibre Channel HBAs, attached devices must be on their own port.
These ports must not be shared with other device types.
v IBM tape drives have some special requirements on HBA and associated drivers.
For more information about devices, see the Tivoli Storage Manager Supported
Devices website.
v To use the Fibre Channel (FC) adapter card, complete the following steps:
1. Install the FC adapter card and associated drivers.
2. Install the appropriate device drivers for attached medium changer devices.
For more information about selecting a device driver, see Selecting a device
driver on page 85.
v To use the SCSI adapter card, complete the following steps:
1. Install the SCSI adapter card and associated drivers.
2. Determine the SCSI IDs available on the SCSI adapter card to which you are
attaching the device. Find one unused SCSI ID for each drive, and one
unused SCSI ID for the library or autochanger controller.
3. Set the SCSI ID for the drives to the unused SCSI IDs.
4. Set switches on the back of the device or set the IDs on the operator's panel.
For each device that is connected in a chain to a single SCSI bus, you must
configure it to have a unique SCSI ID. If each device does not have a unique
SCSI ID, serious system problems can arise.
5. Turn off your system before you attach a device to prevent damage to the
hardware.
6. Attach the device to your server system hardware, by following the
manufacturer's instructions.
7. Attach a terminator to the last device in the chain of devices that are
connected on one SCSI adapter card.
The appropriate mode is usually called random mode; however, terminology can
vary from one device to another. Refer to the documentation for your device to
determine how to set it to the appropriate mode.
Note:
1. Some libraries have front panel menus and displays that can be used for
explicit operator requests. However, if you set the device to respond to such
requests, it typically does not respond to Tivoli Storage Manager requests.
2. Some libraries can be placed in sequential mode, in which volumes are
automatically mounted in drives by using a sequential approach. This mode
conflicts with how Tivoli Storage Manager accesses the device.
You can download IBM device drivers from the Fix Central website:
1. Go to the Fix Central Web site: http://www.ibm.com/support/fixcentral/.
2. Select Storage Systems for the Product Group.
3. Select Tape Systems for the Product Family.
4. Select Tape device drivers and software for the Product Type.
5. Select Tape device drivers for the Product.
6. Select your operating system for the Platform.
For the most up-to-date list of devices and operating-system levels supported by
IBM device drivers, see the Tivoli Storage Manager Supported Devices website at:
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html.
The Tivoli Storage Manager device driver is installed with the server. For details
on device driver installation directories, see Installation directories.
The Tivoli Storage Manager device driver uses persistent reservation for some tape
drives. See Technote 1470319 at http://www.ibm.com/support/
docview.wss?uid=swg21470319 for details.
v For the following tape devices, you can choose whether to install the Tivoli
Storage Manager device driver or the native operating system device driver:
All DLT and SDLT (including IBM 7337)
4MM
8MM
DLT
DTF
QIC
StorageTek SD3, 9490, 9840, 9940, and T10000
Non-IBM LTO
v For optical and WORM devices, you must install the Tivoli Storage Manager
device driver.
v All SCSI-attached libraries that contain optical and tape drives from the list must
use the Tivoli Storage Manager changer driver.
Device drivers that are acquired from other vendors are supported if they are
supplied by the hardware vendor and are associated with the GENERICTAPE
device type.
For more information, see the DEFINE DEVCLASS - GENERICTAPE command in the
Administrator's Reference.
To allow both root and non-root users to perform SAN discovery, a special utility
module, dsmqsan, is invoked when a SAN-discovery function is launched. The
module performs as root, giving SAN-discovery authority to non-root users. While
SAN discovery is in progress, dsmqsan runs as root.
The dsmqsan module is installed by default when the Tivoli Storage Manager
server is installed. It is installed with owner root, group system, and mode 4755.
The value of the SETUID bit is on. If, for security reasons, you do not want
non-root users to run SAN-discovery functions, set the bit to off. If non-root users
are having problems running SAN-discovery functions, check the following:
v The SETUID bit. It must be set to on.
v Device special file permissions and ownership. Non-root users need read/write
access to device special files (for example, to tape and library devices).
The dsmqsan module works only for SAN-discovery functions, and does not
provide root privileges for other Tivoli Storage Manager functions.
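To check or reset the SETUID bit, you can list and change the file permissions of
dsmqsan. The installation path shown is an assumption based on a default AIX
server installation:
ls -l /opt/tivoli/tsm/server/bin/dsmqsan
chmod 4755 /opt/tivoli/tsm/server/bin/dsmqsan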
The tsmdlst utility is part of the Tivoli Storage Manager device driver package,
which is the same for the server and the storage agent. You must install the Tivoli
Storage Manager device driver to run the tsmdlst utility for the storage agent.
After devices are configured, you can run the tsmdlst utility to display device
information. The utility saves this information in output files that you can retrieve.
The output files are named lbinfo for medium changer devices, mtinfo for tape
devices, and optinfo for optical devices. After a device is added or reconfigured,
you can update these output files by running the tsmdlst utility again.
The tsmdlst utility and the output files it generates are in the devices/bin
directory, which is /opt/tivoli/tsm/devices/bin, by default. Before you run the
tsmdlst utility, make sure that either the Tivoli Storage Manager server is stopped
or that all device activities are stopped. If a device is in use by the Tivoli Storage
Manager server when the tsmdlst utility runs, a device busy error is issued.
Options
/t Displays trace messages for the tsmdlst utility.
/? Displays usage information about tsmdlst and its parameters.
Display information about all devices that were configured by the Tivoli Storage
Manager device driver:
tsmdlst
TSM Device Name Vendor Product Firmware World Wide Name Serial Number
---------------- ------ ------- -------- --------------- -------------
/dev/rmt/tsmlb39 ATL P3000 0100 1333508999
TSM Device Name Vendor Product Firmware World Wide Name Serial Number
------------------ ------ ------- -------- --------------- -------------
/dev/rmt/tsmmt1001 QUANTUM DLT7000 0100 1333508000
/dev/rmt/tsmmt1002 QUANTUM DLT7000 0100 1333508002
/dev/rmt/tsmmt1003 QUANTUM DLT7000 0100 1333508001
/dev/rmt/tsmmt1004 QUANTUM DLT7000 0100 1333508003
Tivoli Storage Manager supports all devices that are supported by IBM device
drivers. However, Tivoli Storage Manager does not support all the
operating-system levels that are supported by IBM device drivers.
See the following documentation for instructions about installing and configuring
IBM tape device drivers:
v IBM Tape Device Drivers Installation and User's Guide: http://www.ibm.com/
support/docview.wss?uid=ssg1S7002972
v IBM Tape Device Drivers Programming Reference: http://www.ibm.com/support/
docview.wss?uid=ssg1S7003032
After completing the installation procedure in the IBM Tape Device Drivers
Installation and User's Guide, different messages are issued, depending on the device
driver that you are installing:
v If you are installing the device driver for an IBM 3480 or 3490 tape device, you
receive:
rmtx Available
where rmtx is the logical file name for the tape device.
The value of x is assigned automatically by the system. To determine the special
file name of your device, use the /dev/ prefix with the name provided by the
system. For example, if the message is rmt0 Available, the special file name for
the device is /dev/rmt0.
v If you are installing the device driver for an IBM SCSI Tape drive or library, you
receive:
rmtx Available
or
smcx Available
Note: This applies to the IBM device driver only and the device type of this class
must NOT be GENERICTAPE.
The IBM tape device driver provides multipathing support so that, if one path
fails, the Tivoli Storage Manager server can use a different path to access data on a
storage device. The failure and transition to a different path are undetected by the
running server or by a storage agent. The IBM tape device driver also uses
multipath I/O to provide dynamic load balancing for enhanced I/O performance.
To provide redundant paths for SCSI devices, each device must be connected to
two or more HBA ports on a multiport FC Host Bus Adapter, or to different single
FC Host Bus Adapters. If multipath I/O is enabled and a permanent error occurs
on one path (such as a malfunctioning HBA or cable), device drivers provide
automatic path failover to an alternate path.
After multipath I/O has been enabled, the IBM tape device driver detects all paths
for a device on the host system. One path is designated as the primary path. The
rest of the paths are alternate paths. (The maximum number of alternate paths for
a device is 16.) For each path, the IBM tape device driver creates a file with a
unique name. When specifying a path from a source to a destination (for example,
from the Tivoli Storage Manager server to a tape drive) using the DEFINE PATH
command, specify the name of the special file associated with the primary path as
the value of the DEVICE parameter.
For an overview of multipath I/O and load balancing, as well as details about how
to enable, disable or query the status of multipath I/O for a device, see the IBM
Tape Device Drivers Installation and User's Guide.
Multipath I/O is not enabled automatically when the IBM tape device driver is
installed. You must configure it for each logical device after installation. Multipath
I/O remains enabled until the device is deleted or the support is unconfigured. To
configure multipath I/O, use SMIT to display Change/Show Characteristics of a
Tape Drive, and then select Yes for Enable Path Failover Support.
To obtain the names of special files, use the ls -l command (for example, ls -l
/dev/rmt*). Primary paths and alternate paths are identified by "PRI" and "ALT,"
respectively.
rmt0 Available 20-60-01-PRI IBM 3590 Tape Drive and Medium Changer (FCP)
rmt1 Available 30-68-01-ALT IBM 3590 Tape Drive and Medium Changer (FCP)
In this example, there are two paths associated with the IBM 3590 tape drive
(20-60-01-PRI and 30-68-01-ALT). The name of the special file associated with the
primary path is /dev/rmt0. Specify /dev/rmt0 as the value of the DEVICE
parameter in the DEFINE PATH command.
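A minimal sketch of such a DEFINE PATH command; the server, drive, and
library names are illustrative assumptions, and /dev/rmt0 is the primary-path
special file from the example above:
define path server1 drive01 srctype=server desttype=drive library=lib3590 device=/dev/rmt0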
To display path-related details about a particular tape drive, you can also use the
tapeutil -f /dev/rmtx path command, where x is the number of the configured tape
drive. To display path-related details about a particular medium changer, use the
tapeutil -f /dev/smcy path command, where y is the number of the configured
medium changer.
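For example, for the first configured tape drive:
tapeutil -f /dev/rmt0 path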
See the following documentation for instructions about installing and configuring
IBM tape device drivers:
v IBM Tape Device Drivers Installation and Users Guide: http://www.ibm.com/
support/docview.wss?uid=ssg1S7002972
v IBM Tape Device Drivers Programming Reference: http://www.ibm.com/support/
docview.wss?uid=ssg1S7003032
After installing a device driver for an IBM TotalStorage 3494 or 3495 Tape Library
Dataserver, a message (logical file name) of the following form is issued:
lmcpx Available
The main menu for Tivoli Storage Manager has two options:
SCSI Attached Devices
Use this option to configure SCSI devices that are connected to a SCSI
adapter in the host.
Fibre Channel system area network (SAN) Attached Devices
Use this option to configure devices that are connected to an FC adapter in
the host. Choose one of the following:
List Attributes of a Discovered Device
Lists attributes of a device known to the current ODM database.
v FC Port ID:
The 24-bit FC Port ID (N(L)_Port or F(L)_Port). This is the
address identifier that is unique within the associated topology
where the device is connected. In switch or fabric
environments, it is usually determined by the switch, and the
upper 2 bytes are nonzero. In a Private Arbitrated Loop, it is
the Arbitrated Loop Physical Address (AL_PA), and the upper
2 bytes are zero. Consult your FC vendors to find out how an
AL_PA or a Port ID is assigned.
v Mapped LUN ID
This ID comes from an FC-to-SCSI bridge (also called a
converter, router, or gateway). Consult your bridge vendors
about how LUNs are mapped. It is recommended that you do
not change mapped LUN IDs.
v WW Name
The worldwide name of the port to which the device is
attached. It is the 64-bit unique identifier that is assigned by
vendors of FC components such as bridges or native FC
devices. Consult your FC vendors to find out a port's WWN.
Run the SMIT program to configure the device driver for each autochanger or
robot:
1. Select Devices.
2. Select Tivoli Storage Manager Devices.
3. Select Library/MediumChanger.
4. Select Add a Library/MediumChanger.
5. Select the Tivoli Storage Manager-SCSI-LB for any Tivoli Storage Manager
supported library.
6. Select the parent adapter to which you are connecting the device. This number
is listed in the form: 00-0X, where X is the slot number location of the SCSI
adapter card.
7. When prompted, enter the CONNECTION address of the device you are
installing. The connection address is a two-digit number. The first digit is the
SCSI ID (the value you recorded on the worksheet). The second digit is the
device's SCSI logical unit number (LUN), which is usually zero, unless
otherwise noted. The SCSI ID and LUN must be separated by a comma (,). For
example, a connection address of 4,0 has a SCSI ID=4 and a LUN=0.
8. Click on the DO button.
You will receive a message (logical filename) of the form lbX Available. Note
the value of X, which is a number assigned automatically by the system. Use
this information to complete the Device Name field on your worksheet.
For example, if the message is lb0 Available, the Device Name field is /dev/lb0
on the worksheet. Always use the /dev/ prefix with the name provided by SMIT.
Attention: Tivoli Storage Manager cannot write over tar or dd tapes, but tar or dd
can write over Tivoli Storage Manager tapes.
Note: Tape drives can be shared only when the drive is not defined or the server
is not started. The MKSYSB command will not work if both Tivoli Storage
Manager and AIX are sharing the same drive or drives. To use the operating
system's native tape device driver in conjunction with a SCSI drive, the device
must be configured to AIX first and then configured to Tivoli Storage Manager. See
your AIX documentation regarding these native device drivers.
Run the SMIT program to configure the device driver for each drive (including
drives in libraries) as follows:
1. Select Devices.
2. Select Tivoli Storage Manager Devices.
3. Select Tape Drive or Optical R/W Disk Drive, depending on whether the drive
is tape or optical.
4. Select Add a Tape Drive or Add an Optical Disk Drive, depending on
whether the drive is tape or optical.
5. Select the Tivoli Storage Manager-SCSI-MT for any supported tape drive or
Tivoli Storage Manager-SCSI-OP for any supported optical drive.
6. Select the adapter to which you are connecting the device. This number is listed
in the form: 00-0X, where X is the slot number location of the SCSI adapter
card.
7. When prompted, enter the CONNECTION address of the device you are
installing. The connection address is a two-digit number. The first digit is the
SCSI ID (the value you recorded on the worksheet). The second digit is the
device's SCSI logical unit number (LUN), which is usually zero, unless
otherwise noted. The SCSI ID and LUN must be separated by a comma (,). For
example, a connection address of 4,0 has a SCSI ID=4 and a LUN=0.
8. Click on the DO button. You will receive a message:
v If you are configuring the device driver for a tape device (other than an IBM
tape drive), you will receive a message (logical filename) of the form mtX
Available. Note the value of X, which is a number assigned automatically by
the system. Use this information to complete the Device Name field on the
worksheet.
For example, if the message is mt0 Available, the Device Name field is
/dev/mt0 on the worksheet. Always use the /dev/ prefix with the name
provided by SMIT.
v If you are configuring the device driver for an optical device, you will
receive a message of the form opX Available. Note the value of X, which is a
number assigned automatically by the system. Use this information to
complete the Device Name field on the worksheet.
For example, if the message is op9 Available, the Device Name field is
/dev/op9 on the worksheet. Always use the /dev/ prefix with the name
provided by SMIT.
Perform the following steps when setting up the Tivoli Storage Manager server to
access Centera:
1. Install the Tivoli Storage Manager server.
2. If you are upgrading from a previous level of Tivoli Storage Manager, delete
the Centera SDK libraries from the directory where the server was installed. For
each platform delete the following files:
Table 15. Centera SDK library files to delete
Operating system Files to delete
AIX libFPLibrary64-aix.a
libFPParser64.a
libPAI_module64.a
HP-UX libFPLibrary64-hp.sl
libPAI_module64.sl
libFPParser64.sl
Oracle Solaris libFPLibrary64-sun.a
libPAI_module64.so
libFPParser64.so
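A minimal sketch of removing the AIX files listed in Table 15, assuming the
default server installation directory:
cd /opt/tivoli/tsm/server/bin
rm libFPLibrary64-aix.a libFPParser64.a libPAI_module64.a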
For the most up-to-date list of supported devices and operating-system levels, see
the Supported Devices website:
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
The tasks require an understanding of Tivoli Storage Manager storage objects. For
an introduction to these storage objects, see Tivoli Storage Manager storage
objects on page 44.
Concepts:
Device configuration tasks on page 96
Mixed device types in libraries on page 60
Server options that affect storage operations on page 70
Impacts of device changes on the SAN on page 143
Tasks:
Configuring manually mounted devices on page 132
Configuring SCSI libraries for use by one server on page 97
Configuring SCSI libraries shared among servers on a SAN on page 101
Configuring IBM 3494 libraries on page 108
Configuring an IBM 3494 library for use by one server on page 110
Configuring a 3494 library with a single drive device type on page 111
Configuring a 3494 library with multiple drive device types on page 112
Configuring an ACSLS-managed library on page 122
Configuring IBM Tivoli Storage Manager for LAN-free data movement on page 134
Validating your LAN-free configuration on page 135
Configuring the Tivoli Storage Manager server to use z/OS media server storage on
page 136
Configuring IBM Tivoli Storage Manager for NDMP operations on page 142
Note: Each volume used by a server for any purpose must have a unique
name. This applies to volumes that reside in different libraries, volumes used
for storage pools, and volumes used for operations such as database backup or
export.
6. Register clients to the domain associated with the policy that you defined or
updated in the preceding step. For more information, see Chapter 13,
Implementing policies for client data, on page 477.
After you have attached and defined your devices, you can store client data in two
ways:
v Have clients back up data directly to tape. For details, see Configuring policy
for direct-to-tape backups on page 524.
v Have clients back up data to disk. The data is later migrated to tape. For details,
see Storage pool hierarchies on page 270.
You can also configure devices using the device configuration wizard in the
Administration Center. See Chapter 18, Managing servers with the Administration
Center, on page 597 for more details.
Assume that you want to attach an automated SCSI library containing two drives
to the server system. The library is not shared with other Tivoli Storage Manager
servers or with storage agents and is typically attached to the server system via
SCSI cables.
v In the first configuration, both drives in the SCSI library are the same device
type. Define one device class.
v In the second configuration, the drives are different device types. Define a
device class for each drive device type.
Drives with different device types are supported in a single library if you define
a device class for each type of drive. If you are configuring this way, you must
include the specific format for the drive's device type by using the FORMAT
parameter with a value other than DRIVE.
Note: If you have a SCSI library with a barcode reader and you would like to
automatically label tapes before they are checked in, you can set the AUTOLABEL
parameter to YES. For example:
define library autodltlib libtype=scsi autolabel=yes
2. Define a path from the server to the library. The DEVICE parameter specifies the
device driver's name for the library, which is the special file name.
define path server1 autodltlib srctype=server desttype=library
device=/dev/lb3
3. Define the drives in the library. Both drives belong to the AUTODLTLIB library.
define drive autodltlib drive01
define drive autodltlib drive02
This example uses the default address for the drive's element address. The
server obtains the element address from the drive itself at the time that the
path is defined.
The element address is a number that indicates the physical location of a drive
within an automated library. The server needs the element address to connect
the physical location of the drive to the drive's SCSI address. You can have the
server obtain the element address from the drive itself when the path is defined,
or you can specify the element address when you define the drive.
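A minimal sketch of specifying the element address explicitly when defining a
drive; the element number is an illustrative assumption:
define drive autodltlib drive01 element=500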
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping client files together using
collocation on page 363 and How collocation affects reclamation on page
382.
Important: Do not use the DRIVE format, which is the default. Because the
drives are different types, Tivoli Storage Manager uses the format specification
to select a drive. The results of using the DRIVE format in a mixed media
library are unpredictable.
define devclass dlt_class library=mixedlib devtype=dlt format=dlt40
define devclass lto_class library=mixedlib devtype=lto format=ultriumc
6. Verify your definitions by issuing the following commands:
query library
query drive
query path
query devclass
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping client files together using
collocation on page 363 and How collocation affects reclamation on page
382.
Notes:
v If you use the autolabel=yes parameter on the DEFINE LIBRARY command, you
will not need to label tapes before you check them in.
v If a volume has an entry in volume history, you cannot check it in as a scratch
volume.
v Tivoli Storage Manager accepts only tapes labeled with IBM standard labels.
IBM standard labels are similar to ANSI Standard X3.27 labels except that the
IBM standard labels are written in EBCDIC. For a list of IBM media sales
contacts who can provide compatible tapes, visit the IBM website. If you are
using non-IBM storage devices and media, consult your tape-cartridge
distributor.
v Any volume that has a bar code beginning with CLN is treated as a cleaning
tape.
The procedures for volume checkin and labeling are the same whether the library
contains drives of a single device type, or drives of multiple device types.
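For example, if the library has a barcode reader, a sketch such as the following
labels volumes and checks them in as scratch; the library name is reused from the
earlier example:
label libvolume autodltlib search=yes labelsource=barcode checkin=scratch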
The following tasks are required for Tivoli Storage Manager servers to share library
resources on a SAN:
1. Ensure that the server that will be defined as the library manager is at the same
version as, or a later version than, the server or servers that will be defined as
library clients.
2. Set up server-to-server communications.
3. Set up devices on the server systems.
4. Set up the library on the Tivoli Storage Manager server that is going to act as
the library manager. In the example used for this section, the library manager
server is named ASTRO.
5. Set up the library on the Tivoli Storage Manager server that is going to act as
the library client. In the example used for this section, the library client server
is named JUDY.
This requires configuring each server as you would for Enterprise Administration,
which means you define the servers to each other using the cross-define function.
See Setting up communications among servers on page 700 for details. For a
discussion about the interaction between library clients and the library manager in
processing Tivoli Storage Manager operations, see Operations with shared
libraries on page 163.
For details, see Attaching an automated library device to your system on page 84
and Selecting a device driver on page 85.
Note: You can configure a SCSI library so that it contains all drives of the same
device type or so that it contains drives of different device types. You can modify
the procedure described for configuring a library for use by one server
(Configuration with multiple drive device types on page 99) and use it for
configuring a shared library.
Use the following sample procedure for each Tivoli Storage Manager server that
will be a library client. The library client server is named JUDY. With the exception
of one step, perform the procedure from the library client servers.
1. Define the server that is the library manager:
define server astro serverpassword=secret hladdress=9.115.3.45 lladdress=1580
crossdefine=yes
2. Define the shared library named SANGROUP, and identify the library manager
server's name as the primary library manager. Ensure that the library name is
the same as the library name on the library manager:
define library sangroup libtype=shared primarylibmanager=astro
3. Perform this step from the library manager. Define a path from the library client
server to each drive that the library client server will be allowed to access. The
device name should reflect the way the library client system sees the device.
There must be a path defined from the library manager to each drive in order
for the library client to use the drive.
In general, it is best practice for any library sharing setup to have all drive path
definitions created for the library manager also created for each library client.
For example, if the library manager defines three drives, the library client
should also define three drives. If you want to limit the number of drives a
library client can use at a time, use the MOUNTLIMIT parameter on the library
client's device class instead of limiting the drive path definitions for the library
client.
The following is an example of how to define a path from the library manager
to a drive in the library client:
define path judy drivea srctype=server desttype=drive
library=sangroup device=/dev/rmt6
define path judy driveb srctype=server desttype=drive
library=sangroup device=/dev/rmt7
For more information about paths, see Defining paths on page 188.
4. Return to the library client for the remaining steps. Define all the device classes
that are associated with the shared library.
define devclass tape library=sangroup devtype=3570
Set the parameters for the device class the same on the library client as on the
library manager. A good practice is to make the device class names the same on
both servers, but this is not required.
The device class parameters specified on the library manager server override
those specified for the library client. This is true whether or not the device class
names are the same on both servers. If the device class names are different, the
library manager uses the parameters specified in a device class that matches the
device type specified for the library client.
Defining a VTL to the Tivoli Storage Manager server can help improve
performance because the server handles mount point processing for VTLs
differently than real tape libraries. The physical limitations for real tape hardware
are not applicable to a VTL, affording options for better scalability.
You can use a VTL for any virtual tape library when the following conditions are
true:
v There is no mixed media involved in the VTL. Only one type and generation of
drive and media is emulated in the library.
v Every server and storage agent with access to the VTL has paths that are defined
for all drives in the library.
If either of these conditions is not met, any mount-performance advantage from
defining a VTL library to the Tivoli Storage Manager server can be reduced or
negated.
VTLs are compatible with earlier versions of both library clients and storage
agents. The library client or storage agent is not affected by the type of library that
is used for storage. If the mixed-media and path conditions are met for a SCSI
library, it can be defined or updated as LIBTYPE=VTL.
The concept of storage capacity in a virtual tape library is different from capacity
in physical tape hardware. In a physical tape library, each volume has a defined
capacity, and the library's capacity is defined in terms of the total number of
volumes in the library. The capacity of a VTL, alternatively, is defined in terms of
total available disk space. You can increase or decrease the number and size of
volumes on disk.
This variability affects what it means to run out of space in a VTL. For example, a
volume in a VTL can run out of space before reaching its assigned capacity if the
total underlying disk runs out of space. In this situation, the server can receive an
end-of-volume message without any warning, resulting in backup failures.
When out-of-space errors and backup failures occur, disk space is usually still
available in the VTL. It is hidden in volumes that are not in use. For example,
volumes that are logically deleted or returned to scratch status in the Tivoli Storage
Manager server are only deleted in the server database. The VTL is not notified,
and the VTL maintains the full size of the volume as allocated in its capacity
considerations.
To help prevent out-of-space errors, ensure that any SCSI library that you update
to LIBTYPE=VTL is updated with the RELABELSCRATCH parameter set to YES. The
RELABELSCRATCH option enables the server to overwrite the label for any volume
that is deleted and to return the volume to scratch status in the library. The
RELABELSCRATCH parameter defaults to YES for any library defined as a VTL.
Most VTL environments use as many drives as possible to maximize the number
of concurrent tape operations. A single tape mount in a VTL environment is
typically faster than a physical tape mount. However, using many drives increases
the amount of time that the Tivoli Storage Manager server requires when a mount
is requested. The selection process takes longer as the number of drives that are
defined in a single library object in the server increases. Virtual tape mounts can
take as long or longer than physical tape mounts depending on the number of
drives in the VTL.
For best results, create VTLs with 300-500 drives each. If more drives are required,
you can logically partition the VTL into multiple libraries and assign drives to each
library. Operating system and SAN hardware configurations could impose
limitations on the number of devices that can be utilized within the VTL library.
VTLs are identified by using the DEFINE LIBRARY command and specifying
LIBTYPE=VTL. Because a VTL library functionally interacts with the server in the
same way that a SCSI library does, it is possible to use the UPDATE LIBRARY
command to change the library type of a SCSI library that is already defined. You
do not have to redefine the library.
The following examples show how to add a VTL library to your environment.
If you have a new VTL library and want to use the VTL enhancements that are
available in Tivoli Storage Manager Version 6.3, define the library as a VTL to the
server:
define library chester libtype=vtl
This sets up the new VTL library and enables the RELABELSCRATCH option to
relabel volumes that have been deleted and returned to scratch status.
If you have a SCSI library and you want to change it to a VTL, use the UPDATE
LIBRARY command to change the library type:
update library calzone libtype=vtl
You can only issue this command when the library being updated is defined with
LIBTYPE=SCSI.
If you define a SCSI tape library as a VTL and want to change it back to the SCSI
library type, update the library by issuing the UPDATE LIBRARY command:
update library chester libtype=scsi
If you are setting up or modifying your hardware environment and must create or
change large numbers of drive definitions, the PERFORM LIBACTION command can
make this task much simpler. You can define a new library and then define all
drives and paths to the drives. Or, if you have an existing library that you want to
delete, you can delete all existing drives and their paths in one step.
The PREVIEW parameter allows you to view the output of commands before they
are processed to verify the action that you want to perform. If you are defining a
library, a path to the library must already be defined if you want to specify the
PREVIEW parameter. You cannot use the PREVIEW and DEVICE parameters
together.
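A minimal sketch of defining all drives and paths for a library in one step; the
library name is reused from the earlier VTL example, and the device name and
prefix are illustrative assumptions:
perform libaction chester action=define device=/dev/lb4 prefix=dr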
If you currently have an IBM 3494 library with both 3490 and 3590 drives defined,
you will need to follow the upgrade procedure to separate the library into two
distinct library objects. See Upgrading 3494 libraries with both 3490 and 3590
drives defined on page 109.
Attention: If other systems or other Tivoli Storage Manager servers connect to the
same 3494 library, each must use a unique set of category numbers. Otherwise, two
or more systems may try to use the same volume, and cause corruption or loss of
data.
Typically, a software application that uses a 3494 library uses volumes in one or
more categories that are reserved for that application. To avoid loss of data, each
application sharing the library must have unique categories. When you define a
3494 library to the server, you can use the PRIVATECATEGORY and
SCRATCHCATEGORY parameters to specify the category numbers for private and
scratch Tivoli Storage Manager volumes in that library. If the volumes are IBM
3592 WORM (write once, read many) volumes, you can use the
WORMSCRATCHCATEGORY parameter to specify category numbers for scratch
WORM volumes in that library.
When a volume is first inserted into the library, either manually or automatically at
the convenience I/O station, the volume is assigned to the insert category
(X'FF00'). A software application such as Tivoli Storage Manager can contact the
library manager to change a volume's category number. For Tivoli Storage
Manager, you use the CHECKIN LIBVOLUME command (see Checking new
volumes into a library on page 149).
A 349X library object contains only one device type (3490, 3590, or 3592) of drives.
Thus, if you have 3590 and 3592 drives in your 349X library, you must define two
library objects: one for your 3590 drives and one for your 3592 drives. Each of
these library objects must have the same device parameter when their paths are
defined.
The server reserves two category numbers in each 3494 library that it accesses: one
for private volumes and one for scratch volumes. For example, suppose you want
to define a library object for your 3592 drives:
define library my3494 libtype=349x privatecategory=400 scratchcategory=401
wormscratchcategory=402
For this example, the server uses the following categories in the new my3494
library:
v 400 (X'190') Private volumes
v 401 (X'191') Scratch volumes
v 402 (X'192') WORM scratch volumes
Note: The default values for the categories might be acceptable in most cases.
However, if you connect other systems or Tivoli Storage Manager servers to a
single 3494 library, ensure that each uses unique category numbers. Otherwise, two
or more systems might try to use the same volume, causing data corruption or
loss.
For a discussion regarding the interaction between library clients and the library
manager in processing Tivoli Storage Manager operations, see Operations with
shared libraries on page 163.
To continue using the 3490 drives and volumes, complete the following additional
steps. Your new library will be a 349X library that contains the 3490 drives.
1. Check out all of your 3490 scratch volumes from the Tivoli Storage Manager
349X library.
2. Delete all 3490 drives and 3490 drive paths that pertain to this library.
3. Define a new Tivoli Storage Manager library that you will use for 3490 drives
and volumes. Use a different scratch category and the same private category as
your original library. If you want your new library to have a different private
category also, you must repeat step 1 with your private 3490 volumes.
Because a 349X library now contains only one device type (3490, 3590, or 3592) of
drives, the DEVTYPE parameter on the LABEL LIBVOLUME and CHECKIN
LIBVOLUME commands is optional, unless no drives are defined in the library
object or no paths are defined to any of the drives in the library object. In that
case, you must specify the DEVTYPE parameter; otherwise the LABEL
LIBVOLUME and CHECKIN LIBVOLUME commands fail. You can still specify the
DEVTYPE parameter in all cases; it is simply not required when drives are defined
in the library. The CHECKIN LIBVOLUME command no longer has a default
device type.
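For example, the following sketch checks a volume in to a 349X library object that
does not yet have any drives defined; the volume name is a placeholder:
checkin libvolume my3494 vol001 status=scratch devtype=3590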
You must first set up the IBM 3494 library on the server system. This involves the
following tasks:
1. Set the 3494 Library Manager Control Point (LMCP). This procedure is
described in IBM Tape Device Drivers Installation and Users Guide.
2. Physically attach the devices to the server hardware or the SAN.
3. Install and configure the appropriate device drivers for the devices on the
server that will use the library and drives.
4. Determine the device names that are needed to define the devices to Tivoli
Storage Manager.
Note: Specify scratch and private categories explicitly. If you accept the
category defaults for both library definitions, different types of media will be
assigned to the same categories.
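The commands that define the two libraries are not shown here. The following is a
minimal sketch, assuming the library names 3590ELIB and 3590HLIB that are used
in the steps that follow; the category numbers are arbitrary examples:
define library 3590elib libtype=349x privatecategory=300 scratchcategory=301
define library 3590hlib libtype=349x privatecategory=400 scratchcategory=401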
2. Define a path from the server to each library:
define path server1 3590elib srctype=server desttype=library device=/dev/lmcp0
define path server1 3590hlib srctype=server desttype=library device=/dev/lmcp0
The DEVICE parameter specifies the device special file for the LMCP.
For more information about paths, see Defining paths on page 188.
3. Define the drives, ensuring that they are associated with the appropriate
libraries.
v Define the 3590E drives to 3590ELIB.
define drive 3590elib 3590e_drive1
define drive 3590elib 3590e_drive2
v Define the 3590H drives to 3590HLIB.
define drive 3590hlib 3590h_drive3
define drive 3590hlib 3590h_drive4
Note: Tivoli Storage Manager does not prevent you from associating a drive
with the wrong library.
4. Define a path from the server to each drive. Ensure that you specify the correct
library.
v For the 3590E drives:
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping client files together using
collocation on page 363 and How collocation affects reclamation on page
382.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
The procedures for volume check-in and labeling are the same whether the library
contains drives of a single device type, or drives of multiple device types.
Note: If your library has drives of multiple device types, you defined two libraries
to the Tivoli Storage Manager server in the procedure in Configuring a 3494
library with multiple drive device types on page 112. The two Tivoli Storage
Manager libraries represent the one physical library. The check-in process finds all
available volumes that are not already checked in. You must check in media
separately to each defined library. Ensure that you check in volumes to the correct
Tivoli Storage Manager library.
Do the following:
1. Check in the library inventory. The following shows two examples.
v Check in volumes that are already labeled:
checkin libvolume 3494lib search=yes status=scratch checklabel=no
v Label and check in volumes:
label libvolume 3494lib search=yes checkin=scratch
2. Depending on whether you use scratch volumes or private volumes, do one of
the following:
v If you use only scratch volumes, ensure that enough scratch volumes are
available. For example, you may need to label more volumes. As volumes are
used, you may also need to increase the number of scratch volumes allowed
in the storage pool that you defined for this library.
v If you want to use private volumes in addition to or instead of scratch
volumes in the library, define volumes to the storage pool you defined. The
volumes you define must have been already labeled and checked in. See
Defining storage pool volumes on page 266.
For more information about checking in volumes, see Checking new volumes into
a library on page 149.
The following tasks are required for Tivoli Storage Manager servers to share library
devices over a SAN:
1. Ensure the server that will be defined as the library manager is at the same or
higher version as the server or servers that will be defined as library clients.
2. Set up server-to-server communications (a sketch follows this list).
3. Set up the device on the server systems.
4. Set up the library on the library manager server. In the following example, the
library manager server is named MANAGER.
5. Set up the library on the library client server. In the following example, the
library client server is named CLIENT.
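For example, the following is a minimal sketch of the server-to-server
communication setup on the library manager; the password and addresses are
placeholder values:
set servername manager
set serverpassword managerpass
set serverhladdress 192.0.2.10
set serverlladdress 1500
set crossdefine on
The library client issues corresponding SET commands for its own name and
address, and then defines the library manager with the DEFINE SERVER command
and CROSSDEFINE=YES.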
See Categories in an IBM 3494 library on page 108 for additional information
about configuring 3494 libraries.
Note: You can also configure a 3494 library so that it contains drives of multiple
device types or different generations of drives of the same device type. The
procedure for working with multiple drive device types is similar to the one
described for a LAN in Configuring a 3494 library with multiple drive device
types on page 112. For details about mixing generations of drives, see Defining
3592 device classes on page 196 and Defining LTO device classes on page 203.
Note: Ensure that the library name agrees with the library name on the library
manager.
define library 3494san libtype=shared primarylibmanager=manager
3. Perform this step from the library manager. Define a path from the library client
server to each drive that the library client server will access. The device name
must reflect the way the library client system sees the device. There must be a
path defined from the library manager to each drive in order for the library
client to use the drive. The following is an example of how to define a path:
define path client drivea srctype=server desttype=drive
library=3494san device=/dev/rmt0
define path client driveb srctype=server desttype=drive
library=3494san device=/dev/rmt1
To help ensure a smoother migration and to ensure that all tape volumes that are
being used by the servers get associated with the correct servers, perform the
following migration procedure.
1. Do the following on each server that is sharing the 3494 library:
a. Update the storage pools using the UPDATE STGPOOL command. Set the
value for the HIGHMIG and LOWMIG parameters to 100%.
b. Stop the server by issuing the HALT command.
c. Edit the dsmserv.opt file and make the following changes:
1) Comment out the 3494SHARED YES option line
2) Activate the DISABLESCHEDS YES option line if it is not active
3) Activate the EXPINTERVAL X option line if it is not active and change
its value to 0, as follows:
EXPINTERVAL 0
d. Start the server.
e. Enter the following Tivoli Storage Manager command:
disable sessions
Note: You can use the saved volume history files from the library clients
as a guide.
b. Check in any remaining volumes as scratch volumes. Use the CHECKIN
LIBVOLUME command with STATUS=SCRATCH.
5. Halt all the servers.
6. Edit the dsmserv.opt file and comment out the following lines in the file:
DISABLESCHEDS YES
EXPINTERVAL 0
7. Start the servers.
Tivoli Storage Manager uses the capability of the 3494 library manager, which
allows you to partition a library between multiple Tivoli Storage Manager servers.
Library partitioning differs from library sharing on a SAN in that with
partitioning, there are no Tivoli Storage Manager library managers or library
clients.
When you partition a library on a LAN, each server has its own access to the same
library. For each server, you define a library with tape volume categories unique to
that server. Each drive that resides in the library is defined to only one server. Each
server can then access only those drives it has been assigned. As a result, library
partitioning does not allow dynamic sharing of drives or tape volumes because
they are pre-assigned to different servers using different names and category
codes.
In the following example, an IBM 3494 library containing four drives is attached to
a Tivoli Storage Manager server named ASTRO and to another Tivoli Storage
Manager server named JUDY.
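The example commands are not shown in full here. The following is a minimal
sketch; the category numbers and drive names are arbitrary examples. On ASTRO,
define the library with one set of categories and only the drives assigned to
ASTRO:
define library 3494lib libtype=349x privatecategory=112 scratchcategory=113
define drive 3494lib drive1
define drive 3494lib drive2
On JUDY, define the same physical library with different categories and only the
drives assigned to JUDY:
define library 3494lib libtype=349x privatecategory=212 scratchcategory=213
define drive 3494lib drive3
define drive 3494lib drive4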
Note: Tivoli Storage Manager can also share the drives in a 3494 library with other
servers by enabling the 3494SHARED server option. When this option is enabled,
you can define all of the drives in a 3494 library to multiple servers, if there are
SCSI connections from all drives to the systems on which the servers are running.
This type of configuration is not recommended, however, because when this type
of sharing takes place, there is a risk of contention among servers for drive usage,
and operations can fail.
For details, see Attaching an automated library device to your system on page 84
and Selecting a device driver on page 85.
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping client files together using
collocation on page 363 and How collocation affects reclamation on page
382.
For more information, see Defining storage pools on page 255.
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping client files together using
collocation on page 363 and How collocation affects reclamation on page
382.
For more information, see Defining storage pools on page 255.
The ACSLS client application communicates with the ACSLS library server to
access tape cartridges in an automated library. Tivoli Storage Manager is one of the
applications that gains access to tape cartridges by interacting with ACSLS through
its client, which is known as the control path. The Tivoli Storage Manager server
reads and writes data on tape cartridges by interacting directly with tape drives
through the data path. The control path and the data path are two different paths.
The ACSLS client daemon must be initialized before starting the server. See
/usr/tivoli/tsm/devices/bin/rc.acs_ssi for the client daemon invocation. For detailed
installation, configuration, and system administration of ACSLS, refer to the
appropriate StorageTek documentation.
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping client files together using
collocation on page 363 and How collocation affects reclamation on page
382.
For more information, see Defining storage pools on page 255.
Note: Tivoli Storage Manager does not prevent you from associating a drive
with the wrong library.
v Define the 9840 drives to 9840LIB.
define drive 9840lib 9840_drive1 acsdrvid=1,2,3,1
define drive 9840lib 9840_drive2 acsdrvid=1,2,3,2
v Define the 9940 drives to 9940LIB.
define drive 9940lib 9940_drive3 acsdrvid=1,2,3,3
define drive 9940lib 9940_drive4 acsdrvid=1,2,3,4
The ACSDRVID parameter specifies the ID of the drive that is being accessed.
The drive ID is a set of numbers that indicate the physical location of a drive
within an ACSLS library. This drive ID must be specified as a, l, p, d, where a is
the ACSID, l is the LSM (library storage module), p is the panel number, and d
is the drive ID. The server needs the drive ID to connect the physical location
of the drive to the drive's SCSI address. See the StorageTek documentation for
details.
See Defining drives on page 186.
3. Define a path from the server to each drive. Ensure that you specify the correct
library.
v For the 9840 drives:
define path server1 9840_drive1 srctype=server desttype=drive
library=9840lib device=/dev/mt0
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping client files together using
collocation on page 363 and How collocation affects reclamation on page
382.
For more information, see Defining storage pools on page 255.
You can modify the procedure described for configuring a library for use by one
server (Configuration with multiple drive device types on page 99) and use it for
configuring a shared library. You must set up the server that is the library manager
before you set up servers that are the library clients.
Use the following sample procedure for each Tivoli Storage Manager server that
will be a library client. The library client server is named WALLACE. With the
exception of one step, perform the procedure from the library client server.
1. Define the server that is the library manager:
define server glencoe serverpassword=secret hladdress=9.115.3.45 lladdress=1580
crossdefine=yes
2. Define the shared library named MACGREGOR, and identify the library
manager server's name as the primary library manager. Ensure that the library
name is the same as the library name on the library manager:
define library macgregor libtype=shared primarylibmanager=glencoe
3. Perform this step from the library manager. Define a path from the library client
server to each drive that the library client server will be allowed to access. The
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
Tip: If your library has drives of multiple device types, you defined two libraries
to the Tivoli Storage Manager server in the procedure in Configuring an ACSLS
library with multiple drive device types on page 124. The two Tivoli Storage
Manager libraries represent the one physical library. The check-in process finds all
available volumes that are not already checked in. You must check in media
separately to each defined library. Ensure that you check in volumes to the correct
Tivoli Storage Manager library.
1. Check in the library inventory. The following shows examples for libraries with
a single drive device type and with multiple drive device types.
v Check in volumes that are already labeled:
For more information about checking in volumes, see Checking new volumes into
a library on page 149
It also allows this media to be used to transfer data between systems that support
the media. Removable file support allows the server to read data from a FILE
device class that is copied to removable file media through software that is
acquired from another vendor. The media is then usable as input media on a target
Tivoli Storage Manager server that uses the REMOVABLEFILE device class for
input.
Note: Software for writing CDs may not work consistently across platforms.
Use a MAXCAPACITY value that is less than one CD's usable space to allow for a
one-to-one match between files from the FILE device class and copies that are on
CD. Use the DEFINE DEVCLASS or UPDATE DEVCLASS commands to set the
MAXCAPACITY parameter of the FILE device class to a value less than 650 MB.
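For example, the following sketch sets the maximum capacity for the FILE device
class that is defined in the Server A example; the 640 MB value is an illustration:
update devclass file maxcapacity=640m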
Server A
1. Define a device class with a device type of FILE.
define devclass file devtype=file directory=/home/user1
2. Export the node. This command results in a file name /home/user1/CDR03 that
contains the export data for node USER1
export node user1 filedata=all devclass=file vol=cdr03
You can use software for writing CDs to create a CD with volume label CDR03
that contains a single file that is also named CDR03.
Server B
Notes:
a. CD drives lock while the file system is mounted. This prevents use of the
eject button on the drive.
3. Ensure that the media is labeled. The software that you use for making a CD
also labels the CD. Before you define the drive, you must put formatted,
labeled media in the drive. For label requirements, see Labeling requirements
for removable file device types. When you define the drive, the server verifies
that a valid file system is present.
4. Define a manual library named CDROM:
define library cdrom libtype=manual
5. Define the drive in the library:
define drive cdrom cddrive
6. Define a path from the server to the drive at mount point /cdrom:
define path serverb cddrive srctype=server desttype=drive
library=cdrom device=/cdrom
7. Define a device class with a device type of REMOVABLEFILE. The device type
must be REMOVABLEFILE.
define devclass cdrom devtype=removablefile library=cdrom
8. Issue the following Tivoli Storage Manager command to import the node data
on the CD volume CDR03.
import node user1 filedata=all devclass=cdrom vol=cdr03
You must use another application to copy the FILE device class data from the CD
as a file that has the same name as the volume label. The software used to copy
the FILE device class data must also label the removable media.
While the server tracks and manages client data, the media manager, operating
entirely outside of the I/O data stream, labels, catalogs, and tracks physical
volumes. The media manager also controls library drives, slots, and doors.
Tivoli Storage Manager provides a programming interface that lets you use a
variety of media managers. See Setting up Tivoli Storage Manager to work with
an external media manager for setup procedures.
To use a media manager with Tivoli Storage Manager, define a library that has a
library type of EXTERNAL. The library definition will point to the media manager
rather than a physical device.
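For example, a minimal sketch that defines the external library used in the path
example that follows:
define library mediamgr libtype=external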
Note: You do not define the drives to the server in an externally managed
library.
3. Define a path from the server to the library:
define path server1 mediamgr srctype=server desttype=library
externalmanager=/usr/sbin/mediamanager
In the EXTERNALMANAGER parameter, specify the media manager's installed
path. For more information about paths, see Defining paths on page 188.
4. Define device class, EXTCLASS, for the library with a device type that matches
the drives. For this example the device type is ECARTRIDGE.
define devclass extclass library=mediamgr devtype=ecartridge
mountretention=5 mountlimit=2
Note:
a. For environments in which devices are shared across storage applications,
the MOUNTRETENTION setting should be carefully considered. This
parameter determines how long an idle volume remains in a drive. Because
some media managers will not dismount an allocated drive to satisfy
pending requests, you might need to tune this parameter to satisfy
competing mount requests while maintaining optimal system performance.
b. It is recommended that you explicitly specify the mount limit instead of
using MOUNTLIMIT=DRIVES.
5. Define a storage pool, EXTPOOL, for the device class. For example:
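The following is a minimal sketch; the MAXSCRATCH value is an arbitrary
example:
define stgpool extpool extclass maxscratch=500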
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping client files together using
collocation on page 363 and How collocation affects reclamation on page
382.
Refer to the documentation for the media manager for detailed setup and
management information.
The most likely symptom of this problem is that the volumes in the media
manager's database are not known to the server, and thus not available for use.
Verify the Tivoli Storage Manager volume list and any disaster recovery media. If
volumes not identified to the server are found, use the media manager interface to
deallocate and delete the volumes.
See Attaching a manual drive to your system on page 83 and Selecting a device
driver on page 85 for details.
In the following example, two DLT drives are attached to the server system and
defined as part of a manual library:
1. Define a manual library named MANUALDLT:
define library manualdlt libtype=manual
2. Define the drives in the library:
define drive manualdlt drive01
define drive manualdlt drive02
See Defining drives on page 186 and http://www.ibm.com/support/entry/
portal/Overview/Software/Tivoli/Tivoli_Storage_Manager.
3. Define a path from the server to each drive:
define path server1 drive01 srctype=server desttype=drive
library=manualdlt device=/dev/mt1
define path server1 drive02 srctype=server desttype=drive
library=manualdlt device=/dev/mt2
For more about device special file names, see:
Device special file names on page 86
For more information about paths, see Defining paths on page 188.
4. Classify the drives according to type by defining a device class named
TAPEDLT_CLASS. Use FORMAT=DRIVE as the recording format only if all the
drives associated with the device class are identical.
define devclass tapedlt_class library=manualdlt devtype=dlt format=drive
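The step that defines a storage pool for this device class is not shown here. The
following is a minimal sketch; the pool name and MAXSCRATCH value are
arbitrary examples:
define stgpool tapedlt_pool tapedlt_class maxscratch=20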
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can use any scratch
volumes available without further action on your part. If you do not allow
scratch volumes (MAXSCRATCH=0), you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping client files together using
collocation on page 363 and How collocation affects reclamation on page
382.
See Defining storage pools on page 255.
Labeling volumes
Use the following procedure to ensure that volumes are available to the server.
Keep enough labeled volumes on hand so that you do not run out during an
operation such as client backup. Label and set aside extra scratch volumes for any
potential recovery operations you might have later.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
Do the following:
1. Label volumes. For example, enter the following command to use one of the
drives to label a volume with the ID of vol001:
label libvolume manualdlt vol001
Note: Tivoli Storage Manager only accepts tapes labeled with IBM standard
labels. IBM standard labels are similar to ANSI Standard X3.27 labels except
As part of the configuration, a storage agent is installed on the client system. Tivoli
Storage Manager supports SCSI, 349X, and ACSLS tape libraries as well as FILE
libraries for LAN-free data movement. The configuration procedure you follow
will depend on the type of environment you implement; however, in all cases you
must do the following:
1. Verify the network connection.
2. Establish communications among client, storage agent, and Tivoli Storage
Manager.
3. Configure devices for the storage agent to access.
4. If you are using shared FILE storage, install and configure IBM TotalStorage
SAN File System, Tivoli SANergy, or IBM General Parallel File System.
To help you tune the use of your LAN and SAN resources, you can control the
path that data transfers take for clients with the capability of LAN-free data
movement. For each client you can select whether data read and write operations
use:
v The LAN path only
v The LAN-free path only
v Either path
See the REGISTER NODE and UPDATE NODE commands in the Administrator's
Reference for more about these options.
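For example, the following sketch restricts read operations for node FRED to the
LAN-free path while allowing write operations over either path; the node name
matches the validation example that follows:
update node fred datareadpath=lanfree datawritepath=any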
For more information on configuring Tivoli Storage Manager for LAN-free data
movement see the Storage Agent User's Guide.
To determine if there is a problem with the client node FRED using the storage
agent FRED_STA, issue the following:
validate lanfree fred fred_sta
The output will allow you to see which management class destinations for a given
operation type are not LAN-free capable, and provide a brief explanation about
why. It will also report the total number of LAN-free destinations.
See the VALIDATE LANFREE command in the Administrator's Reference for more
information.
To allow both root and non-root users to perform SAN discovery, a special utility
module, dsmqsan, is invoked when a SAN-discovery function is launched. The
module performs as root, giving SAN-discovery authority to non-root users. While
SAN discovery is in progress, dsmqsan runs as root.
The dsmqsan module is installed by default when the Tivoli Storage Manager
server is installed. It is installed with owner root, group system, and mode 4755.
The value of the SETUID bit is on. If, for security reasons, you do not want
non-root users to run SAN-discovery functions, set the bit to off. If non-root users
are having problems running SAN-discovery functions, check the following:
v The SETUID bit. It must be set to on.
v Device special file permissions and ownership. Non-root users need read-write
access to device special files, for example, to tape and library devices.
v The SANDISCOVERY option in the server options file. This option must be set
to ON.
The dsmqsan module works only for SAN-discovery functions, and does not
provide root privileges for other Tivoli Storage Manager functions.
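The following is a minimal sketch for verifying the dsmqsan settings on AIX; the
installation path is an assumption and might differ on your system:
ls -l /opt/tivoli/tsm/server/bin/dsmqsan
chmod 4755 /opt/tivoli/tsm/server/bin/dsmqsan
The chmod command restores mode 4755, which turns the SETUID bit on.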
Tivoli Storage Manager for z/OS Media provides read and write access to storage
devices that are attached to a z/OS mainframe with a Fibre Channel connection
(FICON). After a Tivoli Storage Manager for z/OS V5 server has been migrated to
a Tivoli Storage Manager V6.3 server on AIX or Linux on System z, you can use
Tivoli Storage Manager for z/OS Media to access storage on a z/OS system. By
using the products together, you can continue to use existing z/OS storage while
taking advantage of newer Tivoli Storage Manager function.
Tivoli Storage Manager for z/OS Media uses standard interfaces for z/OS storage:
v Storage Management Subsystem (SMS) for FILE and tape volume allocation
v Data Facility Product (DFSMSdfp) Media Manager Virtual Storage Access
Method (VSAM) linear data sets for sequential FILE volume support
v DFSMSdfp Basic Sequential Access Method (BSAM) for tape volume support
For installation and configuration information, see the Tivoli Storage Manager for
z/OS Media Installation and Configuration Guide.
The z/OS media server provides read and write access to z/OS media tape and
FILE volumes, requesting mounts based on information received from the Tivoli
Storage Manager server. A typical backup operation to z/OS storage consists of the
steps outlined in Figure 11 on page 137:
[Figure 11 is not reproduced here. It illustrates the numbered data flow, steps 1
through 9, among the Tivoli Storage Manager backup-archive client, the Tivoli
Storage Manager server and its DB2 database, the LAN, the z/OS media server,
and z/OS FICON channel-attached tape drives in a tape library. The steps are
described in the list that follows the figure caption.]
Figure 11. Data flow from the backup-archive client to z/OS media server storage
1. The Tivoli Storage Manager backup-archive client contacts the Tivoli Storage
Manager server.
2. The Tivoli Storage Manager server selects a library resource and volume for the
backup operation.
3. The Tivoli Storage Manager server contacts the z/OS media server to request a
volume mount.
4. The z/OS media server mounts the FILE or tape volume.
5. The z/OS media server responds to the Tivoli Storage Manager server that the
mount operation is complete.
6. The backup-archive client begins sending data to the Tivoli Storage Manager
server.
7. The Tivoli Storage Manager server stores metadata in the database and
manages the data transaction.
8. The Tivoli Storage Manager server sends the backup-archive client data to the
z/OS media server.
9. The z/OS media server writes the data to z/OS storage.
Network tuning
Operations that store or retrieve data using a z/OS media server require more
network bandwidth than operations using a local disk or tape. To optimize
performance, use dedicated networks for connections between a Tivoli Storage
Manager V6.3 server and a z/OS media server.
To optimize network performance when using a z/OS media server, ensure that
both the z/OS system and the Tivoli Storage Manager server system can use a
large TCP/IP window size. Set the following parameters:
v On the z/OS system, include the TCPMAXRCVBUFRSIZE parameter in the
TCPIP.PROFILE TCPCONFIG statement and set it to the default value of 256 K
or greater.
v On AIX systems, set the network tuning parameter rfc1323 to 1. This is not the
default value.
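For example, on AIX you can set the rfc1323 option with the no command; the -p
flag makes the change persistent across restarts:
no -p -o rfc1323=1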
To reduce network bandwidth requirements, store backup and archive data to local
V6.3 disk pools. Use storage pool backup and storage pool migration to copy and
move the data to z/OS tape storage. This method requires less network bandwidth
than backing up or archiving the data directly to the z/OS media server and then
moving the data to z/OS tape storage.
To use multiple network connections between the z/OS media server and the
Tivoli Storage Manager server, specify different TCP/IP addresses in the
HLADDRESS parameter of the DEFINE SERVER command. Separate each address with a comma
and no spaces. Multiple connections can be isolated from other network traffic and
optimized across different paths to the z/OS media server based on network
activity.
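For example, the following sketch defines a z/OS media server with two addresses;
the names, password, and addresses are placeholder values:
define server zserver1 serverpassword=secretpw
hladdress=192.0.2.24,192.0.2.25 lladdress=1777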
When defining a device class for storage operations, consider the value of the
MOUNTLIMIT parameter carefully. Because a z/OS media library has no defined
drives, you must use the MOUNTLIMIT parameter in the DEFINE DEVCLASS command
to control concurrent mounts for z/OS volumes. If you must limit mount requests
to the z/OS media server, set the value accordingly.
If you are migrating one Tivoli Storage Manager for z/OS Version 5 server to
Tivoli Storage Manager Version 6.3 and plan to use one z/OS media server for
storage access, use the same USERID associated with your started-task JCL that was
specified for the V5 z/OS server. By specifying the same USERID, you can
preserve the configuration of volume access controls and authorization to tape
storage that was previously used on the V5 server.
A single z/OS media server can be configured to access several storage resources
for a Tivoli Storage Manager server. This set up can be useful if you have several
One z/OS media server can provide z/OS storage access to multiple Tivoli Storage
Manager V6.3 servers. However, the z/OS media server does not have knowledge
of Tivoli Storage Manager operations or volume ownership and it does not
distinguish mount requests from different servers. No matter how many servers
you have in your configuration, the z/OS media server handles all mount requests
in the same way. Any volume access controls are maintained by the z/OS tape
management system and volume ownership is associated with the JOBNAME and
USERID of the z/OS media server. The Tivoli Storage Manager server cannot
establish ownership for volumes with the z/OS media server.
If you are using a z/OS media server with more than one Tivoli Storage Manager
server, do not request mounts for volumes that are not already in your inventory
unless they are scratch volumes. If a volume is allocated to one Tivoli Storage
Manager server and then requested by another Tivoli Storage Manager server,
there is potential for data overwrite because neither the z/OS media server nor the
z/OS library keeps track of Tivoli Storage Manager volume inventory. Mount
requests are satisfied regardless of which server is making the request.
The following example illustrates how data overwrite could occur if you are using
one z/OS media server to fulfill mount requests from two Tivoli Storage Manager
servers: Server X and Server Q.
1. Server X requests a scratch volume from the z/OS media server.
2. The z/OS media server contacts the z/OS library and the tape management
system selects volume A00001 from the scratch inventory.
3. Volume A00001 is mounted and then written to by Server X. Server X records
volume A00001 in its inventory.
4. Volume A00001 is returned to the z/OS library.
5. Volume A00001 is defined on Server Q with the DEFINE VOLUME command.
Server Q then requests a mount for volume A00001 from the z/OS media
server.
6. The z/OS media server mounts volume A00001.
7. Server Q overwrites Server X's data on volume A00001.
If you plan to use one z/OS media server to provide storage access to more than
one Tivoli Storage Manager server, use caution when managing your volume
inventory.
For information about system requirements for the z/OS media server, see the
Tivoli Storage Manager for z/OS Media Installation and Configuration Guide.
If you are using a storage agent to transfer backup-archive client data to z/OS
media server storage, the storage agent must be at Version 6.3. Tivoli Storage
Manager for z/OS Media is compatible with storage agents running on AIX, Linux
on System z, Oracle Solaris, and Windows systems.
Configuration tasks
Configure the Tivoli Storage Manager server to access z/OS media server storage.
These tasks should be completed after Tivoli Storage Manager for z/OS Media is
installed and configured.
After a z/OS media server is defined, the server name and network connection
information are stored in the Tivoli Storage Manager server database. Records for
each z/OS media server that is defined can be referenced or updated.
The DEFINE SERVER command defines the network address and port of the z/OS
media server and a user ID and password for server access and authentication. The
user ID and password must be associated with the correct level of access for
resource requests through the z/OS media server. There must also be a
corresponding password specified in the Tivoli Storage Manager for z/OS Media
options file. The SERVERPASSWORD specified in the DEFINE SERVER command must
match the PASSPHRASE in the z/OS media server options file.
For example, define a z/OS media server named zserver1 with a TCP/IP address
of 192.0.2.24 and a port of 1777:
define server zserver1 serverpassword=secretpw
hladdress=192.0.2.24 lladdress=1777
In a z/OS media server environment, the z/OS media library serves as the focal
point for access to storage volumes. The ZOSMEDIA library type is used by Tivoli
Storage Manager to identify a FICON attached storage resource that is controlled
through the z/OS media server. There are no drives defined in the library and
there is no Tivoli Storage Manager library volume inventory.
After the z/OS media server zserver is defined, you can configure access to disk
storage resources.
1. Define the z/OS library that is connected to the z/OS media server:
define library zfilelibrary libtype=zosmedia
2. Define a path from the Tivoli Storage Manager server to the z/OS media
library through the z/OS media server:
define path tsmserver zfilelibrary srctype=server
desttype=library zosmediaserver=zserver
3. Define a FILE device class to use for the library:
define devclass zfile library=zfilelibrary
devtype=file prefix=MEDIA.SERVER.HLQ
Storage pools and volumes that are defined in a z/OS media library are sequential
access.
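For example, the following sketch defines a sequential-access storage pool for the
zfile device class; the pool name and MAXSCRATCH value are arbitrary examples:
define stgpool zfilepool zfile maxscratch=200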
To configure Tivoli Storage Manager for NDMP operations, you must perform the
following steps:
1. Define the libraries and their associated paths.
Important: An NDMP device class can use only a Tivoli Storage Manager
library in which all of the drives can read and write all of the media in the
library.
2. Define a device class for NDMP operations (a sketch follows this list).
3. Define the storage pool for backups performed by using NDMP operations.
4. Optional: Select or define a storage pool for storing tables of contents for the
backups.
5. Configure Tivoli Storage Manager policy for NDMP operations.
6. Register the NAS nodes with the server.
7. Define a data mover for the NAS file server.
8. Define the drives and their associated paths.
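The following is a minimal sketch of steps 2 and 3, assuming a library named
NASLIB that is already defined and placeholder object names; the NETAPPDUMP
data format applies to NetApp file servers:
define devclass nasclass devtype=nas library=naslib mountretention=0
mountlimit=1 estcapacity=200g
define stgpool ndmppool nasclass maxscratch=10 dataformat=netappdump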
For instance, the server might know a device as id=1 based on the original path
specification to the server and the original configuration of the SAN. However, some
event in the SAN (new device added, cabling change) causes the device to be
assigned id=2. When the server tries to access the device with id=1, it will either
get a failure or the wrong target device. The server assists in recovering from
changes to devices on the SAN by using serial numbers to confirm the identity of
devices it contacts.
When you define a device (drive or library) you have the option of specifying the
serial number for that device. If you do not specify the serial number when you
define the device, the server obtains the serial number when you define the path
for the device. In either case, the server then has the serial number in its database.
From then on, the server uses the serial number to confirm the identity of a device
for operations.
When the server uses drives and libraries on a SAN, the server attempts to verify
that the device it is using is the correct device. The server contacts the device by
using the device name in the path that you defined for it. The server then requests
the serial number from the device, and compares that serial number with the serial
number stored in the server database for that device.
If the serial number does not match, the server begins the process of discovery on
the SAN to attempt to find the device with the matching serial number. If the
server finds the device with the matching serial number, it corrects the definition
of the path in the server's database by updating the device name in that path. The
server issues a message with information about the change made to the device.
Then the server proceeds to use the device.
You can monitor the activity log for messages if you want to know when device
changes on the SAN have affected Tivoli Storage Manager. The following are the
number ranges for messages related to serial numbers:
v ANR8952 through ANR8958
v ANR8961 through ANR8968
v ANR8974 through ANR8975
Restriction: Some devices do not have the capability of reporting their serial
numbers to applications such as the Tivoli Storage Manager server. If the server
cannot obtain the serial number from a device, it cannot assist you with changes to
that device's location on the SAN.
Tasks
Preparing removable media
Labeling removable media volumes on page 146
Checking new volumes into a library on page 149
Controlling access to volumes on page 156
Reusing tapes in storage pools on page 156
Reusing volumes used for database backups and export operations on page 158
Managing volumes in automated libraries on page 160
Managing server requests for media on page 165
Managing libraries on page 168
Managing drives on page 169
Managing paths on page 181
Managing data movers on page 182
The examples in these topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see the
Administrator's Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
Tip: When you use the LABEL LIBVOLUME command with drives in an
automated library, you can label and check in the volumes with one command.
3. If the storage pool cannot contain scratch volumes (MAXSCRATCH=0), identify
the volume to Tivoli Storage Manager by name so that it can be accessed later.
For details, see Defining storage pool volumes on page 266.
If the storage pool can contain scratch volumes (MAXSCRATCH is set to a
non-zero value), skip this step.
When you use the LABEL LIBVOLUME command, which is issued from the server
console or an administrative client, you provide parameters that specify the
following information:
v The name of the library where the storage volume is located
v The name of the storage volume
v Whether to overwrite a label on the volume
v Whether to search an automated library for volumes for labeling
v Whether to read media labels:
To prompt for volume names in SCSI libraries
To read the barcode label for each cartridge in SCSI, 349X, and automated
cartridge system library software (ACSLS) libraries
v Whether to check in the volume:
To add the volume to the scratch pool
To designate the volume as private
v The type of device (applies to 349X libraries only)
To use the LABEL LIBVOLUME command, at least one drive must not be in use
by another Tivoli Storage Manager process. A drive with a volume that is mounted
but idle is considered to be in use. If necessary, use the DISMOUNT VOLUME
command to dismount the idle volume and make that drive available.
Attention:
v By overwriting a volume label, you destroy all of the data that resides on the
volume. Use caution when overwriting volume labels to avoid destroying
important data.
v The labels on VolSafe volumes can be overwritten only once. Therefore, you
should use the LABEL LIBVOLUME command only once for VolSafe volumes.
You can guard against overwriting the label by using the OVERWRITE=NO
option on the LABEL LIBVOLUME command.
When you use the LABEL LIBVOLUME command, you can identify the volumes
to be labeled in one of the following ways:
v Explicitly name one volume.
v Enter a range of volumes by using the VOLRANGE parameter.
v Use the VOLLIST parameter to specify a file that contains a list of volume
names or to explicitly name one or more volumes.
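For example, the following sketches use placeholder library and volume names and
assume a library with a barcode reader:
label libvolume autolib search=yes volrange=vol001,vol050 labelsource=barcode checkin=scratch
label libvolume autolib search=yes vollist=vol101,vol102,vol103 labelsource=barcode checkin=scratch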
For information about the AUTOLABEL parameter, see Labeling new volumes
using AUTOLABEL on page 148.
Suppose that you want to label a few new volumes by using a manual tape drive
that is defined as the following:
/dev/mt5
The drive is attached at SCSI address 5. Issue the following command:
label libvolume tsmlibname volname
Restriction: The LABEL LIBVOLUME command selects the next free drive. If you
have more than one free drive, the drive that is selected might not be /dev/mt5.
When you label volumes one at a time, you can specify a volume name.
You can use the LABEL LIBVOLUME command to overwrite existing volume
labels.
Suppose you want to label a few new volumes in a SCSI library that does not have
entry and exit ports. You want to manually insert each new volume into the
library, and you want the volumes to be placed in storage slots inside the library
after their labels are written. You know that none of the new volumes contains
valid data, so it is acceptable to overwrite existing volume labels. You only want to
use one of the library's four drives for these operations.
To automatically label tape volumes, you can use the AUTOLABEL parameter on
the DEFINE and UPDATE LIBRARY commands. Using this parameter eliminates
the need to pre-label a set of tapes.
It is also more efficient than using the LABEL LIBVOLUME command, which
requires you to mount volumes separately. If you use the AUTOLABEL parameter
with a SCSI library, you must check in tapes by specifying
CHECKLABEL=BARCODE on the CHECKIN LIBVOLUME command. The
AUTOLABEL parameter defaults to YES for all non-SCSI libraries and to NO for
SCSI libraries.
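For example, the following sketch enables automatic labeling for a SCSI library
with a placeholder name and then checks in the tapes by bar code:
define library scsilib libtype=scsi autolabel=yes
checkin libvolume scsilib search=yes status=scratch checklabel=barcode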
Tivoli Storage Manager can search all of the storage slots in a library for volumes
and can attempt to label each volume that it finds.
After a volume is labeled, the volume is returned to its original location in the
library. Specify SEARCH=BULK if you want the server to search through all the
slots of bulk entry/exit ports for labeled volumes that it can check in automatically.
The server searches through all slots even if it encounters an unavailable slot.
If the library has a barcode reader, the LABEL LIBVOLUME command can use the
reader to obtain volume names, instead of prompting you for volume names. Use
the SEARCH=YES and LABELSOURCE=BARCODE parameters. If you specify the
LABELSOURCE=BARCODE parameter, the volume bar code is read, and the tape
is moved from its location in the library or in the entry/exit ports to a drive where
the barcode label is written. After the tape is labeled, it is moved back to its
location in the library, to the entry/exit ports, or to a storage slot if the CHECKIN
option is specified.
Suppose that you want to label all volumes in a SCSI library. Enter the following
command:
label libvolume tsmlibname search=yes labelsource=barcode
The LABEL LIBVOLUME command labels volumes in the INSERT category, the
private category (PRIVATECATEGORY), the scratch category
(SCRATCHCATEGORY) and the WORM scratch category
(WORMSCRATCHCATEGORY), but does not label the volumes already checked
into the library.
Suppose that you want to label all of the volumes that are in the INSERT category
in an IBM TotalStorage 3494 Tape Library. Enter the following command:
label libvolume tsmlibname search=yes devtype=3590
You can use the LABEL LIBVOLUME command to label optical disks (3.5-inch and
5.25-inch). For example:
label libvolume opticlib search=yes labelsource=prompt
To inform the server that a new volume is available in an automated library, check
in the volume with the CHECKIN LIBVOLUME command or LABEL LIBVOLUME
command with the CHECKIN option specified. When a volume is checked in, the
server adds the volume to its library volume inventory. You can use the LABEL
LIBVOLUME command to check in and label volumes in one operation.
Notes:
v Do not mix volumes with barcode labels and volumes without barcode labels in
a library device because barcode scanning can take a long time for unlabeled
volumes.
v Any volume that has a bar code beginning with CLN is treated as a cleaning
tape.
v You must use the CHECKLABEL=YES (not NO or BARCODE) option on the
CHECKIN LIBVOLUME command when checking VolSafe volumes into a
library. This is true for both automated cartridge system library software
(ACSLS) and SCSI libraries.
When you check in a volume, you must supply the name of the library and the
status of the volume (private or scratch). To check in one or just a few volumes,
you can specify the name of the volume with the command, and issue the
command for each volume. To check in a larger number of volumes, you can use
the search capability of the CHECKIN command or you can use the VOLRANGE
parameter of the CHECKIN command.
If the library does not have an entry/exit port, Tivoli Storage Manager requests
that the mount operator load the volume into a slot within the library. The request
specifies the location with an element address. For any library or medium changer
that does not have an entry/exit port, you need to know the element addresses for
the cartridge slots and drives. If there is no worksheet listed for your device in
Note: Element addresses are sometimes numbered starting with a number other
than one. Check the worksheet to be sure.
For example, to check in volume VOL001 manually, enter the following command:
checkin libvolume tapelib vol001 search=no status=scratch
If the library has an entry/exit port, you are prompted to insert a cartridge into the
entry/exit port. If the library does not have an entry/exit port, you are prompted
to insert a cartridge into one of the slots in the library. Element addresses identify
these slots. For example, Tivoli Storage Manager finds that the first empty slot is at
element address 5. The message is:
ANR8306I 001: Insert 8MM volume VOL001 R/W in slot with element
address 5 of library TAPELIB within 60 minutes; issue REPLY along
with the request ID when ready.
Check the worksheet for the device if you do not know the location of element
address 5 in the library. To find the worksheet, see http://www.ibm.com/support/
entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager. When you have
inserted the volume as requested, respond to the message from a Tivoli Storage
Manager administrative client. Use the request number (the number at the
beginning of the mount request):
reply 1
Note: A REPLY command is not required if you specify a wait time of zero using
the optional WAITTIME parameter on the CHECKIN LIBVOLUME command. The
default wait time is 60 minutes.
The following command syntax allows you to search for volumes that have already
been inserted into a 349X library from the convenience or bulk I/O station while
specifying SEARCH=NO:
checkin libvolume 3494lib vol001 search=no status=scratch
If the volume has already been inserted, the server finds and processes it. If not,
you can insert the volume into the I/O station during the processing of the
command.
Use this mode when you have a large number of volumes to check in, and you
want to avoid issuing an explicit CHECKIN LIBVOLUME command for each
volume. For example, for a SCSI library you can simply open the library access
door, place all of the new volumes in unused slots, close the door, and issue the
CHECKIN LIBVOLUME command with SEARCH=YES.
This restriction prevents the server from using volumes owned by another
application that is accessing the library simultaneously.
The server searches through all slots even if it encounters an unavailable slot. For
SCSI libraries, the server scans all of the entry/exit ports in the library for
volumes. If a volume is found that contains a valid volume label, it is checked in
automatically. The CHECKLABEL option NO is invalid with this SEARCH option.
When you use the CHECKLABEL=YES parameter, the volume is moved from the
entry/exit ports to the drive where the label is read. After reading the label, the
tape is moved from the drive to a storage slot. When you use the
CHECKLABEL=BARCODE parameter, the volume's bar code is read and the tape
is moved from the entry/exit port to a storage slot. For barcode support to work
correctly, the Tivoli Storage Manager or IBMtape device driver must be installed
for libraries controlled by Tivoli Storage Manager.
When you check in a volume, you can specify whether Tivoli Storage Manager
should read the labels of the media during checkin processing. When
label-checking is on, Tivoli Storage Manager mounts each volume to read the
internal label and only checks in a volume if it is properly labeled. This can
prevent future errors when volumes are actually used in storage pools, but also
increases processing time at check in.
If a library has a barcode reader and the volumes have barcode labels, you can
save time in the check in process. Tivoli Storage Manager uses the characters on
the label as the name for the volume being checked in. If a volume has no barcode
label, Tivoli Storage Manager mounts the volumes in a drive and attempts to read
the recorded label. For example, to use the barcode reader to check in all volumes
found in the TAPELIB library as scratch volumes, enter the following command:
checkin libvolume tapelib search=yes status=scratch checklabel=barcode
For information on how to label new volumes, see Preparing removable media
on page 145.
Use the CHECKIN LIBVOLUME command to allow swapping. When you specify
YES for the SWAP parameter, Tivoli Storage Manager initiates a swap operation if
an empty slot is not available to check in a volume. Tivoli Storage Manager ejects
the volume that it selects for the swap operation from the library and replaces the
ejected volume with the volume that is being checked in. For example:
checkin libvolume auto wpdv00 swap=yes
Tivoli Storage Manager selects the volume to eject by checking first for any
available scratch volume, then for the least frequently mounted volume.
Tips:
v External and manual libraries use separate logical libraries to segregate their
media. Ensuring that the correct media are loaded is the responsibility of the
operator and the library manager software.
v A storage pool can consist of either WORM or RW media, but not both.
v Do not use WORM tapes for database backup or export operations. Doing so
wastes tape following a restore or import operation.
For information about defining device classes for WORM tape media, see
Defining device classes for StorageTek VolSafe devices on page 207 and
Defining tape and optical device classes on page 193.
For information about selecting device drivers for IBM devices and devices from
other vendors, see:
Selecting a device driver on page 85.
Library changers cannot identify the difference between standard read-write (RW)
tape media and the following types of WORM tape media:
v VolSafe
v Sony AIT
v LTO
v SDLT
v DLT
To determine the type of WORM media that is being used, a volume must be
loaded into a drive. Therefore, when checking in one of these types of WORM
volumes, you must use the CHECKLABEL=YES option on the CHECKIN
LIBVOLUME command.
If they provide support for WORM media, IBM 3592 library changers can detect
whether a volume is WORM media without loading the volume into a drive.
Specifying CHECKLABEL=YES is not required. Verify with your hardware vendors
that your 3592 drives and libraries provide the required support.
Issue the LABEL LIBVOLUME command only once for VolSafe volumes. You can
guard against overwriting the label by using the OVERWRITE=NO option on the
LABEL LIBVOLUME command.
If you have SDLT-600, DLT-V4, or DLT-S4 drives and you want to enable them for
WORM media, upgrade the drives using V30 or later firmware available from
Quantum. You can also use DLTIce software to convert unformatted read-write
(RW) volumes or blank volumes to WORM volumes.
In manual libraries, you can use the server to format empty volumes to WORM.
With Tivoli Storage Manager, you manage your volume inventory by controlling
access to volumes, reusing tapes, reusing volumes that were used for database
backup and export operations, and maintaining a supply of scratch volumes.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export.
The requirement also applies to volumes that reside in different libraries but that
are used by the same server.
Tivoli Storage Manager expects to be able to access all volumes it knows about. For
example, Tivoli Storage Manager tries to fill up tape volumes. If a volume
containing client data is only partially full, Tivoli Storage Manager will later
request that volume be mounted to store additional data. If the volume cannot be
mounted, an error occurs.
To make volumes that are not full available to be read but not written to, you can
change the volume access mode. For example, use the UPDATE VOLUME
command with ACCESS=READONLY. The server will not attempt to mount a
volume that has an access mode of unavailable.
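For example, to allow a partially filled volume to be read but not written to, you
might issue a command like the following (VOL002 is an example volume name):
update volume vol002 access=readonly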
If you want to make volumes unavailable in order to send the data they contain
off-site for safekeeping, a more controlled way to do this is to use a copy storage
pool or an active-data pool. You can back up your primary storage pools to a copy
storage pool and then send the copy storage pool volumes off-site. You can also
copy active versions of client backup data to active-data pools, and then send the
volumes off-site. You can track copy storage pool volumes and active-data pool
volumes by changing their access mode to off-site, and updating the volume
history to identify their location. For more information, see Backing up primary
storage pools on page 930.
Over time, media age, and some of the backup data located on them may no longer
be needed. You can set Tivoli Storage Manager policy to determine how many
backup versions are retained and how long they are retained. Then, expiration
processing allows the server to delete files you no longer want to keep. You can
keep the useful data on the media and then reclaim and reuse the media
themselves.
Deleting data - expiration processing
Expiration processing deletes data that is no longer valid either because it
exceeds the retention specifications in policy or because users or
administrators have deleted the active versions of the data.
For more information, see:
v Basic policy planning on page 478
v Running expiration processing to delete expired files on page 514
v File expiration and expiration processing on page 481
Reusing media - reclamation processing
Data on tapes may expire, move, or be deleted. Reclamation processing
consolidates any unexpired data by moving it from multiple volumes onto
fewer volumes. The media can then be returned to the storage pool and
reused.
You can set a reclamation threshold that allows Tivoli Storage Manager to
reclaim volumes whose valid data drops below a threshold. The threshold
is a percentage of unused space on the volume and is set for each storage
pool. The amount of data on the volume and the reclamation threshold for
the storage pool affects when the volume is reclaimed. See Reclaiming
space in sequential-access storage pools on page 372.
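For example, to make volumes in a sequential-access storage pool eligible for
reclamation when 60 percent or more of the space on a volume can be reclaimed,
you might issue a command like the following (TAPEPOOL is an example storage
pool name):
update stgpool tapepool reclaim=60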
Determining when media have reached end of life
You can use Tivoli Storage Manager to display statistics about volumes,
including the number of write operations performed on the media and the
number of write errors. For media initially defined as private volumes,
Tivoli Storage Manager maintains this statistical data, even as the volume
is reclaimed. You can compare the information with the number of write
operations and write errors recommended by the manufacturer. For media
initially defined as scratch volumes, Tivoli Storage Manager overwrites this
statistical data each time the media are reclaimed.
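For example, to display these statistics for a storage pool volume, you might issue
the QUERY VOLUME command in detailed format (VOL001 is an example volume
name):
query volume vol001 format=detailed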
Reclaim any valid data from volumes that have reached end of life. If the
volumes are in automated libraries, check them out of the volume
inventory. Delete private volumes from the database with the DELETE
VOLUME command.
For more information, see Reclaiming space in sequential-access storage
pools on page 372.
Ensuring media are available for the tape rotation
Over time, the demand for volumes may cause the storage pool to run out
of space. You can set the maximum number of scratch volumes high enough
to meet demand by increasing the maximum number of scratch volumes in
the storage pool definition, by making volumes available for reuse through
expiration and reclamation processing, or by doing both.
When you back up the database or export server information, Tivoli Storage
Manager records information about the volumes used for these operations in the
volume history file. Tivoli Storage Manager will not allow you to reuse these
volumes until you delete the volume information from the volume history file. To
reuse volumes that were previously used for database backup or export, use the
DELETE VOLHISTORY command.
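For example, to delete the volume history entries for database backup volumes that
are more than 30 days old, you might issue a command like the following; adjust the
type and date to your own retention requirements:
delete volhistory type=dbbackup todate=today-30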
Note: If your server uses the disaster recovery manager function, the volume
information is automatically deleted during MOVE DRMEDIA command
processing.
For additional information about DRM, see Chapter 35, Disaster recovery
manager, on page 1029.
For information about the volume history file, see Protecting the volume history
file on page 925.
When you define a storage pool, you must specify the maximum number of
scratch volumes that the storage pool can use. Tivoli Storage Manager
automatically requests a scratch volume when needed. When the number of
scratch volumes that Tivoli Storage Manager is using for the storage pool exceeds
the maximum number of scratch volumes specified, the storage pool can run out of
space.
When you exceed the maximum number of scratch volumes, you can do one or
both of the following:
v Increase the maximum number of scratch volumes by updating the storage pool
definition, as shown in the example after this list. Label new volumes to be used
as scratch volumes if needed.
v Make volumes available for reuse by running expiration processing and
reclamation, to consolidate data onto fewer volumes. See Reusing tapes in
storage pools on page 156.
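For example, to raise the maximum number of scratch volumes for a storage pool,
you might issue a command like the following (TAPEPOOL is an example storage
pool name):
update stgpool tapepool maxscratch=100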
Remember: Because you might need additional volumes for future recovery
operations, consider labeling and setting aside extra scratch volumes.
Tivoli Storage Manager cancels (rolls back) a transaction if volumes, either private
or scratch, are not available to complete the data storage operation. After Tivoli
Storage Manager begins a transaction by writing to a WORM volume, the written
space on the volume cannot be reused, even if the transaction is canceled.
For example, if a client starts to back up data and does not have sufficient volumes
in the library, Tivoli Storage Manager cancels the backup transaction. The WORM
volumes to which Tivoli Storage Manager had already written for the canceled
backup are wasted because the volumes cannot be reused. Suppose that you have
WORM platters that hold 2.6 GB each. A client starts to back up a 12 GB file. If
Tivoli Storage Manager cannot acquire a fifth scratch volume after filling four
volumes, Tivoli Storage Manager cancels the backup operation. The four volumes
that Tivoli Storage Manager already filled cannot be reused.
See How the server groups files before storing on page 272 for information.
Tivoli Storage Manager tracks the scratch and private volumes available in an
automated library through a library volume inventory. Tivoli Storage Manager
maintains an inventory for each automated library. The library volume inventory is
separate from the inventory of volumes for each storage pool. To add a volume to
a library's volume inventory, you check in a volume to that Tivoli Storage Manager
library.
To ensure that Tivoli Storage Manager's library volume inventory remains accurate,
you must check out volumes when you need to physically remove volumes from a
SCSI, 349X, or automated cartridge system library software (ACSLS) library. When
you check out a volume that is being used by a storage pool, the volume remains
in the storage pool. If Tivoli Storage Manager requires the volume to be mounted
while it is checked out, a message to the mount operator's console is displayed
with a request to check in the volume. If the check in is not successful, Tivoli
Storage Manager marks the volume as unavailable.
While a volume is in the library volume inventory, you can change its status from
scratch to private, or from private to scratch, by using the UPDATE LIBVOLUME
command.
For details on the checkin procedure, see Checking new volumes into a library
on page 149.
You cannot change the status of a volume from private to scratch if the volume
belongs to a storage pool or is defined in the volume history file. You can also use
the UPDATE LIBVOLUME command to correct a mistake, for example, if you
checked volumes into the library with the wrong status.
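For example, to change the status of a library volume from scratch to private, you
might issue a command like the following (the library and volume names are
examples only):
update libvolume tapelib vol001 status=private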
If you check out a volume that is defined in a storage pool, the server may attempt
to access it later to read or write data. If this happens, the server requests that the
volume be checked in.
To find the overflow location of a storage pool, you can use the QUERY MEDIA
command. This command can also be used to generate commands. For example,
you can issue a QUERY MEDIA command to get a list of all volumes in the
overflow location, and at the same time generate the commands to check in all
those volumes to the library. For example, enter this command:
query media format=cmd stgpool=archivepool whereovflocation=Room2948
cmd="checkin libvol autolib &vol status=private"
cmdfilename="/tsm/move/media/checkin.vols"
Use the DAYS parameter to specify the number of days that must elapse before the
volumes are eligible for processing by the QUERY MEDIA command.
The file that contains the generated commands can be run using the Tivoli Storage
Manager MACRO command. For this example, the file may look like this:
checkin libvol autolib TAPE13 status=private
checkin libvol autolib TAPE19 status=private
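To run the generated commands, you might then issue the MACRO command with
the file name from the QUERY MEDIA example above:
macro /tsm/move/media/checkin.vols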
Issue the AUDIT LIBRARY command to restore the inventory to a consistent state.
Missing volumes are deleted, and the locations of the moved volumes are updated.
However, new volumes are not added during an audit.
Unless your SCSI library has a barcode reader, the server mounts each volume
during the audit to verify the internal labels on volumes. For 349X libraries, the
server uses the information from the Library Manager.
If a SCSI library has a barcode reader, you can save time by using the barcode
reader to verify the identity of volumes. If a volume has a barcode label, the server
uses the characters on the label as the name for the volume. The volume is not
mounted to verify that the barcode name matches the internal volume name. If a
volume has no barcode label, the server mounts the volume and attempts to read
the recorded label. For example, to audit the TAPELIB library using its barcode
reader, issue the following command:
audit library tapelib checklabel=barcode
If the number of scratch volumes that Tivoli Storage Manager is using for the
storage pool exceeds the number specified in the storage pool definition, perform
the following steps:
1. Add scratch volumes to the library by checking in volumes. Label them if
necessary. You might need to use an overflow location to move volumes out of
the library to make room for these scratch volumes.
2. Increase the maximum number of scratch volumes by updating the storage
pool definition. The increase should equal the number of scratch volumes that
you checked in.
Keep in mind that you might need additional volumes for future recovery
operations, so consider labeling and setting aside extra scratch volumes.
The library client contacts the library manager, when the library manager starts
and the storage device initializes, or after a library manager is defined to a library
client. The library client confirms that the contacted server is the library manager
for the named library device. The library client also compares drive definitions
with the library manager for consistency. The library client contacts the library
manager for each of the following operations:
Volume Mount
A library client sends a request to the library manager for access to a
particular volume in the shared library device. For a scratch volume, the
library client does not specify a volume name. If the library manager
cannot access the requested volume, or if scratch volumes are not available, the
library manager denies the mount request.
Table 16 shows the interaction between library clients and the library manager in
processing Tivoli Storage Manager operations.
Table 16. How SAN-enabled servers process Tivoli Storage Manager operations

Query library volumes (QUERY LIBVOLUME)
Library manager: Displays the volumes that are checked into the library. For
private volumes, the owner server is also displayed.
Library client: Not applicable.

Check in and check out library volumes (CHECKIN LIBVOLUME, CHECKOUT LIBVOLUME)
Library manager: Performs the commands to the library device.
Library client: Not applicable. When a checkin operation must be performed
because of a client restore, a request is sent to the library manager server.

Move media and move DRM media (MOVE MEDIA, MOVE DRMEDIA)
Library manager: Only valid for volumes used by the library manager server.
Library client: Requests that the library manager server perform the operations.
Generates a checkout process on the library manager server.

Audit library inventory (AUDIT LIBRARY)
Library manager: Performs the inventory synchronization with the library device.
Library client: Performs the inventory synchronization with the library manager
server.

Label a library volume (LABEL LIBVOLUME)
Library manager: Performs the labeling and checkin of media.
Library client: Not applicable.

Dismount a volume (DISMOUNT VOLUME)
Library manager: Sends the request to the library device.
Library client: Requests that the library manager server perform the operation.

Query a volume (QUERY VOLUME)
Library manager: Checks whether the volume is owned by the requesting library
client server and checks whether the volume is in the library device.
Library client: Requests that the library manager server perform the operation.
For manual libraries, Tivoli Storage Manager detects when there is a cartridge
loaded in a drive, and no operator reply is necessary. For automated libraries, the
CHECKIN LIBVOLUME and LABEL LIBVOLUME commands involve inserting
cartridges into slots and, depending on the value of the WAITTIME parameter,
issuing a reply message. (If the value of the parameter is zero, no reply is
required.) The CHECKOUT LIBVOLUME command involves inserting cartridges
into slots and, in all cases, issuing a reply message.
For example, to start an administrative client in mount mode, enter the following
command:
> dsmadmc -mountmode
Someone you designate as the operator must respond to the mount requests by
putting in tape volumes as requested.
For automated libraries, mount messages are sent to the library and not to an
operator. Messages about problems with the library are sent to the mount message
queue. You cannot use the Tivoli Storage Manager REPLY command to respond to
these messages.
When you issue the QUERY REQUEST command, Tivoli Storage Manager displays
requested actions and the amount of time remaining before the requests time out.
For example, you enter the command as follows:
query request
The first parameter for the REPLY command is the request identification number
that tells the server which of the pending operator requests has been completed.
This three-digit number is always displayed as part of the request message. It can
also be obtained by issuing a QUERY REQUEST command. If the request requires
the operator to provide a device to be used for the mount, the second parameter
for this command is a device name.
For example, enter the following command to respond to request 001 for tape
drive TAPE01:
reply 1
The CANCEL REQUEST command must include the request identification number.
This number is included in the request message. You can also obtain it by issuing a
QUERY REQUEST command. See Requesting information about pending operator
requests on page 165.
You can specify the PERMANENT parameter if you want to mark the requested
volume as UNAVAILABLE. This process is useful if, for example, the volume has
been moved to a remote site or is otherwise inaccessible. By specifying
PERMANENT, you ensure that the server does not try to mount the requested
volume again.
For most of the requests associated with automated (SCSI) libraries, an operator
must perform a hardware or system action to cancel the requested mount. For such
requests, the CANCEL REQUEST command is not accepted by the server.
For example, a client requests that an archived file be retrieved. The file was
archived in a storage pool in an automated library. The server looks for the volume
containing the file in the automated library, but cannot find the volume. The server
then requests that the volume be checked in.
If the volume that the server requests is available, put the volume in the library
and check in the volume using the normal procedures (Checking new volumes
into a library on page 149).
If the volume requested is unavailable (lost or destroyed), update the access mode
of the volume to UNAVAILABLE by using the UPDATE VOLUME command.
Then cancel the server's request for checkin by using the CANCEL REQUEST
command. (Do not cancel the client process that caused the request.) To get the ID
of the request to cancel, use the QUERY REQUEST command.
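For example, assuming the requested volume is VOL001 and the pending request
has an ID of 2 (both are examples only), the sequence might look like this:
update volume vol001 access=unavailable
query request
cancel request 2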
If you do not respond to the server's checkin request within the mount-wait period
of the device class for the storage pool, the server marks the volume as
unavailable.
For information about setting mount retention times, see Controlling the amount
of time that a volume remains mounted on page 195.
You can request either a standard or a detailed report. For example, to display
information about all libraries, issue the following command:
query library
Updating libraries
You can update an existing library by issuing the UPDATE LIBRARY command. To
update the device names of a library, issue the UPDATE PATH command. You
cannot update a MANUAL library.
Automated libraries
If your system or device is re-configured and the device name changes, you might
need to update the device name.
The examples below show you how you can use the UPDATE LIBRARY and
UPDATE PATH commands for the following library types:
v SCSI
v 349X
v ACSLS
v External
Examples:
v SCSI library
Update the path from SERVER1 to a SCSI library named SCSILIB:
update path server1 scsilib srctype=server desttype=library device=/dev/lb1
Update the definition of a SCSI library named SCSILIB defined to a library client
so that a new library manager is specified:
update library scsilib primarylibmanager=server2
Deleting libraries
You can delete libraries by issuing the DELETE LIBRARY command.
Before you delete a library with the DELETE LIBRARY command, you must delete
all of the drives that have been defined as part of the library and delete the path to
the library.
For example, suppose that you want to delete a library named 8MMLIB1. After
deleting all of the drives defined as part of this library and the path to the library,
issue the following command to delete the library itself:
delete library 8mmlib1
Managing drives
You can query, update, clean, and delete drives by using Tivoli Storage Manager
commands.
The UPDATE DRIVE command accepts wildcard characters for both a library
name and a drive name. See the Administrator's Reference for information about this
command and the use of wildcard characters.
For example, to query all drives associated with your server, enter the following
command:
query drive
Updating drives
You can change the attributes of a drive definition by issuing the UPDATE DRIVE
command.
The following are attributes of a drive definition that you can change:
v The element address, if the drive is located in a SCSI or virtual tape library
(VTL)
v The ID of a drive in an ACSLS library
v The cleaning frequency
v Change whether the drive is online or offline
For example, to change the element address of a drive named DRIVE3 to 119, issue
the following command:
update drive auto drive3 element=119
If you are reconfiguring your system, you can change the device name of a drive
by issuing the UPDATE PATH command. For example, to change the device name of a
drive named DRIVE3, issue the following command:
update path server1 drive3 srctype=server desttype=drive library=scsilib
device=/dev/rmt0
Remember: You cannot change the element number or the device name if a drive
is in use. See Taking drives offline on page 171. If a drive has a volume
mounted, but the volume is idle, it can be explicitly dismounted. See
Dismounting idle volumes on page 167.
If you take a drive offline while it is in use, the mounted volume completes its
current process. If this volume was part of a series of volumes in a transaction, the
drive is no longer available to complete mounting the series. If no other drives are
available, the active process may fail. The offline state is retained even if the server
is halted and brought up again. If a drive is marked offline when the server is
brought up, a warning is issued noting that the drive must be manually brought
online. If all the drives in a library are taken offline, processes requiring a library
mount point will fail, rather than queue up for one.
The ONLINE parameter specifies the value of the drive's online state, even if the
drive is in use. ONLINE=YES indicates that the drive is available for use.
ONLINE=NO indicates that the drive is not available for use (offline). Do not
specify other optional parameters along with the ONLINE parameter. If you do,
the drive will not be updated, and the command will fail when the drive is in use.
You can specify the ONLINE parameter when the drive is involved in an active
process or session, but this is not recommended.
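For example, to take a drive offline and later bring it back online, you might issue
commands like the following (the library name AUTO and drive name DRIVE3 are
from the earlier example):
update drive auto drive3 online=no
update drive auto drive3 online=yes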
Drive encryption
Drive encryption protects tapes that contain critical or sensitive data (for example,
tapes that contain sensitive financial information). Drive encryption is particularly
beneficial for tapes that are moved from the Tivoli Storage Manager server
environment to an off-site location.
Drives must be able to recognize the correct format. With Tivoli Storage Manager,
you can use the following encryption methods:
Table 17. Encryption methods supported

Drive                         Application method  Library method        System method
3592 generation 2 and later   Yes                 Yes                   Yes
IBM LTO generation 4          Yes                 Yes, but only if      Yes
                                                  your system hardware
                                                  (for example, 3584)
                                                  supports it
HP LTO generation 4           Yes                 No                    No
Oracle StorageTek T10000B     Yes                 No                    No
Oracle StorageTek T10000C     Yes                 No                    No
A library can contain a mixture of drives, some of which support encryption and
some that do not. (For example, a library might contain two LTO-2 drives, two
LTO-3 drives, and two LTO-4 drives.) You can also mix media in a library using,
for example, a mixture of encrypted and non-encrypted device classes having
different tape and drive technologies. However, all LTO-4 drives must support
encryption if Tivoli Storage Manager is to use drive encryption. In addition, all
drives within a logical library must use the same method of encryption. When
using Tivoli Storage Manager, do not create an environment in which some drives
use the Application method and some drives use the Library or System methods of
encryption.
For more information about setting up your hardware environment to use drive
encryption, refer to your hardware documentation.
For details about the DRIVEENCRYPTION parameter, see the following topics:
v Encrypting data with drives that are 3592 generation 2 and later on page 198
v Encrypting data using LTO generation 4 tape drives on page 205
v Enabling ECARTRIDGE drive encryption on page 208 and Disabling
ECARTRIDGE drive encryption on page 208
With logical block protection, you can identify errors that occur while data is being
written to tape and while data is transferred from the tape drive to Tivoli Storage
Manager through the storage area network. Drives that support logical block
protection validate data during read and write operations. The Tivoli Storage
Manager server validates data during read operations.
If validation by the drive fails during write operations, it can indicate that data
was corrupted while being transferred to tape. The Tivoli Storage Manager server
fails the write operation. You must restart the operation to continue. If validation
by the drive fails during read operations, it can indicate that the tape media is
corrupted. If validation by the Tivoli Storage Manager server fails during read
operations, it can indicate that data was corrupted while being transferred from the
tape drive.
If logical block protection is disabled on a tape drive, or the drive does not support
logical block protection, the Tivoli Storage Manager server can read protected data.
However, the data is not validated.
Logical block protection is superior to the CRC validation that you can specify
when you define or update a storage pool definition. When you specify CRC
validation for a storage pool, data is validated only during volume auditing
operations. Errors are identified after data is written to tape.
Restriction: You cannot use logical block protection for sequential data such as
backup sets and database backups.
The following table shows the media and the formats that you can use with drives
that support logical block protection.
Tip: If you have a 3592, LTO, or Oracle StorageTek drive that is not capable of
logical block protection, you can upgrade the drive with firmware that provides
logical block protection.
Logical block protection is only available for drives that are in MANUAL, SCSI,
349x, and ACSLS libraries. Logical block protection is not available for drives that
are in external libraries. For the most current information about support for logical
block protection, see http://www.ibm.com/support/
docview.wss?uid=swg21568108.
To use logical block protection for write operations, all the drives in a library must
support logical block protection. If a drive is not capable of logical block
protection, volumes that have read/write access are not mounted. However, the
server can use the drive to mount volumes that have read-only access. The
protected data is read and validated by the Tivoli Storage Manager server if logical
block protection is enabled for read/write operations.
To enable logical block protection, specify the LBPROTECT parameter on the DEFINE
DEVCLASS or the UPDATE DEVCLASS command for the 3592, LTO, and ECARTRIDGE
device types:
v To enable logical block protection, specify a value of READWRITE or
WRITEONLY for the LBPROTECT parameter.
For example, to specify logical block protection during read/write operations for
a 3592 device class named 3592_lbprotect, issue the following command:
define devclass 3592_lbprotect devtype=3592 library=3594 lbprotect=readwrite
Tips:
If you update the value of the LBPROTECT parameter from NO to READWRITE
or WRITEONLY and the server selects a filling volume without logical block
protection for write operations, the server issues a message each time the
volume is mounted. The message indicates that data will be written to the
volume without logical block protection. To prevent this message from
displaying or to have Tivoli Storage Manager only write data with logical
block protection, update the access of filling volumes without logical block
protection to read-only.
To reduce the performance effects, do not specify the CRCDATA parameter on
the DEFINE STGPOOL or UPDATE STGPOOL command.
When data is validated during read operations by both the drive and by the
Tivoli Storage Manager server, it can slow server performance during restore
and retrieval operations. If the time that is required for restore and retrieval
operations is critical, you can change the setting of the LBPROTECT parameter
from READWRITE to WRITEONLY to increase the restore or retrieval speed.
After data is restored or retrieved, you can reset the LBPROTECT parameter to
READWRITE.
v To disable logical block protection, specify a value of NO for the LBPROTECT
parameter.
Restriction: If logical block protection is disabled, the server does not write to
an empty tape with logical block protection. However, if a filling volume with
logical block protection is selected, the server continues to write to the volume
with logical block protection. To prevent the server from writing to tapes with
logical block protection, change access of filling volumes with logical block
protection to read-only. When data is read, the CRC on each block is not
checked by either the drive or the server.
If the server attempts to enable logical block protection on a drive that does not
support it, the server issues an error message that indicates that the drive does not
support logical block protection.
To determine whether a volume has logical block protection, issue the QUERY
VOLUME command and verify the value in the field Logical Block Protection.
If you use the UPDATE DEVCLASS command to change the setting for logical block
protection, the change applies only to empty volumes. Filling and full volumes
maintain their status of logical block protection until they are empty and ready to
be refilled.
For example, suppose that you change the value of the LBPROTECT parameter from
READWRITE to NO. If the server selects a volume that is associated with the
device class and that has logical block protection, the server continues writing
protected data to the volume.
Remember:
v Before you select the volume, the Tivoli Storage Manager server does not verify
whether the volume has logical block protection.
v If a drive does not support logical block protection, the mounts of volumes with
logical block protection for write operations fail. To prevent the server from
mounting the protected volumes for write operations, change the volume access
to read-only. Also, disable logical block protection to prevent the server from
enabling the feature on the tape drive.
v If a drive does not support logical block protection, and logical block protection
is disabled, the server reads data from protected volumes. However, the data is
not validated by the server and the tape drive.
To determine whether a volume has logical block protection, issue the QUERY
VOLUME command and verify the value in the field Logical Block Protection.
Tip: Consider updating the access of filling volumes to read-only if you update the
value of the LBPROTECT parameter in one of the following ways:
v READWRITE or WRITEONLY to NO
v NO to READWRITE or WRITEONLY
For example, suppose that you change the setting of the LBPROTECT parameter from
NO to READWRITE. If the server selects a filling volume without logical block
protection for write operations, the server issues a message each time the volume
is mounted. The message indicates that data will be written to the volume without
logical block protection. To prevent this message from being displayed or to have
Tivoli Storage Manager only write data with logical block protection, update the
access of filling volumes without logical block protection to read-only.
Suppose, for example, that you have a 3584 library that has LTO-5 drives and that
you want to use for protected and unprotected data. To define the required device
classes and storage pools, you can issue the following commands.
define library 3584 libtype=scsi
define devclass lbprotect library=3584 devtype=lto lbprotect=readwrite
define devclass normal library=3584 devtype=lto lbprotect=no
define stgpool lbprotect_pool lbprotect maxscratch=10
define stgpool normal_pool normal maxscratch=10
Cleaning drives
The server can control cleaning tape drives in SCSI libraries and offers partial
support for cleaning tape drives in manual libraries.
For automated library devices, you can automate cleaning by specifying the
frequency of cleaning operations and checking a cleaner cartridge into the library's
volume inventory. Tivoli Storage Manager mounts the cleaner cartridge as
specified. For manual library devices, Tivoli Storage Manager issues a mount
request for the cleaner cartridge.
| Note: Use library based cleaning for automated tape libraries that support this
| function.
Drive-cleaning considerations
Some SCSI libraries provide automatic drive cleaning. In such cases, choose either
the library drive cleaning or the Tivoli Storage Manager drive cleaning, but not
both.
| Library based cleaning provides several advantages for automated tape libraries
| that support this function:
| v Library based cleaning lowers the burden on the Tivoli Storage Manager
| administrator to manage cleaning cartridges.
| v It can improve cleaning cartridge usage rates. Most tape libraries track the
| number of cleans left based on the hardware indicators. Tivoli Storage Manager
| uses a raw count.
| v Unnecessary cleaning is reduced. Modern tape drives do not need cleaning at
| fixed intervals and can detect and request when cleaning is required.
Some devices require a small amount of idle time between mount requests to start
drive cleaning. However, Tivoli Storage Manager tries to minimize the idle time for
a drive. The result may be to prevent the library drive cleaning from functioning
effectively. If this happens, try using Tivoli Storage Manager to control drive
cleaning. Set the frequency to match the cleaning recommendations from the
manufacturer.
If you have Tivoli Storage Manager control drive cleaning, disable the library drive
cleaning function to prevent problems. If the library drive cleaning function is
enabled, some devices automatically move any cleaner cartridge that is found in
the library to slots in the library that are dedicated for cleaner cartridges. An
application does not know that these dedicated slots exist. You cannot check a
cleaner cartridge into the Tivoli Storage Manager library inventory until you
disable the library drive cleaning function.
Restrictions:
v For IBM 3570, 3590, and 3592 drives, specify a value for the
CLEANFREQUENCY parameter rather than specify ASNEEDED. Using the
cleaning frequency recommended by the product documentation will not
overclean the drives.
v The CLEANFREQUENCY=ASNEEDED parameter value does not work for
all tape drives. To determine whether a drive supports this function, see the
following Web site: http://www.ibm.com/software/sysmgmt/products/
To have the server control drive cleaning without operator intervention, you must
check a cleaner cartridge into an automated library's volume inventory. As a best
practice, check in cleaner cartridges one-at-a-time and do not use the search
function when checking in a cleaner cartridge.
For example, if you need to check in both data cartridges and cleaner cartridges,
put the data cartridges in the library and check them in first. You can use the
search function of the CHECKIN LIBVOLUME command (or the LABEL
LIBVOLUME command if you are labeling and checking in volumes). Then check
in the cleaner cartridge to the library by using one of the following methods.
v Check in without using search:
checkin libvolume autolib1 cleanv status=cleaner cleanings=10
checklabel=no
The server then requests that the cartridge be placed in the entry/exit port, or
into a specific slot.
v Check in using search, but limit the search by using the VOLRANGE or
VOLLIST parameter:
checkin libvolume autolib1 status=cleaner cleanings=10 search=yes
checklabel=barcode vollist=cleanv
The process scans the library by using the barcode reader, looking for the
CLEANV volume.
If your library has limited capacity and you do not want to use a slot in your
library for a cleaner cartridge, you can still make use of the server's drive cleaning
function.
Set the cleaning frequency for the drives in the library. When a drive needs
cleaning based on the frequency setting, the server issues the message, ANR8914I.
For example:
ANR8914I Drive DRIVE1 in library AUTOLIB1 needs to be cleaned.
You can use that message as a cue to manually insert a cleaner cartridge into the
drive. However, the server cannot track whether the drive has been cleaned.
To ensure that drives are cleaned as needed, you must monitor the cleaning
messages for any problems.
When a drive needs to be cleaned, the server runs the cleaning operation after
dismounting a data volume if a cleaner cartridge is checked in to the library. If the
cleaning operation fails or is canceled, or if no cleaner cartridge is available, then
the indication that the drive needs cleaning is lost. Monitor cleaning messages for
these problems. If necessary, use the CLEAN DRIVE command to have the server
try the cleaning again, or manually load a cleaner cartridge into the drive.
The server uses a cleaner cartridge for the number of cleanings that you specify
when you check in the cleaner cartridge. If you check in two or more cleaner
cartridges, the server uses only one of the cartridges until the designated number
of cleanings for that cartridge is reached. Then the server begins to use the next
cleaner cartridge. If you check in two or more cleaner cartridges and issue two or
more CLEAN DRIVE commands concurrently, the server uses multiple cartridges
at the same time and decrements the remaining cleanings on each cartridge.
Visually verify that cleaner cartridges are in the correct storage slots before issuing
any of the following commands:
v AUDIT LIBRARY
v CHECKIN LIBVOLUME with SEARCH specified
v LABEL LIBVOLUME with SEARCH specified
To find the correct slot for a cleaner cartridge, use the QUERY LIBVOLUME
command.
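For example, to display the home slot of the cleaner cartridge CLEANV in the
example library AUTOLIB1, you might issue:
query libvolume autolib1 cleanv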
When a drive needs cleaning, the server loads what its database shows as a cleaner
cartridge into the drive. The drive then moves to a READY state, and Tivoli
Storage Manager detects that the cartridge is a data cartridge. The server then
performs the following steps:
1. The server attempts to read the internal tape label of the data cartridge.
2. The server ejects the cartridge from the drive and moves it back to the home
slot of the cleaner cartridge within the library. If the eject fails, the server
marks the drive offline and issues a message that the cartridge is still in the
drive.
3. The server checks out the cleaner cartridge to avoid selecting it for another
drive cleaning request. The cleaner cartridge remains in the library but no
longer appears in the Tivoli Storage Manager library inventory.
4. If the server was able to read the internal tape label, the server checks the
volume name against the current library inventory, storage pool volumes, and
the volume history file.
v If there is not a match, an administrator probably checked in a data cartridge
as a cleaner cartridge by mistake. Now that the volume is checked out, you
do not need to do anything else.
v If there is a match, the server issues messages that manual intervention and a
library audit are required. Library audits can take considerable time, so an
administrator should issue the command when sufficient time permits. See
Auditing a library's volume inventory on page 162.
Deleting drives
You can delete drive definitions by issuing the DELETE DRIVE command.
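For example, assuming a drive named TAPEDRV3 in a library named MANLIB (the
names used elsewhere in this chapter), you might first delete the path to the drive
and then delete the drive definition:
delete path server1 tapedrv3 srctype=server desttype=drive library=manlib
delete drive manlib tapedrv3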
Requesting information about paths
You can request either a standard or a detailed report by issuing the QUERY PATH
command. This command accepts wildcard characters for both a source name and
a destination name. See the
Administrator's Reference for information about this command and the use of
wildcard characters.
For example, to display information about all paths, issue the following command:
query path
Updating paths
You can update an existing path by issuing the UPDATE PATH command.
The following examples show how you can use the UPDATE PATH commands for
the certain path types:
v Library paths
Update the path to change the device name for a SCSI library named SCSILIB:
update path server1 scsilib srctype=server desttype=library device=/dev/lb1
v Drive paths
Update the path to change the device name for a drive named NASDRV1:
update path nas1 nasdrv1 srctype=datamover desttype=drive
library=naslib device=/dev/mt1
Deleting paths
You can delete an existing path definition by issuing the DELETE PATH command.
A path cannot be deleted if the destination is currently in use. Before you can
delete a path to a device, you must delete the device.
Delete a path from a NAS data mover NAS1 to the library NASLIB.
delete path nas1 naslib srctype=datamover desttype=library
Attention: If you delete the path to a device or make the path offline, you disable
access to that device.
You can request either a standard or a detailed report about data movers by
issuing the QUERY DATAMOVER command. For example, to display a standard
report about all data movers, issue the following command:
query datamover *
For example, to update the data mover for the node named NAS1 to change the IP
address, issue the following command:
update datamover nas1 hladdress=9.67.97.109
Before you can delete a data mover definition, you must delete all paths defined
for the data mover. To delete a data mover named NAS1, issue the following
command:
delete datamover nas1
A log page is created and can be retrieved at any given time or at a specific time
such as when a drive is dismounted.
See Managing libraries on page 168, Managing drives on page 169, and
Managing paths on page 181 for information about displaying library, drive, and
path information, and updating and deleting libraries and drives.
Defining libraries
Before you can use a drive, you must first define the library to which the drive
belongs. This is true for both manually mounted drives and drives in automated
libraries.
For example, you have several stand-alone tape drives. You can define a library
named MANUALMOUNT for these drives by using the following command:
define library manualmount libtype=manual
For all libraries other than manual libraries, you define the library and then define
a path from the server to the library. For example, if you have an IBM 3583 device,
you can define a library named ROBOTMOUNT using the following command:
define library robotmount libtype=scsi
Next, you use the DEFINE PATH command. In the path definition, you must specify
the DEVICE parameter. The DEVICE parameter is required and specifies the device
special file name by which the library's robotic mechanism is known to the device
driver.
For more about device special file names, see Device special file names on page
86.
define path server1 robotmount srctype=server desttype=library
device=/dev/lb0
For more information about paths, see Defining paths on page 188.
If you have an IBM 3494 Tape Library Dataserver, you can define a library named
AUTOMOUNT using the following command:
define library automount libtype=349x
Next, assuming that you have defined one LMCP whose device name is
/dev/lmcp0, you define a path for the library:
define path server1 automount srctype=server desttype=library
device=/dev/lmcp0
If you choose, you can specify the serial number when you define the library to
the server. For convenience, the default is to allow the server to obtain the serial
number from the library itself at the time that the path is defined.
If you specify the serial number, the server confirms that the serial number is
correct when you define the path to the library. When you define the path, you can
set AUTODETECT=YES to allow the server to correct the serial number if the
number that it detects does not match what you entered when you defined the
library.
Depending on the capabilities of the library, the server may not be able to
automatically detect the serial number. Not all devices are able to return a serial
number when asked for it by an application such as the server. In this case, the
server will not record a serial number for the device, and will not be able to
confirm the identity of the device when you define the path or when the server
uses the device. See Impacts of device changes on the SAN on page 143.
Defining drives
To inform the server about a drive that can be used to access storage volumes,
issue the DEFINE DRIVE command, followed by the DEFINE PATH command.
When issuing the DEFINE DRIVE command, you must provide some or all of the
following information:
Library name
The name of the library in which the drive is located.
Drive name
The name assigned to the drive.
Serial number
The serial number of the drive. The serial number parameter applies only
to drives in SCSI or virtual tape libraries (VTLs). With the serial number, the
server can confirm the identity of the device when you define the path or
when the server uses the device.
You can specify the serial number if you choose. The default is to enable
the server to obtain the serial number from the drive itself at the time that
the path is defined. If you specify the serial number, the server confirms
that the serial number is correct when you define the path to the drive.
When you define the path, you can set AUTODETECT=YES to enable the
server to correct the serial number if the number that it detects does not
match what you entered when you defined the drive.
Depending on the capabilities of the drive, the server might not be able to
automatically detect the serial number. In this case, the server does not
record a serial number for the device, and is not able to confirm the
identity of the device when you define the path or when the server uses
the device. See Impacts of device changes on the SAN on page 143.
Element address
The element address of the drive. The ELEMENT parameter applies only
to drives in SCSI or VTL libraries. The element address is a number that
indicates the physical location of the drive within an automated library.
For example, to define a drive that belongs to the manual library named MANLIB,
enter this command:
define drive manlib tapedrv3
Next, you define the path from the server to the drive, using the device name used
to access the drive:
define path server1 tapedrv3 srctype=server desttype=drive library=manlib
device=/dev/mt3
For more information about paths, see Defining paths on page 188.
When issuing the DEFINE DATAMOVER command, you must provide some or all
of the following information (an example follows the list):
Data mover name
The name of the defined data mover.
Type The type of data mover (NAS).
High level address
The high level address is either the numerical IP address or the domain
name of a NAS file server.
Low level address
The low level address specifies the TCP port number used to access a NAS
file server.
User ID
The user ID specifies the ID for a user when initiating a Network Data
Management Protocol (NDMP) session with a NAS file server.
Password
The password specifies the password associated with a user ID when
initiating an NDMP session with a NAS file server. Check with your NAS
file server vendor for user ID and password conventions.
Online
The online parameter specifies whether the data mover is online.
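The following sketch shows a DEFINE DATAMOVER command with these
parameters. The addresses, user ID, password, and the DATAFORMAT value (which
is required for NDMP operations but is not described above) are placeholders that
you would replace with values for your NAS file server:
define datamover nas1 type=nas hladdress=netapp1.example.com lladdress=10000
userid=root password=secret online=yes dataformat=netappdump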
Defining paths
Before a device can be used, a path must be defined between the device and the
server or the device and the data mover responsible for outboard data movement.
The DEFINE PATH command must be used to define the following path
relationships:
v Between a server and a drive or a library.
v Between a storage agent and a drive.
v Between a data mover and a drive or a library.
When issuing the DEFINE PATH command, you must provide some or all of the
following information:
Source name
The name of the server, storage agent, or data mover that is the source for
the path.
Destination name
The assigned name of the device that is the destination for the path.
Source type
The type of source for the path. (A storage agent is considered a type of
server for this purpose.)
Destination type
The type of device that is the destination for the path.
Library name
The name of the library that a drive is defined to if the drive is the
destination of the path.
Device
The special file name of the device. This parameter is used when defining
a path between a server, a storage agent, or a NAS data mover and a
library or drive.
Automatic detection of serial number and element address
For devices on a SAN, you can specify whether the server should correct
the serial number or element address of a drive or library, if it was
incorrectly specified on the definition of the drive or library. The server
uses the device name to locate the device and compares the serial number
(and the element address for a drive) that it detects with that specified in
the definition of the device. The default is to not allow the correction.
If you have a SCSI type library named AUTODLTLIB that has a device name of
/dev/lb3, define the path to the server named ASTRO1 by doing the following:
define path astro1 autodltlib srctype=server desttype=library
device=/dev/lb3
Problems can arise if path definitions to the storage that storage agents use are not
accurate. The server has no way of validating the directory structure and storage
paths that storage agents see, so diagnosing failures of this nature is very difficult.
The mechanisms to map the server view of storage to the storage agent view of
storage are DEFINE DEVCLASS-FILE for the server and DEFINE PATH for the
storage agent or agents. The DIRECTORY parameter in the DEFINE
DEVCLASS-FILE command specifies the directory location or locations where the
server places files that represent storage volumes for the FILE device class. For
storage agents, the DIRECTORY parameter in the DEFINE PATH command serves
the same purpose. The device class definition sets up a directory structure for the
server and the DEFINE PATH definition tells the storage agent what that directory
structure is. If path information is incorrect, the server and storage agent or agents
will not be able to store files.
In order for the server and storage agent to be consistent on the storage they are
sharing, the directories defined in the device class definition for the server and on
the DEFINE PATH command for the storage agent should reference the same
storage, in the same order and with an equal number of directories. This should be
the same for each FILE drive that the storage agent is using. Shared file libraries
are used to set up the storage pool that will be shared between the server and
storage agents. FILE drives within that library are used so that the DEFINE PATH
command can convey the information to the storage agent.
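The following sketch illustrates this mapping, assuming a shared FILE device class
named FILECLASS, a storage agent named STGAGNT1, and a FILE drive named
DRIVE1; the directory names are hypothetical mount points that must reference the
same storage, in the same order, on both systems. The library and drive names that
the server generates for a shared FILE device class might differ in your environment:
define devclass fileclass devtype=file shared=yes mountlimit=1
directory="/tsmdata/dir1,/tsmdata/dir2"
define path stgagnt1 drive1 srctype=server desttype=drive library=fileclass
device=file directory="/agentmnt/dir1,/agentmnt/dir2"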
Shared FILE libraries are supported for use in LAN-free backup configurations
only. You cannot use a shared FILE library in an environment in which a library
manager is used to manage library clients.
When the server or storage agent needs to write data to storage, it contacts the file
server over the LAN. The file server then contacts the hard disk or storage drive
over the SAN and reserves the space needed for the storage agent or server to
store volumes. Once the space is reserved, the server or storage agent writes the
data to be stored to the File Server over the LAN and then the File Server writes
the data again to storage over the SAN. Only one operation can take place at a
time, so if the server is in contact with the File Server during an operation, a
storage agent attempting to contact the File Server will have to wait its turn.
Sequential-access device types include tape, optical, and sequential-access disk. For
random access storage, Tivoli Storage Manager supports only the DISK device
class, which is defined by Tivoli Storage Manager.
To define a device class, use the DEFINE DEVCLASS command and specify the
DEVTYPE parameter. The DEVTYPE parameter assigns a device type to the device
class. You can define multiple device classes for each device type. For example,
you might need to specify different attributes for different storage pools that use
the same type of tape drive. Variations may be required that are not specific to the
device, but rather to how you want to use the device (for example, mount
retention or mount limit). For all device types other than FILE or SERVER, you
must define libraries and drives to Tivoli Storage Manager before you define the
device classes.
To update an existing device class definition, use the UPDATE DEVCLASS command.
You can also delete a device class and query a device class using the DELETE
DEVCLASS and QUERY DEVCLASS commands, respectively.
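For example, you might define two device classes for the same type of tape drive so that
different storage pools can use different mount retention and mount limit values. The
device class and library names in the following sketch are examples only; substitute
objects that exist in your configuration:
define devclass backupclass devtype=lto library=ltolib mountretention=5 mountlimit=4
define devclass archiveclass devtype=lto library=ltolib mountretention=60 mountlimit=2
update devclass archiveclass mountlimit=3
query devclass archiveclass format=detailed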
Remember:
v One device class can be associated with multiple storage pools, but each storage
pool is associated with only one device class.
v If you include the DEVCONFIG option in the dsmserv.opt file, the files that you
specify with that option are automatically updated with the results of the
DEFINE DEVCLASS, UPDATE DEVCLASS, and DELETE DEVCLASS
commands.
v Tivoli Storage Manager now allows SCSI libraries to include tape drives of more
than one device type. When you define the device class in this environment, you
must declare a value for the FORMAT parameter.
Tasks
Defining tape and optical device classes on page 193
Defining 3592 device classes on page 196
Device classes for devices not supported by the Tivoli Storage Manager server on page
199
For details about commands and command parameters, see the Administrator's
Reference.
For the most up-to-date list of supported devices and valid device class formats,
see the Tivoli Storage Manager Supported Devices website:
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
See Mixed device types in libraries on page 60 for additional information.
The examples in topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see the
Administrator's Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
The following tables list supported devices, media types, and Tivoli Storage
Manager device types.
For tape and optical device classes, the default values selected by the server
depend on the recording format used to write data to the volume. You can either
accept the default for a given device type or specify a value.
To specify estimated capacity for tape volumes, use the ESTCAPACITY parameter
when you define the device class or update its definition.
For more information about how Tivoli Storage Manager uses the estimated
capacity value, see How Tivoli Storage Manager fills volumes on page 211.
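For example, to set an estimated capacity of 40 GB for an existing tape device class
(the device class name is an example only), you might issue:
update devclass tapeclass estcapacity=40g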
To specify a recording format, use the FORMAT parameter when you define the
device class or update its definition.
If all drives associated with that device class are identical, specify
FORMAT=DRIVE. The server selects the highest format that is supported by the
drive on which a volume is mounted.
If some drives associated with the device class support a higher density format
than others, specify a format that is compatible with all drives. If you specify
FORMAT=DRIVE, mount errors can occur. For example, suppose a device class
uses two incompatible devices such as an IBM 7208-2 and an IBM 7208-12. The
server might select the high-density recording format of 8500 for each of two new
volumes. Later, if the two volumes are to be mounted concurrently, one fails
because only one of the drives is capable of the high-density recording format.
If drives in a single SCSI library use different tape technologies (for example, DLT
and LTO Ultrium), specify a unique value for the FORMAT parameter in each
device class definition.
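For example, if a single SCSI library contains both DLT and LTO Ultrium drives, you
might define one device class for each technology and assign each an explicit recording
format. The library name and format values shown are examples only; use format values
that are valid for your drives:
define devclass dltclass devtype=dlt library=mixedlib format=dlt40
define devclass ltoclass devtype=lto library=mixedlib format=ultrium2c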
For a configuration example, see Configuration with multiple drive device types
on page 99.
The recording format that Tivoli Storage Manager uses for a given volume is
selected when the first piece of data is written to the volume. Updating the
FORMAT parameter does not affect media that already contain data until those
media are rewritten from the beginning. This process might happen after a volume
is reclaimed or deleted, or after all of the data on the volume expires.
To associate a device class with a library, use the LIBRARY parameter when you
define a device class or update its definition.
When setting a mount limit for a device class, you need to consider the number of
storage devices connected to your system, whether you are using the
simultaneous-write function, whether you are associating multiple device classes
with a single library, and the number of processes that you want to run at the
same time.
When selecting a mount limit for a device class, consider the following issues:
v How many storage devices are connected to your system?
Do not specify a mount limit value that is greater than the number of associated
available drives in your installation. If the server tries to mount as many
volumes as specified by the mount limit and no drives are available for the
required volume, an error occurs and client sessions may be terminated. (This
does not apply when the DRIVES parameter is specified.)
v Are you using the simultaneous-write function to primary storage pools, copy
storage pools, and active-data pools?
Specify a mount limit value that provides a sufficient number of mount points to
support writing data simultaneously to the primary storage pool and all
associated copy storage pools and active-data pools.
v Are you associating multiple device classes with a single library?
A device class associated with a library can use any drive in the library that is
compatible with the device class' device type. Because you can associate more
than one device class with a library, a single drive in the library can be used by
more than one device class. However, Tivoli Storage Manager does not manage
how a drive is shared among multiple device classes.
v How many Tivoli Storage Manager processes do you want to run at the same
time, using devices in this device class?
Tivoli Storage Manager automatically cancels some processes to run other,
higher priority processes. If the server is using all available drives in a device
class to complete higher priority processes, lower priority processes must wait
until a drive becomes available. For example, Tivoli Storage Manager cancels the
process for a client backing up directly to tape if the drive being used is needed
for a server migration or tape reclamation process. Tivoli Storage Manager
cancels a tape reclamation process if the drive being used is needed for a client
restore operation. For additional information, see Preemption of client or server
operations on page 626.
If processes are often canceled by other processes, consider whether you can
make more drives available for Tivoli Storage Manager use. Otherwise, review
your scheduling of operations to reduce the contention for drives.
Best Practice: If the library associated with this device class is EXTERNAL type,
explicitly specify the mount limit instead of using MOUNTLIMIT=DRIVES.
You can control the amount of time that a mounted volume remains mounted after
its last I/O activity. If a volume is used frequently, you can improve performance
by setting a longer mount retention period to avoid unnecessary mount and
dismount operations.
To control the amount of time a mounted volume remains mounted, use the
MOUNTRETENTION parameter when you define the device class or update its
definition. For example, if the mount retention value is 60, and a mounted volume
remains idle for 60 minutes, then the server dismounts the volume.
While Tivoli Storage Manager has a volume mounted, the drive is allocated to
Tivoli Storage Manager and cannot be used for anything else. If you need to free
the drive for other uses, you can cancel Tivoli Storage Manager operations that are
using the drive and then dismount the volume. For example, you can cancel server
migration or backup operations. For information on how to cancel processes and
dismount volumes, see:
v Canceling server processes on page 625
v Dismounting idle volumes on page 167
Controlling the amount of time that the server waits for a drive:
You can specify the maximum amount of time, in minutes, that the Tivoli Storage
Manager server waits for a drive to become available for the current mount
request.
To control wait time, use the MOUNTWAIT parameter when you define the device
class or update its definition.
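For example, to have the server wait no longer than 10 minutes for a drive to become
available for mounts that use an existing device class (the device class name is an
example only), you might issue:
update devclass tapeclass mountwait=10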
For an example that shows how to configure a VolSafe device using the WORM
parameter, see Defining device classes for StorageTek VolSafe devices on page
207
For optimal performance, do not mix generations of 3592 media in a single library.
Media problems can result when different drive generations are mixed. For
example, Tivoli Storage Manager might not be able to read a volume's label.
If you must mix generations of drives in a library, use one of the methods in the
following table to prevent or minimize the potential for problems.
If your library contains three drive generations, the latest drive generation in your library
can read media written in the earliest format but cannot write to it. For example, if your
library contains generation 2, generation 3, and generation 4 drives, the generation 4 drives
can only read the generation 2 format. In this configuration, mark all media previously
written in generation 2 format as read-only.
Specify a path with the same special file name for each new library object. In addition, for
349X libraries, specify disjoint scratch categories (including the WORMSCRATCH category,
if applicable) for each library object. Specify a new device class and a new storage pool
that points to each new library object.
(SCSI libraries only) Define a new storage pool and device class for the latest drive
generation. For example, suppose you have a storage pool and device class for 3592-2. The
storage pool will contain all the media written in generation 2 format. Suppose that the
value of the FORMAT parameter in the device class definition is set to 3592-2 (not DRIVE).
You add generation 3 drives to the library. Complete the following steps:
1. In the new device-class definition for the generation 3 drives, set the value of the
FORMAT parameter to 3592-3 or 3592-3C. Do not specify DRIVE.
2. In the definition of the storage pool associated with generation 2 drives, update the
MAXSCRATCH parameter to 0, for example:
update stgpool genpool2 maxscratch=0
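In step 1, the new device class definition for the generation 3 drives might look
similar to the following sketch; the device class and library names are examples only:
define devclass 3592gen3class library=3584lib devtype=3592 format=3592-3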
This method allows both generations to use their optimal format and minimizes potential
media problems that can result from mixing generations. However, it does not resolve all
media issues. For example, competition for mount points and mount failures might result.
(To learn more about mount point competition in the context of LTO drives and media, see
Defining LTO device classes on page 203.) The following list describes media restrictions:
v CHECKIN LIBVOL: The issue resides with using the CHECKLABEL=YES option. If the label is
currently written in a generation 3 or later format, and you specify the
CHECKLABEL=YES option, drives of previous generations fail using this command. As
a best practice, use CHECKLABEL=BARCODE.
v LABEL LIBVOL: When the server tries to use drives of a previous generation to read the
label written in a generation 3 or later format, the LABEL LIBVOL command fails unless
OVERWRITE=YES is specified. Verify that the media being labeled with OVERWRITE=YES
does not have any active data.
v CHECKOUT LIBVOL: When Tivoli Storage Manager verifies the label (CHECKLABEL=YES),
if the label was written in a generation 3 or later format and the drive that reads it is
of a previous generation, the command fails. As a best practice, use CHECKLABEL=NO.
Tivoli Storage Manager lets you reduce media capacity to create volumes with
faster data-access speeds. The benefit is that you can partition data into storage pools
that have volumes with faster data-access speeds.
To reduce media capacity, use the SCALECAPACITY parameter when you define
the device class or update its definition.
Scale capacity only takes effect when data is first written to a volume. Updates to
the device class for scale capacity do not affect volumes that already have data
written to them until the volume is returned to scratch status.
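For example, to define a 3592 device class whose volumes use only a portion of each
cartridge in exchange for faster data access, you might issue a command similar to the
following; the device class and library names are examples only, and 20 represents a
scale capacity percentage:
define devclass 3592fastclass library=3584lib devtype=3592 scalecapacity=20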
Encrypting data with drives that are 3592 generation 2 and later:
With Tivoli Storage Manager, you can use the following types of drive encryption
with drives that are 3592 generation 2 and later: Application, System, and Library.
These methods are defined through the hardware.
The following simplified example shows how to permit the encryption of data for
empty volumes in a storage pool, using Tivoli Storage Manager as the key
manager:
1. Define a library. For example:
define library 3584 libtype=SCSI
2. Define a device class, 3592_ENCRYPT, and specify the value ON for the
DRIVEENCRYPTION parameter. For example:
define devclass 3592_encrypt library=3584 devtype=3592 driveencryption=on
3. Define a storage pool. For example:
define stgpool 3592_encrypt_pool 3592_encrypt
For more information about using drive encryption, refer to Encrypting data on
tape on page 538.
For a manual library with multiple drives of device type GENERICTAPE, ensure
that the device types and recording formats of the drives are compatible. Because
the devices are controlled by the operating system device driver, the Tivoli Storage
Manager server is not aware of the following:
v The actual type of device: 4 mm, 8 mm, digital linear tape, and so forth. For
example, if you have a 4 mm device and an 8 mm device, you must define
separate manual libraries for each device.
v The actual cartridge recording format. For example, if you have a manual library
defined with two device classes of GENERICTAPE, ensure the recording formats
are the same for both drives.
When using CD-ROM media for the REMOVABLEFILE device type, the library
type must be specified as MANUAL. Access this media through a mount point, for
example, /dev/cdx, where x is a number that is assigned by your operating system.
To define a FILE device class, use the DEVTYPE=FILE parameter in the device
class definition.
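For example, the following sketch defines a FILE device class that creates volumes of
up to 5 GB in a single directory and allows up to ten concurrent mount points; the
device class name, directory, and parameter values are examples only:
define devclass filedev devtype=file directory=/tsmdata/filevols maxcapacity=5g mountlimit=10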
The Tivoli Storage Manager server allows multiple client sessions (archive,
retrieve, backup, and restore) or server processes (for example, storage pool
backup) to concurrently read a volume in a storage pool that is associated with a
FILE-type device class. In addition, one client session or one server process can
write to the volume while it is being read.
The following server processes are allowed shared read access to FILE volumes:
v BACKUP DB
v BACKUP STGPOOL
v COPY ACTIVEDATA
v EXPORT/IMPORT NODE
v EXPORT/IMPORT SERVER
v GENERATE BACKUPSET
v RESTORE STGPOOL
v RESTORE VOLUME
The following server processes are not allowed shared read access to FILE
volumes:
v AUDIT VOLUME
v DELETE VOLUME
v MIGRATION
v MOVE DATA
v MOVE NODEDATA
v RECLAMATION
You can specify one or more directories as the location of the files used in the FILE
device class. The default is the current working directory of the server at the time
the command is issued.
Attention: Do not specify multiple directories from the same file system. Doing
so can cause incorrect space calculations. For example, if the directories /usr/dir1
and /usr/dir2 are in the same file system, the space check, which does a
preliminary evaluation of available space during store operations, will count each
directory as a separate file system. If space calculations are incorrect, the server
could commit to a FILE storage pool, but not be able to obtain space, causing the
operation to fail. If the space check is accurate, the server can skip the FILE pool in
the storage hierarchy and use the next storage pool if one is available.
If the server needs to allocate a scratch volume, it creates a new file in the
specified directory or directories. (The server can choose any of the directories in
which to create new scratch volumes.) To optimize performance, ensure that
multiple directories correspond to separate physical volumes.
The following table lists the file name extension created by the server for scratch
volumes depending on the type of data that is stored.
For scratch volumes used to store this data: The file extension is:
Client data .BFS
Export .EXP
Database backup .DBV
Tivoli Storage Manager supports the use of remote file systems or drives for
reading and writing storage pool data, database backups, and other data
operations. Disk subsystems and file systems must not report a write operation
as successful if the data can still be lost after success is reported to Tivoli
Storage Manager.
You must ensure that storage agents can access newly created FILE volumes. To
access FILE volumes, storage agents replace names from the directory list in the
device class definition with the names in the directory list for the associated path
definition.
The following example illustrates the importance of matching device classes and
paths to ensure that storage agents can access newly created FILE volumes.
Suppose you want to use these three directories for a FILE library:
/usr/tivoli1
/usr/tivoli2
/usr/tivoli3
1. Use the following command to set up a FILE library named CLASSA with one
drive named CLASSA1 on SERVER1:
define devclass classa devtype=file
directory="/usr/tivoli1,/usr/tivoli2,/usr/tivoli3"
shared=yes mountlimit=1
2. You want the storage agent STA1 to be able to use the FILE library, so you
define the following path for storage agent STA1:
define path server1 sta1 srctype=server desttype=drive device=file
directory="/usr/ibm1,/usr/ibm2,/usr/ibm3" library=classa
In this scenario, the storage agent, STA1, will replace the directory name
/usr/tivoli1 with the directory name /usr/ibm1 to access FILE volumes that
are in the /usr/tivoli1 directory on the server.
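Suppose that, as a hypothetical continuation of this scenario, the device class on
SERVER1 is later updated so that /usr/tivoli1 is no longer in its directory list:
update devclass classa directory="/usr/tivoli2,/usr/tivoli3"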
SERVER1 will still be able to access file volume /usr/tivoli1/file1.dsm, but the
storage agent STA1 will not be able to access it because a matching directory name
in the PATH directory list no longer exists. If a directory name is not available in
the directory list associated with the device class, the storage agent can lose access
to a FILE volume in that directory. Although the volume will still be accessible
from the Tivoli Storage Manager server for reading, failure of the storage agent to
access the FILE volume can cause operations to be retried on a LAN-only path or
to fail.
To restrict the size of volumes, use the MAXCAPACITY parameter when you
define a device class or update its definition. When the server detects that a
volume has reached a size equal to the maximum capacity, it treats the volume as
full and stores any new data on a different volume.
When selecting a mount limit for this device class, consider how many Tivoli
Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available mount points in a device class
to complete higher priority processes, lower priority processes must wait until a
mount point becomes available. For example, Tivoli Storage Manager cancels the
process for a client backup if the mount point being used is needed for a server
migration or reclamation process. Tivoli Storage Manager cancels a reclamation
process if the mount point being used is needed for a client restore operation. For
additional information, see Preemption of client or server operations on page
626.
If processes are often canceled by other processes, consider whether you can make
more mount points available for Tivoli Storage Manager use. Otherwise, review
your scheduling of operations to reduce the contention for resources.
If you are considering mixing different generations of LTO media and drives, be
aware of the following restrictions:
Table 23. Read - write capabilities for different generations of LTO drives
Drives         Generation 1    Generation 2    Generation 3    Generation 4    Generation 5
               media           media           media           media           media
Generation 1   Read and write  n/a             n/a             n/a             n/a
Generation 2   Read and write  Read and write  n/a             n/a             n/a
Generation 3   Read only       Read and write  Read and write  n/a             n/a
Generation 4   n/a             Read only       Read and write  Read and write  n/a
Generation 5   n/a             n/a             Read only       Read and write  Read and write
Both device classes can point to the same library in which there can be Ultrium
Generation 1 and Ultrium Generation 2 drives. The drives will be shared between
the two storage pools. One storage pool will use the first device class and Ultrium
Generation 1 media exclusively. The other storage pool will use the second device
class and Ultrium Generation 2 media exclusively. Because the two storage pools
share a single library, Ultrium Generation 1 media can be mounted on Ultrium
Generation 2 drives as they become available during mount point processing.
Remember:
v If you are mixing Ultrium Generation 1 with Ultrium Generation 3 drives and
media in a single library, you must mark the Generation 1 media as read-only,
and all Generation 1 scratch volumes must be checked out.
v If you are mixing Ultrium Generation 2 with Ultrium Generation 4 or
Generation 5 drives and media in a single library, you must mark the Generation
2 media as read-only, and all Generation 2 scratch volumes must be checked out.
Consider the example of a mixed library that consists of the following drives and
media:
v Four LTO Ultrium Generation 1 drives and LTO Ultrium Generation 1 media
v Four LTO Ultrium Generation 2 drives and LTO Ultrium Generation 2 media
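For this library, you might define one device class for each drive generation, as in
the following sketch; the device class, library, and format values are examples only:
define devclass lto1class devtype=lto library=mixedlib format=ultriumc mountlimit=6
define devclass lto2class devtype=lto library=mixedlib format=ultrium2c mountlimit=4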
The number of mount points available for use by each storage pool is specified in
the device class using the MOUNTLIMIT parameter. The MOUNTLIMIT parameter
in the LTO2CLASS device class should be set to 4 to match the number of available
drives that can mount only LTO2 media. The MOUNTLIMIT parameter in the
LTO1CLASS device class should be set to a value higher (5 or possibly 6) than the
number of drives dedicated to Generation 1 media, because Generation 1 media can also
be mounted in Generation 2 drives.
Monitor and adjust the MOUNTLIMIT setting to suit changing workloads. If the
MOUNTLIMIT for LTO1POOL is set too high, mount requests for the LTO2POOL
might be delayed or fail because the Ultrium Generation 2 drives have been used
to satisfy Ultrium Generation 1 mount requests. In the worst scenario, too much
competition for Ultrium Generation 2 drives might cause mounts for Generation 2
media to fail with the following message:
ANR8447E No drives are currently available in the library.
If the MOUNTLIMIT for LTO1POOL is not set high enough, mount requests that
could potentially be satisfied by LTO Ultrium Generation 2 drives will be delayed.
For more information about using drive encryption, refer to Encrypting data on
tape on page 538.
Tivoli Storage Manager supports the Application method of encryption with IBM
and HP LTO-4 drives. Only IBM LTO-4 supports the System and Library methods.
The Library method of encryption is supported only if your system hardware (for
example, IBM 3584) supports it.
Remember: You cannot use drive encryption with write-once, read-many (WORM)
media.
The Application method is defined through the hardware. To use the Application
method, in which Tivoli Storage Manager generates and manages encryption keys,
set the DRIVEENCRYPTION parameter to ON. This permits the encryption of data
for empty volumes. If the parameter is set to ON and the hardware is configured
for another encryption method, backup operations will fail.
The following simplified example shows the steps you would take to permit the
encryption of data for empty volumes in a storage pool:
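The steps follow the same pattern as the 3592 example shown earlier; the library,
device class, and storage pool names are examples only:
1. Define a library. For example:
define library 3584 libtype=scsi
2. Define a device class, LTO_ENCRYPT, and specify the value ON for the
DRIVEENCRYPTION parameter. For example:
define devclass lto_encrypt library=3584 devtype=lto driveencryption=on
3. Define a storage pool. For example:
define stgpool lto_encrypt_pool lto_encrypt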
To define a SERVER device class, use the DEFINE DEVCLASS command with the
DEVTYPE=SERVER parameter. For information about how to use a SERVER device
class, see Using virtual volumes to store data on another server on page 737.
To specify a file size, use the MAXCAPACITY parameter when you define the
device class or update its definition.
The storage pool volumes of this device type are explicitly set to full when the
volume is closed and dismounted.
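For example, to define a SERVER device class that stores virtual volumes of up to
500 MB on a target server, you might issue a command similar to the following; the
device class and target server names are examples only, and the target server must
already be defined to this server:
define devclass targetclass devtype=server servername=server_b maxcapacity=500m mountlimit=2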
When specifying a mount limit, consider your network load balancing and how
many Tivoli Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available sessions in a device class to
complete higher priority processes, lower priority processes must wait until a
session becomes available. For example, Tivoli Storage Manager cancels the process
for a client backup if a session is needed for a server migration or reclamation
process. Tivoli Storage Manager cancels a reclamation process if the session being
used is needed for a client restore operation.
If processes are often canceled by other processes, consider whether you can make
more sessions available for Tivoli Storage Manager use. Otherwise, review your
scheduling of operations to reduce the contention for network resources.
VolSafe technology uses media that cannot be overwritten; therefore, do not use this
media for short-term backups of client files, the server database, or export tapes.
There are two methods for using VolSafe media and drives:
v Define a device class using the DEFINE DEVCLASS command and specify
DEVTYPE=VOLSAFE. You can use this device class with EXTERNAL, SCSI, and
ACSLS libraries. All drives in a library must be enabled for VolSafe use.
v Define a device class using the DEFINE DEVCLASS command, and specify
DEVTYPE=ECARTRIDGE and WORM=YES. For VolSafe devices, WORM=YES is
required and must be specified when the device class is defined. You cannot
update the WORM parameter using the UPDATE DEVCLASS command. You
cannot specify DRIVEENCRYPTION=ON if your drives are using WORM
media.
For more information about VolSafe media, see Write-once, read-many tape
media on page 153.
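For example, using the second method, a VolSafe device class might be defined as
follows; the device class and library names are examples only:
define devclass volsafeclass library=stklib devtype=ecartridge worm=yes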
Tivoli Storage Manager supports the Application method of encryption with Oracle
StorageTek T10000B or T10000C drives. The Library method of encryption is
supported only if your system hardware supports it.
Remember: You cannot use drive encryption with write-once, read-many (WORM)
media or VolSafe media.
The Application method, in which Tivoli Storage Manager generates and manages
encryption keys, is defined through the hardware. To use the Application method,
set the DRIVEENCRYPTION parameter to ON. This setting permits the encryption
of data for empty volumes. If the parameter is set to ON and the hardware is
configured for another encryption method, backup operations fail.
The following simplified example shows the steps you would take to permit data
encryption for empty volumes in a storage pool:
1. Define a library:
define library sl3000 libtype=scsi
2. Define a device class, ECART_ENCRYPT, and specify Tivoli Storage Manager
as the key manager:
define devclass ecart_encrypt library=sl3000
devtype=ecartridge driveencryption=on
3. Define a storage pool:
define stgpool ecart_encrypt_pool ecart_encrypt
Related concepts:
Choosing an encryption method on page 539
Multiple client retrieve sessions, restore sessions, or server processes can read a
volume concurrently in a storage pool that is associated with the CENTERA device
type. In addition, one client session or one server process can write to the volume
while it is being read.
The following server processes can share read access to Centera volumes:
v EXPORT NODE
v EXPORT SERVER
v GENERATE BACKUPSET
The following server processes cannot share read access to Centera volumes:
v AUDIT VOLUME
v DELETE VOLUME
When selecting a mount limit for this device class, consider how many Tivoli
Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available mount points in a device class
to complete higher priority processes, lower priority processes must wait until a
mount point becomes available. For example, suppose the Tivoli Storage Manager server
is performing a client backup to an output volume when another client requests a
restore of data from the same volume. The backup request is preempted, and the volume
is released for use by the restore request. For
additional information, see Preemption of client or server operations on page
626.
To control the number of mount points concurrently open for Centera devices, use
the MOUNTLIMIT parameter when you define the device class or update its
definition.
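For example, a Centera device class that allows two concurrent mount points might be
defined as follows; the device class name, address, and mount limit are examples only,
and the HLADDRESS value must identify your Centera storage nodes:
define devclass centeraclass devtype=centera hladdress=192.168.1.10 mountlimit=2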
If you specify an estimated capacity that exceeds the actual capacity of the volume
in the device class, Tivoli Storage Manager updates the estimated capacity of the
volume when the volume becomes full. When Tivoli Storage Manager reaches the
end of the volume, it updates the capacity for the amount that is written to the
volume.
You can either accept the default estimated capacity for a given device class, or
explicitly specify an estimated capacity. An accurate estimated capacity value is not
required, but is useful. Tivoli Storage Manager uses the estimated capacity of
volumes to determine the estimated capacity of a storage pool, and the estimated
percent utilized. You may want to change the estimated capacity if:
v The default estimated capacity is inaccurate because data compression is being
performed by the drives.
v You have volumes of nonstandard size.
Use either client compression or device compression, but not both. The following
table summarizes the advantages and disadvantages of each type of compression.
Either type of compression can affect tape drive performance, because compression
affects data rate. When the rate of data going to a tape drive is slower than the
drive can write, the drive starts and stops while data is written, meaning relatively
poorer performance. When the rate of data is fast enough, the tape drive can reach
streaming mode, meaning better performance. If tape drive performance is more
important than the space savings that compression can provide, you may want to
perform timed test backups using different approaches to determine what is best
for your system.
Drive compression is specified with the FORMAT parameter for the drive's device
class, and the hardware device must be able to support the compression format.
For information about how to set up compression on the client, see Node
compression considerations on page 424 and Registering nodes with the server
on page 422.
It may wrongly appear that you are not getting the full use of the capacity of your
tapes, for the following reasons:
v A tape device manufacturer often reports the capacity of a tape based on an
assumption of compression by the device. If a client compresses a file before it is
sent, the device may not be able to compress it any further before storing it.
v Tivoli Storage Manager records the size of a file as it goes to a storage pool. If
the client compresses the file, Tivoli Storage Manager records this smaller size in
the database. If the drive compresses the file, Tivoli Storage Manager is not
aware of this compression.
Figure 14 on page 213 compares what Tivoli Storage Manager sees as the amount
of data stored on tape when compression is done by the device and by the client.
In both cases, Tivoli Storage Manager considers the volume to be full. However,
Tivoli Storage Manager considers the capacity of the volume in the two cases to be
different: 2.4 GB when the drive compresses the file, and 1.2 GB when the client
compresses the file. Use the QUERY VOLUME command to see the capacity of
volumes from Tivoli Storage Manager's viewpoint. See Monitoring the use of
storage pool volumes on page 388.
Figure 14. Comparing compression at the client and compression at the device
Tasks:
Configuring Tivoli Storage Manager for NDMP operations on page 222
Determining the location of NAS backup on page 224
Setting up tape libraries for NDMP operations on page 228
Configuring Tivoli Storage Manager policy for NDMP operations on page 223
Registering NAS nodes with the Tivoli Storage Manager server on page 234
Defining a data mover for the NAS file server on page 235
Defining paths to libraries for NDMP operations on page 238
Defining paths for NDMP operations on page 235
Labeling and checking tapes into the library on page 239
Scheduling NDMP operations on page 239
Defining virtual file spaces on page 239
Tape-to-tape copy to back up data on page 239
Tape-to-tape copy to move data on page 240
Backing up and restoring NAS file servers using NDMP on page 240
Backing up NDMP file server to Tivoli Storage Manager server backups on page 242
Managing table of contents on page 221
NDMP operations management on page 218
Managing NAS file server nodes on page 219
Managing data movers used in NDMP operations on page 220
Storage pool management for NDMP operations on page 220
NDMP requirements
You must meet certain requirements when you use NDMP (network data
management protocol) for operations with network-attached storage (NAS) file
servers.
Tivoli Storage Manager Extended Edition
Licensed program product that includes support for the use of NDMP.
NAS File Server
A NAS file server. The operating system on the file server must be
supported by Tivoli Storage Manager. Visit http://www.ibm.com/
support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager
for a list of NAS file servers that are certified through the Ready for IBM
Tivoli software.
Note: The Tivoli Storage Manager server does not include External
Library support for the ACSLS library when the library is used for
NDMP operations.
VTL library
A virtual tape library that is supported by the Tivoli Storage
Manager server. This type of library can be attached directly either
to the Tivoli Storage Manager server or to the NAS file server. A
virtual tape library is essentially the same as a SCSI library but is
enhanced for virtual tape library characteristics and allows for
better mount performance.
Drive Sharing: The tape drives can be shared by the Tivoli Storage
Manager server and one or more NAS file servers. Also, when a SCSI,
VTL, or a 349X library is connected to the Tivoli Storage Manager server
and not to the NAS file server, the drives can be shared by one or more
NAS file servers and one or more of the following Tivoli Storage Manager components:
v Library clients
v Storage agents
Verify the compatibility of specific combinations of a NAS file server, tape devices,
and SAN-attached devices with the hardware manufacturers.
Attention: Tivoli Storage Manager supports NDMP Version 4 for all NDMP
operations. Tivoli Storage Manager continues to support all NDMP backup and
restore operations with a NAS device that runs NDMP version 3. The Tivoli
Storage Manager server negotiates the highest protocol level (either Version 3 or
Version 4) with the NDMP server when it establishes an NDMP connection. If you
experience any issues with Version 4, you might want to try Version 3.
Client Interfaces:
v Backup-archive command-line client (on a Windows, 64 bit AIX, or 64 bit Oracle
Solaris system)
v web client
Server Interfaces:
v Server console
v Command line on the administrative client
The Tivoli Storage Manager web client interface, available with the backup-archive
client, displays the file systems of the network-attached storage (NAS) file server in
a graphical view. The client function is not required, but you can use the client
interfaces for NDMP operations. The client function is recommended for file-level
restore operations. See File-level backup and restore for NDMP operations on
page 243 for more information about file-level restore.
Tivoli Storage Manager prompts you for an administrator ID and password when
you perform NDMP functions using either of the client interfaces. See the
Backup-Archive Clients Installation and User's Guide for more information about
installing and activating client interfaces.
The NDMP format is not the same as the data format used for traditional Tivoli
Storage Manager backups. When you define a NAS file server as a data mover and
define a storage pool for NDMP operations, you specify the data format. For
example, you would specify NETAPPDUMP if the NAS file server is a NetApp or
an IBM System Storage N Series device. You would specify CELERRADUMP if the
NAS file server is an EMC Celerra device. For all other devices, you would specify
NDMPDUMP.
The Tivoli Storage Manager objects that are involved in NDMP operations include:
v NAS nodes
v Data movers
v Tape libraries and drives
v Paths
v Device classes
v Storage pools
v Table of contents
For example, assume you have created a new policy domain named NASDOMAIN
for NAS nodes and you want to update a NAS node named NASNODE1 to
include it in the new domain.
1. Query the node.
query node nasnode1 type=nas
2. Change the domain of the node by issuing the following command:
update node nasnode1 domain=nasdomain
For example, to rename NASNODE1 to NAS1 you must perform the following
steps:
1. Delete all paths between data mover NASNODE1 and libraries and between
data mover NASNODE1 and drives.
2. Delete the data mover defined for the NAS node.
3. To rename NASNODE1 to NAS1, issue the following command:
rename node nasnode1 nas1
4. Define the data mover using the new node name. In this example, you must
define a new data mover named NAS1 with the same parameters used to
define NASNODE1.
Attention: When defining a new data mover for a node that you have
renamed, ensure that the data mover name matches the new node name and
that the new data mover parameters are duplicates of the original data mover
parameters. Any mismatch between a node name and a data mover name or
between new data mover parameters and original data mover parameters can
prevent you from establishing a session with the NAS file server.
5. For SCSI or 349X libraries, define a path between the NAS data mover and a
library only if the tape library is physically connected directly to the NAS file
server.
6. Define paths between the NAS data mover and any drives used for NDMP
(network data management protocol) operations.
Managing data movers used in NDMP operations
You can update, query, and delete the data movers that you define for NAS
(network attached storage) file servers.
For example, if you shut down a NAS file server for maintenance, you might want
to take the data mover offline.
1. Query your data movers to identify the data mover for the NAS file server that
you want to maintain.
query datamover nasnode1
2. Issue the following command to take the data mover offline:
update datamover nasnode1 online=no
To delete the data mover, you must first delete any path definitions in which
the data mover has been used as the source.
3. Issue the following command to delete the data mover:
delete datamover nasnode1
Attention: If the data mover has a path to the library, and you delete the data
mover or make the data mover offline, you disable access to the library.
Remove Tivoli Storage Manager server access by deleting the path definition with
the following command:
delete path server1 nasdrive1 srctype=server desttype=drive library=naslib
You can query and update storage pools. You cannot update the DATAFORMAT
parameter.
You cannot designate a Centera storage pool as a target pool of NDMP operations.
Maintaining separate storage pools for data from different NAS vendors is
suggested even though the data format for both is NDMPDUMP.
The following DEFINE STGPOOL and UPDATE STGPOOL parameters are ignored
because storage pool hierarchies, reclamation, and migration are not supported for
these storage pools:
MAXSIZE
NEXTSTGPOOL
LOWMIG
HIGHMIG
MIGDELAY
MIGCONTINUE
RECLAIMSTGPOOL
OVFLOLOCATION
Issue the QUERY NASBACKUP command to display information about the file system
image objects that have been backed up for a specific NAS (network attached
storage) node and file space. By issuing the command, you can see a display of all
backup images generated by NDMP (network data management protocol) and
whether each image has a corresponding table of contents.
Note: The Tivoli Storage Manager server may store a full backup in excess of the
number of versions you specified, if that full backup has dependent differential
backups. Full NAS backups with dependent differential backups behave like other
base files with dependent subfiles. Due to retention time specified in the RETAIN
EXTRA setting, the full NAS backup will not be expired, and the version will be
displayed in the output of a QUERY NASBACKUP command. See File expiration and
expiration processing on page 481 for details.
Use the QUERY TOC command to display files and directories in a backup image
generated by NDMP. By issuing the QUERY TOC server command, you can
display all directories and files within a single specified TOC. The specified TOC
will be accessed in a storage pool each time the QUERY TOC command is issued
because this command does not load TOC information into the Tivoli Storage
Manager database. Then, use the RESTORE NODE command with the FILELIST
parameter to restore individual files.
Some firewall software is configured to automatically close network connections
that are inactive for a specified length of time. If a firewall exists between a Tivoli
Storage Manager server and a NAS device, it is possible that the firewall can close
NDMP control connections unexpectedly and cause the NDMP operation to fail.
The Tivoli Storage Manager server provides a mechanism, TCP keepalive, that you
can enable to prevent long-running, inactive connections from being closed. If TCP
keepalive is enabled, small packets are sent across the network at predefined
intervals to the connection partner.
To update the server option, you can use the SETOPT command.
Perform the following steps to configure the Tivoli Storage Manager for NDMP
operations:
1. Set up the tape library and media. See Setting up tape libraries for NDMP
operations on page 228, where the following steps are described in more
detail.
a. Attach the SCSI library to the NAS file server or to the Tivoli Storage
Manager server, or attach the ACSLS library or 349X library to the Tivoli
Storage Manager server.
b. Define the library with a library type of SCSI, ACSLS, or 349X.
c. Define a device class for the tape drives.
d. Define a storage pool for NAS backup media.
e. Define a storage pool for storing a table of contents. This step is optional.
See Configuring policy for NDMP operations on page 527 for more information.
Complete the following steps to configure Tivoli Storage Manager policy for
NDMP operations:
1. Create a policy domain for NAS (network attached storage) file servers. For
example, to define a policy domain that is named NASDOMAIN, enter the
following command:
define domain nasdomain description="Policy domain for NAS file servers"
2. Create a policy set in that domain. For example, to define a policy set named
STANDARD in the policy domain named NASDOMAIN, issue the following
command:
define policyset nasdomain standard
3. Define a management class, and then assign the management class as the
default for the policy set. For example, to define a management class named
MC1 in the STANDARD policy set, and assign it as the default, issue the
following commands:
define mgmtclass nasdomain standard mc1
assign defmgmtclass nasdomain standard mc1
4. Define a backup copy group in the default management class. The destination
must be the storage pool you created for backup images produced by NDMP
operations. In addition, you can specify the number of backup versions to
retain. For example, to define a backup copy group for the MC1 management
class where up to four versions of each file system are retained in the storage
pool named NASPOOL, issue the following command:
define copygroup nasdomain standard mc1 destination=naspool verexists=4
You can control the management classes that are applied to backup images
produced by NDMP (network data management protocol) operations regardless of
which node initiates the backup. You can do this by creating a set of options to be
used by the client nodes. The option set can include an include.fs.nas statement
to specify the management class for NAS (network attached storage) file server
backups. See Creating client option sets on the server on page 468 for more
information.
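For example, you might create a client option set that forces NAS backups to a
particular management class and then assign the option set to the NAS node; the option
set, node, file system, and management class names are examples only:
define cloptset nasopts description="Options for NAS file server backups"
define clientopt nasopts inclexcl "include.fs.nas nasnode1/vol/vol1 mc1"
update node nasnode1 cloptset=nasopts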
You can also use a backup-archive client to back up a NAS file server by mounting
the NAS file-server file system on the client machine (with either an NFS [network
file system] mount or a CIFS [common internet file system] map) and then backing
up as usual. Table 24 compares the three backup-and-restore methods.
Table 24. Comparing methods for backing up NDMP data (continued)
Property                                  NDMP: Filer to server  NDMP: Filer to        Backup-archive client
                                                                 attached library      to server
Cyclic Redundancy Checking (CRC) when     Supported              Not supported         Supported
data is moved using Tivoli Storage
Manager processes
Validation using Tivoli Storage Manager   Supported              Not supported         Supported
audit commands
Disaster recovery manager                 Supported              Supported             Supported
Many of the configuration choices you have for libraries and drives are determined
by the hardware features of your libraries. You can set up NDMP operations with
any supported library and drives. However, the more features your library has, the
more flexibility you can exercise in your implementation.
All drives are defined to the Tivoli Storage Manager server. However, the same
drive may be defined for both traditional Tivoli Storage Manager operations and
NDMP operations. Figure 15 on page 227 illustrates one possible configuration, in
which the Tivoli Storage Manager server has access to drives 2 and 3, and each NAS
file server has access to drives 1 and 2.
To create the configuration shown in Figure 15, perform the following steps:
1. Define all three drives to Tivoli Storage Manager.
2. Define paths from the Tivoli Storage Manager server to drives 2 and 3. Because
drive 1 is not accessed by the server, no path is defined.
3. Define each NAS file server as a separate data mover.
4. Define paths from each data mover to drive 1 and to drive 2.
To use the Tivoli Storage Manager back end data movement operations, the Tivoli
Storage Manager server requires two available drive paths from a single NAS data
mover. The drives can be in different libraries and can have different device types
that are supported by NDMP. You can make copies between two different tape
devices; for example, the source tape drive can be a DLT drive in a library and
the target drive can be an LTO drive in another library.
During Tivoli Storage Manager back end data movements, the Tivoli Storage
Manager server locates a NAS data mover that supports the same data format as
the data to be copied from and that has two available mount points and paths to
the drives. If the Tivoli Storage Manager server cannot locate such a data mover,
the requested data movement operation is not performed. The number of available
mount points and drives depends on the mount limits of the device classes for the
storage pools involved in the back end data movements.
If the back end data movement function supports multiprocessing, each concurrent
Tivoli Storage Manager back end data movement process requires two available
mount points and two available drives. To run two Tivoli Storage Manager
processes concurrently, at least four mount points and four drives must be
available.
See Defining paths for NDMP operations on page 235 for more information.
Setting up tape libraries for NDMP operations
You must complete several tasks to set up a tape library for NDMP (network data
management protocol) operations.
Perform the following steps to set up tape libraries for NDMP operations:
1. Connect the library and drives for NDMP operations.
a. Connect the SCSI library. Before setting up a SCSI tape library for NDMP
operations, you should have already determined whether you want to
attach your library robotics control to the Tivoli Storage Manager server or
to the NAS (network attached storage) file server. See Tape libraries and
drives for NDMP operations on page 226. Connect the SCSI tape library
robotics to the Tivoli Storage Manager server or to the NAS file server. See
the manufacturer's documentation for instructions.
Library Connected to Tivoli Storage Manager: Make a SCSI or Fibre
Channel connection between the Tivoli Storage Manager server and the
library robotics control port. Then connect the NAS file server with the
drives you want to use for NDMP operations.
Library Connected to NAS File Server: Make a SCSI or Fibre Channel
connection between the NAS file server and the library robotics and
drives.
b. Connect the ACSLS Library. Connect the ACSLS tape library to the Tivoli
Storage Manager server.
c. Connect the 349X Library. Connect the 349X tape library to the Tivoli
Storage Manager server.
2. Define the library for NDMP operations. (The library must contain drives of a
single device type, not a mixture of device types.)
SCSI Library
define library tsmlib libtype=scsi
ACSLS Library
define library acslib libtype=acsls acsid=1
349X Library
define library tsmlib libtype=349x
3. Define a device class for NDMP operations. Create a device class for NDMP
operations. A device class defined with a device type of NAS is not explicitly
associated with a specific drive type (for example, 3570 or 8 mm). However, we
recommend that you define separate device classes for different drive types.
In the device class definition:
v Specify NAS as the value for the DEVTYPE parameter.
v Specify 0 as the value for the MOUNTRETENTION parameter.
MOUNTRETENTION=0 is required for NDMP operations.
v Specify a value for the ESTCAPACITY parameter.
For example, to define a device class named NASCLASS for a library named
NASLIB and media whose estimated capacity is 40 GB, issue the following
command:
define devclass nasclass devtype=nas library=naslib mountretention=0
estcapacity=40g
4. Define a storage pool for NDMP media. When NETAPPDUMP,
CELERRADUMP, or NDMPDUMP is designated as the type of storage pool,
managing the storage pools produced by NDMP operations is different from
managing storage pools that contain media for traditional Tivoli Storage Manager
backups.
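For example, to define a storage pool named NASPOOL that uses the NAS device class
defined in the previous step, you might issue a command similar to the following; the
MAXSCRATCH value is an example only, and the data format shown assumes a NetApp
file server:
define stgpool naspool nasclass maxscratch=10 dataformat=netappdump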
Attention: Ensure that you do not accidentally use storage pools that have
been defined for NDMP operations in traditional Tivoli Storage Manager
operations. Be especially careful when assigning the storage pool name as the
value for the DESTINATION parameter of the DEFINE COPYGROUP command.
Unless the destination is a storage pool with the appropriate data format, the
backup will fail.
5. Define a storage pool for a table of contents. If you plan to create a table of
contents, you should also define a disk storage pool in which to store the table
of contents. You must set up policy so that the Tivoli Storage Manager server
stores the table of contents in a different storage pool from the one where the
backup image is stored. The table of contents is treated like any other object in
that storage pool. This step is optional.
For example, to define a storage pool named TOCPOOL for a DISK device
class, issue the following command:
define stgpool tocpool disk
Then, define volumes for the storage pool. For more information see:
Configuring random access volumes on disk devices on page 78.
For more information on connecting libraries, see Chapter 5, Attaching devices
for the server, on page 83.
You must determine whether to attach the library robotics to the Tivoli Storage
Manager server or to the NAS file server. Regardless of where you connect library
robotics, tape drives must always be connected to the NAS file server for NDMP
operations.
Distance and your available hardware connections are factors to consider for SCSI
libraries. If the library does not have separate ports for robotics control and drive
access, the library must be attached to the NAS file server because the NAS file
server must have access to the drives. If your SCSI library has separate ports for
robotics control and drive access, you can choose to attach the library robotics to
either the Tivoli Storage Manager server or the NAS file server. If the NAS file
server is at a different location from the Tivoli Storage Manager server, the distance
may mean that you must attach the library to the NAS file server.
Whether you are using a SCSI, ACSLS, or 349X library, you have the option of
dedicating the library to NDMP operations, or of using the library for NDMP
operations as well as most traditional Tivoli Storage Manager operations.
Table 25. Summary of configurations for NDMP operations

Configuration 1 (SCSI library connected to the Tivoli Storage Manager server)
   Distance between Tivoli Storage Manager server and library: Limited by SCSI or FC connection
   Library sharing: Supported
   Drive sharing between Tivoli Storage Manager and NAS file server: Supported
   Drive sharing between NAS file servers: Supported
   Drive sharing between storage agent and NAS file server: Supported

Configuration 2 (SCSI library connected to the NAS file server)
   Distance between Tivoli Storage Manager server and library: No limitation
   Library sharing: Not supported
   Drive sharing between Tivoli Storage Manager and NAS file server: Supported
   Drive sharing between NAS file servers: Supported
   Drive sharing between storage agent and NAS file server: Not supported

Configuration 3 (349X library)
   Distance between Tivoli Storage Manager server and library: May be limited by 349X connection
   Library sharing: Supported
   Drive sharing between Tivoli Storage Manager and NAS file server: Supported
   Drive sharing between NAS file servers: Supported
   Drive sharing between storage agent and NAS file server: Supported

Configuration 4 (ACSLS library)
   Distance between Tivoli Storage Manager server and library: May be limited by ACSLS connection
   Library sharing: Supported
   Drive sharing between Tivoli Storage Manager and NAS file server: Supported
   Drive sharing between NAS file servers: Supported
   Drive sharing between storage agent and NAS file server: Supported
In this configuration, the Tivoli Storage Manager server controls the SCSI library
through a direct, physical connection to the library robotics control port. For
NDMP (network data management protocol) operations, the drives in the library
are connected directly to the NAS file server, and a path must be defined from the
NAS data mover to each of the drives to be used. The NAS file server transfers
data to the tape drive at the request of the Tivoli Storage Manager server. To also
use the drives for Tivoli Storage Manager operations, connect the Tivoli Storage
Manager server to the tape drives and define paths from the Tivoli Storage
Manager server to the tape drives. This configuration also supports a Tivoli
Storage Manager storage agent having access to the drives for its LAN-free
operations, and the Tivoli Storage Manager server can be a library manager.
Figure 16. Configuration 1: SCSI library connected to Tivoli Storage Manager server
The Tivoli Storage Manager server controls library robotics by sending library
commands across the network to the NAS file server. The NAS file server passes
the commands to the tape library. Any responses generated by the library are sent
to the NAS file server, and passed back across the network to the Tivoli Storage
Manager server. This configuration supports a physically distant Tivoli Storage
Manager server and NAS file server. For example, the Tivoli Storage Manager
server could be in one city, while the NAS file server and tape library are in
another city.
Figure 17. Configuration 2: SCSI library connected to the NAS file server
In this configuration, the 349X tape library is controlled by the Tivoli Storage
Manager server. The Tivoli Storage Manager server controls the library by passing
the request to the 349X library manager through TCP/IP.
This configuration supports a physically distant Tivoli Storage Manager server and
NAS file server. For example, the Tivoli Storage Manager server could be in one
city, while the NAS file server and tape library are in another city.
Figure 18. Configuration 3: 349X library connected to the Tivoli Storage Manager server
See Chapter 5, Attaching devices for the server, on page 83 for more information.
The ACSLS (automated cartridge system library software) tape library is controlled
by the Tivoli Storage Manager server. The Tivoli Storage Manager server controls
the library by passing the request to the ACSLS library server through TCP/IP. The
ACSLS library supports library sharing and LAN-free operations.
This configuration supports a physically distant Tivoli Storage Manager server and
NAS file server. For example, the Tivoli Storage Manager server could be in one
city while the NAS file server and tape library are in another city.
To also use the drives for Tivoli Storage Manager operations, connect the Tivoli
Storage Manager server to the tape drives and define paths from the Tivoli Storage
Manager server to the tape drives.
Figure 19. Configuration 4: ACSLS library connected to the Tivoli Storage Manager server
See Chapter 5, Attaching devices for the server, on page 83 for more information.
If you are using a client option set, specify the option set when you register the
node.
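For example, to register a NAS node named NASNODE1 that uses a client option set named NASOPTSET (a hypothetical option set name), you might issue a command similar to the following:
register node nasnode1 nasnode1 domain=standard type=nas cloptset=nasoptset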
You can verify that this node is registered by issuing the following command:
query node type=nas
Important: You must specify TYPE=NAS so that only NAS nodes are displayed.
To define a data mover for a NAS node named NASNODE1, enter the following
example command:
define datamover nasnode1 type=nas hladdress=netapp2 lladdress=10000 userid=root
password=admin dataformat=netappdump
In this command:
v The high-level address is an IP address for the NAS file server, either a
numerical address or a host name.
v The low-level address is the IP port for NDMP sessions with the NAS file server.
The default is port number 10000.
v The user ID is the ID defined to the NAS file server that authorizes an NDMP
session with the NAS file server (for this example, the user ID is the
administrative ID for the NetApp file server).
v The password parameter is a valid password for authentication to an NDMP
session with the NAS file server.
v The data format is NETAPPDUMP. This is the data format that the NetApp file
server uses for tape backup. This data format must match the data format of the
target storage pool.
Defining paths for drives attached to a NAS file server and to the Tivoli
Storage Manager server:
Remember: If the drive is attached to the Tivoli Storage Manager server, the
element address is automatically detected.
2. Map the NAS drive name to the corresponding drive definition on the Tivoli
Storage Manager server:
v On the Tivoli Storage Manager server, issue the QUERY DRIVE FORMAT=DETAILED
command to obtain the worldwide name (WWN) and serial number for the
drive that is to be connected to the NAS file server.
v On the NAS device, obtain the tape device name, serial number, and WWN
for the drive.
If the WWN or serial number matches, the drive on the NAS file server is the
same as the drive that is defined on the Tivoli Storage Manager server.
3. Using the drive name, define a path to the drive from the NAS file server and
a path to the drive from the Tivoli Storage Manager server.
v For example, to define a path between a tape drive with a device name of
rst01 and a NetApp file server, issue the following command:
define path nasnode1 nasdrive1 srctype=datamover desttype=drive
library=naslib device=rst01
v To define a path between the tape drive and the Tivoli Storage Manager
server, issue the following command:
define path server1 nasdrive1 srctype=server desttype=drive
library=naslib device=/dev/rmt0
Related information:
Obtaining device names for devices attached to NAS file servers
Restriction: If the SCSI drive is connected only to a NAS file server, the
element address is not automatically detected, and you must supply it. If a
library has more than one drive, you must specify an element address for each
drive.
To obtain a SCSI element address, go to one of the following Tivoli
device-support websites:
v AIX, HP-UX, Solaris, and Windows: http://www.ibm.com/software/
sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
v Linux: http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_Linux.html
Element number assignment and device WWN assignments are also available
from tape-library device manufacturers.
2. Create drive definitions by specifying the element addresses identified in the
preceding step. Specify the element address in the ELEMENT parameter of the
DEFINE DRIVE command. For example, to define a drive NASDRIVE1 with the
element address 82 for the library NASLIB, issue the following command:
define drive naslib nasdrive1 element=82
Attention: For a drive connected only to the NAS file server, do not specify
ASNEEDED as the value for the CLEANFREQUENCY parameter of the DEFINE DRIVE
command.
For paths from a network-attached storage (NAS) data mover, the value of the
DEVICE parameter in the DEFINE PATH command is the name by which the NAS file
server knows a library or drive.
You can obtain these device names, also known as special file names, by querying
the NAS file server. For information about how to obtain names for devices that
are connected to a NAS file server, consult the product information for the file
server.
v To obtain the device names for tape libraries on a NetApp Release ONTAP 10.0
GX, or later, file server, connect to the file server using telnet and issue the
SYSTEM HARDWARE TAPE LIBRARY SHOW command. To obtain the device names for
tape drives on a NetApp Release ONTAP 10.0 GX, or later, file server, connect to
the file server using telnet and issue the SYSTEM HARDWARE TAPE DRIVE SHOW
command. For details about these commands, see the NetApp ONTAP GX file
server product documentation.
v For releases earlier than NetApp Release ONTAP 10.0 GX, continue to use the
SYSCONFIG command. For example, to display the device names for tape libraries,
connect to the file server using telnet and issue the following command:
sysconfig -m
To display the device names for tape drives, issue the following command:
sysconfig -t
v For fibre-channel-attached drives and the Celerra data mover, complete the
following steps:
1. Log on to the EMC Celerra control workstation using an administrative ID.
Issue the following command:
server_devconfig server_1 -l -s -n
Tip: The -l option for this command lists only the device information that
was saved in the database of the data mover. The command and option do
not display changes to the device configuration that occurred after the last
database refresh on the data mover. For details about how to obtain the most
recent device configuration for your data mover, see the EMC Celerra
documentation.
The output for the server_devconfig command includes the device names
for the devices attached to the data mover. The device names are listed in the
addr column, for example:
server_1:
Scsi Device Table
name addr type info
tape1 c64t0l0 tape IBM ULT3580-TD2 53Y2
ttape1 c96t0l0 tape IBM ULT3580-TD2 53Y2
2. Map the Celerra device name to the device worldwide name (WWN):
a. To list the WWN, log on to the EMC Celerra control workstation and
issue the following command. Remember to enter a period ( . ) as the
first character in this command.
.server_config server_# -v "fcp bind show"
The output for this command includes the WWN, for example:
Chain 0064: WWN 500507630f418e29 HBA 2 N_PORT Bound
Chain 0096: WWN 500507630f418e18 HBA 2 N_PORT Bound
These tasks are the same as for other libraries. For more information, see:
Labeling removable media volumes on page 146
The BACKUP NODE and RESTORE NODE commands can be used only for nodes of
TYPE=NAS. See Backing up and restoring NAS file servers using NDMP on
page 240 for information about the commands.
The schedule is active, and is set to run at 8:00 p.m. every day. See Chapter 20,
Automating server operations, on page 633 for more information.
To create a virtual file space name for the directory path on the NAS device, issue
the DEFINE VIRTUALFSMAPPING command:
define virtualfsmapping nas1 /mikesdir /vol/vol1 /mikes
This command defines a virtual file space name of /MIKESDIR on the server, which
represents the directory path of /VOL/VOL1/MIKES on the NAS file server
represented by node NAS1. See Directory-level backup and restore for NDMP
operations on page 246 for more information.
Note: When using the NDMP tape-to-tape copy function, your configuration setup
could affect the performance of the Tivoli Storage Manager back end data
movement.
If you have one NAS device with paths to four drives in a library, use the MOVE DATA
command after you complete your configuration setup. The command moves data on
volume VOL1 to any available volumes in the same storage pool as VOL1:
move data vol1
Tape-to-tape copy to move data
To move data from an old tape technology to a new tape technology by using an
NDMP (network data management protocol) tape-to-tape copy operation, perform
the following steps in addition to the regular steps in your configuration setup.
Note: When using the NDMP tape-to-tape copy function, your configuration setup
could affect the performance of the Tivoli Storage Manager back end data
movement.
1. Define one drive in the library, lib1, that has old tape technology:
define drive lib1 drv1 element=1035
2. Define one drive in the library, lib2, that has new tape technology:
define drive lib2 drv1 element=1036
3. Move data on volume vol1 in the primary storage pool to the volumes in
another primary storage pool, nasprimpool2:
move data vol1 stgpool=nasprimpool2
For more information on the command, see the Tivoli Storage Manager
Backup-Archive Clients Installation and User's Guide.
Tip: Whenever you use the client interface, you are asked to authenticate yourself
as a Tivoli Storage Manager administrator before the operation can begin. The
administrator ID must have at least client owner authority for the NAS node.
You can perform the same backup operation with a server interface. For example,
from the administrative command-line client, back up the file system named
/vol/vol1 on a NAS file server named NAS1, by issuing the following command:
backup node nas1 /vol/vol1
Note: The BACKUP NAS and BACKUP NODE commands do not include snapshots. To
back up snapshots see Backing up and restoring with snapshots on page 246.
You can restore the image using either interface. Backups are identical whether
they are backed up using a client interface or a server interface. For example,
suppose you want to restore the image backed up in the previous examples. For
this example the file system named /vol/vol1 is being restored to /vol/vol2.
Restore the file system with the following command, issued from a Windows
backup-archive client interface:
dsmc restore nas -nasnodename=nas1 {/vol/vol1} {/vol/vol2}
You can also restore the file system by using a server interface. For example, to
restore the file system named /vol/vol1 to the file system /vol/vol2 for a NAS file
server named NAS1, enter the following command:
restore node nas1 /vol/vol1 /vol/vol2
When you store NAS backup data in the Tivoli Storage Manager server's storage
hierarchy, you can apply Tivoli Storage Manager back end data management
functions. Migration, reclamation, and disaster recovery are among the supported
features when using the NDMP file server to Tivoli Storage Manager server option.
In order to back up a NAS device to a Tivoli Storage Manager native storage pool,
set the destination storage pool in the copy group to point to the desired native
storage pool. The destination storage pool provides the information about the
library and drives used for backup and restore. You should ensure that there is
sufficient space in your target storage pool to contain the NAS data, which can be
backed up to sequential, disk, or file type devices. Defining a separate device class
is not necessary.
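For example, assuming a native primary storage pool named NASPOOL (a hypothetical name) and the default STANDARD policy objects, you might point the backup copy group at that pool and then activate the changed policy set:
update copygroup standard standard standard type=backup destination=naspool
activate policyset standard standard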
Firewall considerations are more stringent than they are for filer-to-attached-library
because communications can be initiated by either the Tivoli Storage Manager
server or the NAS file server. NDMP tape servers run as threads within the Tivoli
Storage Manager server, and the tape server accepts connections on port 10001.
This port number can be changed through the following option in the Tivoli
Storage Manager server options file: NDMPPORTRANGE port-number-low,
port-number-high.
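For example, to allow NDMP tape server sessions to use ports 10200 through 10300 (sample values, not a recommendation), you might add the following line to the server options file:
ndmpportrange 10200,10300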
Before using this option, verify that your NAS device supports NDMP operations
that use a different network interface for NDMP control and NDMP data
connections. NDMP control connections are used by Tivoli Storage Manager to
authenticate with an NDMP server and monitor an NDMP operation, while NDMP
data connections are used to transmit and receive backup data during NDMP
operations. You must still configure your NAS device to route NDMP backup and
restore data to the appropriate network interface.
This option does not affect NDMP control connections,
because they use the system's default network interface. You can update this server
option without stopping and restarting the server by using the SETOPT command
(Set a server option for dynamic update).
See Backing up NDMP file server to Tivoli Storage Manager server backups for
steps on how to perform NDMP filer-to-server backups.
The destination for NAS data is determined by the destination in the copy
group. The storage size estimate for NAS differential backups uses the
occupancy of the file space, the same value that is used for a full backup. You
can use this size estimate as one of the considerations in choosing a storage
pool. One of the attributes of a storage pool is the MAXSIZE value, which
causes data to be sent to the NEXT storage pool if the estimated size exceeds
the MAXSIZE value. Because NAS differential backups to Tivoli
Storage Manager native storage pools use the base file space occupancy size as
a storage size estimate, differential backups end up in the same storage pool as
the full backup. Depending on collocation settings, differential backups may
end up on the same media as the full backup.
4. Set up a node and data mover for the NAS device. The data format signifies
that the backup images created by this NAS device are a dump type of backup
image in a NetApp specific format.
register node nas1 nas1 type=nas domain=standard
define datamover nas1 type=nas hla=nas1 user=root
password=***** dataformat=netappdump
If you specify this option at the time of backup, you can later display the table of
contents of the backup image. Through the backup-archive Web client, you can
select individual files or directories to restore directly from the backup images
generated.
You also have the option to do a backup via NDMP without collecting file-level
restore information.
To allow creation of a table of contents for a backup via NDMP, you must define
the TOCDESTINATION attribute in the backup copy group for the management
class to which this backup image is bound. You cannot specify a copy storage pool
or an active-data pool as the destination. The storage pool you specify for the TOC
destination must have a data format of either NATIVE or NONBLOCK, so it
cannot be the tape storage pool used for the backup image.
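For example, assuming that the backup images are bound to the STANDARD management class and that a random-access disk storage pool named TOCPOOL (a hypothetical name) exists for tables of contents, you might set the TOC destination and activate the changed policy set as follows:
update copygroup standard standard standard type=backup tocdestination=tocpool
activate policyset standard standard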
If you choose to collect file-level information, specify the TOC parameter in the
BACKUP NODE server command. Or, if you initiate your backup using the client, you
can specify the TOC option in the client options file, client option set, or client
command line. You can specify NO, PREFERRED, or YES. When you specify
PREFERRED or YES, the Tivoli Storage Manager server stores file information for a
single NDMP-controlled backup in a table of contents (TOC). The table of contents
is placed into a storage pool. After that, the Tivoli Storage Manager server can
access the table of contents so that file and directory information can be queried by
the server or client. Use of the TOC parameter allows a table of contents to be
generated for some images and not others, without requiring different
management classes for the images.
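For example, to collect file-level information for a backup of the file system /vol/vol1 on the NAS node NAS1, you might issue a command similar to the following:
backup node nas1 /vol/vol1 toc=yes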
See the Administrator's Reference for more information about the BACKUP NODE
command.
To avoid mount delays and ensure sufficient space, use random access storage
pools (DISK device class) as the destination for the table of contents. For sequential
access storage pools, no labeling or other preparation of volumes is necessary if
scratch volumes are allowed.
See Managing table of contents on page 221 for more information.
Install Data ONTAP 6.4.1 or later, if it is available, on your NetApp NAS file
server to obtain full support of international characters in the names of files
and directories.
If your level of Data ONTAP is earlier than 6.4.1, you must have one of the
following two configurations in order to collect and restore file-level information.
Results with configurations other than these two are unpredictable. The Tivoli
Storage Manager server will print a warning message (ANR4946W) during backup
operations. The message indicates that the character encoding of NDMP file history
messages is unknown, and UTF-8 will be assumed in order to build a table of
contents. It is safe to ignore this message only for the following two configurations.
v Your data has directory and file names that contain only English (7-bit ASCII)
characters.
v Your data has directory and file names that contain non-English characters and
the volume language is set to the UTF-8 version of the proper locale (for
example, de.UTF-8 for German).
If your level of Data ONTAP is 6.4.1 or later, you must have one of the following
three configurations in order to collect and restore file-level information. Results
with configurations other than these three are unpredictable.
Tip: Using the UTF-8 version of the volume language setting is more efficient in
terms of Tivoli Storage Manager server processing and table of contents storage
space.
v You only use CIFS to create and access your data.
As with a NAS (network attached storage) file system backup, a table of contents
(TOC) is created during a directory-level backup and you are able to browse the
files in the image, using the Web client. The default is that the files are restored to
the original location. During a file-level restore from a directory-level backup,
however, you can either select a different file system or another virtual file space
name as a destination.
For a TOC of a directory level backup image, the path names for all files are
relative to the directory specified in the virtual file space definition, not the root of
the file system.
The virtual file space name cannot be identical to any file system on the NAS
node. If a file system is created on the NAS device with the same name as a virtual
file space, a name conflict will occur on the Tivoli Storage Manager server when
the new file space is backed up. See the Administrator's Reference for more
information about virtual file space mapping commands.
Note: Virtual file space mappings are only supported for NAS nodes.
Directory-level backup and restore for NDMP operations
The DEFINE VIRTUALFSMAPPING command maps a directory path of a NAS (network
attached storage) file server to a virtual file space name on the Tivoli Storage
Manager server. After a mapping is defined, you can conduct NAS operations such
as BACKUP NODE and RESTORE NODE, using the virtual file space names as if they
were actual NAS file spaces.
To start a backup of the directory, issue the BACKUP NODE command specifying the
virtual file space name instead of a file space name. To restore the directory subtree
to the original location, run the RESTORE NODE command and specify the virtual file
space name.
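For example, using the virtual file space name /MIKESDIR that was defined for node NAS1 earlier in this chapter, commands similar to the following back up the directory and then restore it to its original location:
backup node nas1 /mikesdir
restore node nas1 /mikesdir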
Virtual file space definitions can also be specified as the destination in a RESTORE
NODE command. This allows you to restore backup images (either file system or
directory) to a directory on any file system of the NAS device.
You can use the Web client to select files for restore from a directory-level backup
image because the Tivoli Storage Manager client treats the virtual file space names
as NAS file spaces.
For example, to back up a snapshot that is created for a NetApp file system, perform
the following steps:
1. On the console for the NAS device, issue the command to create the snapshot.
SNAP CREATE is the command for a NetApp device.
snap create vol2 february17
Use the NDMP SnapMirror to Tape feature as a disaster recovery option for
copying large NetApp file systems to auxiliary storage. For most NetApp file
systems, use the standard NDMP full or differential backup method.
Using a parameter option on the BACKUP NODE and RESTORE NODE commands, you
can back up and restore file systems by using SnapMirror to Tape. There are
several limitations and restrictions on how SnapMirror images can be used.
Consider the following guidelines before you use it as a backup method:
v You cannot initiate a SnapMirror to Tape backup or restore operation from the
Tivoli Storage Manager Operations Center, Administration Center, web client, or
command-line client.
v You cannot perform differential backups of SnapMirror images.
v You cannot perform a directory-level backup by using SnapMirror to Tape;
therefore, Tivoli Storage Manager does not permit a SnapMirror to Tape backup
operation on a server virtual file space.
v You cannot perform an NDMP file-level restore operation from SnapMirror to
Tape images. Therefore, a table of contents is never created during SnapMirror
to Tape image backups.
v At the start of a SnapMirror to Tape copy operation, the file server generates a
snapshot of the file system. NetApp provides an NDMP environment variable to
control whether this snapshot should be removed at the end of the SnapMirror
to Tape operation. Tivoli Storage Manager always sets this variable to remove
the snapshot.
v After a SnapMirror to Tape image is retrieved and copied to a NetApp file
system, the target file system is left configured as a SnapMirror partner.
NetApp provides an NDMP environment variable to control whether this
SnapMirror relationship should be broken. Tivoli Storage Manager always
"breaks" the SnapMirror relationship during the retrieval. After the restore
operation is complete, the target file system is in the same state as that of the
original file system at the point-in-time of backup.
See the BACKUP NODE and RESTORE NODE commands in the Administrator's Reference
for more information about the SnapMirror to Tape feature.
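For example, assuming that file system /vol/vol1 on node NAS1 is eligible, a SnapMirror to Tape backup might be requested with a command similar to the following:
backup node nas1 /vol/vol1 type=snapmirror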
NDMP backup operations using Celerra file server integrated
checkpoints
When the Tivoli Storage Manager server initiates an NDMP backup operation on a
Celerra data mover, the backup of a large file system might take several hours to
complete. Without Celerra integrated checkpoints enabled, any changes occurring
on the file system are written to the backup image.
As a result, the backup image includes changes made to the file system during the
entire backup operation and is not a true point-in-time image of the file system.
If you are performing NDMP backups of Celerra file servers, you should upgrade
the operating system of your data mover to Celerra file server version T5.5.25.1 or
later. This version of the operating system allows enablement of integrated
checkpoints for all NDMP backup operations from the Celerra Control
Workstation. Enabling this feature ensures that NDMP backups represent true
point-in-time images of the file system that is being backed up.
If your version of the Celerra file server operating system is earlier than version
T5.5.25.1 and if you use NDMP to back up Celerra data movers, you should
manually generate a snapshot of the file system using Celerra's command line
checkpoint feature and then initiate an NDMP backup of the checkpoint file system
rather than the original file system.
Refer to the Celerra file server documentation for instructions on creating and
scheduling checkpoints from the Celerra control workstation.
Only NDMP backup data in NATIVE data format storage pools can be replicated.
You cannot replicate NDMP images that are stored in storage pools that have the
following data formats:
v NETAPPDUMP
v CELERRADUMP
v NDMPDUMP
When you configure devices so that the server can use them to store client data,
you create storage pools and storage volumes. The procedures for configuring
devices use a set of defaults that provides storage pools and volumes. The
defaults can work well. However, you might have specific requirements not met by
the defaults. There are three common reasons to change the defaults:
v Optimize and control storage device usage by arranging the storage hierarchy
and tuning migration through the hierarchy (next storage pool, migration
thresholds).
v Reuse tape volumes through reclamation. Reuse is also related to policy and
expiration.
v Keep a client's files on a minimal number of volumes (collocation).
You can also make other adjustments to tune the server for your systems. See the
following sections to learn more. For some quick tips, see Task tips for storage
pools on page 261.
Concepts
Storage pools on page 250
Storage pool volumes on page 262
Access modes for storage pool volumes on page 268
Storage pool hierarchies on page 270
Migrating files in a storage pool hierarchy on page 281
Caching in disk storage pools on page 292
Writing data simultaneously to primary, copy, and active-data pools on page 337
Keeping client files together using collocation on page 363
Reclaiming space in sequential-access storage pools on page 372
Estimating space needs for storage pools on page 383
Tasks
Defining storage pools on page 255
Preparing volumes for random-access storage pools on page 265
Preparing volumes for sequential-access storage pools on page 265
Defining storage pool volumes on page 266
Updating storage pool volumes on page 267
Setting up a storage pool hierarchy on page 270
Monitoring storage-pool and volume usage on page 385
Monitoring the use of storage pool volumes on page 388
Moving data from one volume to another volume on page 403
Moving data belonging to a client node on page 408
The examples in these topics show how to perform tasks by using the Tivoli Storage
Manager command-line interface. For information about the commands, see the
Administrator's Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
Storage pools
A storage pool is a collection of storage volumes. A storage volume is the basic
unit of storage, such as allocated space on a disk or a single tape cartridge. The
server uses the storage volumes to store backed-up, archived, or space-managed
files.
The server provides three types of storage pools that serve different purposes:
primary storage pools, copy storage pools, and active-data pools. You can arrange
primary storage pools in a storage hierarchy. The group of storage pools that you set
up for the Tivoli Storage Manager server to use is called server storage.
To prevent a single point of failure, create separate storage pools for backed-up
and space-managed files. This also includes not sharing a storage pool in either
storage pool hierarchy. Consider setting up a separate, random-access disk storage
pool to give clients fast access to their space-managed files.
For example, when a client attempts to retrieve a file and the server detects an
error in the file copy in the primary storage pool, the server marks the file as
damaged. At the next attempt to access the file, the server can obtain the file from
a copy storage pool.
You can move copy storage pool volumes off-site and still have the server track the
volumes. Moving copy storage pool volumes off-site provides a means of
recovering from an on-site disaster.
A copy storage pool can use only sequential-access storage (for example, a tape
device class or FILE device class).
Remember:
v You can back up data from a primary storage pool defined with the NATIVE,
NONBLOCK, or any of the NDMP formats (NETAPPDUMP, CELERRADUMP,
or NDMPDUMP). The target copy storage pool must have the same data format
as the primary storage pool.
v You cannot back up data from a primary storage pool defined with a CENTERA
device class.
Active-data pools
An active-data pool contains only active versions of client backup data. Active-data
pools are useful for fast client restores, reducing the number of on-site or off-site
storage volumes, or reducing bandwidth when copying or restoring files that are
vaulted electronically in a remote location.
Data migrated by hierarchical storage management (HSM) clients and archive data
are not permitted in active-data pools. As updated versions of backup data
continue to be stored in active-data pools, older versions are deactivated and
removed during reclamation processing.
Restoring a primary storage pool from an active-data pool might cause some or all
inactive files to be deleted from the database if the server determines that an
inactive file needs to be replaced but cannot find it in the active-data pool. As a
best practice and to protect your inactive data, therefore, you should create a
minimum of two storage pools: one active-data pool, which contains only active
data, and one copy storage pool, which contains both active and inactive data.
Active-data pools can use any type of sequential-access storage (for example, a
tape device class or FILE device class). However, the precise benefits of an
active-data pool depend on the specific device type associated with the pool. For
example, active-data pools associated with a FILE device class are ideal for fast
client restores because FILE volumes do not have to be physically mounted and
because the server does not have to position past inactive files that do not have to
be restored. In addition, client sessions restoring from FILE volumes in an
active-data pool can access the volumes concurrently, which also improves restore
performance.
Active-data pools that use removable media, such as tape or optical, offer similar
benefits. Although tapes need to be mounted, the server does not have to position
past inactive files. However, the primary benefit of using removable media in
active-data pools is the reduction of the number of volumes used for on-site and
off-site storage. If you vault data electronically to a remote location, an active-data
pool associated with a SERVER device class lets you save bandwidth by copying
and restoring only active data.
Remember:
v The server will not attempt to retrieve client files from an active-data pool
during a point-in-time restore. Point-in-time restores require both active and
inactive file versions. Active-data pools contain only active file versions. For
optimal efficiency during point-in-time restores and to avoid switching between
active-data pools and primary or copy storage pools, the server retrieves both
active and inactive versions from the same storage pool and volumes.
v You cannot copy active data to an active-data pool from a primary storage pool
defined with the NETAPPDUMP, the CELERRADUMP, or the NDMPDUMP
data format.
v You cannot copy active data from a primary storage pool defined with a
CENTERA device class.
Restriction: You cannot use the BACKUP STGPOOL command for active-data pools.
During client sessions and processes that require active file versions, the Tivoli
Storage Manager server searches certain types of storage pools, if they exist, in the
following order:
1. An active-data pool associated with a FILE device class
2. A random-access disk (DISK) storage pool
3. A primary or copy storage pool associated with a FILE device class
4. A primary, copy, or active-data pool associated with on-site or off-site
removable media (tape or optical)
Even though the list implies a selection order, the server might select a volume
with an active file version from a storage pool lower in the order if a volume
higher in the order cannot be accessed because of the requirements of the session
or process, volume availability, or contention for resources such as mount points,
drives, and data.
Figure 20 on page 254 shows one way to set up server storage. In this example, the
storage that is defined for the server includes:
v Three disk storage pools, which are primary storage pools: ARCHIVE, BACKUP,
and HSM
v One primary storage pool that consists of tape cartridges
v One copy storage pool that consists of tape cartridges
v One active-data pool that consists of FILE volumes for fast client restore
Policies that are defined in management classes direct the server to store files from
clients in the ARCHIVE, BACKUP, or HSM disk storage pools. An additional
policy specifies the following:
v A select group of client nodes that requires fast restore of active backup data
For each of the three disk storage pools, the tape primary storage pool is next in
the hierarchy. As the disk storage pools fill, the server migrates files to tape to
make room for new files. Large files can go directly to tape. For more information
about setting up a storage hierarchy, see Storage pool hierarchies on page 270.
For more information about backing up primary storage pools, see Backing up
primary storage pools on page 930.
Tip: When you define or update storage pools that use LTO Ultrium media,
special considerations might apply.
When you define a primary storage pool, be prepared to specify some or all of the
information that is shown in Table 26. Most of the information is optional. Some
information applies only to random-access storage pools or only to
sequential-access storage pools. Required parameters are marked.
Table 26. Information for defining a storage pool

Storage pool name (Required)
   Type of storage pool: random, sequential
   The name of the storage pool.

Device class (Required)
   Type of storage pool: random, sequential
   The name of the device class assigned for the storage pool.

Pool type
   Type of storage pool: random, sequential
   The type of storage pool (primary or copy). The default is to define a
   primary storage pool. A storage pool's type cannot be changed after it has
   been defined.

Maximum number of scratch volumes (Required for sequential access)
   For automated libraries, set this value equal to the physical capacity of the
   library. For details, see:
   Maintaining a supply of scratch volumes in an automated library on page 163

Access mode
   Type of storage pool: random, sequential
   Defines access to volumes in the storage pool for user operations (such as
   backup and restore) and system operations (such as reclamation and server
   migration). Possible values are:
   Read/Write
      User and system operations can read from or write to the volumes.
   Read-Only
      User operations can read from the volumes, but not write. Server
      processes can move files within the volumes in the storage pool.
      However, no new writes are permitted to volumes in the storage
      pool from volumes outside the storage pool.
   Unavailable
      User operations cannot get access to volumes in the storage pool.
      No new writes are permitted to volumes in the storage pool from
      other volumes outside the storage pool. However, system
      processes (like reclamation) are permitted to move files within the
      volumes in the storage pool.

Maximum file size (see notes 1, 2)
   Type of storage pool: random, sequential
   To exclude large files from a storage pool, set a maximum file size. The
   maximum file size applies to the size of a physical file (a single client file
   or an aggregate of client files).
   Do not set a maximum file size for the last storage pool in the hierarchy
   unless you want to exclude very large files from being stored in server
   storage.

Cyclic Redundancy Check (CRC) (see note 1)
   Type of storage pool: random, sequential
   Specifies whether the server uses CRC to validate storage pool data during
   audit volume processing. For additional information see Data validation
   during audit volume processing on page 937.

Notes:
1. This information is not available for sequential-access storage pools that use the
   following data formats: CELERRADUMP, NDMPDUMP, NETAPPDUMP.
2. This information is not available or is ignored for Centera sequential-access
   storage pools.
You can define the storage pools in a storage pool hierarchy from the top down or
from the bottom up. Defining the hierarchy from the bottom up requires fewer
steps. To define the hierarchy from the bottom up, perform the following steps:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description='tape storage pool for engineering backups'
maxsize=nolimit collocate=node maxscratch=100
2. Define the storage pool named ENGBACK1 with the following command:
define stgpool engback1 disk
description='disk storage pool for engineering backups'
maxsize=5m nextstgpool=backtape highmig=85 lowmig=40
Restrictions:
v You cannot establish a chain of storage pools that lead to an endless loop. For
example, you cannot define StorageB as the next storage pool for StorageA, and
then define StorageA as the next storage pool for StorageB.
v The storage pool hierarchy includes only primary storage pools, not copy
storage pools or active-data pools.
v If a storage pool uses the data format NETAPPDUMP, CELERRADUMP, or
NDMPDUMP, the server will not perform any of the following functions:
Migration
Reclamation
Volume audits
Restrictions:
v You cannot use this command to change the data format for a storage pool.
v For storage pools that have the NETAPPDUMP, the CELERRADUMP, or the
NDMPDUMP data format, you can modify the following parameters only:
ACCESS
COLLOCATE
DESCRIPTION
MAXSCRATCH
REUSEDELAY
Table 27 gives tips on how to accomplish some tasks that are related to storage
pools.
Table 27. Task tips for storage pools

Goal: Keep the data for a group of client nodes, a single client node, or a client
file space on as few volumes as possible.
   Do this: Enable collocation for the storage pool.
   For more information: Keeping client files together using collocation on page 363

Goal: Reduce the number of volume mounts needed to back up multiple clients.
   Do this: Disable collocation for the storage pool.
   For more information: Keeping client files together using collocation on page 363

Goal: Write data simultaneously to a primary storage pool and to copy storage
pools and active-data pools.
   Do this: Provide a list of copy storage pools and active-data pools when defining
   the primary storage pool.
   For more information: Writing data simultaneously to primary, copy, and
   active-data pools on page 337

Goal: Specify how the server reuses tapes.
   Do this: Set a reclamation threshold for the storage pool. Optional: Identify a
   reclamation storage pool.
   For more information: Reclaiming space in sequential-access storage pools on
   page 372

Goal: Move data from disk to tape automatically as needed.
   Do this: Set a migration threshold for the storage pool.
   For more information: Migrating disk storage pools on page 282
You can define volumes in a sequential-access storage pool or you can specify that
the server dynamically acquire scratch volumes. You can also use a combination of
defined and scratch volumes. What you choose depends on the amount of control
you want over individual volumes.
Defined volumes
Use defined volumes when you want to control precisely which volumes are used
in the storage pool. Defined volumes can also be useful when you want to
establish a naming scheme for volumes.
You can also use defined volumes to reduce potential disk fragmentation and
maintenance overhead for storage pools associated with random-access and
sequential-access disk.
Scratch volumes
Use scratch volumes to enable the server to define a volume when needed and
delete the volume when it becomes empty. Using scratch volumes frees you from
the task of explicitly defining all of the volumes in a storage pool.
The server tracks whether a volume being used was originally a scratch volume.
Scratch volumes that the server acquired for a primary storage pool are deleted
from the server database when they become empty. The volumes are then available
for reuse by the server or other applications.
Scratch volumes in a copy storage pool or an active-data storage pool are handled
in the same way as scratch volumes in a primary storage pool, except for volumes
with the access value of off-site. If an off-site volume becomes empty, the server
does not immediately return the volume to the scratch pool. The delay prevents
the empty volumes from being deleted from the database, making it easier to
determine which volumes should be returned to the on-site location. The
administrator can query the server for empty off-site copy storage pool volumes or
active-data pool volumes, and return them to the on-site location. The volume is
returned to the scratch pool only when the access value is changed to
READWRITE, READONLY, or UNAVAILABLE.
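For example, to list empty off-site volumes in a copy storage pool named COPYPOOL and then make a returned volume (VOL052 is a hypothetical volume name) available for reuse, you might issue commands similar to the following:
query volume stgpool=copypool access=offsite status=empty
update volume vol052 access=readwrite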
For scratch volumes that were acquired in a FILE device class, the space that the
volumes occupied is freed by the server and returned to the file system.
To prepare a volume for use in a random-access storage pool, define the volume.
For example, suppose you want to define a 21 MB volume for the BACKUPPOOL
storage pool. You want the volume to be located in a particular path and named
stgvol.001. Enter the following command:
define volume backuppool /usr/lpp/adsmserv/bin/stgvol.001 formatsize=21
If you do not specify a full path name for the volume name, the command uses the
path associated with the registry key of this server instance.
You can also define volumes in a single step using the DEFINE VOLUME
command. For example, to define ten, 5000 MB volumes in a random-access
storage pool that uses a DISK device class, you would enter the following
command.
define volume diskpool diskvol numberofvolumes=10 formatsize=5000
Remember:
v Define storage pool volumes on disk drives that reside on the Tivoli Storage
Manager server machine, not on remotely mounted file systems.
Network-attached drives can compromise the integrity of the data that you are
writing.
You can also use a space trigger to automatically create volumes assigned to a
particular storage pool.
For sequential-access storage pools with a FILE or SERVER device type, no labeling
or other preparation of volumes is necessary. For sequential-access storage pools
associated with device types other than a FILE or SERVER, you must prepare
volumes for use.
When the server accesses a sequential-access volume, it checks the volume name in
the header to ensure that the correct volume is being accessed. To prepare a
volume:
1. Label the volume. Table 28 on page 263 shows the types of volumes that
require labels. You must label those types of volumes before the server can use
them.
For details, see:
Labeling removable media volumes on page 146.
When you define a storage pool volume, you inform the server that the volume is
available for storing backup, archive, or space-managed data.
For a sequential-access storage pool, the server can use dynamically acquired
scratch volumes, volumes that you define, or a combination.
To define a volume named VOL1 in the ENGBACK3 tape storage pool, enter:
define volume engback3 vol1
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries but that are used by the
same server.
For storage pools associated with FILE device classes, you can define private
volumes in a single step using the DEFINE VOLUME command. For example, to
define ten, 5000 MB volumes, in a sequential-access storage pool that uses a FILE
device class, you would enter the following command.
define volume filepool filevol numberofvolumes=10 formatsize=5000
For storage pools associated with the FILE device class, you can also use the
DEFINE SPACETRIGGER and UPDATE SPACETRIGGER commands to have the
server create volumes and assign them to a specified storage pool when
predetermined space-utilization thresholds have been exceeded. One volume must
be predefined.
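For example, to have the server create volumes for a storage pool named FILEPOOL when the pool reaches 90% utilization, expanding the pool by 25% each time (sample values only), you might define a space trigger similar to the following:
define spacetrigger stg fullpct=90 spaceexpansion=25 stgpool=filepool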
Remember: You cannot define volumes for storage pools defined with a Centera
device class.
To allow the storage pool to acquire volumes as needed, set the MAXSCRATCH
parameter to a value greater than zero. The server automatically defines the
volumes as they are acquired. The server also automatically deletes scratch
volumes from the storage pool when the server no longer needs them.
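For example, to allow the ENGBACK3 tape storage pool to acquire as many as 50 scratch volumes (a sample value), you might issue the following command:
update stgpool engback3 maxscratch=50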
Before the server can use a scratch volume with a device type other than FILE or
SERVER, the volume must have a label.
Restriction: Tivoli Storage Manager only accepts tapes labeled with IBM standard
labels. IBM standard labels are similar to ANSI Standard X3.27 labels except that
the IBM standard labels are written in EBCDIC (extended binary coded decimal
interchange code). For a list of IBM media sales contacts who can provide
compatible tapes, go to the IBM Web site. If you are using non-IBM storage devices
and media, consult your tape-cartridge distributor.
For details about labeling, see Preparing volumes for sequential-access storage
pools on page 265.
To change the properties of a volume that has been defined to a storage pool, issue
the UPDATE VOLUME command. For example, suppose you accidentally damage
a volume named VOL1. To change the access mode to unavailable so that the
server does not try to write or read data from the volume, issue the following
command:
update volume vol1 access=unavailable
For details about access modes, see Access modes for storage pool volumes on
page 268.
Table 29 on page 268 lists volume properties that you can update.
For example, if the server cannot write to a volume having read/write access
mode, the server automatically changes the access mode to read-only.
You can set up your devices so that the server automatically moves data from one
device to another, or one media type to another. The selection can be based on
characteristics such as file size or storage capacity. A typical implementation might
have a disk storage pool with a subordinate tape storage pool. When a client backs
up a file, the server might initially store the file on disk according to the policy for
that file. Later, the server might move the file to tape when the disk becomes full.
This action by the server is called migration. You can also place a size limit on files
that are stored on disk, so that large files are stored initially on tape instead of on
disk.
For example, your fastest devices are disks, but you do not have enough space on
these devices to store all data that needs to be backed up over the long term. You
have tape drives, which are slower to access, but have much greater capacity. You
define a hierarchy so that files are initially stored on the fast disk volumes in one
storage pool. This provides clients with quick response to backup requests and
some recall requests. As the disk storage pool becomes full, the server migrates, or
moves, data to volumes in the tape storage pool.
Another option to consider for your storage pool hierarchy is IBM 3592 tape
cartridges and drives, which can be configured for an optimal combination of
access time and storage capacity. For more information, see Controlling
data-access speeds for 3592 volumes on page 197.
You can set up a storage pool hierarchy when you first define storage pools. You
can also change the storage pool hierarchy later.
Restrictions:
v You cannot establish a chain of storage pools that leads to an endless loop. For
example, you cannot define StorageB as the next storage pool for StorageA, and
then define StorageA as the next storage pool for StorageB.
v The storage pool hierarchy includes only primary storage pools. It does not
include copy storage pools or active-data pools. See Backing up the data in a
storage hierarchy on page 275.
For detailed information about how migration between storage pools works, see
Migrating files in a storage pool hierarchy on page 281.
You can define the storage pools in a storage pool hierarchy from the top down or
from the bottom up. Defining the hierarchy from the bottom up requires fewer
steps. To define the hierarchy from the bottom up:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description='tape storage pool for engineering backups'
maxsize=nolimit collocate=node maxscratch=100
2. Define the storage pool named ENGBACK1 with the following command:
define stgpool engback1 disk
description='disk storage pool for engineering backups'
maxsize=5M nextstgpool=backtape highmig=85 lowmig=40
If you have already defined the storage pool at the top of the hierarchy, you can
update the storage hierarchy to include a new storage pool.
To define the new tape storage pool and update the hierarchy:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description='tape storage pool for engineering backups'
maxsize=nolimit collocate=node maxscratch=100
2. Update the storage-pool definition for ENGBACK1 to specify that BACKTAPE
is the next storage pool defined in the storage hierarchy:
update stgpool engback1 nextstgpool=backtape
The size of the aggregate depends on the sizes of the client files being stored, and
the number of bytes and files allowed for a single transaction. Two options affect
the number of files and bytes allowed for a single transaction. TXNGROUPMAX,
located in the server options file, affects the number of files allowed.
TXNBYTELIMIT, located in the client options file, affects the number of bytes
allowed in the aggregate.
v The TXNGROUPMAX option in the server options file indicates the maximum
number of logical files (client files) that a client may send to the server in a
single transaction. The server might create multiple aggregates for a single
transaction, depending on how large the transaction is.
It is possible to affect the performance of client backup, archive, restore, and
retrieve operations by using a larger value for this option. When transferring
multiple small files, increasing the TXNGROUPMAX option can improve
throughput for operations to tape.
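For example, you might set the following values; they are samples only, not tuning recommendations. In the server options file:
txngroupmax 4096
In the client options file:
txnbytelimit 25600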
When a Tivoli Storage Manager for Space Management client (HSM client)
migrates files to the server, the files are not grouped into an aggregate.
Server file aggregation is disabled for client nodes storing data associated with a
management class that has a copy group whose destination is a Centera storage
pool.
Using these factors, the server determines if the file can be written to that storage
pool or the next storage pool in the hierarchy.
Subfile backups: When the client backs up a subfile, it still reports the size of the
entire file. Therefore, allocation requests against server storage and placement in
the storage hierarchy are based on the full size of the file. The server does not put
a subfile in an aggregate with other files if the size of the entire file is too large to
put in the aggregate. For example, the entire file is 8 MB, but the subfile is only 10
KB. The server does not typically put a large file in an aggregate, so the server
begins to store this file as a stand-alone file. However, the client sends only 10 KB,
and it is now too late for the server to put this 10 KB file with other files in an
aggregate. As a result, the benefits of aggregation are not always realized when
clients back up subfiles.
Assume a user wants to archive a 5 MB file that is named FileX. FileX is bound to
a management class that contains an archive copy group whose storage destination
is DISKPOOL, see Figure 21.
When the user archives the file, the server determines where to store the file based
on the following process:
1. The server selects DISKPOOL because it is the storage destination specified in
the archive copy group.
2. Because the access mode for DISKPOOL is read/write, the server checks the
maximum file size allowed in the storage pool.
The maximum file size applies to the physical file being stored, which may be a
single client file or an aggregate. The maximum file size allowed in DISKPOOL
is 3 MB. FileX is a 5 MB file and therefore cannot be stored in DISKPOOL.
3. The server searches for the next storage pool in the storage hierarchy.
If the DISKPOOL storage pool has no maximum file size specified, the server
checks for enough space in the pool to store the physical file. If there is not
enough space for the physical file, the server uses the next storage pool in the
storage hierarchy to store the file.
4. The server checks the access mode of TAPEPOOL, which is the next storage
pool in the storage hierarchy. The access mode for TAPEPOOL is read/write.
5. The server then checks the maximum file size allowed in the TAPEPOOL
storage pool. Because TAPEPOOL is the last storage pool in the storage
hierarchy, no maximum file size is specified. Therefore, if there is available
space in TAPEPOOL, FileX can be stored in it.
Restoring a primary storage pool from an active-data pool might cause some or all
inactive files to be deleted from the database if the server determines that an
inactive file needs to be replaced but cannot find it in the active-data pool.
As a best practice, therefore, and to prevent the permanent loss of inactive versions
of client backup data, you should create a minimum of one active-data pool, which
contains active-data only, and one copy storage pool, which contains both active
and inactive data. To recover from a disaster, use the active-data pool to restore
critical client node data, and then restore the primary storage pools from the copy
storage pool. Do not use active-data pools for recovery of a primary pool or
volume unless the loss of inactive data is acceptable.
Setting up copy storage pools and active-data pools on page 276 describes the
high-level steps for implementation.
Neither copy storage pools nor active-data pools are part of a storage hierarchy,
which, by definition, consists only of primary storage pools. Data can be stored in
copy storage pools and active-data pools using the following methods:
v Including the BACKUP STGPOOL and COPY ACTIVEDATA commands in
administrative scripts or schedules so that data is automatically backed up or
copied at regular intervals.
v Enabling the simultaneous-write function so that data is written to primary
storage pools, copy storage pools, and active-data pools during the same
transaction. Writing data simultaneously to copy storage pools is supported for
backup, archive, space-management, and import operations. Writing data
simultaneously to active-data pools is supported only for client backup
operations and only for active backup versions.
v (copy storage pools only) Manually issuing the BACKUP STGPOOL command,
specifying the primary storage pool as the source and a copy storage pool as the
target. The BACKUP STGPOOL command backs up whatever data is in the
primary storage pool (client backup data, archive data, and space-managed
data).
v (active-data pools only) Manually issuing the COPY ACTIVEDATA command,
specifying the primary storage pool as the source and an active-data pool as the
target. The COPY ACTIVEDATA command copies only the active versions of
client backup data. If an aggregate being copied contains all active files, then the
entire aggregate is copied to the active-data pool during command processing. If
an aggregate being copied contains some inactive files, the aggregate is
reconstructed during command processing into a new aggregate without the
inactive files.
For efficiency, you can use a single copy storage pool and a single active-data pool
to back up all primary storage pools that are linked in a storage hierarchy. By
backing up all primary storage pools to one copy storage pool and one active-data
pool, you do not need to repeatedly copy a file when the file migrates from its
original primary storage pool to another primary storage pool in the storage
hierarchy.
Decide which client nodes have data that needs to be restored quickly if a disaster
occurs. Only the data belonging to those nodes should be stored in the active-data
pool.
For the purposes of this example, the following definitions already exist on the
server:
v The default STANDARD domain, STANDARD policy set, STANDARD
management class, and STANDARD copy group.
v A primary storage pool, BACKUPPOOL, and a copy storage pool, COPYPOOL.
BACKUPPOOL is specified in the STANDARD copy group as the storage pool
in which the server initially stores backup data. COPYPOOL contains copies of
all the active and inactive data in BACKUPPOOL.
v Three nodes that are assigned to the STANDARD domain (NODE1, NODE2, and
NODE 3).
v Two mount points assigned for each client session.
v A FILE device class named FILECLASS.
You have identified NODE2 as the only high-priority node, so you need to create a
new domain to direct the data belonging to that node to an active-data pool. To set
up and enable the active-data pool, follow these steps:
1. Define the active-data pool:
DEFINE STGPOOL ADPPOOL FILECLASS POOLTYPE=ACTIVEDATA MAXSCRATCH=1000
2. Define a new domain and specify the active-data pool in which you want to
store the data belonging to NODE2:
DEFINE DOMAIN ACTIVEDOMAIN ACTIVEDESTINATION=ADPPOOL
3. Define a new policy set:
DEFINE POLICYSET ACTIVEDOMAIN ACTIVEPOLICY
4. Define a new management class:
DEFINE MGMTCLASS ACTIVEDOMAIN ACTIVEPOLICY ACTIVEMGMT
5. Define a backup copy group:
DEFINE COPYGROUP ACTIVEDOMAIN ACTIVEPOLICY ACTIVEMGMT DESTINATION=BACKUPPOOL
This command specifies that the active and inactive data belonging to client
nodes that are members of ACTIVEDOMAIN will be backed up to
BACKUPPOOL. Note that this is the destination storage pool for data backed
up from nodes that are members of the STANDARD domain.
6. Assign the default management class for the active-data pool policy set:
ASSIGN DEFMGMTCLASS ACTIVEDOMAIN ACTIVEPOLICY ACTIVEMGMT
7. Activate the policy set for the active-data pool:
ACTIVATE POLICYSET ACTIVEDOMAIN ACTIVEPOLICY
8. Assign the high-priority node, NODE2, to the new domain, for example:
UPDATE NODE NODE2 DOMAIN=ACTIVEDOMAIN
A node can belong to only one domain. When you update a node by changing
its domain, you remove it from its current domain.
9. (optional) Update the primary storage pool, BACKUPPOOL, with the name of
the active-data pool, ADPPOOL, where the server simultaneously will write
data during a client backup operation:
UPDATE STGPOOL BACKUPPOOL ACTIVEDATAPOOLS=ADPPOOL
Every time NODE2 stores data into BACKUPPOOL, the server simultaneously
writes the data to ADPPOOL. The schedule, COPYACTIVE_BACKUPPOOL,
ensures that any data that was not stored during simultaneous-write operations is
copied to the active-data pool. When client nodes NODE1 and NODE3 are backed
up, their data is stored in BACKUPPOOL only, and not in ADPPOOL. When the
administrative schedule runs, only the data belonging to NODE2 is copied to the
active-data pool.
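The administrative schedule referred to here is not defined in the preceding steps. As a minimal sketch, assuming a daily start time of 21:00 (the timing values are assumptions for illustration only), such a schedule might be defined as follows:
define schedule copyactive_backuppool type=administrative cmd="copy activedata backuppool adppool" active=yes starttime=21:00 period=1 perunits=days
The COPY ACTIVEDATA command in the schedule copies any active backup data in BACKUPPOOL that is not already stored in ADPPOOL.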
Remember: If you want all the nodes belonging to an existing domain to store
their data in the active-data pool, then you can skip steps 2 through 8. Use the
UPDATE DOMAIN command to update the STANDARD domain, specifying the
name of the active-data pool, ADPPOOL, as the value of the
ACTIVEDESTINATION parameter.
In addition to using active-data pools for fast restore of client-node data, you can
also use active-data pools to reduce the number of tape volumes that are stored
either on-site or off-site for the purpose of disaster recovery. This example assumes
that, in your current configuration, all data is backed up to a copy storage pool
and taken off-site. However, your goal is to create an active-data pool, take the
volumes in that pool off-site, and maintain the copy storage pool on-site to recover
primary storage pools.
Every time data is stored into BACKUPPOOL, the data is simultaneously written
to ADPPOOL. The schedule, COPYACTIVE_BACKUPPOOL, ensures that any data
that was not stored during a simultaneous-write operation is copied to the
active-data pool. You can now move the volumes in the active-data pool to a safe
location off-site.
If your goal is to replace the copy storage pool with the active-data pool, follow
the steps below. As a best practice and to protect your inactive data, however, you
should maintain the copy storage pool so that you can restore inactive versions of
backup data if required. If the copy storage pool contains archive data or files that
were migrated by a Tivoli Storage Manager for Space Management client, do not
delete it.
1. Stop backing up to the copy storage pool:
DELETE SCHEDULE BACKUP_BACKUPPOOL
UPDATE STGPOOL BACKUPPOOL COPYSTGPOOLS=""
Typically, you need to ensure that you have enough disk storage to process one
night's worth of the clients' incremental backups. Although this is not always
possible, following this guideline is valuable when you plan storage pool backups.
For example, suppose you have enough disk space for nightly incremental backups
for clients, but not enough disk space for a FILE-type, active-data pool. Suppose
also that you have tape devices. With these resources, you can set up the following
pools:
v A primary storage pool on disk, with enough volumes assigned to contain the
nightly incremental backups for clients
v A primary storage pool on tape, which is identified as the next storage pool in
the hierarchy for the disk storage pool
v An active-data pool on tape
v A copy storage pool on tape
For more information about storage pool space, see Estimating space needs for
storage pools on page 383.
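The following commands are a minimal sketch of how these pools might be defined. The pool names (INCRDISK, NIGHTTAPE, TAPEACTIVE, and TAPECOPY), the device-class name (LTOCLASS), the volume path, and all sizes are assumptions for illustration only:
define stgpool nighttape ltoclass maxscratch=300
define stgpool incrdisk disk nextstgpool=nighttape highmig=80 lowmig=20
define volume incrdisk /tsm/stg/incr01.dsm formatsize=10240
define stgpool tapeactive ltoclass pooltype=activedata maxscratch=100
define stgpool tapecopy ltoclass pooltype=copy maxscratch=300
The tape pool is defined first so that it can be named as the next storage pool for the disk pool, and a random-access volume is defined to provide the disk pool with space for the nightly incremental backups.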
The migration process helps to ensure that there is sufficient free space in the
storage pools at the top of the hierarchy, where faster devices can provide the most
benefit to clients. For example, the server can migrate data stored in a
random-access disk storage pool to a slower but less expensive sequential-access
storage pool.
Migration processing can differ for disk storage pools versus sequential-access
storage pools. If you plan to modify the default migration parameter settings for
storage pools or want to understand how migration works, read the following
topics:
v Migrating disk storage pools
v Migrating sequential-access storage pools on page 287
v Starting migration manually or in a schedule on page 290
Remember:
v Data cannot be migrated into or out of storage pools defined with a CENTERA
device class.
v If you receive an error message during the migration process, refer to IBM Tivoli
Storage Manager Messages, which can provide useful information for diagnosing
and fixing problems.
v If a migration process is started from a storage pool that does not have the next
storage pool identified in the hierarchy, a reclamation process is triggered for the
source storage pool. To prevent the reclamation process, define the next storage
pool in the hierarchy. For details, see Setting up a storage pool hierarchy on
page 270. As an alternative to prevent automatic migration from running, set the
HIGHMIG parameter of the storage pool definition to 100.
You can use the defaults for the migration thresholds, or you can change the
threshold values to identify the maximum and minimum amount of space for a
storage pool.
To control how long files must stay in a storage pool before they are eligible for
migration, specify a migration delay for a storage pool. For details, see Keeping
files in a storage pool on page 286.
If you decide to enable cache for disk storage pools, files can temporarily remain
on disks even after migration. When you use cache, you might want to set lower
migration thresholds.
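For example, assuming a random-access disk storage pool named DISKPOOL, the following command sets illustrative threshold values and enables cache; the values shown are not recommendations:
update stgpool diskpool highmig=80 lowmig=20 cache=yes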
For more information about migration thresholds, see How the server selects files
to migrate on page 283 and Migration thresholds on page 285. For information
about using the cache, see Minimizing access time to migrated files on page 287
and Caching in disk storage pools on page 292.
The server might not reach the low migration threshold for the pool by migrating
only files that were stored longer than the migration delay period. If so, the server
checks the storage pool characteristic that determines whether to stop migration,
even if the pool is still above the low migration threshold. For more information,
see Keeping files in a storage pool on page 286.
For example, Table 30 displays information that is contained in the database that is
used by the server to determine which files to migrate. This example assumes that
the storage pool contains no space-managed files. This example also assumes that
the migration delay period for the storage pool is set to zero. Any files can be
migrated regardless of the amount of time they are stored in the pool or the last
time of access.
Table 30. Database information about files stored in DISKPOOL

Client Node   Backed-Up File Spaces and Sizes   Archived Files (All Client File Spaces)
TOMC          TOMC/C      200 MB                55 MB
              TOMC/D      100 MB
CAROL         CAROL        50 MB                 5 MB
PEASE         PEASE/home  150 MB                40 MB
              PEASE/temp  175 MB
Figure 22. Migration from DISKPOOL to TAPEPOOL, with a high migration threshold of 80% and a low migration threshold of 20%
Figure 22 shows what happens when the high migration threshold defined for the
disk storage pool DISKPOOL is exceeded. When the amount of data that can be
migrated in DISKPOOL reaches 80%, the server runs the following tasks:
1. Determines that the TOMC/C file space is taking up the most space in the
DISKPOOL storage pool. It controls more space than any other single
backed-up or space-managed file space and more than any client node's
archived files.
2. Locates all data that belongs to node TOMC stored in DISKPOOL. In this
example, node TOMC backed up or archived files from file spaces TOMC/C
and TOMC/D stored in the DISKPOOL storage pool.
3. Migrates all data from TOMC/C and TOMC/D to the next available storage
pool. In this example, the data is migrated to the tape storage pool,
TAPEPOOL.
The server migrates all of the data from both file spaces that belong to node
TOMC. The migration happens, even if the occupancy of the storage pool drops
below the low migration threshold before the second file space is migrated.
If the cache option is enabled, files that are migrated remain on disk storage
(cached) until space is needed for new files. For more information about using
cache, see Caching in disk storage pools on page 292.
4. After all files that belong to TOMC are migrated to the next storage pool, the
server checks the low migration threshold. If the threshold is not reached, the
server determines which client node backed up or migrated the largest single
file space or archived files that occupy the most space. The server begins
migrating files that belong to that node.
In this example, the server migrates all files that belong to the client node
named PEASE to the TAPEPOOL storage pool.
5. After all the files that belong to PEASE are migrated to the next storage pool,
the server checks the low migration threshold again. If the low migration
threshold was reached or passed, then migration ends.
Choosing thresholds appropriate for your situation takes some experimenting. Start
by using the default high and low values. You need to ensure that migration
occurs frequently enough to maintain some free space but not so frequently that
the device is unavailable for other use.
High-migration thresholds:
Before changing the high-migration threshold, you need to consider the amount of
storage capacity provided for each storage pool and the amount of free storage
space needed to store additional files, without having migration occur.
If you set the high-migration threshold too high, the pool may be just under the
high threshold, but not have enough space to store an additional, typical client file.
Or, with a high threshold of 100%, the pool may become full and a migration
process must start before clients can back up any additional data to the disk
storage pool. In either case, the server stores client files directly to tape until
migration completes, resulting in slower performance.
If you set the high-migration threshold too low, migration runs more frequently
and can interfere with other operations.
Low-migration thresholds:
Before setting the low-migration threshold, you need to consider the amount of
free disk storage space needed for normal daily processing, whether you use cache
on disk storage pools, how frequently you want migration to occur, and whether
data in the next storage pool is being collocated by group.
For example, you might have backups of monthly summary data that you want to
keep in your disk storage pool for faster access until the data is 30 days old. After
the 30 days, the server moves the files to a tape storage pool.
To delay migration of files, set the MIGDELAY parameter when you define or
update a storage pool. The number of days is counted from the day that a file was
stored in the storage pool or accessed by a client, whichever is more recent. You
can set the migration delay separately for each storage pool. When you set the
delay to zero, the server can migrate any file from the storage pool, regardless of
how short a time the file has been in the storage pool. When you set the delay to
greater than zero, the server checks how long the file has been in the storage pool
and when it was last accessed by a client. If the number of days exceeds the
migration delay, the server migrates the file.
Note: If you want the number of days for migration delay to be counted based
only on when a file was stored and not when it was retrieved, use the
NORETRIEVEDATE server option. For more information about this option, see the
Administrator's Reference.
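For example, to keep files in a disk storage pool for at least 30 days before they become eligible for migration, you might issue a command like the following (the pool name is an assumption):
update stgpool diskpool migdelay=30
The NORETRIEVEDATE option, if you decide to use it, is specified in the server options file rather than on the storage pool.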
If you set migration delay for a pool, you must decide what is more important:
either ensuring that files stay in the storage pool for the migration delay period, or
ensuring that there is enough space in the storage pool for new files. For each
storage pool that has a migration delay set, you can choose what happens as the
server tries to move enough data out of the storage pool to reach the low
migration threshold. If the server cannot reach the low migration threshold by
moving only files that have been stored longer than the migration delay, you can
choose one of the following:
v Allow the server to move files out of the storage pool, even if they have not
been in the pool for the migration delay period (MIGCONTINUE=YES). This is
the default. Allowing migration to continue ensures that space is made available
in the storage pool for new files.
v Have the server stop migration without reaching the low migration threshold
(MIGCONTINUE=NO). Stopping migration ensures that files remain in the
storage pool for the time that you specified with the migration delay. The
administrator must ensure that there is always enough space available in the
storage pool.
If you allow more than one migration process for the storage pool and allow the
server to move files that do not satisfy the migration delay time
(MIGCONTINUE=YES), some files that do not satisfy the migration delay time
may be migrated unnecessarily. As one process migrates files that satisfy the
migration delay time, a second process could begin migrating files that do not
satisfy the migration delay time to meet the low migration threshold. The first
process that is still migrating files that satisfy the migration delay time might have,
by itself, caused the storage pool to meet the low migration threshold.
Important: For information about the disadvantages of using cache, see Caching
in disk storage pools on page 292.
To ensure that files remain on disk storage and do not migrate to other storage
pools, use one of the following methods:
v Do not define the next storage pool.
A disadvantage of using this method is that if the file exceeds the space
available in the storage pool, the operation to store the file fails.
v Set the high-migration threshold to 100%.
When you set the high migration threshold to 100%, files will not migrate at all.
You can still define the next storage pool in the storage hierarchy, and set the
maximum file size so that large files are stored in the next storage pool in the
hierarchy.
A disadvantage of setting the high threshold to 100% is that after the pool
becomes full, client files are stored directly to tape instead of to disk.
Performance may be affected as a result.
You probably will not want the server to migrate sequential-access storage pools
on a regular basis. An operation such as tape-to-tape migration has limited benefits
compared to disk-to-tape migration, and requires at least two tape drives.
You can migrate data from a sequential-access storage pool only to another
sequential-access storage pool. You cannot migrate data from a sequential-access
storage pool to a random-access (disk) storage pool.
To control the migration process, set migration thresholds and migration delays for
each storage pool using the DEFINE STGPOOL and UPDATE STGPOOL
commands. You can also specify multiple concurrent migration processes to better
use your available tape drives or FILE volumes. (For details, see Specifying
multiple concurrent migration processes on page 291.) Using the MIGRATE
STGPOOL command, you can control the duration of the migration process and
whether reclamation is attempted prior to migration. For additional information,
see Starting migration manually or in a schedule on page 290.
For tape and optical storage pools, the server begins the migration process when
the ratio of volumes containing data to the total number of volumes in the storage
pool, including scratch volumes, reaches the high migration threshold. For
sequential-access disk (FILE) storage pools, the server starts the migration process
when the ratio of data in a storage pool to the pool's total estimated data capacity
reaches the high migration threshold. The calculation of data capacity includes the
capacity of all the scratch volumes specified for the pool.
Tip: When Tivoli Storage Manager calculates the capacity for a sequential-access
disk storage pool, it takes into consideration the amount of disk space available in
the file system. For this reason, be sure that you have enough disk space in the file
system to hold all the defined and scratch volumes specified for the storage pool.
For example, suppose that the capacity of all the scratch volumes specified for a
storage pool is 10 TB. (There are no predefined volumes.) However, only 9 TB of
disk space is available in the file system. The capacity value used in the migration
threshold is 9 TB, not 10 TB. If the high migration threshold is set to 70%,
migration will begin when the storage pool contains 6.3 TB of data, not 7 TB.
Because migration delay can prevent volumes from being migrated, the server can
migrate files from all eligible volumes but still find that the storage pool is above
the low migration threshold. If you set migration delay for a pool, you need to
decide what is more important: either ensuring that files stay in the storage pool
for as long as the migration delay, or ensuring there is enough space in the storage
pool for new files. For each storage pool that has a migration delay set, you can
choose what happens as the server tries to move enough files out of the storage
pool to reach the low migration threshold. If the server cannot reach the low
migration threshold by migrating only volumes that meet the migration delay
requirement, you can choose one of the following:
v Allow the server to migrate volumes from the storage pool even if they do not
meet the migration delay criteria (MIGCONTINUE=YES). This is the default.
Allowing migration to continue ensures that space is made available in the
storage pool for new files that need to be stored there.
v Have the server stop migration without reaching the low migration threshold
(MIGCONTINUE=NO). Stopping migration ensures that volumes are not
migrated for the time you specified with the migration delay. The administrator
must ensure that there is always enough space available in the storage pool to
hold the data for the required number of days.
If you decide to migrate data from one sequential-access storage pool to another,
ensure that:
v Two drives (mount points) are available, one in each storage pool.
v The access mode for the next storage pool in the storage hierarchy is set to
read/write.
For information about setting an access mode for sequential-access storage pools,
see Defining storage pools on page 255.
v Collocation is set the same in both storage pools. For example, if collocation is
set to NODE in the first storage pool, then collocation should be set to NODE in
the next storage pool.
When you enable collocation for a storage pool, the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client file
space on a minimal number of volumes. For information about collocation for
sequential-access storage pools, see Keeping client files together using
collocation on page 363.
v You have sufficient resources (for example, staff) available to manage any
necessary media mount and dismount operations. (This is especially true for
multiple concurrent processing. For details, see Specifying multiple concurrent
migration processes on page 291.) More mount operations occur because the
server attempts to reclaim space from sequential-access storage pool volumes
before it migrates files to the next storage pool.
If you want to limit migration from a sequential-access storage pool to another
storage pool, set the high-migration threshold to a high percentage, such as 95%.
For information about setting a reclamation threshold for tape storage pools, see
Reclaiming space in sequential-access storage pools on page 372.
You can specify the maximum number of minutes the migration will run before
automatically cancelling. If you prefer, you can include this command in a
schedule to perform migration when it is least intrusive to normal production
needs.
For example, to migrate data from a storage pool named ALTPOOL to the next
storage pool, and specify that it end as soon as possible after one hour, issue the
following command:
migrate stgpool altpool duration=60
Do not use this command if you are going to use automatic migration. To prevent
automatic migration from running, set the HIGHMIG parameter of the storage
pool definition to 100. For details about the MIGRATE STGPOOL command, refer
to the Administrator's Reference.
Restriction: Data cannot be migrated into or out of storage pools defined with a
CENTERA device class.
Each migration process requires at least two simultaneous volume mounts (at least
two mount points) and, if the device type is not FILE, at least two drives. One of
the drives is for the input volume in the storage pool from which files are being
migrated. The other drive is for the output volume in the storage pool to which
files are being migrated.
When calculating the number of concurrent processes to run, carefully consider the
resources you have available, including the number of storage pools that will be
involved with the migration, the number of mount points, the number of drives
that can be dedicated to the operation, and (if appropriate) the number of mount
operators available to manage migration requests. The number of available mount
points and drives depends on other Tivoli Storage Manager and system activity
and on the mount limits of the device classes for the storage pools that are
involved in the migration. For more information about mount limit, see:
Controlling the number of simultaneously mounted volumes on page 194
For example, suppose that you want to migrate data on volumes in two sequential
storage pools simultaneously and that all storage pools involved have the same
device class. Each process requires two mount points and, if the device type is not
FILE, two drives. To run four migration processes simultaneously (two for each
storage pool), you need a total of at least eight mount points and eight drives if
the device type is not FILE. The device class must have a mount limit of at least
eight.
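As a sketch, assuming two sequential-access pools named SEQPOOL1 and SEQPOOL2 (hypothetical names), you might allow two migration processes for each pool by using the MIGPROCESS parameter:
update stgpool seqpool1 migprocess=2
update stgpool seqpool2 migprocess=2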
If the number of migration processes you specify is more than the number of
available mount points or drives, the processes that do not obtain mount points or
drives will wait indefinitely or until the other migration processes complete and
mount points or drives become available.
The Tivoli Storage Manager server starts the specified number of migration
processes regardless of the number of volumes that are eligible for migration. For
example, if you specify ten migration processes and only six volumes are eligible
for migration, the server will start ten processes and four of them will complete
without processing a volume.
Multiple concurrent migration processing does not affect collocation. If you specify
collocation and multiple concurrent processes, the Tivoli Storage Manager server
attempts to migrate the files for each collocation group, client node, or client file
space onto as few volumes as possible. If files are collocated by group, each
process can migrate only one group at a time. In addition, if files belonging
to a single collocation group (or node or file space) are on different volumes and
are being migrated at the same time by different processes, the files could be
migrated to separate output volumes.
For example, suppose a copy of a file is made while it is in a disk storage pool.
The file then migrates to a primary tape storage pool. If you then back up the
primary tape storage pool to the same copy storage pool, a new copy of the file is
not needed. The server knows it already has a valid copy of the file.
The only way to store files in copy storage pools is by backing up (the BACKUP
STGPOOL command) or by using the simultaneous-write function. The only way to
store files in active-data pools is by copying active data (the COPY ACTIVEDATA
command) or by using the simultaneous-write function.
If space is needed to store new data in the disk storage pool, cached files are
erased and the space they occupied is used for the new data.
When cache is disabled and migration occurs, the server migrates the files to the
next storage pool and erases the files from the disk storage pool. By default, the
system disables caching for each disk storage pool because of the potential effects
of cache on backup performance. If you leave cache disabled, consider higher
migration thresholds.
If fast restores of active client data are your objective, you can also use active-data
pools, which are storage pools containing only active versions of client backup
data. For details, see Active-data pools on page 251.
For example, assume that two files, File A and File B, are cached files that are the
same size. If File A was last retrieved on 05/16/08 and File B was last retrieved on
06/19/08, then File A is deleted to reclaim space first.
If you do not want the server to update the retrieval date for files when a client
restores or retrieves the file, specify the server option NORETRIEVEDATE in the
server options file. If you specify this option, the server removes copies of files in
cache regardless of how recently the files were retrieved.
Deduplicating data
Data deduplication is a method for eliminating redundant data in order to reduce
the storage that is required to retain the data. Only one instance of the data is
retained in a deduplicated storage pool. Other instances of the same data are
replaced with a pointer to the retained instance.
Restriction: When a client backs up or archives a file, the data is written to the
primary storage pool specified by the copy group of the management class that is
bound to the data. To deduplicate the client data, the primary storage pool must be
a sequential-access disk (FILE) storage pool that is enabled for data deduplication.
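For example, a deduplication-enabled FILE storage pool might be defined as follows; the pool name, device-class name, and MAXSCRATCH value are assumptions for illustration:
define stgpool dedupfilepool fileclass maxscratch=200 deduplicate=yes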
The ability to deduplicate data on either the backup-archive client or the server
provides flexibility in terms of resource utilization, policy management, and
security. You can also combine both client-side and server-side data deduplication
in the same production environment. For example, you can specify certain nodes
for client-side data deduplication and certain nodes for server-side data
deduplication. You can store the data for both sets of nodes in the same
deduplicated storage pool.
Backup-archive clients that can deduplicate data can also access data that was
deduplicated by server-side processes. Similarly, data that was deduplicated by
client-side processes can be accessed by the server. Furthermore, duplicate data can
be identified across objects regardless of whether the data deduplication is
performed on the client or the server.
In addition to whole files, IBM Tivoli Storage Manager can also deduplicate parts
of files that are common with parts of other files. Data becomes eligible for
duplicate identification as volumes in the storage pool are filled. A volume does
not have to be full before duplicate identification starts.
If the backup operation is successful and if the next storage pool is enabled for
data deduplication, the files are deduplicated by the server. If the next storage pool
is not enabled for data deduplication, the files are not deduplicated.
For details about client-side data deduplication, including options for controlling
data deduplication, see the Backup-Archive Clients Installation and User's Guide.
Only V6.2 and later storage agents can use LAN-free data movement to access
storage pools that contain data that was deduplicated by clients. V6.1 or earlier
storage agents can access such storage pools only over the LAN.
Table 31. Paths for data movement

Storage agent                  Storage pool contains     Storage pool contains a mixture of   Storage pool contains
                               only client-side          client-side and server-side          only server-side
                               deduplicated data         deduplicated data                    deduplicated data
V6.1 or earlier storage agent  Over the LAN              Over the LAN                         LAN-free
V6.2 storage agent             LAN-free                  LAN-free                             LAN-free
V6.2 backup-archive clients are compatible with V6.2 storage agents and provide
LAN-free access to storage pools that contain client-side deduplicated data.
Version support
Server-side data deduplication is available only with IBM Tivoli Storage Manager
V6.1 or later servers. For optimal efficiency when using server-side data
deduplication, upgrade to the backup-archive client V6.1 or later.
Client-side data deduplication is available only with Tivoli Storage Manager V6.2
or later servers and backup-archive clients V6.2 or later.
Encrypted files
The Tivoli Storage Manager server and the backup-archive client cannot
deduplicate encrypted files. If an encrypted file is encountered during data
deduplication processing, the file is not deduplicated, and a message is logged.
Tip: You do not have to process encrypted files separately from files that are
eligible for client-side data deduplication. Both types of files can be processed in
the same operation. However, they are sent to the server in different transactions.
As a security precaution, you can take one or more of the following steps:
v Enable storage-device encryption together with client-side data deduplication.
v Use client-side data deduplication only for nodes that are secure.
v If you are uncertain about network security, enable Secure Sockets Layer (SSL).
v If you do not want certain objects (for example, image objects) to be processed
by client-side data deduplication, you can exclude them on the client. If an
object is excluded from client-side data deduplication and it is sent to a storage
pool that is set up for data deduplication, the object is deduplicated on the server.
v Use the SET DEDUPVERIFICATIONLEVEL command to detect possible security
attacks on the server during client-side data deduplication. Using this command,
you can specify a percentage of client extents for the server to verify. If the
server detects a possible security attack, a message is displayed.
File size
Only files that are more than 2 KB are deduplicated. Files that are 2 KB or less are
not deduplicated.
A return code (RC=254) and message are written to the dsmerror.log file. The
message is also displayed in the command-line client. The error message is:
ANS7899E The client referenced a duplicated extent that does not exist
on the Tivoli Storage Manager server.
The workaround for this situation is to ensure that processes that can cause files to
expire are not run at the same time that back up or archive operations with
client-side data deduplication are performed.
HSM data from UNIX and Linux clients is ignored by client-side data
deduplication. Server-side data deduplication of HSM data from UNIX and Linux
clients is allowed.
Collocation
You can use collocation for storage pools that are set up for data deduplication.
However, collocation might not have the same benefit as it does for storage pools
that are not set up for data deduplication.
By using collocation with storage pools that are set up for data deduplication, you
can control the placement of data on volumes. However, the physical location of
duplicate data might be on different volumes. No-query restore and other
processes remain efficient in selecting volumes that contain non-deduplicated data.
However, the efficiency declines when additional volumes are required to provide
the duplicate data.
Using Tivoli Storage Manager data deduplication can provide several advantages.
However, there are some situations where data deduplication is not appropriate.
Those situations are:
v Your primary storage of backup data is on a Virtual Tape Library or physical
tape. If regular migration to tape is required, the benefits of using data
deduplication are lessened, since the purpose of data deduplication is to reduce
disk storage as the primary location of backup data.
v You have no flexibility with the backup processing window. Tivoli Storage
Manager data deduplication processing requires additional resources, which can
extend backup windows or server processing times for daily backup activities.
v Your restore processing times must be fast. Restore performance from
deduplicated storage pools is slower than from a comparable disk storage pool
that does not use data deduplication. If fast restore performance from disk is a
high priority, restore performance benchmarking must be done to determine
whether the effects of data deduplication can be accommodated.
Related tasks:
Keeping client files together using collocation on page 363
Detecting possible security attacks on the server during client-side deduplication
on page 311
As part of the planning process, ensure that you will benefit from using data
deduplication. In the following situations, IBM Tivoli Storage Manager data
deduplication can provide a cost-effective method for reducing the amount of disk
storage that is required for backups:
v You have to reduce the disk space that is required for backup storage.
v You must perform remote backups over limited bandwidth connections.
v You are using Tivoli Storage Manager node replication for disaster recovery
across geographically dispersed locations.
v You either have disk-to-disk backup configured (where the final destination of
backup data is on a deduplicating disk storage pool), or data is stored in the
FILE storage pool for a significant time (for example, 30 days) or until
expiration.
v For guidance on the scalability of data deduplication with Tivoli Storage
Manager, see Effective Planning and Use of IBM Tivoli Storage Manager V6
Deduplication at http://www.ibm.com/developerworks/mydeveloperworks/
If you are creating a primary sequential-access storage pool and you do not
specify a value, the server starts one duplicate-identification process automatically.
If you are creating a copy storage pool or an active-data pool and you do not
specify a value, the server does not start any duplicate-identification processes
automatically.
v Decide whether to define or update a storage pool for data deduplication, but
not actually perform data deduplication. For example, suppose that you have a
primary sequential-access disk storage pool and a copy sequential-access disk
storage pool. Both pools are set up for data deduplication. You might want to
run duplicate-identification processes for only the primary storage pool. In this
way, only the primary storage pool reads and deduplicates data. However, when
the data is moved to the copy storage pool, the data deduplication is preserved,
and no duplicate identification is required.
v Determine the best time to use data deduplication for the storage pool. The
duplicate identification (IDENTIFY) processes can increase the workload on the
processor and system memory. Schedule duplicate identification processes at the
following times:
When the process does not conflict with other processes such as reclamation,
migration, and storage pool backup
Before node replication (if node replication is being used) so that node
replication can be used in combination with deduplication
When you use data deduplication, your system can achieve benefits such as these:
v Reduction in the storage capacity that is required for storage pools on the server
that are associated with a FILE-type device class. This reduction applies for both
server-side and client-side data deduplication.
v Reduction in the network traffic between the client and server. This reduction
applies for client-side deduplication only.
When you implement the suggested practices for data deduplication, you can help
to avoid problems such as these on your system:
v Server outages that are caused by running out of active log space or archive log
space
v Server outages or client backup failures that are caused by exceeding the IBM
DB2 internal lock list limit
v Process failures and hangs that are caused during server data management
Properly size the server database, recovery log, and system memory:
When you use data deduplication, considerably more database space is required as
a result of storing the metadata that is related to duplicate data. Data
deduplication also tends to cause longer-running transactions and a related larger
peak in recovery log usage.
In addition, more system memory is required for caching database pages that are
used during duplicate data lookup for both server-side and client-side data
deduplication.
Tips:
v Ensure that the Tivoli Storage Manager server has a minimum of 64 GB of
system memory.
v Allocate a file system with two-to-three times more capacity for the server
database than you would allocate for a server that does not use data
deduplication. You can plan for 150 GB of database storage for every 10 TB of
data that is protected in the deduplicated storage pools.
v Configure the server to have the maximum active log size of 128 GB by setting
the ACTIVELOGSIZE server option to a value of 131072.
v Use a directory for the database archive logs with an initial free capacity of at
least 500 GB. Specify the directory by using the ARCHLOGDIRECTORY server option.
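As a sketch, the last two tips might appear as the following entries in the server options file (dsmserv.opt); the archive-log path is a hypothetical example:
ACTIVELOGSIZE 131072
ARCHLOGDIRECTORY /tsm/archlog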
For more information about managing resources such as the database and recovery
log, see the Installation Guide. Search for database and recovery log capacity
planning.
Avoid the overlap of server maintenance tasks with client backup windows:
When you schedule client backups for a period during which server maintenance
tasks are not running, you create a backup window. This practice is important in
any Tivoli Storage Manager deployment, regardless of whether data deduplication
is used, but it is especially important when you use data deduplication.
Migration and reclamation are the tasks most likely to interfere with the success of
client backups.
Tips:
v Schedule client backups in a backup window that is isolated from data
maintenance processes, such as migration and reclamation.
v Schedule each type of data maintenance task with controlled start times and
durations so that they do not overlap with each other.
v If storage-pool backup is used to create a secondary copy, schedule storage-pool
backup operations before you start data deduplication processing to avoid
restoring objects that are sent to a non-deduplicated copy storage pool.
v If you are using node replication to keep a secondary copy of your data,
schedule the REPLICATE NODE command to run after duplicate identification
processes are completed.
For more information about tuning the schedule for daily server maintenance
tasks, see the Optimizing Performance guide. Search for tuning the schedule for daily
operations.
The DB2 lock list storage, which is managed automatically, can become
insufficient. If you deduplicate data that includes large files or large numbers of
files concurrently, data deduplication can exhaust the lock list storage. When the
lock list storage is insufficient, backup failures, data-management process failures,
or server outages can occur.
File sizes greater than 500 GB that are processed by data deduplication are most
likely to cause storage to become insufficient. However, if many backups use
client-side data deduplication, this problem can also occur with smaller-sized files.
Tip: When you estimate the lock list storage requirements, follow the information
described in the technote to manage storage for loads that are much larger than
expected.
You can use controls to limit the potential effect of large objects on data
deduplication processing on the Tivoli Storage Manager server.
You can use the following controls when you deduplicate large-object data:
v Server controls that limit the size of objects. These controls limit the size of
objects that are processed by data deduplication.
v Controls on the data management processes of the server. These controls limit
the number of processes that can operate concurrently on the server.
v Scheduling options that control how many clients run scheduled backups
simultaneously. These scheduling options can be used to limit the number of
clients that perform client-side data deduplication at the same time.
v Client controls whereby larger objects can be processed as a collection of smaller
objects. These controls are primarily related to the Tivoli Storage Manager data
protection products.
Use the server controls that are available on Tivoli Storage Manager server to
prevent large objects from being processed by data deduplication.
Use the following parameter and server options to limit the object size for data
deduplication:
MAXSIZE
For storage pools, the MAXSIZE parameter can be used to prevent large
objects from being stored in a deduplicated storage pool. Use the default
NOLIMIT parameter value, or set the value to be greater than
CLIENTDEDUPTXNLIMIT and SERVERDEDUPTXNLIMIT option values.
Use the MAXSIZE parameter with a deduplicated storage pool to prevent
objects that are too large to be eligible for data deduplication from being
stored in a deduplicated storage pool. The objects are then redirected to the
next storage pool in the storage pool hierarchy.
SERVERDEDUPTXNLIMIT
The SERVERDEDUPTXNLIMIT server option limits the total size of objects that
can be deduplicated in a single transaction by duplicate identification
processes. This option limits the maximum file size that is processed by
server-side data deduplication. The default value for this option is 300 GB,
and the maximum value is 2048 GB. Because less simultaneous activity is
typical with server-side data deduplication, consider having a limit larger
than 300 GB on the object size for server-side data deduplication.
CLIENTDEDUPTXNLIMIT
The CLIENTDEDUPTXNLIMIT server option restricts the total size of all objects
that can be deduplicated in a single client transaction. This option limits
the maximum object size that is processed by client-side data
deduplication. However, there are some methods to break up larger
objects. The default value for this option is 300 GB, and the maximum
value is 1024 GB.
Tips:
v Set the MAXSIZE parameter for deduplicated storage pools to a value slightly
greater than CLIENTDEDUPTXNLIMIT and SERVERDEDUPTXNLIMIT option values.
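For example, with both options at their 300 GB defaults, you might cap the object size for a deduplicated storage pool slightly above that value; the pool name is an assumption:
update stgpool dedupfilepool maxsize=310g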
Use the controls for the data management processes of the Tivoli Storage Manager
server. These controls limit the number of large objects that are simultaneously
processed by the server during data deduplication.
Use the following commands and parameters to limit the number of large objects
that are simultaneously processed by the server:
v The storage pool parameters on the DEFINE STGPOOL command or the UPDATE
STGPOOL command.
The MIGPROCESS parameter controls the number of migration processes for a
specific storage pool.
The RECLAIMPROCESS parameter controls the number of simultaneous processes
that are used for reclamation.
v The IDENTIFYPROCESS parameter on the IDENTIFY DUPLICATES command. The
parameter controls the number of duplicate identification processes that can run
at one time for a specific storage pool.
Tips:
v You can safely run duplicate identification processes for more than one
deduplicated storage pool at the same time. However, specify the
IDENTIFYPROCESS parameter with the IDENTIFY DUPLICATES command to limit the
total number of all simultaneous duplicate identification processes. Limit the
total number to a number less than or equal to the number of processors that are
available in the system.
v Schedule duplicate identification processes to run when the additional load does
not affect client operations or conflict with other server processes. For example,
schedule the duplicate identification process to run outside the client backup
window. The duplicate identification processes for the server intensively use the
database and system resources. These processes place additional processing on
the processor and memory of the system.
v You can use the Tivoli Storage Manager Administration Center to run a
maintenance script. The Administration Center provides a wizard that guides
you through the steps to configure and schedule an appropriate maintenance
script that runs server processes in a preferred order.
v Do not overlap different types of operations, such as expiration, reclamation,
migration, and storage pool backup.
v Read the information about data deduplication and the server storage pool. The
effect of data deduplication on system resources is also related to the size of the
file for deduplication. As the size of the file increases, more processing time,
processor resources, memory, and active log space are needed on the server.
Review the document for information about data deduplication and the server
storage pool.
For scheduled backups, you can limit the number of client backup sessions that
perform client-side data deduplication at the same time.
You can use any of the following approaches to limit the number of client backup
sessions:
v Clients can be clustered in groups by using different schedule definitions that
run at different times during the backup window. Consider spreading clients
that use client-side deduplication among these different groups.
v Increase the duration for scheduled startup windows and increase the
randomization of schedule start times. This limits the number of backups that
use client-side data deduplication that start at the same time.
v Separate client backup destinations by using the server policy definitions of the
Tivoli Storage Manager server, so that different groups of clients use different
storage pool destinations:
Clients whose data is never to be deduplicated should not use a management
class that has as its destination a storage pool with data deduplication
enabled.
Clients that use client-side data deduplication can use storage pools where
they are matched with other clients for which there is a higher likelihood of
duplicate matches. For example, all clients that run Microsoft Windows
operating systems can be set up to use a common storage pool. However,
they do not necessarily benefit from sharing a storage pool with clients that
perform backups of Oracle databases.
Many of the data protection products process objects with sizes in the range of
several hundred GB to 1 TB. This range exceeds the maximum object size that
is acceptable for data deduplication.
You can reduce large objects into multiple smaller objects by using the following
methods:
v Use Tivoli Storage Manager client features that back up application data with
the use of multiple streams. For example, a 1 TB database is not eligible for data
deduplication as a whole. However, when backed up with four parallel streams,
the resulting four 250 GB objects are eligible for deduplication. For Tivoli Storage
Manager Data Protection for SQL, you can specify a number of stripes to change
the backup into multiple streams.
v Use application controls that influence the maximum object size that is passed
through to Tivoli Storage Manager. Tivoli Storage Manager Data Protection for
Oracle has several RMAN configuration parameters that can cause larger
databases to be broken into smaller objects. These configuration parameters
include the use of multiple channels, or the MAXPIECESIZE option, or both.
Restriction: In some cases, large objects cannot be reduced in size, and therefore
cannot be processed by Tivoli Storage Manager data deduplication.
Processor usage
The amount of processor resources that are used depends on how many client
sessions or server processes are simultaneously active. Additionally, the amount of
processor usage is increased because of other factors, such as the size of the files
that are backed up. When I/O bandwidth is available and the files are large, for
example 1 MB, finding duplicates can use an entire processor during a session or
process. When files are smaller, other bottlenecks can occur. These bottlenecks can
include reading files from the client disk or the updating of the Tivoli Storage
Manager server database. In these bottleneck situations, data deduplication might
not use all of the resources of the processor.
You can control processor resources by limiting or increasing the number of client
sessions for a client or the number of server duplicate-identification processes. To take
advantage of your processor and to complete data deduplication faster, you can
increase the number of identification processes or client sessions for the client. The
increase can be up to the number of processors that are on the system. It can be
more than that number if the processors support multiple hardware-assisted
threads for the core, such as with simultaneous multithreading. Consider a
minimum of eight (2.2 GHz or equivalent) processor cores in any Tivoli Storage
Manager server that is configured for data deduplication.
Network bandwidth
Network bandwidth for the queries for data from the Tivoli Storage Manager client
to the server can be reduced by using the enablededupcache client option. The
cache stores information about extents that have been previously sent to the server.
If an extent is found that was previously sent, it is not necessary to query the
server for that extent again.
If the server detects that a security attack is in progress, the current session is
canceled. In addition, the setting of the node's DEDUPLICATION parameter is changed from
CLIENTORSERVER to SERVERONLY. The SERVERONLY setting disables
client-side data deduplication for that node.
The server also issues a message that a potential security attack was detected and
that client-side data deduplication was disabled for the node.
To display the current value for SET DEDUPVERIFICATIONLEVEL, issue the QUERY
STATUS command. Check the value in the Client-side Deduplication Verification
Level field.
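For example, to have the server verify 5% of the extents sent by clients (an illustrative value, not a recommendation) and then confirm the setting:
set dedupverificationlevel 5
query status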
In a FILE storage pool that is not set up for data deduplication, files on a volume
that are being restored or retrieved are read sequentially from the volume before
the next volume is mounted. This process ensures optimal I/O performance and
eliminates the need to mount a volume multiple times.
In a FILE storage pool that is set up for data deduplication, however, extents that
comprise a single file can be distributed across multiple volumes. To restore or
retrieve the file, each volume containing a file extent must be mounted. As a result,
the I/O is more random, which can lead to slower restore-and-retrieve times.
These results occur more often with small files that are less than 100 KB. In
addition, more processor resources are consumed when restoring or retrieving
from a deduplicated storage pool. The additional consumption occurs because the
data is checked to ensure that it has been reassembled properly.
Tip: To reduce the mounting and removing of FILE storage pool volumes, the
server allows for multiple volumes to remain mounted until they are no longer
needed. The number of volumes that can be mounted at a time is controlled by the
NUMOPENVOLSALLOWED option.
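For example, an entry such as the following in the server options file sets that limit to 20; the value is illustrative only:
NUMOPENVOLSALLOWED 20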
You can create a storage pool for data deduplication or update an existing storage
pool for data deduplication. You can store client-side deduplicated data and
server-side deduplicated data in the same storage pool.
As data is stored in the pool, the duplicates are identified. When the reclamation
threshold for the storage pool is reached, reclamation begins, and the space that is
occupied by duplicate data is reclaimed.
Attention: By default, the Tivoli Storage Manager server requires that you back
up deduplication-enabled primary storage pools before volumes in the storage
pool are reclaimed and before duplicate data is discarded.
You can create a copy of the data by using the BACKUP STGPOOL or REPLICATE NODE
command. When you back up a primary storage pool, you create a copy of the
entire storage pool. When you replicate data by using node replication, you copy
data from one or more nodes from primary storage pools to a primary storage
pool on another Tivoli Storage Manager server.
Table 33 describes the different scenarios that you can use to create a copy of data
in your deduplicated storage pools, and which value of DEDUPREQUIRESBACKUP to
use.
Table 33. Setting the value for the DEDUPREQUIRESBACKUP option

Creating a copy of your primary storage pool data   DEDUPREQUIRESBACKUP value   Method
Back up your primary storage pool data to a         Yes                         BACKUP STGPOOL
non-deduplicated copy pool, such as a copy
pool that uses tape.
Back up your primary storage pool data to a         No                          BACKUP STGPOOL
deduplicated copy pool.
Use node replication to create a copy of your       No                          REPLICATE NODE
data on another Tivoli Storage Manager server.
No copy is created.                                  No
Depending on the method that you chose to create a copy of the data in the
primary storage pools, complete one of the following actions:
v Use the storage pool backup command to back up data:
1. Issue the BACKUP STGPOOL command. If you set the DEDUPREQUIRESBACKUP
option to yes, you must back up data to a copy storage pool that is not set
up for data deduplication.
Tip: When you copy data to an active-data pool, it does not provide the
same level of protection that occurs when you create a storage pool backup
or use node replication.
2. Issue the IDENTIFY DUPLICATES command to identify duplicate data.
Tip: If you back up storage pool data after duplicate data is identified, the
copy process can take longer because the data must be reconstructed to find
any duplicate data.
v Use the node replication command to back up data:
1. Issue the IDENTIFY DUPLICATES command to identify duplicate data.
2. Issue the REPLICATE NODE command to start node replication.
The following table illustrates what happens to data deduplication when data
objects are moved or copied.
Deduplicated data, which was in the storage pool before you turned off data
deduplication, is not reassembled. Deduplicated data continues to be removed due
to normal reclamation and deletion. All information about data deduplication for
the storage pool is retained.
To turn off data deduplication for a storage pool, use the UPDATE STGPOOL
command and specify DEDUPLICATE=NO.
If you turn data deduplication on for the same storage pool, duplicate-
identification processes resume, skipping any files that have already been
processed. You can change the number of duplicate-identification processes. When
calculating the number of duplicate-identification processes to specify, consider the
workload on the server and the amount of data requiring data deduplication. The
number of duplicate-identification processes must not exceed the number of
processor cores available on the IBM Tivoli Storage Manager server.
The following table shows how the data deduplication settings on the client
interact with the data deduplication settings on the Tivoli Storage Manager server.
Table 35. Data deduplication settings: Client and server

Value of the DEDUPLICATION parameter   Value of the client DEDUPLICATION   Data deduplication
for REGISTER NODE or UPDATE NODE       option in the client options file   location
SERVERONLY                             Yes                                 Server
You can set the DEDUPLICATION option in the client options file, in the
preference editor of the Tivoli Storage Manager client GUI, or in the client option
set on the Tivoli Storage Manager server. Use the DEFINE CLIENTOPT command to
set the DEDUPLICATION option in a client option set. To prevent the client from
overriding the value in the client option set, specify FORCE=YES.
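As a minimal sketch, assuming a hypothetical option set named DEDUPSET and a hypothetical node named CLIENTA, you might define the option, enforce it, and assign the option set as follows:
define cloptset dedupset
define clientopt dedupset deduplication yes force=yes
update node clienta cloptset=dedupset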
Table 36 on page 320 shows how these two controls, the number and duration of
processes, interact for a particular storage pool.
Remember:
v When the amount of time that you specify as a duration expires, the number of
duplicate-identification processes always reverts to the number of processes
specified in the storage pool definition.
v When the server stops a duplicate-identification process, the process completes
the current physical file and then stops. As a result, it might take several
minutes to reach the value that you specify as a duration.
v To change the number of duplicate-identification processes, you can also update
the storage pool definition using the UPDATE STGPOOL command. However, when
you update a storage pool definition, you cannot specify a duration. The
processes that you specify in the storage pool definition run indefinitely, or until
you issue the IDENTIFY DUPLICATES command, update the storage pool definition
again, or cancel a process.
The following example illustrates how you can control data deduplication using a
combination of automatic and manual duplicate-identification processes. Suppose
you create two new storage pools for data deduplication, A and B. When you
create the pools, you specify two duplicate-identification processes for A and one
process for B. The IBM Tivoli Storage Manager server is set by default to run those
processes automatically. As data is stored in the pools, duplicates are identified and
marked for removal. When there is no data to deduplicate, the
duplicate-identification processes go into an idle state, but remain active.
Suppose you want to avoid resource impacts on the server during client-node
backups. To do so, you manually reduce the number of duplicate-identification
processes. For A, you specify a value of 1 for the number of duplicate-identification
processes. For B, you specify a value of 0. You also specify that these changes remain
in effect for 60 minutes, the duration of your backup window.
For example, suppose that you have four storage pools: stgpoolA, stgpoolB,
stgpoolC, and stgpoolD. All the storage pools are associated with a particular IBM
Tivoli Storage Manager server. Storage pools A and B are each running one
duplicate-identification process, and storage pools C and D are each running two.
A 60-minute client backup is scheduled to take place, and you want to reduce the
server workload from these processes by two-thirds.
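One way to achieve this reduction, shown here only as an illustrative sketch, is to
stop the processes for two of the pools and reduce the other two pools to one process
each for the duration of the backup window:
identify duplicates stgpoola numprocess=0 duration=60
identify duplicates stgpoolb numprocess=0 duration=60
identify duplicates stgpoolc numprocess=1 duration=60
identify duplicates stgpoold numprocess=1 duration=60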
Now two processes are running for 60 minutes, one third of the number running
before the change. At the end of 60 minutes, the Tivoli Storage Manager server
automatically restarts one duplicate-identification process in storage pools A and B,
and two processes in storage pools C and D.
For details about client-side data deduplication options, see the Backup-Archive
Clients Installation and User's Guide.
Related concepts:
Client-side data deduplication on page 295
In this example, you enable client-side data deduplication for a single node. You
have a policy domain that you use to manage deduplicated data.
The name of the domain that you use to manage deduplicated data is
dedupdomain1. The primary storage pool specified by the copy group of the
default management class is a deduplication-enabled storage pool. The client,
MATT, that you want to enable for data deduplication uses a default management
class for backup operations.
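For example, you might update the node definition so that the client is permitted to
deduplicate data, and then set the client option; both steps are shown here as a
sketch:
update node matt deduplication=clientorserver
In the dsm.sys file on the client, specify:
deduplication yes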
To determine the amount of data that was deduplicated, start a backup or archive
operation. At the end of the operation, check the backup or archive report.
In this example, you enable client-side data deduplication for more than one client
node.
The data belonging to client MATT is bound to a management class with a copy
group that specifies a deduplication-enabled destination storage pool.
To change the data deduplication location from the client to the server, issue the
following command:
update node matt deduplication=serveronly
The extent to which these symptoms occur depends on the number and size of
objects being processed, the intensity and type of concurrent operations taking
place on the IBM Tivoli Storage Manager server, and the Tivoli Storage Manager
server configuration.
With the SERVERDEDUPTXNLIMIT server option, you can limit the size of objects that
can be deduplicated on the server. With the CLIENTDEDUPTXNLIMIT server option,
you can limit the size of transactions when client-side deduplicated data is backed
up or archived.
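For example, to limit server-side data deduplication to objects of 300 GB or less and
client-side deduplicated transactions to 300 GB or less, options similar to the
following might be placed in the server options file (the values shown are only
illustrative):
serverdeduptxnlimit 300
clientdeduptxnlimit 300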
Data deduplication uses an average extent size of 256 KB. When deduplicating
large objects, for example, over 200 GB, the number of extents for an object can
grow large. Assuming extents are 256 KB, there are 819,200 extents for a 200 GB
object. When you need to restore this object, all 819,200 database records must be
read before the object is accessible.
Tiered data deduplication can manage larger objects because a larger average
extent size is used when deduplicating the data. For example, after an object
reaches 200 GB, the Tivoli Storage Manager server uses 1 MB as the average extent
size, instead of 256 KB. 819,200 extents become 204,800 extents.
Note: By default, objects under 100 GB in size are processed at Tier 1. Objects in
the range of 100 GB to under 400 GB are processed in Tier 2. All objects 400 GB
and larger are processed in Tier 3.
Depending on your environment, you can set different options for using tiered
data deduplication. However, if possible, avoid changing the default tier settings.
Small changes might be tolerated, but frequent changes to these settings can
prevent matches between previously stored backups and future backups.
If you want to use two tiers for data deduplication instead of three, you can set the
DEDUPTIER2FILESIZE and DEDUPTIER3FILESIZE options accordingly.
Use Tier 1 and Tier 2 only
To have two tiers with an average extent size of 256 KB and 1 MB, specify
these values:
DEDUPTIER2FILESIZE 100
DEDUPTIER3FILESIZE 9999
Use Tier 1 and Tier 3 only
To have two tiers with an average extent size of 256 KB and 2 MB, specify
these values:
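One way to do this, shown here only as an illustrative setting, is to give both
options the same value so that no objects fall into the Tier 2 range:
DEDUPTIER2FILESIZE 400
DEDUPTIER3FILESIZE 400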
If you do not want to use tiered data deduplication and instead preserve your
existing environment, set the value for both of the tiered data deduplication
options to 9999. For example:
DEDUPTIER2FILESIZE 9999
DEDUPTIER3FILESIZE 9999
If both options are set to 9999, then all files that are 10 TB or less are processed
with the default extent size of 256 KB.
You can also obtain statistics about client-side data deduplication. For details, see
Backup-Archive Clients Installation and User's Guide.
To query a storage pool for statistics about data deduplication, issue the QUERY
STGPOOL command.
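For example, to see the deduplication-related fields for a hypothetical pool named
MYFILEPOOL, request detailed output:
query stgpool myfilepool format=detailed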
If you run a query before reclamation of the storage pool, the Duplicate Data Not
Stored value in the command output is inaccurate and does not reflect the most
recent data reduction.
You can display information only about files that are linked to a volume or only
about files that are stored on a volume. You can also display information about
both stored files and linked files.
To display information about files on a volume, issue the QUERY CONTENT command
and specify the FOLLOWLINKS parameter.
For example, suppose a volume in a deduplicated storage pool is physically
destroyed. You must restore this volume. Before you do, you want to determine
whether other volumes in the storage pool have files that are linked to files in the
destroyed volume. With that information, you can decide whether to restore the
other volumes. To identify links, you issue the QUERY CONTENT command for the
destroyed volume and specify the FOLLOWLINKS parameter to list all the files with
links to files on the destroyed volume.
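A command of the following form might be used for this purpose, assuming the
destroyed volume is named /dedup/vol0001 (a hypothetical volume name):
query content /dedup/vol0001 followlinks=yes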
You can use the activity log or the Tivoli Storage Manager Administration Center
to view client statistics about data deduplication. The activity log can show
historical information about one or more nodes. You can also view data reduction
information for data deduplication by using the Tivoli Storage Manager API.
To view client statistics for data deduplication, see the activity log, use the Tivoli
Storage Manager Administration Center, or use the Tivoli Storage Manager API.
The following client statistics are taken from the activity log:
tsm> incremental c:\test\* -sub=yes
Incremental backup of volume c:\test\*
Normal File--> 43,387,224 \\naxos\c$\test\newfile [Sent]
Successful incremental backup of \\naxos\c$\test\*
The \\naxos\c$\test directory uses approximately 143.29 MB of space. All files are
already stored on the Tivoli Storage Manager server except the c:\test\newfile
file, which is 41.37 MB (43,387,224 bytes). After client-side data deduplication, it is
determined that only approximately 21 MB will be sent to the server.
The following client statistics are produced using the Tivoli Storage Manager API:
typedef struct tsmEndSendObjExOut_t
{
dsUint16_t stVersion; /* structure version */
dsStruct64_t totalBytesSent; /* total bytes read from app */
dsmBool_t objCompressed; /* was object compressed */
dsStruct64_t totalCompressSize; /* total size after compress */
dsStruct64_t totalLFBytesSent; /* total bytes sent LAN free */
dsUint8_t encryptionType; /* type of encryption used */
dsmBool_t objDeduplicated; /* was object processed for dist. data dedup */
dsStruct64_t totalDedupSize; /* total size after de-dup */
} tsmEndSendObjExOut_t;
After each backup or archive operation, the Tivoli Storage Manager client reports
the data deduplication statistics in the server activity log. For details about the
activity log, see the Tivoli Storage Manager Information Center, and search for
activity log.
To query the data deduplication statistics for the client, issue the QUERY ACTLOG
command.
See the following example for sample information provided by the QUERY ACTLOG
command:
Date/Time Message
-------------------- ----------------------------------------------------------
03/15/10 09:56:56 ANE4952I (Session: 406, Node: MODO)
Total number of objects inspected: 1 (SESSION: 406)
03/15/10 09:56:56 ANE4954I (Session: 406, Node: MODO)
Total number of objects backed up: 1 (SESSION: 406)
03/15/10 09:56:56 ANE4958I (Session: 406, Node: MODO)
Total number of objects updated: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4960I (Session: 406, Node: MODO)
Total number of objects rebound: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4957I (Session: 406, Node: MODO)
Total number of objects deleted: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4970I (Session: 406, Node: MODO)
Total number of objects expired: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4959I (Session: 406, Node: MODO)
Total number of objects failed: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4982I (Session: 406, Node: MODO)
Total objects deduplicated: 1(SESSION: 406)
03/15/10 09:56:56 ANE4977I (Session: 406, Node: MODO)
Total number of bytes inspected: 7.05 MB(SESSION: 406)
03/15/10 09:56:56 ANE4975I (Session: 406, Node: MODO)
Total number of bytes processed: 33 B(SESSION: 406)
03/15/10 09:56:56 ANE4961I (Session: 406, Node: MODO)
Total number of bytes transferred: 33 B (SESSION: 406)
03/15/10 09:56:56 ANE4963I (Session: 406, Node: MODO)
Data transfer time: 0.00 sec (SESSION: 406)
03/15/10 09:56:56 ANE4966I (Session: 406, Node: MODO)
Network data transfer rate: 77.09 KB/sec (SESSION: 406)
03/15/10 09:56:56 ANE4967I (Session: 406, Node: MODO)
Aggregate data transfer rate: 0.01 KB/sec (SESSION: 406)
03/15/10 09:56:56 ANE4968I (Session: 406, Node: MODO)
Objects compressed by: 0% (SESSION: 406)
03/15/10 09:56:56 ANE4981I (Session: 406, Node: MODO)
Deduplication reduction: 100.00%(SESSION: 406)
03/15/10 09:56:56 ANE4976I (Session: 406, Node: MODO)
Total data reduction ratio: 100.00%(SESSION: 406)
03/15/10 09:56:56 ANE4964I (Session: 406, Node: MODO)
Elapsed processing time: 00:00:02 (SESSION: 406)
The following example shows how to use the activity log to gather the data
reduction information across all nodes that belong to the DEDUP domain:
dsmadmc -id=admin -password=admin -displaymode=list -scrollprompt=no "select
DISTINCT A1.MESSAGE, A2.MESSAGE from ACTLOG A1, ACTLOG A2 where A1.NODENAME
in (select NODE_NAME from nodes where domain_name='DEDUP') and
A1.SESSID=A2.SESSID and A1.MSGNO=4977 and A2.MSGNO=4961 and EXISTS
(select A3.SESSID from ACTLOG A3 where A3.SESSID=A1.SESSID and A3.MSGNO=4982)"
| grep MESSAGE: | sed -r 's/MESSAGE:.*:\s+([0-9]+(\.[0-9]+)?)\s+(B|KB|MB|GB|TB).*(SESSION: .*)/\1 \3/'
| sed -r 's/\.//' | awk -f awk.txt
The awk.txt script that is referenced in the pipeline converts each reported value to
kilobytes; only this part of the script is shown:
{ if ($2=="B")  valueInKB = 0;
  if ($2=="KB") valueInKB = $1;
  if ($2=="MB") valueInKB = $1 * 1024;
  if ($2=="GB") valueInKB = $1 * 1024 * 1024;
  if ($2=="TB") valueInKB = $1 * 1024 * 1024 * 1024;
}
The QUERY ACTLOG command gives a summary, as shown in the following example:
Number of bytes inspected: 930808832 KB
Number of bytes transferred: 640679936 KB
Data reduction ratio: 31 %
To query where client file spaces are stored and how much space they occupy,
issue the QUERY OCCUPANCY command.
In the following example, 10 MB of data is placed in the FS1 file space, and 2 MB
is marked for expiration and is removed during the next expiration process.
Therefore, Physical Space Occupied reports 10 MB and Logical Space Occupied
reports 8 MB. The Physical Space Occupied value for storage pools that use data
deduplication is not shown.
tsm: SERVER1>q occupancy dedup*
The occupancy table shows how much physical space is occupied by a file space
after the removal of the deduplication savings. These savings are gained by
removing duplicated data from the file space. You can use select * from
occupancy to get LOGICAL_MB and REPORTING_MB values.
LOGICAL_MB is the amount of space that is used by this file space. REPORTING_MB is
the amount of space that is occupied when the data is not placed in a
deduplication-enabled storage pool.
NODE_NAME: BRIAN
TYPE: Bkup
FILESPACE_NAME: \\brain\c$
STGPOOL_NAME: MYFILEPOOL
NUM_FILES: 63
PHYSICAL_MB: 0.00
LOGICAL_MB: 10.00
REPORTING_MB: 30.00
FILESPACE_ID: 17
Tip: The LOGICAL_MB value takes into account only the amount of data that is
removed or not stored because the data is identified as a duplicate of data that is
stored elsewhere.
For example, IBM Tivoli Storage Manager for Mail and IBM Tivoli Storage
Manager for Databases can use client-side data deduplication through the Tivoli
Storage Manager API to create backup sets and export node data.
Image backup can be full or incremental. In a typical scenario, full image backups
are scheduled less frequently than incremental image backups. For example, a full
image backup is scheduled weekly and incremental backups are scheduled daily,
except for the day of the selective image backup. The frequency of full image
backups is often driven by the available storage space. For example, each image
backup of a 50 GB volume might need 50 GB of space in a storage pool.
You can use VSS on Windows Server 2003, Windows Server 2008, and Windows
Vista operating systems. For details about backing up the Windows system state,
see Tivoli Storage Manager: Client Installation and User Guide.
System state can contain thousands of objects and take a large amount of storage
space on the server. It is likely that system state objects do not change much
between backups. This results in a large amount of duplicate data being stored on
the server. In addition, similar systems are likely to have a similar system state.
Therefore, when you perform system state backups on these systems, there is an
increase in duplicate data.
In the following example, a backup of the system state was performed on two
similar systems that run Windows Server 2008. No data had previously been backed
up to the storage pool. On the first system, the system-state data was deduplicated by
45%, as shown in Figure 23. A backup of the system state yielded a deduplication
reduction of 98% on the second system, as shown in Figure 24 on page 332.
This example shows a sample deduplication reduction of 45% for the system state
data:
This example shows a sample deduplication reduction of 98% for the system state
data:
Tivoli Storage Manager for Virtual Environments backups consist of all virtual
machines in the environment. Often, large portions of individual backups are
common with other backups. Therefore, when you perform backup operations,
there is an increase in duplicate data.
When you use client-side data deduplication in combination with backups for
Tivoli Storage Manager for Virtual Environments, you can reduce the amount of
duplicate data that is stored on the server. The reduction amount varies, depending
on the makeup of your data.
Before you use data deduplication, ensure that your system meets all prerequisites.
You can turn on client-side data deduplication by adding DEDUPLICATION YES to the
dsm.sys file.
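For example, a server stanza in the dsm.sys file might include the option as follows
(the server name and address are hypothetical):
servername  server_a
   commmethod         tcpip
   tcpserveraddress   node.example.com
   deduplication      yes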
Related concepts:
Client-side data deduplication on page 295
In Tivoli Storage Manager V6.1 or earlier, data protection clients do not provide
data deduplication reduction statistics in the graphical user interface. In this
situation, you can verify that data deduplication occurs.
When only the metadata of the file is changed, for example, with access control
lists or extended attributes, typically the whole file is backed up again. With
client-side data deduplication, although the file is backed up again, only the
metadata is sent to the server.
Client-side data deduplication identifies extents in the data stream and calculates
the associated hash sums. Data deduplication determines whether a data extent
with the same hash sum is already stored on the server. If it is already stored, the
backup-archive client only needs to notify the server about the hash sum, and can
avoid sending the corresponding data extent. This process reduces the amount of
data that is exchanged between the Tivoli Storage Manager backup-archive client
and the server.
The Tivoli Storage Manager client cannot use a cache for data deduplication if
there is not enough file space for a hash sum cache. Client-side data deduplication
can take place, but it has no memory of hash sums that are already sent by the
client or already found on the server. In general, data deduplication must query the
server to find out whether hash sums are duplicates. Hash sum lists are maintained in
memory for the life of a transaction. If a hash sum is encountered multiple times
within the same transaction, the hash sum is detectable without a cache.
The cache for client-side data deduplication can become unsynchronized with the
deduplicated disk storage pool of the server. Object expiration, file space deletion,
or overflow to an associated tape storage pool can cause the cache to be
unsynchronized. When the client cache contains entries that are no longer in the
deduplicated storage pool of the Tivoli Storage Manager server, the client cache
resets. The client cache cannot delete specific entries when objects are deleted from
the storage pool of the server.
When a backup set is created for a node by using the GENERATE BACKUPSET
command, all associated node data is placed onto the backup media. It is also
placed on the backup media when node data is exported for a node by the EXPORT
NODE command. This placement ensures that the associated objects can be restored
without any server dependencies, apart from the backup media.
Compression
Consider the following factors when you use data compression in an environment
that uses multiple clients:
v Extents that are compressed by a backup-archive client that uses Tivoli Storage
Manager V6.1 or earlier are not compatible with compressed extents from a V6.2
client. Extents are also not compatible with uncompressed extents because each
version uses a different compression algorithm.
v With a deduplication storage pool that contains data from clients that are V6.2
and earlier, there is a mixture of compressed and non-compressed extents. For
example, assume that a restore operation is run from a client that is earlier than
V6.2. Compressed extents from a client at a later version of Tivoli Storage
Manager are uncompressed by the server during the restore operation.
v When backup sets are generated for clients that are at a version earlier than
V6.2, V6.2 compressed extents that are also part of the data to be backed up are
uncompressed.
Even though most data is compatible when using compression, ensure that all
clients are at V6.2 or later. This method minimizes the need to uncompress data
when you restore data or create a backup set.
Data that is stored by earlier client versions and processed for deduplication
extents by the server is compatible with new extents. For example, an extent that is
identified by the server from an earlier client version matches the query from
client-side data deduplication to the server. The extent is not sent to the server again.
Data extents that are created by different operations are compatible. For example,
data extents that are created by file-level, image, or IBM Tivoli Storage Manager
FastBack mount backups are compatible with one another. As a result, a greater
proportion of the extents can be deduplicated.
Assume that you integrate the Tivoli Storage Manager FastBack mount with Tivoli
Storage Manager to back up volumes to a Tivoli Storage Manager server. The
Tivoli Storage Manager client backs up the Tivoli Storage Manager FastBack
repository to a remote server. If you previously performed an image or a file-level
backup of this data with the Tivoli Storage Manager client, it is likely that the
Tivoli Storage Manager FastBack mount backup can use many data extents that are
already stored on the server.
For example, you perform an image backup of a volume that uses the Tivoli
Storage Manager client. Then you back up the same volume with Tivoli Storage
Manager FastBack. You can expect a greater amount of data deduplication when
you back up the Tivoli Storage Manager FastBack mount.
Data extents that are created by a file-level backup can be used by the Tivoli
Storage Manager client during an image backup. For example, you perform a full
incremental backup of the C drive on your computer. Then you run an image
backup of the same drive. You can expect a greater amount of data deduplication
during the image backup. You can also expect a greater amount of data
deduplication during a file-level backup or an archive operation that immediately
follows an image backup.
Data deduplication is only permitted for storage pools that are associated with a
devtype=FILE device class. The following scenarios show how you can implement
the data deduplication of storage pools to ensure that you can restore data if a
failure occurs.
Primary storage pool is deduplicated and a single copy storage pool is not
deduplicated
The amount of time required to back up the primary storage pool to a
non-deduplicated copy storage pool can increase. While data is copied to
the copy storage pool, the deduplicated data that represents a file must be
read. The file must be recreated and stored in the copy storage pool.
You can write data simultaneously during any of the following operations:
v Client store sessions, for example:
  – Backup and archive sessions by Tivoli Storage Manager backup-archive
    clients.
  – Backup and archive sessions by application clients using the Tivoli Storage
    Manager API.
The maximum number of copy storage pools and active-data pools to which data
can be simultaneously written is three. For example, you can write data
simultaneously to three copy storage pools, or you can write data simultaneously
to two copy storage pools and one active-data pool.
You can specify the simultaneous-write function for a primary storage pool if it is
the target for client store sessions, server import processes, or server
data-migration processes. You can also specify the simultaneous-write function for
a primary storage pool when it is the target for all of the eligible operations.
Writing data simultaneously during client store sessions might be the logical choice
if you have sufficient time for mounting and removing tapes during the client store
session. However, if you choose this option you must ensure that a sufficient
number of mount points and drives are available to accommodate all the client
nodes that are storing data.
As a best practice, you are probably issuing the BACKUP STGPOOL and COPY
ACTIVEDATA commands for all the storage pools in your storage pool hierarchy. If
you are, and if you migrate only a small percentage of data from the primary
storage pool daily, writing data simultaneously during client store sessions is the
logical choice.
Use the simultaneous-write function during migration if you have many client
nodes and the number of mount points that are required to write data
simultaneously during client store sessions is unacceptable. Similarly, mounting
and removing tapes when writing data simultaneously during client store sessions
might be taking too much time. If so, consider writing data simultaneously during
migration.
By default, the Tivoli Storage Manager server writes data simultaneously during
client store sessions if you have copy storage pools or active-data pools defined to
the target storage pool.
You can also disable the simultaneous-write function. This option is useful if you
have copy storage pools or active-data pools defined, but you want to disable the
simultaneous-write function without deleting and redefining the pools.
Remember:
v Specify a value for the AUTOCOPY parameter on the primary storage pool that is
the target of data movement. (The default is to write data simultaneously during
client store sessions and server import processes.) For example, if you want to
write data simultaneously only during server data-migration processes, specify
AUTOCOPY=MIGRATION in the definition of the next storage pool in the storage pool
hierarchy.
v The AUTOCOPY parameter is not available for copy storage pools or active-data
pools.
IBM Tivoli Storage Manager provides the following options for controlling when
simultaneous-write operations occur:
v To disable the simultaneous-write function, specify AUTOCOPY=NONE.
This option is useful, if, for example, you have copy storage pools or active-data
pools defined, and you want to temporarily disable the simultaneous-write
function without having to delete and then redefine the pools.
v To specify simultaneous-write operations only during client store sessions and
server import processes, specify AUTOCOPY=CLIENT.
During server import processes, data is simultaneously written only to copy
storage pools. Data is not written to active-data pools during import processes.
v To specify that simultaneous-write operations take place only during server
data-migration processes, specify AUTOCOPY=MIGRATION.
During server data migration, data is simultaneously written to copy storage
pools and active-data pools only if the data does not exist in those pools.
v To specify that simultaneous-write operations take place during client store
sessions, server data-migration processes, and server import processes, specify
AUTOCOPY=ALL.
A primary storage pool can be the target for more than one type of data
movement. For example, the next storage pool in a storage pool hierarchy can be
the target for data migration from the primary storage pool at the top of the
hierarchy. The next storage pool can also be the target for direct backup of
certain types of client files (for example, image files). The AUTOCOPY=ALL setting
on a primary storage pool ensures that data is written simultaneously during
both server data-migration processes and client store sessions.
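For example, to temporarily disable the simultaneous-write function for a
hypothetical primary storage pool named DISKPOOL without deleting its copy
storage pool or active-data pool definitions, you might issue:
update stgpool diskpool autocopy=none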
The following table provides examples of AUTOCOPY settings for some common
scenarios in which the simultaneous-write function is used.
For details about the DEFINE STGPOOL and UPDATE STGPOOL commands and
parameters, see the Administrator's Reference.
The parameters that are used to specify copy storage pools and active-data pools
are on the DEFINE STGPOOL and UPDATE STGPOOL commands.
v To specify copy storage pools, use the COPYSTGPOOLS parameter.
v To specify active-data pools, use the ACTIVEDATAPOOLS parameter.
Ensure that client sessions have sufficient mount points. Each session requires one
mount point for the primary storage pool and a mount point for each copy storage
pool and each active-data pool. To allow a sufficient number of mount points, use
the MAXNUMMP parameter on the REGISTER NODE or UPDATE NODE commands.
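For example, to allow a hypothetical node named NODE1 to use up to four mount
points, you might issue:
update node node1 maxnummp=4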
For details about the DEFINE STGPOOL and UPDATE STGPOOL commands, refer to the
Administrator's Reference.
Use the COPYCONTINUE parameter on the DEFINE STGPOOL command to specify how
the server reacts to a write failure to copy storage pools during client store
sessions:
v To stop writing to failing copy storage pools for the remainder of the session,
but continue storing files into the primary pool and any remaining copy pools or
active-data pools, specify COPYCONTINUE=YES.
The copy storage pool list is active only for the life of the session and applies to
all the primary storage pools in a particular storage pool hierarchy.
v To fail the transaction and discontinue the store operation, specify
COPYCONTINUE=NO.
Restrictions:
v The setting of the COPYCONTINUE parameter does not affect active-data pools. If a
write failure occurs for any of active-data pools, the server stops writing to the
failing active-data pool for the remainder of the session, but continues storing
files into the primary pool and any remaining active-data pools and copy
storage pools. The active-data pool list is active only for the life of the session
and applies to all the primary storage pools in a particular storage pool
hierarchy.
v The setting of the COPYCONTINUE parameter does not affect the simultaneous-write
function during server import. If data is being written simultaneously and a
write failure occurs to the primary storage pool or any copy storage pool, the
server import process fails.
v The setting of the COPYCONTINUE parameter does not affect the simultaneous-write
function during migration. If data is being written simultaneously and a write
failure occurs to any copy storage pool or active-data pool, the failing storage
pool is removed and the data migration process continues. Write failures to the
primary storage pool cause the migration process to fail.
For details about the DEFINE STGPOOL and UPDATE STGPOOL commands and
parameters, refer to the Administrator's Reference.
Related concepts:
Rules of inheritance for the simultaneous-write function on page 344
When a client backs up, archives, or migrates a file, or when the server imports
data, the data is written to the primary storage pool that is specified by the copy
group of the management class that is bound to the data. If a data storage
operation or a server import operation switches from the primary storage pool at
the top of a storage hierarchy to a next primary storage pool in the hierarchy, the
next storage pool inherits the list of copy storage pools, the list of active-data
pools, and the value of the COPYCONTINUE parameter from the primary storage pool
at the top of the storage pool hierarchy.
The following rules apply during a client store session or a server import process
when the server must switch primary storage pools:
v If the destination primary storage pool has one or more copy storage pools or
active-data pools defined using the COPYSTGPOOLS or ACTIVEDATAPOOLS
parameters, the server writes the data to the next storage pool and to the copy
storage pools and active-data pools that are defined to the destination primary
pool, regardless of whether the next pool has copy pools defined.
The setting of the COPYCONTINUE of the destination primary storage pool is
inherited by the next primary storage pool. The COPYCONTINUE parameter
specifies how the server reacts to a copy storage-pool write failure for any of the
copy storage pools listed in the COPYSTGPOOLS parameter. If the next pool has
copy storage pools or active-data pools defined, they are ignored as well as the
value of the COPYCONTINUE parameter.
v If no copy storage pools or active-data pools are defined in the destination
primary storage pool, the server writes the data to the next primary storage
pool. If the next pool has copy storage pools or active-data pools defined, they
are ignored.
These rules apply to all the primary storage pools within the storage pool
hierarchy.
Related tasks:
Specifying copy pools and active-data pools for simultaneous-write operations
on page 342
Specifying how the server reacts to a write failure during simultaneous-write
operations on page 343
With DISKPOOL and TAPEPOOL already defined as your storage pool hierarchy,
issue the following commands to enable the simultaneous-write function:
define stgpool copypool1 mytapedevice pooltype=copy
define stgpool copypool2 mytapedevice pooltype=copy
define stgpool activedatapool mydiskdevice pooltype=activedata
update stgpool diskpool copystgpools=copypool1,copypool2 copycontinue=yes
activedatapools=activedatapool
where MYTAPEDEVICE is the device-class name associated with the copy storage
pools and MYDISKDEVICE is the device-class name associated with the
active-data pool.
The storage pool hierarchy and the copy storage pools and active-data pool
associated with DISKPOOL are displayed in Figure 25 on page 346.
Figure 25. Example of storage pool hierarchy with copy storage pools defined for DISKPOOL
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the backup operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when storage
pools are backed up or when active data is copied.
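For example, for the configuration described above, commands of the following form
might be issued after client backups complete (the pool names are those used in this
example):
backup stgpool diskpool copypool1
backup stgpool diskpool copypool2
copy activedata diskpool activedatapool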
In this example, the next storage pool in a hierarchy inherits empty copy storage
pool and active-data pool lists from the primary storage pool at the top of the
storage hierarchy.
You do not specify a list of copy storage pools for DISKPOOL. However, you do
specify copy storage pools for TAPEPOOL (COPYPOOL1 and COPYPOOL2) and
an active-data pool (ACTIVEDATAPOOL). You also specify a value of YES for the
COPYCONTINUE parameter. Issue the following commands to enable the
simultaneous-write function:
define stgpool copypool1 mytapedevice pooltype=copy
define stgpool copypool2 mytapedevice pooltype=copy
define stgpool activedatapool mydiskdevice pooltype=activedata
update stgpool tapepool copystgpools=copypool1,copypool2
copycontinue=yes activedatapools=activedatapool
where MYTAPEDEVICE is the device-class name associated with the copy storage
pools and MYDISKDEVICE is the device-class name associated with the
active-data pool. Figure 27 on page 348 displays this configuration.
Figure 27. Example of storage pool hierarchy with copy storage pools defined for TAPEPOOL
When files A, B, C, D, and E are backed up, the following events occur:
v A, B, C, and D are written to DISKPOOL.
v File E is written to TAPEPOOL.
See Figure 28 on page 349.
Although TAPEPOOL has copy storage pools and an active-data pool defined, file
E is not copied because TAPEPOOL inherits empty copy storage pool and
active-data pool lists from DISKPOOL.
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the backup operation has completed. Data that is simultaneously written to
copy storage pools or active-data pools during migration is not copied when
primary storage pools are backed up or when active data is copied.
You specify COPYPOOL1 and COPYPOOL2 as copy storage pools for DISKPOOL
and you set the value of the COPYCONTINUE parameter to YES. You also specify
ACTIVEDATAPOOL as the active-data pool for DISKPOOL. This configuration is
identical to the configuration in the first example.
When files A, B, C, D, and E are backed up, the following events occur:
v An error occurs while writing to COPYPOOL1, and it is removed from the copy
storage pool list that is held in memory by the server. The transaction fails.
v Because the value of the COPYCONTINUE parameter is YES, the client tries the
backup operation again. The in-memory copy storage pool list, which is retained
by the server for the duration of the client session, no longer contains
COPYPOOL1.
v Files A and B are simultaneously written to DISKPOOL, ACTIVEDATAPOOL,
and COPYPOOL2.
v Files C and D are simultaneously written to DISKPOOL and COPYPOOL2.
v File E is simultaneously written to TAPEPOOL and COPYPOOL2.
See Figure 29 on page 350.
In this scenario, if the primary storage pools and COPYPOOL2 become damaged
or lost, you might not be able to recover your data. For this reason, issue the
following BACKUP STGPOOL commands for the copy storage pool that failed:
backup stgpool diskpool copypool1
backup stgpool tapepool copypool1
You can still recover the primary storage pools from COPYPOOL1 and, if
necessary, COPYPOOL2. However, if you want active backup data available in the
active-data pool for fast client restores, you must issue the following command:
copy activedata diskpool activedatapool
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the backup operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
In this example, the storage pool hierarchy contains two primary storage pools.
The next storage pool has two copy storage pools defined. A copy of one of the
files to be migrated to the next storage pool exists in one of the copy storage pools.
FILEPOOL and TAPEPOOL are defined in your storage pool hierarchy. Two copy
storage pools, COPYPOOL1 and COPYPOOL2, are defined to TAPEPOOL. Files A,
B, and C are in FILEPOOL and eligible to be migrated. A copy of file C exists in
COPYPOOL2.
The storage pool hierarchy and the copy storage pools that are associated with
TAPEPOOL are displayed in Figure 30.
Tip: In this example, the setting of the AUTOCOPY parameter for FILEPOOL is not
relevant. TAPEPOOL is the target of the data migration.
Figure 31. Simultaneous-write operation during migration to two copy storage pools
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the migration operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
In this example, the storage pool hierarchy contains two primary storage pools.
The next storage pool has two copy storage pools defined. A copy of one of the
files to be migrated to the next storage pool exists in a copy storage pool. A write
error to the pool occurs.
FILEPOOL and TAPEPOOL are defined in the storage pool hierarchy. Two copy
storage pools, COPYPOOL1 and COPYPOOL2, are defined to TAPEPOOL. Files A,
B, and C are in FILEPOOL and are eligible to be migrated. A copy of file C exists
in COPYPOOL1.
The storage pool hierarchy and the copy storage pools that are associated with
TAPEPOOL are displayed in Figure 32 on page 353.
Tip: In this example, the setting of the AUTOCOPY parameter for FILEPOOL is not
relevant. TAPEPOOL is the target of the data migration.
In this example, three primary storage pools are linked to form a storage pool
hierarchy. The next storage pool in the hierarchy has a storage pool list. The last
pool in the hierarchy inherits the list during a simultaneous-write operation.
The storage pool hierarchy and the copy storage pool are displayed in Figure 34.
Figure 34. Three-tiered storage pool hierarchy with one copy storage pool
Issue the following commands for FILEPOOL2 and TAPEPOOL to enable the
simultaneous-write function only during migration:
update stgpool filepool2 autocopy=migration
update stgpool tapepool autocopy=migration
Tip: In this example, the setting of the AUTOCOPY parameter for FILEPOOL1 is not
relevant. FILEPOOL2 and TAPEPOOL are the targets of the data migration.
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the migration operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
Primary storage pools FILEPOOL and TAPEPOOL are linked to form a storage
hierarchy. FILEPOOL is at the top of the storage hierarchy. TAPEPOOL is the next
pool in the storage hierarchy. Two copy storage pools, COPYPOOL1 and
COPYPOOL2, are defined to FILEPOOL. The value of the AUTOCOPY parameter for
FILEPOOL is CLIENT. The value of the AUTOCOPY parameter for TAPEPOOL is
NONE.
v Files A, B, and C were written to FILEPOOL during client backup operations.
v File C was simultaneously written to COPYPOOL1.
v The files in FILEPOOL are eligible to be migrated.
When files A, B and C are migrated, they are written to TAPEPOOL. See Figure 37.
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the migration operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
Primary storage pools FILEPOOL and TAPEPOOL are linked to form a storage
hierarchy. FILEPOOL is at the top of the storage hierarchy. TAPEPOOL is the next
pool in the storage hierarchy. One copy storage pool, COPYPOOL, is defined to
both FILEPOOL and TAPEPOOL:
v The simultaneous-write function during client store operations was enabled.
(The setting of the AUTOCOPY parameter for FILEPOOL is CLIENT.)
v During client store operations, files A, B, and C were written to COPYPOOL. A
failure occurred while writing file D to COPYPOOL.
v The simultaneous-write function during migration is enabled for TAPEPOOL.
(The setting of the AUTOCOPY parameter for TAPEPOOL is MIGRATION.)
The storage pool hierarchy and the copy storage pool that are associated with
FILEPOOL and TAPEPOOL are displayed in Figure 38.
Figure 39. A simultaneous-write operation during both migration and client backup operations
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the migration operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
Give careful consideration to the number of mount points that are available for a
simultaneous-write operation. A client session requires a mount point to store data
to a sequential-access storage pool. For example, if a storage pool hierarchy
includes a sequential primary storage pool, the client node requires one mount
point for that pool plus one mount point for each copy storage pool and
active-data pool.
Suppose, for example, you create a storage pool hierarchy like the hierarchy shown
in Figure 25 on page 346. DISKPOOL is a random-access storage pool, and
TAPEPOOL, COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL are
sequential-access storage pools. For each client backup session, the client might
have to acquire four mount points if it has to write data to TAPEPOOL. To run
two backup sessions concurrently, the client requires a total of eight mount points.
To indicate the number of mount points a client can have, specify a value for the
MAXNUMMP parameter on the REGISTER NODE or UPDATE NODE commands. Verify the
value of the MAXNUMMP parameter and, if necessary, update it if you want to enable
the simultaneous-write function. A value of 3 for the MAXNUMMP parameter might be
sufficient if, during a client session, all the data is stored in DISKPOOL,
COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL.
Restrictions:
v The setting of the COPYCONTINUE parameter does not affect active-data pools. If a
write failure occurs for any of active-data pools, the server stops writing to the
failing active-data pool for the remainder of the session, but continues storing
files into the primary pool and any remaining active-data pools and copy
storage pools. The active-data pool list is active only for the life of the session
and applies to all the primary storage pools in a particular storage pool
hierarchy.
v The setting of the COPYCONTINUE parameter does not affect the simultaneous-write
function during server import. If data is being written simultaneously and a
write failure occurs to the primary storage pool or any copy storage pool, the
server import process fails.
v The setting of the COPYCONTINUE parameter does not affect the simultaneous-write
function during migration. If data is being written simultaneously and a write
failure occurs to any copy storage pool or active-data pool, the failing storage
pool is removed and the data migration process continues. Write failures to the
primary storage pool cause the migration process to fail.
If the operation involves a copy storage pool, the value of the COPYCONTINUE
parameter in the storage pool definition determines whether the client tries the
operation again:
v If the value of the COPYCONTINUE parameter is YES, the client tries the
operation again.
v If the value of the COPYCONTINUE parameter is NO, the client does not try the
operation again.
Restrictions:
v The setting of the COPYCONTINUE parameter does not affect active-data pools. If a
write failure occurs for any of active-data pools, the server stops writing to the
failing active-data pool for the remainder of the session, but continues storing
files into the primary pool and any remaining active-data pools and copy
storage pools. The active-data pool list is active only for the life of the session
and applies to all the primary storage pools in a particular storage pool
hierarchy.
v The setting of the COPYCONTINUE parameter does not affect the simultaneous-write
function during server import. If data is being written simultaneously and a
write failure occurs to the primary storage pool or any copy storage pool, the
server import process fails.
v The setting of the COPYCONTINUE parameter does not affect the simultaneous-write
function during migration. If data is being written simultaneously and a write
failure occurs to any copy storage pool or active-data pool, the failing storage
pool is removed and the data migration process continues. Write failures to the
primary storage pool cause the migration process to fail.
Suppose you use a DISK primary storage pool that is accessed by many clients at
the same time during client data-storage operations. If this storage pool is
associated with copy storage pools, active-data pools, or both, the clients might
have to wait until enough tape drives are available to perform the store operation.
In this scenario, simultaneous-write operations could extend the amount of time
required for client store operations. It might be more efficient to store the data in
the primary storage pool and use the BACKUP STGPOOL command to back up the
DISK storage pool to the copy storage pools and the COPY ACTIVEDATA command to
copy active backup data from the DISK storage pool to the active-data pools.
Resources such as disk space, tape drives, and tapes are allocated at the beginning
of a simultaneous-write operation, and typically remain allocated during the entire
operation. If, for any reason, the destination primary pool cannot contain the data
being stored, the IBM Tivoli Storage Manager server attempts to store the data into
a next storage pool in the storage hierarchy. This next storage pool typically uses a
sequential-access device class. If new resources, such as volumes or tape drives,
must be acquired for the next storage pool, the operation can take longer to complete.
To reduce the potential for switching storage pools, follow these guidelines:
v Ensure that enough space is available in the primary storage pools that are
targets for the simultaneous-write operation. For example, to make space
available, run the server migration operation before backing up or archiving
client data and before migration operations by Hierarchical Storage Management
(HSM) clients.
v The MAXSIZE parameter on the DEFINE STGPOOL and UPDATE STGPOOL commands
limits the size of the files that the Tivoli Storage Manager server can store in the
primary storage pools during client operations. Honoring the MAXSIZE parameter
for a storage pool during a store operation causes the server to switch pools. To
prevent switching pools, avoid using this parameter if possible.
For example, you can configure production servers to store mission critical data in
one storage pool hierarchy and use the simultaneous-write function to back up the
data to copy storage pools and an active-data pool. See Figure 40. In addition, you
can configure the servers to store noncritical, workstation data in another storage
pool hierarchy and back up that data using the BACKUP STGPOOL command.
Figure 40. Separate storage pool hierarchies for different types of data
This example also shows how to use the COPY ACTIVEDATA command to copy active
data from primary storage pools to an on-site sequential-access disk (FILE)
active-data pool. When designing a backup strategy, carefully consider your own
system, data storage, and disaster-recovery requirements.
1. Define the following storage pools:
v Two copy storage pools, ONSITECOPYPOOL and DRCOPYPOOL
v One active-data pool, ACTIVEDATAPOOL
v Two primary storage pools, DISKPOOL and TAPEPOOL
As part of the storage pool definition for DISKPOOL, specify TAPEPOOL as the
next storage pool, ONSITECOPYPOOL as the copy storage pool, and
ACTIVEDATAPOOL as the active-data pool. Set the copy continue parameter
for copy storage pools to YES. If an error occurs writing to a copy storage pool,
the operation will continue storing data into the primary pool, the remaining
copy storage pool, and the active-data pool.
define stgpool tapepool mytapedevice
define stgpool onsitecopypool mytapedevice pooltype=copy
define stgpool drcopypool mytapedevice pooltype=copy
define stgpool activedatapool mydiskdevice pooltype=activedata
define stgpool diskpool mydiskdevice nextstgpool=tapepool
   copystgpools=onsitecopypool copycontinue=yes activedatapools=activedatapool
You can set collocation for each sequential-access storage pool when you define or
update the pool.
Figure 41 shows an example of collocation by client node with three clients, each
having a separate volume containing that client's data.
When collocation is disabled, the server attempts to use all available space on each
volume before selecting a new volume. While this process provides better
utilization of individual volumes, user files can become scattered across many
volumes. Figure 43 on page 364 shows an example of collocation disabled, with
three clients sharing space on a single volume.
Collocation by group is the Tivoli Storage Manager system default for primary
sequential-access storage pools. The default for copy storage pools and active-data
pools is no collocation.
During the following server operations, all the data belonging to a collocation
group, a single client node, or a single client file space is moved or copied by one
process. For example, if data is collocated by group, all data for all nodes
belonging to the same collocation group is migrated by the same process.
1. Moving data from random-access and sequential-access volumes
2. Moving node data from sequential-access volumes
3. Backing up a random-access or sequential-access storage pool
4. Restoring a sequential-access storage pool
5. Reclamation of a sequential-access storage pool or off-site volumes
6. Migration from a random-access storage pool.
When collocating node data, the Tivoli Storage Manager server attempts to keep
files together on a minimal number of sequential-access storage volumes. However,
when the server is backing up data to volumes in a sequential-access storage pool,
the backup process has priority over collocation settings. As a result, the server
completes the backup, but might not be able to collocate the data. For example,
suppose you are collocating by node, and you specify that a node can use two
mount points on the server. Suppose also that the data being backed up from the
node could easily fit on one tape volume. During backup, the server might mount
two tape volumes, and the node's data might be distributed across two tapes,
rather than one.
If collocation is by node or file space, nodes or file spaces are selected for
migration based on the amount of data to be migrated. The node or file space with
the most data is migrated first. If collocation is by group, all nodes in the storage
pool are first evaluated to determine which node has the most data. The node with
the most data is migrated first along with all the data for all the nodes belonging
to that collocation group regardless of the amount of data in the nodes' file spaces
or whether the low migration threshold has been reached.
One reason to collocate by group is that individual client nodes often do not have
sufficient data to fill high-capacity tape volumes. Collocating data by groups of
nodes can reduce unused tape capacity by putting more collocated data on
individual tapes. In addition, because all data belonging to all nodes in the same
collocation group are migrated by the same process, collocation by group can
reduce the number of times a volume containing data to be migrated needs to be
mounted. Collocation by group can also minimize database scanning and reduce
tape passes during data transfer from one sequential-access storage pool to another.
Table 39 shows how the Tivoli Storage Manager server selects the first volume
when collocation is enabled for a storage pool at the client-node, collocation-group,
and file-space level.
Table 39. How the server selects volumes when collocation is enabled
Volume
selection  When collocation is       When collocation is       When collocation is
order      by group                  by node                   by file space
1          A volume that already     A volume that already     A volume that already
           contains files from the   contains files from the   contains files from the
           collocation group to      same client node          same file space of that
           which the client belongs                            client node
2          An empty predefined       An empty predefined       An empty predefined
           volume                    volume                    volume
3          An empty scratch volume   An empty scratch volume   An empty scratch volume
4          A volume with the most    A volume with the most    A volume containing data
           available free space      available free space      from the same client node
           among volumes that        among volumes that
           already contain data      already contain data
5          Not applicable            Not applicable            A volume with the most
                                                               available free space among
                                                               volumes that already
                                                               contain data
When the server needs to continue to store data on a second volume, it uses the
following selection order to acquire additional space:
1. An empty predefined volume
2. An empty scratch volume
3. A volume with the most available free space among volumes that already
contain data
4. Any available volume in the storage pool
When collocation is by client node or file space, the server attempts to provide the
best use of individual volumes while minimizing the mixing of files from different
clients or file spaces on volumes. This is depicted in Figure 44 on page 367, which
shows that volume selection is horizontal, where all available volumes are used
before all available space on each volume is used. A, B, C, and D represent files
from four different client nodes.
Remember:
1. If collocation is by node and the node has multiple file spaces, the server does
not attempt to collocate those file spaces.
2. If collocation is by file space and a node has multiple file spaces, the server
attempts to put data for different file spaces on different volumes.
Figure 44. Using all available sequential access storage volumes with collocation enabled at
the node or file space level
When collocation is by group, the server attempts to collocate data from nodes
belonging to the same collocation group. As shown in the Figure 45, data for the
following groups of nodes has been collocated:
v Group 1 consists of nodes A, B, and C
v Group 2 consists of nodes D and E
v Group 3 consists of nodes F, G, H, and I
Whenever possible, the Tivoli Storage Manager server collocates data belonging to
a group of nodes on a single tape, as represented by Group 2 in the figure. Data
for a single node can also be spread across several tapes associated with a group
(Group 1 and 2). If the nodes in the collocation group have multiple file spaces, the
server does not attempt to collocate those file spaces.
Figure 45. Using all available sequential access storage volumes with collocation enabled at
the group level
Remember: Normally, the Tivoli Storage Manager server always writes data to the
current filling volume for the operation being performed. Occasionally, however,
you might notice more than one filling volume in a collocated storage pool. This
can occur if different server processes or client sessions attempt to store data into
the collocated pool at the same time. In this situation, Tivoli Storage Manager will
allocate a volume for each process or session needing a volume so that both
operations complete as quickly as possible.
When the server needs to continue to store data on a second volume, it attempts to
select an empty volume. If none exists, the server attempts to select any remaining
available volume in the storage pool.
Figure 46. Using all available space on sequential volumes with collocation disabled
For example, if collocation is off for a storage pool and you turn it on, from then on
client files stored in the pool are collocated. Files that had previously been stored
in the pool are not moved to collocate them. As volumes are reclaimed, however,
the data in the pool tends to become more collocated. You can also use the MOVE
DATA or MOVE NODEDATA commands to move data to new volumes to increase
collocation. However, this causes an increase in the processing time and the
volume mount activity.
Remember: A mount wait can occur or increase when collocation by file space is
enabled and a node has a volume containing multiple file spaces. If a volume is
eligible to receive data, Tivoli Storage Manager will wait for that volume.
Using collocation on copy storage pools and active-data pools requires special
consideration.
Primary storage pools perform a different recovery role from the role performed by
copy storage pools and active-data pools. Normally you use primary storage pools
(or active-data pools) to recover data to clients directly. In a disaster, when both
clients and the server are lost, you might use off-site active-data pool volumes to
recover data directly to clients and the copy storage pool volumes to recover the
primary storage pools. The types of recovery scenarios that concern you the most
will help you to determine whether to use collocation on your copy storage pools
and active-data pools.
Collocation typically results in partially filled volumes when you collocate by node
or by file space. (Partially filled volumes are less prevalent, however, when you
collocate by group.) Partially filled volumes might be acceptable for primary
storage pools because the volumes remain available and can be filled during the
next migration process. However, this may be unacceptable for copy storage pools
and active-data pools whose storage pool volumes are taken off-site immediately. If
you use collocation for copy storage pools or active-data pools, you must decide
among the following:
v Taking more partially filled volumes off-site, thereby increasing the reclamation
activity when the reclamation threshold is lowered or reached. Remember that the
rate of reclamation for volumes in an active-data pool is typically faster than the
rate for volumes in other types of storage pools.
v Leaving these partially filled volumes on-site until they fill, and risking not having
an off-site copy of the data on these volumes.
v Collocating by group in order to use as much tape capacity as possible.
With collocation disabled for a copy storage pool or an active-data pool, typically
there will be only a few partially filled volumes after data is backed up to the copy
storage pool or copied to the active-data pool.
Consider your options carefully before using collocation for copy storage pools and
active-data pools. Even if you use collocation for your primary storage pools, you
may want to disable collocation for copy storage pools and active-data pools.
Collocation on copy storage pools or active-data pools might be desirable if you
have few clients, but each of them has large amounts of incremental backup data
each day.
Table 40 lists the four collocation options that you can specify on the DEFINE
STGPOOL and UPDATE STGPOOL commands. The table also describes the effects of
collocation on data which belongs to nodes that are members of collocation groups
and nodes that are not members of any collocation group.
Table 40. Collocation options and effects on node data
No
If a node is not defined as a member of a collocation group: the data for the node is not collocated.
If a node is defined as a member of a collocation group: the data for the node is not collocated.
Group
If a node is not defined as a member of a collocation group: the server stores the data for the node on as few volumes in the storage pool as possible.
If a node is defined as a member of a collocation group: the server stores the data for the node and for other nodes that belong to the same collocation group on as few volumes as possible.
Node
If a node is not defined as a member of a collocation group: the server stores the data for the node on as few volumes as possible.
If a node is defined as a member of a collocation group: the server stores the data for the node on as few volumes as possible.
Filespace
If a node is not defined as a member of a collocation group: the server stores the data for the node's file space on as few volumes as possible. If a node has multiple file spaces, the server stores the data for different file spaces on different volumes in the storage pool.
If a node is defined as a member of a collocation group: the server stores the data for the node's file space on as few volumes as possible. If a node has multiple file spaces, the server stores the data for different file spaces on different volumes in the storage pool.
When deciding whether and how to collocate data, do the following steps:
1. Familiarize yourself with the potential advantages and disadvantages of
collocation, in general. For a summary of effects of collocation on operations,
see Table 38 on page 364.
2. If the decision is to collocate, determine how the data is to be organized,
whether by client node, group of client nodes, or file space. If the decision is to
collocate by group, you must decide how to group nodes:
v If the goal is to save space, you might want to group small nodes together to
better use tapes.
v If the goal is potentially faster client restores, group nodes together so that
they fill as many tapes as possible. Doing so increases the probability that
individual node data will be distributed across two or more tapes and that
more tapes can be mounted simultaneously during a multi-session No Query
Restore operation.
v If the goal is to departmentalize data, then you can group nodes by
department.
3. If collocation by group is the wanted result:
a. Define collocation groups with the DEFINE COLLOCGROUP command.
b. Add client nodes to the collocation groups with the DEFINE COLLOCMEMBER
command.
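For example, you might create a hypothetical collocation group named DEPT_ENG and add three nodes to it (the group and node names here are assumptions, not values from this guide):
define collocgroup dept_eng
define collocmember dept_eng node1,node2,node3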
The following query commands are available to help in collocating groups:
QUERY COLLOCGROUP
Displays the collocation groups defined on the server.
QUERY NODE
Displays the collocation group, if any, to which a node belongs.
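For example (the node name is an assumption):
query collocgroup
query node engnode format=detailed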
Tip: If you use collocation, but want to reduce the number of media mounts and
use space on sequential volumes more efficiently, you can:
v Define a storage pool hierarchy and policy to require that backed-up, archived,
or space-managed files are stored initially in disk storage pools.
When files are migrated from a disk storage pool, the server attempts to migrate
all files that belong to the client node or collocation group that is using the most
disk space in the storage pool. This process works well with the collocation
option because the server tries to place all of the files from a particular client on
the same sequential-access storage volume.
v Use scratch volumes for sequential-access storage pools to allow the server to
select new volumes for collocation.
v Specify the client option COLLOCATEBYFILESPEC to limit the number of tapes to
which objects associated with one file specification are written. This collocation
option makes collocation by the server more efficient; it does not override
collocation by file space or collocation by node.
For details about the COLLOCATEBYFILESPEC option, see the Backup-Archive Clients
Installation and User's Guide.
When creating collocation groups, keep in mind that the ultimate destination of the
data that belongs to nodes in a collocation group depends on the policy domain to
which nodes belong. For example, suppose that you create a collocation group that
consists of nodes that belong to Policy Domain A. Policy Domain A specifies an
active-data pool as the destination of active data only and has a backup copy
group that specifies a primary storage pool, Primary1, as the destination for active
and inactive data. Other nodes in the same collocation group belong to a domain,
Policy Domain B, that does not specify an active-data pool, but that has a backup
copy group that specifies Primary1 as the destination for active and inactive data.
Primary1 has a designated copy storage pool. The collocation setting on
Primary1, the copy storage pool, and the active-data pool is GROUP.
The server reclaims the space in storage pools based on a reclamation threshold that
you can set for each sequential-access storage pool. When the percentage of space
that can be reclaimed on a volume rises above the reclamation threshold, the
server reclaims the volume.
Restrictions:
v Storage pools defined with the NETAPPDUMP, CELERRADUMP, or
NDMPDUMP data format cannot be reclaimed. However, you can use the
MOVE DATA command to move data out of a volume so that the volume can
be reused. The volumes in the target storage pool must have the same data
format as the volumes in the source storage pool.
v Storage pools defined with a CENTERA device class cannot be reclaimed.
The server checks whether reclamation is needed at least once per hour and begins
space reclamation for eligible volumes. During space reclamation, the server copies
files that remain on eligible volumes to other volumes. For example, Figure 47 on
page 373 shows that the server consolidates the files from tapes 1, 2, and 3 on tape
4. During reclamation, the server copies the files to volumes in the same storage
pool unless you have specified a reclamation storage pool. Use a reclamation
storage pool to allow automatic reclamation for a storage pool with only one drive.
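For example, to direct reclamation of a single-drive tape pool to a disk-based pool, you might enter something like the following (both pool names are assumptions):
update stgpool onedrivepool reclaimstgpool=filereclaimpool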
Remember: To prevent contention for the same tapes, the server does not allow a
reclamation process to start if a DELETE FILESPACE process is active. The server
checks every hour whether the DELETE FILESPACE process has completed so
that the reclamation process can start. After the DELETE FILESPACE process has
completed, reclamation begins within one hour.
The server also reclaims space within an aggregate. An aggregate is a physical file
that contains multiple logical files that are backed up or archived from a client in a
single transaction.
After the server moves all readable files to other volumes, one of the following
occurs for the reclaimed volume:
v If you have explicitly defined the volume to the storage pool, the volume
becomes available for reuse by that storage pool.
v If the server acquired the volume as a scratch volume, the server deletes the
volume from the Tivoli Storage Manager database.
Volumes that have a device type of SERVER are reclaimed in the same way as
other sequential-access volumes. However, because the volumes are actually data
stored in the storage of another Tivoli Storage Manager server, the reclamation
process can consume network resources. See Controlling reclamation of virtual
volumes on page 378 for details about how the server reclaims these types of
volumes.
Volumes in copy storage pools and active-data pools are reclaimed in the same
manner as volumes in primary storage pools, except for the following:
Reclamation thresholds
Space is reclaimable because it is occupied by files that have been expired or
deleted from the Tivoli Storage Manager database, or because the space has never
been used. The reclamation threshold indicates how much reclaimable space a
volume must have before the server reclaims the volume.
The server checks whether reclamation is needed at least once per hour. The lower
the reclamation threshold, the more frequently the server tries to reclaim space.
Frequent reclamation optimizes the use of a sequential-access storage pool's space,
but can interfere with other processes, such as backups from clients.
If you set the reclamation threshold to 50% or greater, the server can combine the
usable files from two or more volumes onto a single new volume.
For example, if the reclamation threshold is set to 100%, you can lower the threshold
to 98% to begin reclaiming volumes. Volumes that have reclaimable space of 98% or
greater are reclaimed by the server. Lower the threshold again to reclaim more volumes.
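For example, to lower the threshold for a hypothetical tape storage pool named TAPEPOOL1, you might enter:
update stgpool tapepool1 reclaim=98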
If you lower the reclamation threshold while a reclamation process is active, the
reclamation process does not immediately stop. If an on-site volume is being
reclaimed, the server uses the new threshold setting when the process begins to
reclaim the next volume. If off-site volumes are being reclaimed, the server does
not apply the new threshold setting until the current reclamation process completes.
For copy storage pools and active-data pools, you can also use the RECLAIM
STGPOOL command to specify the maximum number of off-site storage pool
volumes the server should attempt to reclaim:
reclaim stgpool altpool duration=60 offsitereclaimlimit=230
Do not use this command if you are going to use automatic reclamation for the
storage pool. To prevent automatic reclamation from running, set the RECLAIM
parameter of the storage pool definition to 100.
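For example, to turn off automatic reclamation for the ALTPOOL storage pool shown above:
update stgpool altpool reclaim=100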
For details about the RECLAIM STGPOOL command, refer to the Administrator's
Reference.
You can specify one or more reclamation processes for each primary
sequential-access storage pool, copy storage pool, or active-data pool using the
RECLAIMPROCESS parameter on the DEFINE STGPOOL and UPDATE STGPOOL
commands.
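For example, to run two concurrent reclamation processes for each of two hypothetical tape storage pools:
update stgpool tapepool1 reclaimprocess=2
update stgpool tapepool2 reclaimprocess=2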
Each reclamation process requires at least two simultaneous volume mounts (at
least two mount points) and, if the device type is not FILE, at least two drives.
One of the drives is for the input volume in the storage pool being reclaimed. The
other drive is for the output volume in the storage pool to which files are being
moved.
When calculating the number of concurrent processes to run, you must carefully
consider the resources you have available, including the number of storage pools
that will be involved with the reclamation, the number of mount points, the
number of drives that can be dedicated to the operation, and (if appropriate) the
number of mount operators available to manage reclamation requests. The number
of available mount points and drives depends on other Tivoli Storage Manager and
system activity, and on the mount limits of the device classes for the storage pools
that are involved in the reclamation.
For more information about mount limit, see: Controlling the number of
simultaneously mounted volumes on page 194
For example, suppose that you want to reclaim the volumes from two sequential
storage pools simultaneously and that all storage pools involved have the same
device class. Each process requires two mount points and, if the device type is not
FILE, two drives. To run four reclamation processes simultaneously (two for each
storage pool), you need a total of at least eight mount points and eight drives. The
device class for each storage pool must have a mount limit of at least eight.
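For example, to raise the mount limit for a hypothetical device class named TAPECLASS:
update devclass tapeclass mountlimit=8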
If the device class for the storage pools being reclaimed does not have enough
mount points or drives, you can use the RECLAIMSTGPOOL parameter to direct
the reclamation to a storage pool with a different device class that has the
additional mount points or drives.
If the number of reclamation processes you specify is more than the number of
available mount points or drives, the processes that do not obtain mount points or
drives will wait indefinitely or until the other reclamation processes complete and
mount points or drives become available.
The Tivoli Storage Manager server will start the specified number of reclamation
processes regardless of the number of volumes that are eligible for reclamation. For
example, if you specify ten reclamation processes and only six volumes are eligible
for reclamation, the server will start ten processes and four of them will complete
without processing a volume.
When the server reclaims volumes, the server moves the data from volumes in the
original storage pool to volumes in the reclamation storage pool. The server always
uses the reclamation storage pool when one is defined, even when the mount limit
is greater than one.
If the reclamation storage pool does not have enough space to hold all of the data
being reclaimed, the server moves as much of the data as possible into the
reclamation storage pool. Any data that could not be moved to volumes in the
reclamation storage pool still remains on volumes in the original storage pool.
The pool identified as the reclamation storage pool must be a primary sequential
storage pool. The primary purpose of the reclamation storage pool is for temporary
storage of reclaimed data. To ensure that data moved to the reclamation storage
pool eventually moves back into the original storage pool, specify the original
storage pool as the next pool in the storage hierarchy for the reclamation storage pool.
Finally, update the reclamation storage pool so that data migrates back to the tape
storage pool:
update stgpool reclaimpool nextstgpool=tapepool1
Tip:
v In a mixed-media library, reclaiming volumes in a storage pool defined with a
device class with a single mount point (that is, a single drive) requires one of the
following:
– At least one other drive with a compatible read/write format
– Enough disk space to create a storage pool with a device type of FILE
To prevent reclamation of WORM media, storage pools that are assigned to device
classes with a device type of WORM, WORM12, or WORM14 have a default
reclamation value of 100.
To allow reclamation, you can set the reclamation value to something lower when
defining or updating the storage pool.
To control when reclamation starts for these volumes, consider setting the
reclamation threshold to 100% for any primary storage pool that uses virtual
volumes. Lower the reclamation threshold at a time when your network is less
busy, so that the server can reclaim volumes.
For virtual volumes in a copy storage pool or an active-data pool, the server
reclaims a volume as follows:
1. The source server determines which files on the volume are still valid.
2. The source server obtains these valid files from volumes in a primary storage
pool, or if necessary, from removable-media volumes in an on-site copy storage
pool or in an on-site active-data pool. The server can also obtain files from
virtual volumes in a copy storage pool or an active-data pool.
3. The source server writes the files to one or more new virtual volumes in the
copy storage pool or active-data pool and updates its database.
4. The server issues a message indicating that the volume was reclaimed.
Tips:
v You can reclaim space in off-site volumes controlled by a z/OS media server.
v You can specify multiple concurrent reclamation processes for a primary storage
pool with a device type of SERVER. However, running multiple concurrent
processes for this type of storage pool can tie up network resources because the
data is sent across the network between the source server and target server.
Therefore, if you want to run multiple concurrent processes, do so when the
network is less busy. If multiple concurrent processing is not desired, specify a
value of 1 for the RECLAIMPROCESS parameter on the DEFINE STGPOOL or
UPDATE STGPOOL commands.
For information about using the SERVER device type, see Using virtual volumes
to store data on another server on page 737.
For off-site volumes, reclamation can occur when the percentage of unused space
on the volume is greater than the reclaim parameter value. The unused space in
copy storage pool volumes includes both space that has never been used on the
volume and space that has become empty because of file deletion or expiration.
Reclamation of copy storage pool volumes and active-data pool volumes should be
done periodically to allow the reuse of partially filled volumes that are off-site.
Reclamation can be done automatically by setting the reclamation threshold for the
copy storage pool or the active-data pool to less than 100%. However, you need to
consider controlling when reclamation occurs because of how off-site volumes are
treated. For more information, see Controlling when reclamation occurs for
off-site volumes on page 380.
Virtual Volumes: Virtual volumes (volumes that are stored on another Tivoli
Storage Manager server through the use of a device type of SERVER) cannot be set
to the off-site access mode.
Tip: You can reclaim space in off-site volumes controlled by a z/OS media server.
Reclamation of primary storage pool volumes does not affect copy storage pool
files or files in active-data pools.
When an off-site volume is reclaimed, the files on the volume are rewritten to a
read/write volume. Effectively, these files are moved back to the on-site location.
The files may be obtained from the off-site volume after a disaster, if the volume
has not been reused and the database backup that you use for recovery references
the files on the off-site volume.
If you are using the disaster recovery manager, see Moving copy storage pool and
active-data pool volumes on-site on page 1048.
Suppose you plan to make daily storage pool backups to a copy storage pool, then
mark all new volumes in the copy storage pool as offsite and send them to the
off-site storage location. This strategy works well, but there is one consideration if you
are using automatic reclamation (that is, if the reclamation threshold is less than 100%).
Each day's storage pool backups will create a number of new copy-storage pool
volumes, the last one being only partially filled. If the percentage of empty space
on this partially filled volume is higher than the reclaim percentage, this volume
becomes eligible for reclamation as soon as you mark it off-site. The reclamation
process would cause a new volume to be created with the same files on it. The
volume you take off-site would then be empty according to the Tivoli Storage
Manager database. If you do not recognize what is happening, you could
perpetuate this process by marking the new partially filled volume off-site.
One way to resolve this situation is to keep partially filled volumes on-site until
they fill up. However, this would mean a small amount of your data would be
without an off-site copy for another day.
If you send copy storage pool volumes off-site, it is recommended that you control pool
reclamation by using the default reclamation threshold value of 100. This turns reclamation off for the
copy storage pool. You can start reclamation processing at desired times by
changing the reclamation threshold for the storage pool. To monitor off-site volume
utilization and help you decide what reclamation threshold to use, enter the
following command:
query volume * access=offsite format=detailed
Depending on your data expiration patterns, you may not need to do reclamation
of off-site volumes each day. You may choose to perform off-site reclamation on a
less frequent basis. For example, suppose you ship copy-storage pool volumes to
and from your off-site storage location once a week. You can run reclamation for
the copy-storage pool weekly, so that as off-site volumes become empty they are
sent back for reuse.
When you do perform reclamation for off-site volumes, the following sequence is
recommended:
1. Back up your primary-storage pools to copy-storage pools or copy the active
data in primary-storage pools to active-data pools.
This sequence ensures that the files on the new copy-storage pool volumes and
active-data pool volumes are sent off-site, and are not inadvertently kept on-site
because of reclamation.
Alternatively, you can use the following Tivoli Storage Manager SQL SELECT
command to obtain records from the SUMMARY table for the off-site volume
reclamation operation:
select * from summary where activity='OFFSITE RECLAMATION'
Two kinds of records are displayed for the off-site reclamation process. One
volume record is displayed for each reclaimed off-site volume. However, the
volume record does not display the following items:
v The number of examined files.
v The number of affected files.
v The total bytes involved in the operation.
This information is summarized in the statistical summary record for the offsite
reclamation. The statistical summary record displays the following items:
v The number of examined files.
v The number of affected files.
v The total bytes involved in the operation.
v The number of off-site volumes that were processed.
v The number of parallel processes that were used.
v The total amount of time required for the processing.
For example, suppose a copy storage pool contains three volumes: VOL1, VOL2,
and VOL3. VOL1 has the largest amount of unused space, and VOL3 has the least
amount of unused space. Suppose further that the percentage of unused space in
each of the three volumes is greater than the value of the RECLAIM parameter. If
you do not specify a value for the OFFSITERECLAIMLIMIT parameter, all three
volumes will be reclaimed when the reclamation runs. If you specify a value of 2,
only VOL1 and VOL2 will be reclaimed when the reclamation runs. If you specify
a value of 1, only VOL1 will be reclaimed.
As a best practice, delay the reuse of any reclaimed volumes in copy storage pools
and active-data pools for as long as you keep your oldest database backup. For
more information about delaying volume reuse, see Delaying reuse of volumes
for recovery purposes on page 934.
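For example, if your oldest database backup is kept for seven days, you might delay volume reuse accordingly (the pool name is an assumption):
update stgpool copypool reusedelay=7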
If you specify collocation and multiple concurrent processes, the server attempts to
move the files for each collocation group, client node, or client file space onto as
few volumes as possible. However, if files belonging to a single collocation group
(or node or file space) are on different volumes to begin with and are being moved
at the same time by different processes, the files could be moved to separate
output volumes. For details about multiple concurrent reclamation processing, see
Optimizing drive usage using multiple concurrent reclamation processes on page
375.
See also Reducing the time to reclaim tape volumes with high capacity on page
377.
As your storage environment grows, you may want to consider how policy and
storage pool definitions affect where workstation files are stored. Then you can
define and maintain multiple storage pools in a hierarchy that allows you to
control storage costs by using sequential-access storage pools in addition to disk
storage pools, and still provide appropriate levels of service to users.
To help you determine how to adjust your policies and storage pools, get
information about how much storage is being used (by client node) and for what
purposes in your existing storage pools. For more information on how to do this,
see Obtaining information about the use of storage space on page 399.
To estimate the amount of storage space required for each random-access disk
storage pool:
v Determine the amount of disk space needed for different purposes:
To estimate the total amount of space needed for all backed-up files stored in a
single random-access (disk) storage pool, use the following formula:
Backup space = WkstSize * Utilization * VersionExpansion * NumWkst
where:
Backup Space
The total amount of storage pool disk space needed.
WkstSize
The average data storage capacity of a workstation. For example, if the
typical workstation at your installation has a 4 GB hard drive, then the
average workstation storage capacity is 4 GB.
Utilization
An estimate of the fraction of each workstation disk space used, in the
range 0 to 1. For example, if you expect that disks on workstations are 75%
full, then use 0.75.
VersionExpansion
An expansion factor (greater than 1) that takes into account the additional
backup versions, as defined in the copy group. A rough estimate allows 5%
additional files for each backup copy. For example, for a version limit of 2,
use 1.05, and for a version limit of 3, use 1.10.
NumWkst
The estimated total number of workstations that the server supports.
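For example, using the sample values above (4 GB drives, 75% utilization, a version limit of 2) and an assumed total of 100 workstations:
Backup space = 4 GB x 0.75 x 1.05 x 100 = 315 GB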
If clients use compression, the amount of space required may be less than the
amount calculated, depending on whether the data is compressible.
Work with policy administrators to calculate this percentage based on the number
and type of archive copy groups defined. For example, if policy administrators
have defined archive copy groups for only half of the policy domains in your
enterprise, then estimate that you need less than 50% of the amount of space you
have defined for backed-up files.
Because additional storage space can be added at any time, you can start with a
modest amount of storage space and increase the space by adding storage volumes
to the archive storage pool, as required.
Figure 48 shows a standard report with all storage pools defined to the system. To
monitor the use of storage pool space, review the Estimated Capacity and Pct Util
columns.
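A report like the one in Figure 48 is produced by the QUERY STGPOOL command. For example:
query stgpool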
Estimated Capacity
Specifies the space available in the storage pool in megabytes (M) or
gigabytes (G).
For a disk storage pool, this value reflects the total amount of available
space in the storage pool, including any volumes that are varied offline.
For sequential-access storage pools, estimated capacity is the total
estimated space of all the sequential-access volumes in the storage pool,
regardless of their access mode. At least one volume must be used in a
sequential-access storage pool (either a scratch volume or a private
volume) to calculate estimated capacity.
For tape and FILE, the estimated capacity for the storage pool includes the
following factors:
v The capacity of all the scratch volumes that the storage pool already
acquired or can acquire. The number of scratch volumes is defined by
the MAXSCRATCH parameter on the DEFINE STGPOOL or UPDATE STGPOOL
command.
v The capacity of all the private volumes that are defined to the storage
pool using the DEFINE VOLUME command.
The calculations for estimated capacity depend on the availability of the
storage for the device assigned to the storage pool.
Note: The value for Pct Util can be higher than the value for Pct Migr if
you query for storage pool information while a client transaction (such as a
backup) is in progress. The value for Pct Util is determined by the amount
of space actually allocated (while the transaction is in progress). The value
for Pct Migr represents only the space occupied by committed files. At the
end of the transaction, Pct Util and Pct Migr become synchronized.
For sequential-access storage pools, this value is the percentage of the total
bytes of storage available that are currently being used to store active data
(data that is not expired). Because the server can only estimate the
available capacity of a sequential-access storage pool, this percentage also
reflects an estimate of the actual utilization of the storage pool.
Figure 48 on page 386 shows that the estimated capacity for a disk storage pool
named BACKUPPOOL is 80 MB, which is the amount of available space on disk
storage. More than half (51.6%) of the available space is occupied by either backup
files or cached copies of backup files.
The estimated capacity for the tape storage pool named BACKTAPE is 180 MB,
which is the total estimated space available on all tape volumes in the storage
pool. This report shows that 85% of the estimated space is currently being used to
store workstation files.
Note: This report also shows that volumes have not yet been defined to the
ARCHIVEPOOL and ENGBACK1 storage pools, because the storage pools show
an estimated capacity of 0.0 MB.
You can query the server for information about storage pool volumes:
v General information about a volume, for example:
Current access mode and status of the volume
Amount of available space on the volume
Location
v Contents of a storage pool volume (user files on the volume)
v The volumes that are used by a client node
To request general information about all volumes defined to the server, enter:
query volume
Figure 49 shows an example of the output of this standard query. The example
illustrates that data is being stored on the 8 mm tape volume named WREN01, as
well as on several other volumes in various storage pools.
To query the server for a detailed report on volume WREN01 in the storage pool
named TAPEPOOL, enter:
query volume wren01 format=detailed
Figure 50 shows the output of this detailed query. Table 41 on page 390 gives some
suggestions on how you can use the information.
Check the Access to determine whether files can be read from or written to this
volume.
Monitor the use of storage space.
Estimated Capacity
Pct Util
The Estimated Capacity is determined by the device class associated with the
storage pool to which this volume belongs. Based on the estimated capacity, the
system tracks the percentage of space occupied by client files (Pct Util).
The Write Pass Number indicates the number of times the volume has been
written to, starting from the beginning of the volume. A value of one indicates
that a volume is being used for the first time.
In this example, WREN01 has a write pass number of two, which indicates space
on this volume may have been reclaimed or deleted once before.
Compare this value to the specifications provided with the media that you are
using. The manufacturer may recommend a maximum number of write passes
for some types of tape media. You may need to retire your tape volumes after
reaching the maximum passes to better ensure the integrity of your data. To
retire a volume, move the data off the volume by using the MOVE DATA
command. See Moving data from one volume to another volume on page 403.
Use the Number of Times Mounted, the Approx. Date Last Written, and the Approx.
Date Last Read to help you estimate the life of the volume. For example, if more
than six months have passed since the last time this volume has been written to
or read from, audit the volume to ensure that files can still be accessed. See
Auditing storage pool volumes on page 934 for information about auditing a
volume.
The number given in the field, Number of Times Mounted, is a count of the
number of times that the server has opened the volume for use. The number of
times that the server has opened the volume is not always the same as the
number of times that the volume has been physically mounted in a drive. After a
volume is physically mounted, the server can open the same volume multiple
times for different operations, for example for different client backup sessions.
Determine the location of a Location
volume in a sequential-access
When you define or update a sequential-access volume, you can give location
storage pool.
information for the volume. The detailed query displays this location name. The
location information can be useful to help you track volumes (for example,
off-site volumes in copy storage pools or active-data pools).
Determine if a volume in a Date Became Pending
sequential-access storage pool is
A sequential-access volume is placed in the pending state after the last file is
waiting for the reuse delay period
deleted or moved from the volume. All the files that the pending volume had
to expire.
contained were expired or deleted, or were moved from the volume. Volumes
remain in the pending state for as long as specified with the REUSEDELAY
parameter for the storage pool to which the volume belongs.
Because the server tracks the contents of a storage volume through its database,
the server does not need to access the requested volume to determine its contents.
To produce a report that shows the contents of a volume, issue the QUERY
CONTENT command.
This report can be extremely large and may take a long time to produce. To reduce
the size of this report, narrow your search by selecting one or all of the following
search criteria:
Node name
Name of the node whose files you want to include in the query.
File space name
Names of file spaces to include in the query. File space names are
case-sensitive and must be entered exactly as they are known to the server.
Use the QUERY FILESPACE command to find the correct capitalization.
Number of files to be displayed
Enter a positive integer, such as 10, to list the first ten files stored on the
volume. Enter a negative integer, such as -15, to list the last fifteen files
stored on the volume.
Filetype
Specifies which types of files, that is, backup versions, archive copies, or
space-managed files, or a combination of these. If the volume being
queried is assigned to an active-data pool, the only valid values are ANY
and Backup.
Format of how the information is displayed
Standard or detailed information for the specified volume.
Damaged
Specifies whether to restrict the query output either to files that are known
to be damaged, or to files that are not known to be damaged.
Copied
Specifies whether to restrict the query output to either files that are backed
Note: There are several reasons why a file might have no usable copy in a
copy storage pool:
The file was recently added to the volume and has not yet been backed
up to a copy storage pool
The file should be copied the next time the storage pool is backed
up.
The file is damaged
To determine whether the file is damaged, issue the QUERY
CONTENT command, specifying the DAMAGED=YES parameter.
The volume that contains the files is damaged
To determine which volumes contain damaged files, issue the
following command:
select * from contents where damaged=yes
The file is segmented across multiple volumes, and one or more of the
other volumes is damaged
To determine whether the file is segmented, issue the QUERY
CONTENT command, specifying the FORMAT=DETAILED
parameter. If the file is segmented, issue the following command to
determine whether any of the volumes containing the additional
file segments are damaged:
select volume_name from contents where damaged=yes and file_name like '%filename%'
For more information about using the SELECT command, see the
Administrator's Reference.
A standard report about the contents of a volume displays basic information such
as the names of files.
To view the first seven backup files on volume WREN01 from file space /usr on
client node TOMC, for example, enter:
query content wren01 node=tomc filespace=/usr count=7 type=backup
Figure 51 displays a standard report which shows the first seven files from file
space /usr on TOMC stored in WREN01.
To display detailed information about the files stored on volume VOL1, enter:
query content vol1 format=detailed
Figure 52 on page 395 displays a detailed report that shows the files stored on
VOL1. The report lists logical files and shows whether each file is part of an
aggregate. If a logical file is stored as part of an aggregate, the information in the
Segment Number, Stored Size, and Cached Copy? fields apply to the aggregate,
not to the individual logical file.
If a logical file is part of an aggregate, the Aggregated? field shows the sequence
number of the logical file within the aggregate. For example, the Aggregated? field
contains the value 2/4 for the file AB0CTGLO.IDE, meaning that this file is the
second of four files in the aggregate. All logical files that are part of an aggregate
are included in the report. An aggregate can be stored on more than one volume,
and therefore not all of the logical files in the report may actually be stored on the
volume being queried.
For disk volumes, the Cached Copy? field identifies whether the file is a cached
copy of a file that has been migrated to the next storage pool in the hierarchy.
The SELECT command queries the VOLUMEUSAGE table in the Tivoli Storage
Manager database. For example, to get a list of volumes used by the EXCH1 client
node in the TAPEPOOL storage pool, enter the following command:
select volume_name from volumeusage where node_name='EXCH1' and stgpool_name='TAPEPOOL'
For more information about using the SELECT command, see the Administrator's
Reference.
Four fields on the standard storage-pool report provide you with information
about the migration process. They include:
Pct Migr
Specifies the percentage of data in each storage pool that can be migrated.
This value is used to determine when to start or stop migration.
For random-access and sequential-access disk storage pools, this value
represents the amount of disk space occupied by backed-up, archived, or
space-managed files that can be migrated to another storage pool. The
calculation for random-access disk storage pools excludes cached data, but
includes files on volumes that are varied offline.
For sequential-access tape and optical storage pools, this value is the
percentage of the total volumes in the storage pool that actually contain
data at the moment. For example, assume a storage pool has four explicitly
defined volumes, and a maximum scratch value of six volumes. If only
two volumes actually contain data at the moment, then Pct Migr is 20%.
This field is blank for copy storage pools and active-data pools.
High Mig Pct
Specifies when the server can begin migrating data from this storage pool.
Migration can begin when the percentage of data that can be migrated
reaches this threshold. (This field is blank for copy storage pools and
active-data pools.)
Low Mig Pct
Specifies when the server can stop migrating data from this storage pool.
Migration can end when the percentage of data that can be migrated falls
below this threshold. (This field is blank for copy storage pools and
active-data pools.)
Next Storage Pool
Specifies the primary storage pool destination to which data is migrated.
(This field is blank for copy storage pools and active-data pools.)
Figure 48 on page 386 shows that the migration thresholds for BACKUPPOOL
storage pool are set to 50% for the high migration threshold and 30% for the low
migration threshold.
When the amount of migratable data stored in the BACKUPPOOL storage pool
reaches 50%, the server can begin to migrate files to BACKTAPE.
See Figure 53 on page 397 for an example of the results of this command.
If caching is on for a disk storage pool and files are migrated, the Pct Util value
does not change because the cached files still occupy space in the disk storage pool.
You can query the server to monitor the migration process by entering:
query process
Tip: Do this only if you received an out-of-space message for the storage pool to
which data is being migrated.
The Pct Util value includes cached data on a volume (when cache is enabled) and
the Pct Migr value excludes cached data. Therefore, when cache is enabled and
migration occurs, the Pct Migr value decreases while the Pct Util value remains the
same. The Pct Util value remains the same because the migrated data remains on
the volume as cached data. In this case, the Pct Util value only decreases when the
cached data expires.
If you update a storage pool from CACHE=YES to CACHE=NO, the cached files
will not disappear immediately. The Pct Util value will be unchanged. The cache
space will be reclaimed over time as the server needs the space, and no additional
cached files will be created.
Figure 56 on page 399 displays a detailed report for the storage pool.
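For example, to produce a detailed report for a storage pool (the pool name is an assumption):
query stgpool backuppool format=detailed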
When Cache Migrated Files? is set to Yes, the value for Pct Util should not change
because of migration, because cached copies of files migrated to the next storage
pool remain in disk storage.
This example shows that utilization remains at 42%, even after files have been
migrated to the BACKTAPE storage pool, and the current amount of data eligible
for migration is 29.6%.
When Cache Migrated Files? is set to No, the value for Pct Util more closely
matches the value for Pct Migr because cached copies are not retained in disk
storage.
Each report gives two measures of the space in use by a storage pool:
v Logical space occupied
The amount of space used for logical files. A logical file is a client file. A logical
file is stored either as a single physical file, or in an aggregate with other logical
files. The logical space occupied in active-data pools includes the space occupied
by inactive logical files. Inactive logical files in active-data pools are removed by
reclamation.
v Physical space occupied
The amount of space used for physical files. A physical file is either a single
logical file, or an aggregate composed of logical files.
An aggregate might contain empty space that was used by logical files that are
now expired or deleted, or that were deactivated in active-data pools. Therefore,
the amount of space used by physical files is equal to or greater than the space
used by logical files. The difference gives you a measure of how much unused
space any aggregates may have. The unused space can be reclaimed in
sequential storage pools.
You can also use this report to evaluate the average size of workstation files stored
in server storage.
To determine the amount of server storage space used by the /home file space
belonging to the client node MIKE, for example, enter:
query occupancy mike /home
File space names are case-sensitive and must be entered exactly as they are known
to the server. To determine the correct capitalization, issue the QUERY FILESPACE
command. For more information, see Managing file spaces on page 454.
Figure 57 shows the results of the query. The report shows the number of files
backed up, archived, or migrated from the /home file space belonging to MIKE.
The report also shows how much space is occupied in each storage pool.
If you back up the ENGBACK1 storage pool to a copy storage pool, the copy
storage pool would also be listed in the report. To determine how many of the
client node's files in the primary storage pool have been backed up to a copy
storage pool, compare the number of files in each pool type for the client node.
Node Name   Type   Filespace   Storage     Number     Physical   Logical
                   Name        Pool Name   of Files   Space      Space
                                                      Occupied   Occupied
                                                      (MB)       (MB)
---------   ----   ---------   ---------   --------   --------   --------
MIKE        Bkup   /home       ENGBACK1    513        3.52       3.01
For details about the QUERY NODEDATA command, refer to the Administrator's
Reference.
To query the server for the amount of data stored in backup tape storage pools
belonging to the TAPECLASS device class, for example, enter:
query occupancy devclass=tapeclass
Figure 58 displays a report on the occupancy of tape storage pools assigned to the
TAPECLASS device class.
Tip: For archived data, you might see (archive) in the Filespace Name column
instead of a file space name. This means that the data was archived before
collocation by file space was supported by the server.
For example, to request a report about backup versions stored in the disk storage
pool named BACKUPPOOL, enter:
query occupancy stgpool=backuppool type=backup
Figure 59 displays a report on the amount of server storage used for backed-up
files.
You can use this average to estimate the capacity required for additional storage
pools that are defined to the server.
For information about planning storage space, see Estimating space needs for
storage pools on page 383 and Estimating space for archived files in
random-access storage pools on page 385.
To request information about the amount of free disk space in each directory for all
device classes with a device type of FILE, issue the QUERY DIRSPACE command.
Figure 60. A report of the free disk space for all device classes of device type FILE
To obtain the amount of free space associated with a particular device class, issue
the following command:
query dirspace device_class_name
During the data movement process, users cannot access the volume to restore or
retrieve files, and no new files can be written to the volume.
Remember:
v Files in a copy storage pool or an active-data pool do not move when primary
files are moved.
v You cannot move data into or out of a storage pool defined with a CENTERA
device class.
v In addition to moving data from volumes in storage pools that have NATIVE or
NONBLOCK data formats, you can also move data from volumes in storage
pools that have NDMP data formats (NETAPPDUMP, CELERRADUMP, or
NDMPDUMP). The target storage pool must have the same data format as the
source storage pool. If you are moving data out of a storage pool for the
purpose of upgrading to new tape technology, the target primary storage pool
must be associated with a library that has the new device for the tape drives.
Moving files from one volume to other volumes in the same storage pool is useful:
v When you want to free up all space on a volume so that it can be deleted from
the Tivoli Storage Manager server
See Deleting storage pool volumes on page 415 for information about deleting
backed-up, archived, or space-managed data before you delete a volume from a
storage pool.
v When you need to salvage readable files from a volume that has been damaged
v When you want to delete cached files from disk volumes
If you want to force the removal of cached files, you can delete them by moving
data from one volume to another volume. During the move process, the server
deletes cached files remaining on disk volumes.
If you move data between volumes within the same storage pool and you run out
of space in the storage pool before all data is moved from the target volume, then
you cannot move all the data from the target volume. In this case, consider moving
data to available space in another storage pool as described in Data movement to
a different storage pool.
Remember: Data cannot be moved from a primary storage pool to a copy storage
pool or to an active-data pool. Data in a copy storage pool or an active-data pool
cannot be moved to another storage pool.
You can move data from random-access storage pools to sequential-access storage
pools. For example, if you have a damaged disk volume and you have a limited
amount of disk storage space, you could move all files from the disk volume to a
tape storage pool. Moving files from a disk volume to a sequential storage pool
may require many volume mount operations if the target storage pool is
collocated. Ensure that you have sufficient personnel and media to move files from
disk to sequential storage.
When a data move from a shred pool is complete, the original data is shredded.
However, if the destination is not another shred pool, you must set the
SHREDTONOSHRED parameter to YES to force the movement to occur. If this
value is not specified, the server issues an error message and does not allow the
data to be moved. See Securing sensitive client data on page 541 for more
information about shredding.
Processing of the MOVE DATA command for volumes in copy-storage pools and
active-data pools is similar to that of primary-storage pools, with the following
exceptions:
v Volumes in copy-storage pools and active-data pools might be set to an access
mode of offsite, making them ineligible to be mounted. During processing of the
MOVE DATA command, valid files on off-site volumes are copied from the
original files in the primary-storage pools. In this way, valid files on off-site
volumes are copied without having to mount these volumes. These new copies
of the files are written to another volume in the copy-storage pool or active-data
pool.
v With the MOVE DATA command, you can move data from any primary-storage
pool volume to any primary-storage pool. However, you can move data from a
copy-storage pool volume only to another volume within the same copy-storage
pool. Similarly, you can move data from an active-data pool volume only to
another volume within the same active-data pool.
When you move files from a volume marked as off-site, the server performs the
following actions:
1. Determines which files are still active on the volume from which you are
moving data
2. Obtains these active files from a primary-storage pool or from another
copy-storage pool or active-data pool
3. Copies the files to one or more volumes in the destination copy-storage pool or
active-data pool
Processing of the MOVE DATA command for primary-storage pool volumes does
not affect copy-storage pool or active-data pool files.
Moving data
You can move data using the MOVE DATA command. Before moving data,
however, take steps to ensure that the move operation succeeds.
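For example, to move the files from the disk volume that appears in the messages that follow, you could enter:
move data /dev/vol3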
When you move data from a volume, the server starts a background process and
sends informational messages, such as:
ANR1140I Move Data process started for volume /dev/vol3
(process ID 32).
Remember:
v A volume might not be totally empty after a move data operation completes. For
example, the server may be unable to relocate one or more files to another
volume because of input/output errors on the device or because errors were
found in the file. You can use the DELETE VOLUME command with
DISCARDDATA=YES to delete the volume and any remaining files. The server
then deletes the remaining files that had I/O or other errors.
v In addition to moving data from volumes in storage pools that have NATIVE or
NONBLOCK data formats, you can also move data from volumes in storage
pools that have NDMP data formats (NETAPPDUMP, CELERRADUMP, or
NDMPDUMP). The target storage pool must have the same data format as the
source storage pool. If you are moving data out of a storage pool for the
purpose of upgrading to new tape technology, the target primary storage pool
must be associated with a library that has the new device for the tape drives.
Figure 61 on page 407 shows an example of the report that you receive about the
data movement process.
Remember:
1. Reclaiming empty space in NDMP-generated images is not an issue because
NDMP-generated images are not aggregated.
2. Reconstruction removes inactive backup files in active-data pools. Specifying
RECONSTRUCT=NO when moving data from volumes in an active-data pool
prevents the inactive backup files from being removed.
For example, to see how much data has moved from the source volume in the
move operation example, enter:
query volume /dev/vol3 stgpool=backuppool
Near the beginning of the move process, querying the volume from which data is
being moved gives the following results:
Volume Name Storage Device Estimated Pct Volume
Pool Name Class Name Capacity Util Status
--------------- ----------- ---------- --------- ----- --------
/dev/vol3 BACKUPPOOL DISK 15.0 M 59.9 On-Line
Querying the volume to which data is being moved (VOL1, according to the
process query output) gives the following results:
Volume Name Storage Device Estimated Pct Volume
Pool Name Class Name Capacity Util Status
---------------- ----------- ---------- --------- ----- --------
VOL1 STGTMP1 8500DEV 4.9 G 0.3 Filling
At the end of the move process, querying the volume from which data was moved
gives the following results:
Volume Name Storage Device Estimated Pct Volume
Pool Name Class Name Capacity Util Status
---------------- ---------- ---------- --------- ----- --------
/dev/vol3 BACKUPPOOL DISK 15.0 M 0.0 On-Line
When the source storage pool is a primary storage pool, you can move data to
other volumes within the same pool or to another primary storage pool. When the
source storage pool is a copy storage pool, data can only be moved to other
volumes within that storage pool. When the source storage pool is an active-data
pool, data can only be moved to other volumes within that same storage pool.
Tips:
v In addition to moving data from volumes in storage pools that have NATIVE or
NONBLOCK data formats, you can also move data from volumes in storage
pools that have NDMP data formats (NETAPPDUMP, CELERRADUMP, or
NDMPDUMP). The target storage pool must have the same data format as the
source storage pool.
v If you are moving files within the same storage pool, there must be volumes
available that do not contain the data you are moving. That is, the server cannot
use a destination volume containing data that will need to be moved.
v When moving data from volumes in an active-data pool, you have the option of
reconstructing file aggregates during data movement. Reconstruction removes
inactive backup files in the pool. Specifying no reconstruction prevents the
inactive files from being removed.
v You cannot move node data into or out of a storage pool defined with a
CENTERA device class.
Best practice: Avoid movement of data into, out of, or within a storage pool while
MOVE NODEDATA is concurrently processing data on the same storage pool.
For example, consider moving data for a single node and restricting the data
movement to files in a specific non-Unicode file space (for this example, \\eng\e$)
as well as a specific Unicode file space (for this example, \\eng\d$ ). The node
name owning the data is ENGINEERING and it currently has data stored in the
ENGPOOL storage pool. After the move is complete, the data is located in the
destination storage pool BACKUPPOOL. To move the data enter the following:
move nodedata engineering fromstgpool=engpool
tostgpool=backuppool filespace=\\eng\e$ unifilespace=\\eng\d$
Another example is to move data for a single node named MARKETING from all
primary sequential-access storage pools to a random-access storage pool named
DISKPOOL. First obtain a list of storage pools that contain data for node
MARKETING, issue either:
query occupancy marketing
or
select * from occupancy where node_name='MARKETING'
For this example the list of resulting storage pool names all begin with the
characters FALLPLAN. To move the data repeat the following command for every
instance of FALLPLAN. The following example displays the command for
FALLPLAN3:
move nodedata marketing fromstgpool=fallplan3
tostgpool=diskpool
A final example shows moving both non-Unicode and Unicode file spaces for a
node. For node NOAH move non-Unicode file space \\servtuc\d$ and Unicode
Figure 62 shows an example of the report that you receive about the data
movement process.
When you rename a storage pool, any administrators with restricted storage
privilege for the storage pool automatically have restricted storage privilege to the
storage pool under the new name. If the renamed storage pool is in a storage pool
hierarchy, the hierarchy is preserved.
Copy groups and management classes might contain a storage pool name as a
destination. If you rename a storage pool used as a destination, the destination in a
copy group or management class is not changed to the new name of the storage
pool. To continue to use the policy with the renamed storage pool as a destination,
you must change the destination in the copy groups and management classes. You
then activate the policy set with the changed destinations.
To define a copy storage pool, issue the DEFINE STGPOOL command and specify
POOLTYPE=COPY. To define an active-data pool, issue the DEFINE STGPOOL
command and specify POOLTYPE=ACTIVEDATA. When you define a copy
storage pool or an active-data pool, be prepared to provide some or all of the
information in Table 42.
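As an illustration only (the pool names, device class name, and MAXSCRATCH value below are placeholders, not values from this guide), the definitions might look like the following commands:
define stgpool copypool tapeclass pooltype=copy maxscratch=100
define stgpool activedatapool tapeclass pooltype=activedata maxscratch=100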
Remember:
1. To back up a primary storage pool to an active-data pool, the data format must
be NATIVE or NONBLOCK. You can back up a primary storage pool to a copy
storage pool using NATIVE, NONBLOCK, or any of the NDMP formats. The
target storage pool must have the same data format as the source storage pool.
2. You cannot define copy storage pools or active-data pools for a Centera device
class.
Table 42. Information for defining copy storage pools and active-data pools
Device class
    Specifies the name of the device class assigned for the storage pool. This
    is a required parameter.
Pool type
    Specifies that you want to define a copy storage pool or an active-data
    pool. This is a required parameter. You cannot change the pool type
    when updating a storage pool.
Maximum number of scratch volumes
    For automated libraries, set this value equal to the physical capacity of
    the library. For details, see Maintaining a supply of scratch volumes in
    an automated library on page 163.
Collocation
    When collocation is enabled, the server attempts to keep all files
    belonging to a group of client nodes, a single client node, or a client file
    space on a minimal number of sequential-access storage volumes. See
    Collocation of copy storage pools and active-data pools on page 369.
Reclamation threshold
    Specifies when to initiate reclamation of volumes in the copy storage
    pool or active-data pool. Reclamation is a process that moves any
    remaining files from one volume to another volume, thus making the
    original volume available for reuse. A volume is eligible for reclamation
    when the percentage of unused space on the volume is greater than the
    reclaim parameter value.
For more information, see Backing up primary storage pools on page 930.
To store data in the new storage pool, you must back up the primary storage pools
(BACKUPPOOL, ARCHIVEPOOL, and SPACEMGPOOL) to the
DISASTER-RECOVERY pool. See Backing up primary storage pools on page 930.
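For example, using the pool names from this section, each primary storage pool could be backed up to the copy storage pool with commands of the following form:
backup stgpool backuppool disaster-recovery
backup stgpool archivepool disaster-recovery
backup stgpool spacemgpool disaster-recovery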
If files that are not cached are deleted from a primary storage pool volume, any
copies of these files in copy storage pools and active-data pools will also be
deleted.
Files in a copy storage pool or an active-data pool are never deleted unless:
v Data retention is off, or the files have met their retention criterion.
You cannot delete a Centera volume if the data in the volume was stored using a
server with retention protection enabled and if the data has not expired.
Tip: If you are deleting many volumes, delete the volumes one at a time.
Concurrently deleting many volumes can adversely affect server performance.
You can delete empty storage pool volumes. For example, to delete an empty
volume named WREN03, enter:
delete volume wren03
Volumes in a shred pool (DISK pools only) are not deleted until shredding is
completed. See Securing sensitive client data on page 541 for more information.
After you respond yes, the server generates a background process to delete the
volume.
Tips:
1. The Tivoli Storage Manager server will not delete archive files that are on
deletion hold.
2. If archive retention protection is enabled, the Tivoli Storage Manager server
will delete only archive files whose retention period has expired.
3. Volumes in a shred pool (DISK pools only) are not deleted until the data on
them is shredded. See Securing sensitive client data on page 541 for more
information.
For example, to discard all data from volume WREN03 and delete the volume
from its storage pool, enter:
delete volume wren03 discarddata=yes
The server generates a background process and deletes data in a series of batch
database transactions. After all files have been deleted from the volume, the server
deletes the volume from the storage pool. If the volume deletion process is
canceled or if a system failure occurs, the volume might still contain data. Reissue
the DELETE VOLUME command and explicitly request the server to discard the
remaining files on the volume.
To delete a volume but not the files it contains, move the files to another volume.
See Moving data from one volume to another volume on page 403 for
information about moving data from one volume to another volume.
Residual data: Even after you move data, residual data may remain on the
volume because of I/O errors or because of files that were previously marked as
damaged. (Tivoli Storage Manager does not move files that are marked as
damaged.) To delete any volume that contains residual data that cannot be moved,
you must explicitly specify that files should be discarded from the volume.
When the Tivoli Storage Manager server is installed, the Tivoli Storage Manager
backup-archive client and the administrative client are installed on the same server
by default. However, many installations of Tivoli Storage Manager include remote
clients, and application clients on other servers, often running on different
operating systems.
The term node indicates the following types of clients and servers that you can
register as client nodes:
v Tivoli Storage Manager backup-archive clients
v Tivoli Storage Manager application clients, such as Tivoli Storage Manager for
Mail clients
v Tivoli Storage Manager for Space Management (HSM client)
v Tivoli Storage Manager source server registered as a node on a target server
v Network-attached storage (NAS) file server using NDMP support
Each node must be registered with the server and requires an option file with a
pointer to the server.
For details on many of the topics in this chapter, refer to the Backup-Archive Clients
Installation and User's Guide.
Related concepts:
Accepting default closed registration or enabling open registration on page 422
Overview of clients and servers as nodes
Related tasks:
Installing client node software on page 422
Registering nodes with the server on page 422
Related reference:
Connecting nodes with the server on page 426
Comparing network-attached nodes to local nodes on page 428
The following are the methods for installing client node software:
v Install directly from the CD
v Transfer installable files from the CD to a target server
v Create client software images and install the images
You can also install using the silent installation technique. For backup-archive
clients, use the client auto deployment feature in the Administration Center. This
feature deploys client code to existing backup-archive clients.
Tip: You can connect to a Web backup-archive client directly from a supported
Web browser or from a hyperlink in the Web administrative Enterprise Console. To
do so, specify the node's URL and port number during the registration process or
update the node later with this information.
Related concepts:
Overview of remote access to web backup-archive clients on page 449
The administrator must register client nodes with the server when registration is
set to closed. Closed registration is the default.
Open registration allows the client nodes to register their node names, passwords,
and compression options. On UNIX and Linux systems, only the root user can
register a client node with the server.
With open registration, the server automatically assigns the node to the
STANDARD policy domain. The server, by default, allows users to delete archive
copies, but not backups, in server storage. Nodes are registered with the default
authentication method, which is set on the server with the SET
DEFAULTAUTHENTICATION command. The default is LOCAL.
1. Enable open registration by entering the following command from an
administrative client command line:
set registration open
For examples and a list of open registration defaults, see the Administrator's
Reference.
2. To change the defaults for a registered node, issue the UPDATE NODE command.
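For example (the node name and parameter values here are illustrative only), you could let a registered node delete its own backup versions and shorten its password expiration period:
update node mike backdelete=yes passexp=90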
Remember: Use either client compression or drive compression, but not both.
Related concepts:
Data compression on page 212
Specify an option set for a node when you register or update the node. For example,
issue the following command:
register node mike pass2eng cloptset=engbackup
The client node MIKE is registered with the password pass2eng and is assigned the
ENGBACKUP client option set. When the client node MIKE performs a scheduling
operation, the schedule log entries are kept for 5 days.
Related reference:
Managing client option files on page 468
The REGISTER NODE and UPDATE NODE commands have a default parameter of
TYPE=CLIENT.
To register a NAS file server as a node, specify the TYPE=NAS parameter. Issue
the following command, which is an example, to register a NAS file server with a
node name of NASXYZ and a password of PW4PW:
register node nasxyz pw4pw type=nas
You must use this same node name when you later define the corresponding data
mover name.
To use virtual volumes, register the source server as a client node on the target
server.
The REGISTER NODE and UPDATE NODE commands have a default parameter of
TYPE=CLIENT.
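For example (the node name and password below are placeholders), the source server could be registered on the target server with a command such as:
register node sourceserver s3cr3tpa5s type=server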
An administrator can issue the REGISTER NODE command to register the workstation
as a node.
You can determine the compression by using one of the following methods:
v An administrator during registration who can:
Require that files are compressed
Restrict the client from compressing files
Allow the application user or the client user to determine the compression
status
v The client options file. If an administrator does not set compression on or off,
Tivoli Storage Manager checks the compression status that is set in the client
options file. The client options file is required, but the API user configuration file
is optional.
v One of the object attributes. When an application sends an object to the server,
some object attributes can be specified. One of the object attributes is a flag that
indicates whether or not the data has already been compressed. If the
application turns this flag on during either a backup or an archive operation,
then Tivoli Storage Manager does not compress the data a second time. This
process overrides what the administrator sets during registration.
For more information on setting options for the API and on controlling
compression, see IBM Tivoli Storage Manager Using the Application Program Interface.
The file deletion option can be set by using the following methods:
v An administrator during registration
If an administrator does not allow file deletion, then an administrator must
delete objects or file spaces that are associated with the workstation from server
storage.
If an administrator allows file deletion, then Tivoli Storage Manager checks the
client options file.
v An application using the Tivoli Storage Manager API deletion program calls
If the application uses the dsmDeleteObj or dsmDeleteFS program call, then
objects or files are marked for deletion when the application is executed.
Important: If any changes are made to the dsm.opt file, the client must be restarted
for changes in the options file to have any effect.
The client options file dsm.opt is located in the client, application client, or host
server directory. If the file does not exist, copy the dsm.smp file. Users and
administrators can edit the client options file to specify:
v The network address of the server
v The communication protocol
v Backup and archive options
v Space management options
v Scheduling options
Related concepts:
Creating or updating a client options file on page 427
Figure 63 on page 427 shows the contents of a client options file that is configured
to connect to the server using TCP/IP. The communication options specified in the
client options file satisfy the minimum requirements for the node to connect to the
server.
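Figure 63 is not reproduced here. As a minimal sketch of the kind of entries it contains (the server stanza name, address, and port below are placeholders; on AIX and other UNIX or Linux clients, these communication options are placed in the dsm.sys file):
SErvername server_a
COMMMethod TCPip
TCPPort 1500
TCPServeraddress node.domain.company.com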
Many non-required options are available that can be set at any time. These options
control the behavior of Tivoli Storage Manager processing.
Refer to Backup-Archive Clients Installation and User's Guide for more information
about non-required client options.
Editing individual options files is the most direct method, but may not be suitable
for sites with many client nodes.
From the backup-archive client GUI, the client can also display the setup wizard
by selecting Utilities > Setup Wizard. The user can follow the panels in the setup
wizard to browse Tivoli Storage Manager server information in the Active
Directory. The user can determine which server to connect to and what
communication protocol to use.
(Figure not reproduced: example configurations showing Tivoli Storage Manager
servers with backup-archive clients, administrative clients, and an application
client registered as nodes.)
Each client requires a client options file. A user can edit the client options file at
the client node. The options file contains a default set of processing options that
identify the server, communication method, backup and archive options, space
management options, and scheduling options.
To change the default to open so users can register their own client nodes, issue
the following command:
set registration open
Before you can assign client nodes to a policy domain, the policy domain must
exist.
In this example, you want to let users delete backed-up or archived files from
storage pools. From an administrative client, you can use the macro facility to
register more than one client node at a time.
1. Create a macro file named REGENG.MAC, that contains the following REGISTER
NODE commands:
register node ssteiner choir contact='department 21'
domain=engpoldom archdelete=yes backdelete=yes
register node carolh skiing contact='department 21, second shift'
domain=engpoldom archdelete=yes backdelete=yes
register node mab guitar contact='department 21, third shift'
domain=engpoldom archdelete=yes backdelete=yes
2. Issue the MACRO command.
macro regeng.mac
The Tivoli Storage Manager server views its registered clients, application clients,
and source servers as nodes. The term client node refers to the following types of
clients and servers:
v Tivoli Storage Manager backup-archive clients
v Tivoli Storage Manager application clients, such as Tivoli Storage Manager for
Mail clients
v Tivoli Storage Manager source servers registered as nodes on a target server
v Network-attached storage (NAS) file servers using network data management
protocol (NDMP) support
Related concepts:
Accepting default closed registration or enabling open registration on page 422
Overview of clients and servers as nodes on page 421
Related tasks:
Installing client node software on page 422
Registering nodes with the server on page 422
Related reference:
Connecting nodes with the server on page 426
Comparing network-attached nodes to local nodes on page 428
Managing nodes
From the perspective of the server, each client and application client is a node
requiring IBM Tivoli Storage Manager services.
Administrators can perform the following activities when managing client nodes.
IBM Tivoli Storage Manager has two methods for enabling communication
between the client and the server across a firewall: client-initiated communication
and server-initiated communication. To allow either client-initiated or
server-initiated communication across a firewall, client options must be set in
concurrence with server parameters on the REGISTER NODE or UPDATE NODE
commands. Enabling server-initiated communication overrides client-initiated
communication, including client address information that the server may have
previously gathered in server-prompted sessions.
Client-initiated sessions
You can enable clients to communicate with a server across a firewall by opening
the TCP/IP port for the server and modifying the dsmserv.opt file.
1. To enable clients to communicate with a server across a firewall, open the
TCP/IP port for the server on the TCPPORT option in the dsmserv.opt file. The
default TCP/IP port is 1500. When authentication is turned on, the information
that is sent over the wire is encrypted.
2. To enable administrative clients to communicate with a server across a firewall,
open the TCP/IP port for the server on the TCPADMINPORT option in the
dsmserv.opt file. The default TCP/IP port is the TCPPORT value. When
authentication is turned on, the information that is sent over the wire is
encrypted. See the Backup-Archive Clients Installation and User's Guide for more
information.
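As a minimal sketch of the relevant dsmserv.opt entries (the administrative port value is arbitrary; the client port shown is the default):
COMMMETHOD TCPIP
TCPPORT 1500
TCPADMINPORT 1510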
1. If the TCPADMINPORT option is specified, sessions from clients without
administration authority can be started on the TCPPORT port only. If the server
options file dsmserv.opt specifies a TCPADMINPORT value that is different from the
TCPPORT value and sets ADMINONCLIENTPORT to NO, then administrative client
sessions can be started on the TCPADMINPORT port only.
2. You can specify either IPv4 or IPv4/IPv6 in the COMMMETHOD option when you
start the server, storage agent, client, or API application. The same port
numbers are used by the server, storage agent, client, or API application for
both IPv4 and IPv6.
IPv6 address formats are acceptable for all functions that support IPv6.
However, if you use IPv6 addresses for functions that do not support IPv6,
communications fail. The following functions do not support IPv6; continue to
use IPv4 address formats for them:
v NDMP: backing up and restoring storage pools, copying and moving data
v ACSLS
v SNMP
v Centera device support
v Shared memory protocol
v Windows Microsoft Management Console functions
v Administration Center
Server-initiated sessions
To limit the start of backup-archive client sessions to the IBM Tivoli Storage
Manager server, specify the SESSIONINITIATION parameter on the server. You must
also synchronize the information in the client option file.
In either the REGISTER NODE or UPDATE NODE command, select the SERVERONLY option
of the SESSIONINITIATION parameter. Provide the HLADDRESS and LLADDRESS
client node addresses. For example,
register node fran secretpw hladdress=9.11.52.125 lladdress=1501
sessioninitiation=serveronly
The HLADDRESS specifies the IP address of the client node, and is used whenever the
server contacts the client. The LLADDRESS specifies the low level address of the
client node and is used whenever the server contacts the client. The client node
listens for sessions from the server on the LLADDRESS port number.
Update client node TOMC to prevent it from deleting archived files from storage
pools by entering the following example command:
update node tomc archdelete=no
After configuring each server for deploying packages, the three steps in the process
are downloading, moving, and importing the packages.
Downloading
The Import Client Deployment Packages wizard accesses the FTP site
where the packages are stored and from where you can select the packages
to import.
Moving
After you download the packages, they must be moved from the
Administration Center workstation to the Tivoli Storage Manager server.
The packages must be moved to a location that is referenced by the
IBM_CLIENT_DEPLOY_IMPORT device class. This device class is created
when you configure your server with the Configure Server for Client
Auto Deployment wizard.
Importing
If you are configured for a local import, the Administration Center finds
the packages that are stored locally and starts the process of deploying
them. An Administration Center with web access deploys the packages
from the FTP site.
See Table 45 for a list of the software packages that are available.
Table 45. Administration Center releases and deployment requirements
Administration Center   Windows deployment packages   AIX, HP-UX, Linux, Macintosh, and Solaris deployment packages
6.2                     6.2                           N/A
6.3                     5.5 and later                 5.5.1 and later
To use the feature, the Backup-Archive Client must meet these requirements:
v The IBM Tivoli Storage Manager Windows Backup-Archive Client must be at
version 5.4.0 or later. The deployment feature does not install new
backup-archive clients.
v A Backup-Archive Client on an operating system other than Windows must be
at version 5.5 or later.
v Windows backup-archive clients must have 2 GB of total disk space.
v The PASSWORDACCESS option must be set to generate.
v The client acceptor (CAD) or Backup-Archive Client schedule must be running
at the time of the deployment. The Backup-Archive Client is deployed from the
server as a scheduled task.
The Backup-Archive Client must have the additional disk space that is required for
a deployment, as shown in this table:
Table 46. Disk space required on the Backup-Archive Client workstation for deploying a
Backup-Archive Client package
Operating system Total required disk space
AIX 1500 MB
Solaris 1200 MB
HP-UX 900 MB
Macintosh 200 MB
Linux x86/x86 64 950 MB
Windows 2 GB
To access the Configure Client Auto Deployment wizard, click Tivoli Storage
Manager > Manage Servers. Select a server from the table and then select
Configure Client Auto Deployment from the table actions.
The wizard guides you in setting up the location where imported packages are to
be stored, and how long they are stored.
Configure the server by using the Configure Server for Client Auto Deployment
wizard.
The View Available Client Deployment Packages portlet shows all of the
available packages. You can either import the available deployment packages,
check for new packages on the FTP site, or refresh the table from a local copy.
Complete the following steps to use the Import Client Deployment Packages
wizard:
1. Open the Administration Center.
2. Click Tivoli Storage Manager > Manage Servers.
3. Access the wizard by selecting View Client Deployment Packages > Import
Client Deployment Packages.
The properties file holds critical information for the deployment feature so that
the Administration Center can find and import packages. The
catalog.properties file is updated automatically when you configure the server to
run deployments through the Administration Center.
In the directory descriptions, user_chosen_path is the root directory for the Tivoli
Integrated Portal installation. If the server does not have web access, you must edit
the catalog.properties file to point to the local catalog.xml file. The
catalog.properties file is in these directories:
v For Windows: user_chosen_path\tsmac\tsm\clientDeployCatalog
v For all other platforms: user_chosen_path/tsmac/tsm/clientDeployCatalog
You can copy the packages to the server from media and then access the packages
as if you are connected to the FTP site.
Complete the following steps to schedule a client deployment without direct web
access:
1. Move the packages to a local FTP server that is configured for anonymous
access.
2. Configure the servers for deployments. Access the configure server for client
deployments wizard by clicking Tivoli Storage Manager > Manage Servers.
Select a server and then select Configure Automatic Client Deployment from
the action list.
3. Edit the catalog.properties file to point to the local catalog.xml file. See this
example of the catalog.properties file:
base.url=
ftp://public.dhe.ibm.com/storage/tivoli-storage-management/catalog/client
You can schedule your deployments around your routine IBM Tivoli Storage
Manager activities. When scheduling client deployments, give those schedules a
lower priority than regular storage management tasks (for example, backup,
archive, restore, and retrieve).
You are offered the option to restart the client operating system after the
deployment completes. Restarting the system can affect any critical applications
that are running on the client operating system.
v You must use the SET SERVERHLADDRESS command for all automatic client
deployments.
You can find the deployment packages in the maintenance directory on the FTP
site: ftp://public.dhe.ibm.com/storage/tivoli-storage-management/maintenance/
client.
Related tasks:
Importing the target level to the server on page 441
Defining a schedule for an automatic deployment on page 442
Verifying the backup-archive client deployment results on page 443
Related reference:
Using the command-line interface to configure the server for a backup-archive
client deployment
The following example command can be used to configure the server to deploy
backup-archive client packages with the command-line interface:
set serverhladdress server.serveraddress.com
where:
v ibm_client_deploy_import is the temporary location from where the deployment packages
are imported. This parameter is defined by the deployment manager.
v import_directory is a previously defined directory that is accessible from the server.
v stgpool_name is the name of a storage pool of your choosing where the deployment
packages are stored on the server. The storage pool name is based on a previously
defined device class. That device class is different from the one which is used to perform
IMPORT operations.
v storage_dc_name represents the device class where the deployment packages are stored on
the server.
v retention_value (RETVER) of the DEFINE COPYGROUP command sets the retention time for the
package. You can set it to NOLimit or to a number of days. The default for the
Administration Center is five years.
Important: The retention value must be set to a value that includes the amount of time
that the package was on the FTP site. For example, if a deployment package is on the FTP
site for 30 days, the retention value for the copy group must be greater than 30 days. If
not, the package expires when the next EXPIRE INVENTORY command is issued.
v server.serveraddress.com is the server IP address or host name from which you scheduled
the client automatic deployment.
Ensure that you configure the server for backup-archive client automatic
deployments before you import the packages.
where:
upgradedev is the file device class name.
volname1.exp is the deployment package name. You can also use a
comma-separated list of package names.
If you want to view the progress, issue the QUERY PROCESS command.
3. Verify that the packages are in a location that the server can reach. Enter the
following command:
select * from archives where node_name='IBM_CLIENT_DEPLOY_UNX'
where ARCHIVES is the type of file that is imported through the IMPORT NODE
command.
Related reference:
Using the command-line interface to configure the server for a backup-archive
client deployment on page 440
where
deployment_package_location is the path to the deployment package
destination_for_package is the path to where you want to store the
deployment package
IBM_CLIENT_DEPLOY_UNX is the predefined name (for a UNIX
deployment) for the -fromnode option
nodeinfo2=TBD must be entered exactly as shown.
If your current backup-archive client is AIX, Linux, Solaris, or HP-UX and is at
version 6.1 or later, use nodeinfo2=TBD in the POSTNSCHEDULECMD command.
Macintosh Backup-Archive Clients at version 5.5 and later also must use
nodeinfo2=TBD.
One result of the QUERY ACTLOG command is the publishing of the ANE4200I
message reports. Message ANE4200I displays the status of the deployment and
the session number. You can use the session number to search for more
deployment information.
When users access the server, their IBM Tivoli Storage Manager user IDs match the
host name of their workstations. If the host name changes, you can update a client
node user ID to match the new host name.
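For example (inferring the node names from the description that follows), the rename might be issued as:
rename node carolh engnode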
ENGNODE retains the contact information and access to back up and archive data
that belonged to CAROLH. All files backed up or archived by CAROLH now
belong to ENGNODE.
If you rename a node that authenticates with an LDAP directory server, names for
same-named nodes on other servers that share namespace are not renamed. You
must issue a RENAME command for each node. If you want to keep the nodes in
sync, change their name to match the new name. If you do not, the node on the
other server can no longer authenticate with the LDAP directory server if you
specify SYNCLDAPDELETE=YES.
If you have a node that shares namespace on an LDAP directory server with other
nodes, you can rename each node. The renaming must, however, be done on each
server. For example, you can issue the following command on each server:
rename node starship moonship syncldapdelete=yes
The node starship, which authenticates to an LDAP directory server, is renamed
moonship. With SYNCLDAPDELETE=YES, the entry on the LDAP directory
server changes to moonship and node starship is removed from the LDAP server.
Therefore, other servers can no longer authenticate node starship with the LDAP
server. You can register node starship with the LDAP server, or rename node
starship to moonship.
You can restore a locked node's access to the server with the UNLOCK NODE
command.
1. To prevent client node MAB from accessing the server, issue the following
example command:
lock node mab
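2. To restore the locked node's access to the server, a command like the following example can be issued later:
unlock node mab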
Before you can delete a network-attached storage (NAS) node, you must first
delete any file spaces, then delete any defined paths for the data mover with the
DELETE PATH command. Delete the corresponding data mover with the DELETE
DATAMOVER command. Then you can issue the REMOVE NODE command to delete the
NAS node.
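As a sketch of that sequence for the NAS node NASXYZ registered earlier (the DELETE PATH commands are omitted here because their parameters depend on your data mover and drive configuration; issue them before deleting the data mover):
delete filespace nasxyz *
delete datamover nasxyz
remove node nasxyz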
Proxy node support is useful when the server responsible for performing the backup
may change over time, such as with a cluster. Consolidating shared data from
multiple servers under a single name space on the Tivoli Storage Manager server
means that the directories and files can be easily found when restore operations
are required. Backup time can be reduced and clustered configurations can store
data with proxy node support. Client nodes can also be configured with proxy node
authority to support many of the systems that support clustering failover.
By granting client nodes proxy node authority to another node, you gain the
ability to back up, archive, migrate, restore, recall, and retrieve shared data on
multiple clients under a single node name on the Tivoli Storage Manager server.
When authorized as agent nodes, Tivoli Storage Manager nodes and Tivoli Storage
Manager for Space Management (HSM) clients can be directed to back up or restore
data on behalf of another node (the target node).
Without proxy node support, administrators must create scripts that change the
passwords manually before they expire. Using proxy node support, it is possible to
break up a large GPFS file system into smaller units for backup purposes and not
have password coordination issues.
The following example shows how scheduling would work where workload is
distributed, for example in the DB2 Universal Database Enterprise Extended
Edition (EEE) environment. In this example, NODE_A, NODE_B, and NODE_C all
work together to back up this distributed environment, all acting on behalf of
NODE_Z. NODE_A directs the backup for all three physical servers. NODE_A
either has ASNODENAME=NODE_Z in its local options file or the server (through
the DEFINE SCHEDULE command) has indicated that NODE_A needs to request
proxy authority to NODE_Z. See the Backup-Archive Clients Installation and User's
Guide for more information on the ASNODENAME client option.
An administrator can define the schedule that does a DB2 UDB EEE backup on
behalf of NODE_Z by issuing the following command:
DEFINE SCHEDULE STANDARD BACKUP-SCHED ACTION=INCREMENTAL
OPTIONS=-ASNODENAME=NODE_Z
Agent nodes are considered traditional nodes in that there is usually a one-to-one
relationship between a traditional node and a physical server. A target node can be
a logical entity, meaning no physical server corresponds to the node. Or, it can be a
predefined node which corresponds to a physical server.
By using the GRANT PROXYNODE command, you can grant proxy node authority to all
nodes sharing data in the cluster environment to access the target node on the
Tivoli Storage Manager server. QUERY PROXYNODE displays the nodes to which a
proxy node relationship was authorized. See the Administrator's Reference for more
information about these commands.
Proxy node relationships will not be imported by default; however, the associations
can be preserved by specifying the PROXYNODEASSOC option on the IMPORT NODE and
IMPORT SERVER commands. Exporting to sequential media maintains proxy node
relationships, but exporting to a server requires specifying the PROXYNODEASSOC
option on EXPORT NODE and EXPORT SERVER.
Important:
v If a proxy node relationship is authorized for incompatible file spaces, there is a
possibility of data loss or other corruption.
v Central command routing or importing of the GRANT PROXYNODE and REVOKE
PROXYNODE commands can create access issues.
v The maximum number of mount points for agent nodes should be increased to
allow parallel backup operations across the target nodes.
The following example shows how to set up proxy node authority for shared
access. In the example, client agent nodes NODE_1, NODE_2, and NODE_3 all
share the same General Parallel File System (GPFS). Because the file space is so
large, it is neither practical nor cost effective to back up this file system from a
single client node. By using Tivoli Storage Manager proxy node support, the very
large file system can be backed up by the three agent nodes for the target
NODE_GPFS. The backup effort is divided among the three nodes. The end result
is that NODE_GPFS has a backup from a given point in time.
All settings used in the proxy node session are determined by the definitions of the
target node, in this case NODE_GPFS. For example, any settings for
DATAWRITEPATH or DATAREADPATH are determined by the target node, not
the agent nodes (NODE_1, NODE_2, NODE_3).
Assume that NODE_1, NODE_2 and NODE_3 each need to execute an incremental
backup and store all the information under NODE_GPFS on the server.
Perform the following steps to set up a proxy node authority for shared access:
1. Define four nodes on the server: NODE_1, NODE_2, NODE_3, and
NODE_GPFS. Issue the following commands:
register node node_1 mysecretpa5s
register node node_2 mysecret9pas
register node node_3 mypass1secret
register node node_gpfs myhiddp3as
2. Define a proxy node relationship among the nodes by issuing the following
commands:
grant proxynode target=node_gpfs agent=node_1,node_2,node_3
3. Define the node name and asnode name for each of the servers in the
respective dsm.sys files. See the Backup-Archive Clients Installation and User's
Guide for more information on the NODENAME and ASNODENAME client options.
Issue the following commands:
nodename node_1
asnodename node_gpfs
For example, as a policy administrator, you might query the server about all client
nodes assigned to the policy domains for which you have authority. Or you might
query the server for detailed information about one client node.
Issue the following command to view information about client nodes that are
assigned to the STANDARD and ENGPOLDOM policy domains:
query node * domain=standard,engpoldom
The data from that command might display similar to the following output:
Node Name Platform Policy Domain Days Since Days Since Locked?
Name Last Password
Access Set
---------- -------- -------------- ---------- ---------- -------
JOE WinNT STANDARD 6 6 No
ENGNODE AIX ENGPOLDOM <1 1 No
HTANG Mac STANDARD 4 11 No
MAB AIX ENGPOLDOM <1 1 No
PEASE Linux86 STANDARD 3 12 No
SSTEINER SOLARIS ENGPOLDOM <1 1 No
For example, to review the registration parameters defined for client node JOE,
issue the following command:
query node joe format=detailed
A web backup-archive client can be accessed from a web browser or opened from
the Operations Center or Administration Center interface. This allows an
administrator with the proper authority to perform backup, archive, restore, and
retrieve operations on any server that is running the web backup-archive client.
You can establish access to a web backup-archive client for help desk personnel
that do not have system or policy privileges by granting those users client-access
authority to the nodes that they must manage. Help desk personnel can then
perform activities on behalf of the client node such as backup and restore
operations.
To use the web backup-archive client from your web browser, specify the URL and
port number of the Tivoli Storage Manager backup-archive client computer that is
running the web client. The browser that you use to connect to a web
backup-archive client must be Microsoft Internet Explorer 5.0 or Netscape 4.7 or
later. The browser must have the Java Runtime Environment (JRE) 1.3.1, which
includes the Java Plug-in software. The JRE is available at
http://www.oracle.com/.
During node registration, you have the option of granting client owner or client
access authority to an existing administrative user ID. You can also prevent the
server from creating an administrative user ID at registration. If an administrative
user ID exists with the same name as the node that is being registered, the server
registers the node but does not automatically create an administrative user ID. This
process also applies if your site uses open registration.
For more information about installing and configuring the web backup-archive
client, refer to Backup-Archive Clients Installation and User's Guide.
Administrators with system or policy privileges over the client node's domain,
have client owner authority by default. The administrative user ID created
automatically at registration has client owner authority by default. This
administrative user ID is displayed when an administrator issues a QUERY ADMIN
command.
The following definitions describe the difference between client owner and client
access authority when defined for a user that has the node privilege class:
Client owner
You can access the client through the Web backup-archive client or
native backup-archive client.
You own the data and have a right to physically gain access to the data
remotely. You can back up and restore files on the same or different
servers, and you can delete file spaces or archive data.
The user ID with client owner authority can also access the data from
another server using the NODENAME or -VIRTUALNODENAME parameter.
The administrator can change the client node's password for which they
have authority.
This is the default authority level for the client at registration. An
administrator with system or policy privileges to a client's domain has
client owner authority by default.
Client access
You can only access the client through the Web backup-archive client.
You can restore data only to the original client.
You can grant client access or client owner authority to other administrators by
specifying CLASS=NODE and AUTHORITY=ACCESS or AUTHORITY=OWNER parameters on
the GRANT AUTHORITY command. You must have one of the following privileges to
grant or revoke client access or client owner authority:
v System privilege
v Policy privilege in the client's domain
v Client owner privilege over the node
v Client access privilege over the node
You can grant an administrator client access authority to individual clients or to all
clients in a specified policy domain. For example, you may want to grant client
access privileges to users that staff help desk environments.
Related tasks:
Example: setting up help desk access to client computers in a specific policy
domain on page 453
The administrator FRED can now access the LABCLIENT client, and perform
backup and restore. The administrator can only restore data to the LABCLIENT
node.
2. Issue the following command to grant client owner authority to ADMIN1 for
the STUDENT1 node:
grant authority admin1 class=node authority=owner node=student1
The user ID ADMIN1 can now perform backup and restore operations for the
STUDENT1 client node. The user ID ADMIN1 can also restore files from the
STUDENT1 client node to a different client node.
When the node is created, the authentication method and Secure Sockets Layer
(SSL) settings are inherited by the administrator.
To give client owner authority to the HELPADMIN user ID when registering the
NEWCLIENT node, issue the following command:
register node newclient pass2new userid=helpadmin
This command results in the NEWCLIENT node being registered with a password
of pass2new, and also grants HELPADMIN client owner authority. This command
would not create an administrator ID. The HELPADMIN client user ID is now able
to access the NEWCLIENT node from a remote location.
You are also granting HELP1 client access authority to the FINANCE domain
without having to grant system or policy privileges.
The help desk person, using HELP1 user ID, has a Web browser with Java
Runtime Environment (JRE) 1.3.1.
1. Register an administrative user ID of HELP1.
register admin help1 05x23 contact="M. Smith, Help Desk x0001"
2. Grant the HELP1 administrative user ID client access authority to all clients in
the FINANCE domain. With client access authority, HELP1 can perform backup
and restore operations for clients in the FINANCE domain. Client nodes in the
FINANCE domain are Dave, Sara, and Joe.
grant authority help1 class=node authority=access domains=finance
The following output is generated by this command:
ANR2126I GRANT AUTHORITY: Administrator HELP1 was granted ACCESS authority for client
DAVE.
ANR2126I GRANT AUTHORITY: Administrator HELP1 was granted ACCESS authority for client
JOE.
ANR2126I GRANT AUTHORITY: Administrator HELP1 was granted ACCESS authority for client
SARA.
3. The help desk person, HELP1, opens the Web browser and specifies the URL
and port number for client computer Sara:
http://sara.computer.name:1581
A Java applet is started, and the client hub window is displayed in the main
window of the Web browser. When HELP1 accesses the backup function from
the client hub, the IBM Tivoli Storage Manager login screen is displayed in a
separate Java applet window. HELP1 authenticates with the administrative user
ID and password. HELP1 can perform a backup for Sara.
For information about what functions are not supported on the Web
backup-archive client, refer to Backup-Archive Clients Installation and User's Guide.
Tip: You can copy the files to any location on the host operating system, but
ensure that all files are copied to the same directory.
5. Ensure that guest virtual machines are running. This step is necessary to ensure
that the guest virtual machines are detected during the hardware scan.
6. To collect PVU information, issue the following command:
retrieve -v
If you restart the host machine or change the configuration, run the retrieve
command again to ensure that current information is retrieved.
Tip: When the IBM Tivoli Storage Manager for Virtual Environments license file is
installed on a VMware vStorage backup server, the platform string that is stored
on the Tivoli Storage Manager server is set to TDP VMware for any node name
that is used on the server. The reason is that the server is licensed for Tivoli
Storage Manager for Virtual Environments. The TDP VMware platform string can
be used for PVU calculations. If a node is used to back up the server with standard
backup-archive client functions, such as file-level and image backup, interpret the
TDP VMware platform string as a backup-archive client for PVU calculations.
Administrators can perform the following activities when managing file spaces:
Related reference:
Defining client nodes and file spaces
Typically, each client file system is represented on the server as a unique file space
that belongs to each client node. Therefore, the number of file spaces a node has
depends on the number of file systems on the client computer. For example, a
Windows desktop system may have multiple drives (file systems), such as C: and
D:. In this case, the client's node has two file spaces on the server; one for the C:
drive and a second for the D: drive. The file spaces can grow as a client stores
more data on the server. The file spaces decrease as backup and archive file
versions expire and the server reclaims the space.
IBM Tivoli Storage Manager does not allow an administrator to delete a node
unless the node's file spaces have been deleted.
For client nodes running on NetWare, file spaces map to NetWare volumes. Each
file space is named with the corresponding NetWare volume name.
For clients running on Macintosh, file spaces map to Macintosh volumes. Each file
space is named with the corresponding Macintosh volume name.
For clients running on UNIX or Linux, a file space name maps to a file space in
storage that has the same name as the file system or virtual mount point from
which the files originated. The VIRTUALMOUNTPOINT option allows users to define a
virtual mount point for a file system to back up or archive files beginning with a
specific directory or subdirectory. For information on the VIRTUALMOUNTPOINT
option, see the Backup-Archive Clients Installation and User's Guide.
For client nodes that are running on Windows, it is possible to create objects with
long fully qualified names. The IBM Tivoli Storage Manager clients for Windows
are able to support fully qualified names of up to 8704 bytes in length for backup
and restore functions. These long names are often generated with an automatic
naming function or are assigned by an application.
Long object names can be difficult to display and use through normal operating
system facilities, such as a command prompt window or Windows Explorer. To
manage them, Tivoli Storage Manager assigns an identifying token to the name
and abbreviates the length. The token ID is then used to display the full object
name. For example, an error message might display as follows, where
[TSMOBJ:9.1.2084] is the assigned token ID:
ANR9999D file.c(1999) Error handling file [TSMOBJ:9.1.2084] because of
lack of server resources.
The token ID can then be used to display the fully qualified object name by
specifying it in the DISPLAY OBJNAME command.
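For example, using the token ID from the message above (see the Administrator's Reference for the complete syntax), a command of this form displays the name:
display objname 9.1.2084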
The fully qualified object name is displayed. If you are displaying long object
names that are included in backup sets, a token ID might not be included if the
For more information about fully qualified object names and issuing the DISPLAY
OBJNAME command, see the Administrator's Reference.
New clients storing data on the server for the first time require no special setup. If
the client has the latest IBM Tivoli Storage Manager client software installed, the
server automatically stores Unicode-enabled file spaces for that client.
However, if you have clients that already have data stored on the server and the
clients install the Unicode-enabled IBM Tivoli Storage Manager client software, you
need to plan for the migration to Unicode-enabled file spaces. To allow clients with
existing data to begin to store data in Unicode-enabled file spaces, IBM Tivoli
Storage Manager provides a function for automatic renaming of existing file
spaces. The file data itself is not affected; only the file space name is changed.
After the existing file space is renamed, the operation creates a new file space that
is Unicode-enabled. The creation of the new Unicode-enabled file space for clients
can greatly increase the amount of space required for storage pools and the
amount of space required for the server database. It can also increase the amount
of time required for a client to run a full incremental backup, because the first
incremental backup after the creation of the Unicode-enabled file space is a full
backup.
When clients with existing file spaces migrate to Unicode-enabled file spaces, you
need to ensure that sufficient storage space for the server database and storage
pools is available. You also need to allow for potentially longer backup windows
for the complete backups.
Attention: After the server is at the latest level of software that includes support
for Unicode-enabled file spaces, you can only go back to a previous level of the
server by restoring an earlier version of IBM Tivoli Storage Manager and the
database.
When IBM Tivoli Storage Manager cannot convert the code page, the client may
receive one or all of the following messages if they were using the command line:
ANS1228E, ANS4042E, and ANS1803E. Clients that are using the GUI may see a
Path not found message. If you have clients that are experiencing such backup
failures, then you need to migrate the file spaces for these clients to ensure that
these systems are completely protected with backups. If you have a large number
of clients, set the priority for migrating the clients based on how critical each
client's data is to your business.
Any new file spaces that are backed up from client systems with the
Unicode-enabled IBM Tivoli Storage Manager client are automatically stored as
Unicode-enabled file spaces in server storage.
When enabled, IBM Tivoli Storage Manager uses the rename function when it
recognizes that a file space that is not Unicode-enabled in server storage matches
the name of a file space on a client. The existing file space in server storage is
renamed, so that the file space in the current operation is then treated as a new,
Unicode-enabled file space. For example, if the operation is an incremental backup
at the file space level, the entire file space is then backed up to the server as a
Unicode-enabled file space.
If you force the file space renaming for all clients at the same time, backups can
contend for network and storage resources, and storage pools can run out of
storage space.
Related tasks:
Planning for Unicode versions of existing client file spaces on page 461
Examining issues when migrating to Unicode on page 463
Example of a migration process on page 464
Related reference:
Defining options for automatically renaming file spaces
Defining the rules for automatically renaming file spaces on page 461
As an administrator, you can control whether the file spaces of any existing clients
are renamed to force the creation of new Unicode-enabled file spaces. By default,
no automatic renaming occurs.
To control the automatic renaming, use the parameter AUTOFSRENAME when you
register or update a node. You can also allow clients to make the choice. Clients
can use the client option AUTOFSRENAME.
Restriction: The setting for AUTOFSRENAME affects only clients that are
Unicode-enabled.
The following table summarizes what occurs with different parameter and option
settings.
Table 48. The effects of the AUTOFSRENAME option settings
Parameter on the server   Option on the client   Result for file spaces                        Is the file space renamed?
Yes                       Yes, No, Prompt        Renamed                                       Yes
No                        Yes, No, Prompt        Not renamed                                   No
Client                    Yes                    Renamed                                       Yes
Client                    No                     Not renamed                                   No
Client                    Prompt                 Command-line or GUI: the user receives a      Depends on the response from
                                                 one-time-only prompt about renaming           the user (yes or no)
Client                    Prompt                 Client scheduler: not renamed (the prompt     No
                                                 is displayed during the next command-line
                                                 or GUI session)
Related reference:
Defining the rules for automatically renaming file spaces on page 461
With its automatic renaming function, IBM Tivoli Storage Manager renames a file
space by adding the suffix _OLD.
For example, the file space \\maria\c$ is renamed to \\maria\c$_OLD.
If the new name would conflict with the name of another file space, a number is
added to the suffix. For example, if the file spaces \\maria\c$_OLD and
\\maria\c$_OLD1 already exist, the file space \\maria\c$ is renamed to
\\maria\c$_OLD2.
If the new name for the file space exceeds the limit of 64 characters, the file space
name is truncated before the suffix _OLD is added.
Several factors must be considered before you plan for Unicode versions of
existing client file spaces.
To minimize problems, you need to plan the storage of Unicode-enabled file spaces
for clients that already have existing file spaces in server storage.
1. Determine which clients need to migrate.
Clients that have had problems with backing up files because their file spaces
contain names of directories or files that cannot be converted to the server's
code page should have the highest priority. Balance that with clients that are
most critical to your operations. If you have a large number of clients that need
to become Unicode-enabled, you can control the migration of the clients.
Change the rename option for a few clients at a time to keep control of storage
space usage and processing time. Also consider staging migration for clients
that have a large amount of data backed up.
2. Allow for increased backup time and network resource usage when the
Unicode-enabled file spaces are first created in server storage.
Based on the number of clients and the amount of data those clients have,
consider whether you need to stage the migration. Staging the migration means
setting the AUTOFSRENAME parameter to YES or CLIENT for only a small number
of clients every day.
When you migrate to Unicode, there are several issues that you must consider.
The server manages a Unicode-enabled client and its file spaces as follows:
v When a client upgrades to a Unicode-enabled client and logs in to the server, the
server identifies the client as Unicode-enabled.
Remember: That same client (same node name) cannot log in to the server with
a previous version of IBM Tivoli Storage Manager or a client that is not
Unicode-enabled.
v The original file space that was renamed (_OLD) remains with both its active
and inactive file versions that the client can restore if needed. The original file
space will no longer be updated. The server will not mark existing active files
inactive when the same files are backed up in the corresponding
Unicode-enabled file space.
Important: Before the Unicode-enabled client is installed, the client can back up
files in a code page other than the current locale, but cannot restore those files.
After the Unicode-enabled client is installed, if the same client continues to use
file spaces that are not Unicode-enabled, the client skips files that are not in the
same code page as the current locale during a backup. Because the files are
skipped, they appear to have been deleted from the client. Active versions of the
files in server storage are made inactive on the server. When a client in this
situation is updated to a Unicode-enabled client, you should migrate the file
spaces for that client to Unicode-enabled file spaces.
v The server does not allow a Unicode-enabled file space to be sent to a client that
is not Unicode-enabled during a restore or retrieve process.
v Clients should be aware that they will not see all their data on the
Unicode-enabled file space until a full incremental backup has been processed.
When a client performs a selective backup of a file or directory and the original
file space is renamed, the new Unicode-enabled file space will contain only the
file or directory specified for that backup operation. All other directories and
files are backed up on the next full incremental backup.
If a client needs to restore a file before the next full incremental backup, the
client can perform a restore from the renamed file space instead of the new
Unicode-enabled file space. For example:
Sue had been backing up her file space, \\sue-node\d$.
Sue upgrades the IBM Tivoli Storage Manager client on her system to the
Unicode-enabled IBM Tivoli Storage Manager client.
Sue performs a selective backup of the HILITE.TXT file.
The automatic file space renaming function is in effect and IBM Tivoli Storage
Manager renames \\sue-node\d$ to \\sue-node\d$_OLD. IBM Tivoli Storage
Manager then creates a new Unicode-enabled file space on the server with the
name \\sue-node\d$. This new Unicode-enabled file space contains only the
HILITE.TXT file.
All other directories and files in Sue's file system will be backed up on the
next full incremental backup. If Sue needs to restore a file before the next full
incremental backup, she can restore the file from the \\sue-node\d$_OLD file
space.
Refer to the Backup-Archive Clients Installation and User's Guide for more
information.
The example of a migration process includes one possible sequence for migrating
clients.
This forces the file spaces to be renamed at the time of the next backup or
archive operation on the file servers. If the file servers are large, consider
changing the renaming parameter for one file server each day.
3. Allow backup and archive schedules to run as usual. Monitor the results.
a. Check for the renamed file spaces for the file server clients. Renamed file
spaces have the suffix _OLD or _OLDn, where n is a number.
b. Check the capacity of the storage pools. Add tape or disk volumes to
storage pools as needed.
c. Check database usage statistics to ensure you have enough space.
Note: If you are using the client acceptor to start the scheduler, you must first
modify the default scheduling mode.
4. Migrate the workstation clients. For example, migrate all clients with names
that start with the letter a.
update node a* autofsrename=yes
5. Allow backup and archive schedules to run as usual that night. Monitor the
results.
6. After sufficient time passes, consider deleting the old, renamed file spaces.
Related tasks:
Modifying the default scheduling mode on page 579
Related reference:
Managing the renamed file spaces on page 465
Defining the rules for automatically renaming file spaces on page 461
The file spaces that were automatically renamed (_OLD) to allow the creation of
Unicode-enabled file spaces continue to exist on the server. Users can still access
the file versions in these file spaces.
Because a renamed file space is not backed up again with its new name, the files
that are active (the most recent backup version) in the renamed file space remain
active and never expire. The inactive files in the file space expire according to the
policy settings for how long versions are retained. To determine how long the files
are retained, check the values of the Retain Extra Versions and Retain Only Version parameters in the backup copy group of the management class to which the files are bound.
When users no longer have a need for their old, renamed file spaces, you can
delete them. If possible, wait for the longest retention time for the only version
(Retain Only Version) that any management class allows. If your system has
storage constraints, you may need to delete these file spaces before that.
For example, a Version 5.1.0 client backs up file spaces, and then upgrades to
Version 5.2.0 with support for Unicode-enabled file spaces. That same client can
still restore the non-Unicode file spaces from the backup set.
You can display file space information for the following reasons:
v To identify file spaces that are defined to each client node, so that you can delete
each file space from the server before removing the client node from the server
v To identify file spaces that are Unicode-enabled and identify their file space ID
(FSID)
v To monitor the space that is used on workstations' disks
v To monitor whether backups are completing successfully for the file space
v To determine the date and time of the last backup
Note: File space names are case-sensitive and must be entered exactly as known to
the server.
To view information about file spaces that are defined for client node JOE, issue
the following command:
query filespace joe *
In the output, the file space name field might display file space names as a series of periods (....). This indicates to the administrator that
a file space does exist but could not be converted to the server's code page.
Conversion can fail if the string includes characters that are not available in the
server code page, or if the server has a problem accessing system conversion
routines.
File space names and file names that can be in a different code page or locale than
the server do not display correctly in the Operations Center, the Administration
Center, or the administrative command-line interface. The data itself is backed up
and can be restored properly, but the file space name or file name may display
with a combination of invalid characters or blank spaces.
Refer to the Administrator's Reference for details.
After you delete all of a client node's file spaces, you can delete the node with
the REMOVE NODE command.
For client nodes that support multiple users, such as UNIX or Linux, a file owner
name is associated with each file on the server. The owner name is the user ID of
the operating system, such as the UNIX or Linux user ID. When you delete a file
space belonging to a specific owner, only files that have the specified owner name
in the file space are deleted.
When a node has more than one file space and you issue a DELETE FILESPACE
command for only one file space, a QUERY FILESPACE command for the node during
the delete process shows no file spaces. When the delete process ends, you can
view the remaining file spaces with the QUERY FILESPACE command. If data retention protection is enabled, the only files that are deleted from the file space are those that have met the retention criterion. The file space is not deleted if one or more files within the file space cannot be deleted.
Note: Data stored using the System Storage Archive Manager product cannot be
deleted using the DELETE FILESPACE command if the retention period for the data
has not expired. If this data is stored in a Centera storage pool, then it is
additionally protected from deletion by the retention protection feature of the
Centera storage device.
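For example, the following commands, shown as a minimal sketch with the node name JOE used only for illustration, delete all file spaces that belong to the node and then remove the node definition:
delete filespace joe *
remove node joe
The wildcard character deletes every file space for the node; you can instead specify a single file space name to delete only that file space.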
The most important option is the network address of the server, but you can add
many other client options at any time. Administrators can also control client
options by creating client option sets on the server that are used in conjunction
with client option files on client nodes.
Related tasks:
Creating client option sets on the server
Managing client option sets on page 470
Related reference:
Connecting nodes with the server on page 426
Client option sets allow the administrator to specify additional options that may
not be included in the client's option file (dsm.opt). You can specify which clients
use the option set with the REGISTER NODE or UPDATE NODE commands. The client
can use these defined options during a backup, archive, restore, or retrieve process.
See the Backup-Archive Clients Installation and User's Guide for detailed information
about individual client options.
To create a client option set and have the clients use the option set, perform the
following steps:
1. Create the client option set with the DEFINE CLOPTSET command.
2. Add client options to the option set with the DEFINE CLIENTOPT command.
3. Specify which clients should use the option set with the REGISTER NODE or
UPDATE NODE command.
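For example, assuming the engbackup option set that is defined in the examples that follow, and a node named MIKE (an illustrative name), step 3 might be completed with the following command:
update node mike cloptset=engbackup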
Related reference:
Connecting nodes with the server on page 426
To provide a description of the option set, issue the following example command:
define cloptset engbackup description="Backup options for eng. dept."
For a list of client options that you can specify, refer to Administrative client options
in the Administrator's Reference.
The server automatically assigns sequence numbers to the specified options, or you
can choose to specify the sequence number for order of processing. This is helpful if you have defined more than one of the same option, as in the following example:
define clientopt engbackup inclexcl "include d:\admin"
define clientopt engbackup inclexcl "include d:\payroll"
The options are processed starting with the highest sequence number.
Any include-exclude statements in the server client option set have priority over
the include-exclude statements in the local client options file. The server
include-exclude statements are always enforced and placed last in the
include-exclude list and evaluated before the client include-exclude statements. If
the server option set has several include-exclude statements, the statements are
processed starting with the first sequence number. The client can issue the QUERY
INCLEXCL command to show the include-exclude statements in the order that they
are processed. QUERY INCLEXCL also displays the source of each include-exclude
statement. For more information on the processing of the include-exclude
statements see the Backup-Archive Clients Installation and User's Guide.
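For example, the user can issue the following command from the backup-archive client command line:
dsmc query inclexcl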
The FORCE parameter allows an administrator to specify whether the server forces
the client to use an option value. This parameter has no effect on additive options
such as INCLEXCL and DOMAIN. The default value is NO. If FORCE=YES, the server
forces the client to use the value, and the client cannot override the value. The
following example shows how you can prevent a client from using subfile backup:
define clientopt engbackup subfilebackup no force=yes
Related reference:
The include-exclude list on page 490
The client node MIKE is registered with the password pass2eng. When the client
node MIKE performs a scheduling operation, his schedule log entries are kept for
five days.
Backup-archive clients are eligible for client restartable restore sessions; however,
application clients are not.
Tivoli Storage Manager can hold a client restore session in DSMC loop mode until
one of these conditions is met:
v The device class MOUNTRETENTION limit is satisfied.
v The client IDLETIMEOUT period is satisfied.
v The loop session ends.
Administrators can perform the following activities when managing IBM Tivoli
Storage Manager sessions:
Related concepts:
Managing client restartable restore sessions on page 474
time to determine how long (in seconds, minutes, or hours) the session has been in
the current state.
Administrators can display a session number with the QUERY SESSION command.
Users and administrators whose sessions have been canceled must reissue their last
command to access the server again.
If the session you cancel is currently waiting for a media mount, the mount request
is automatically canceled. If a volume associated with the client session is currently
being mounted by an automated library, the cancel may not take effect until the
mount is complete.
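For example, the following commands display the active sessions and then cancel one of them; the session number shown is only illustrative:
query session
cancel session 6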
The reasons are based on the settings of the following server options:
COMMTIMEOUT
Specifies how many seconds the server waits for an expected client
message during a transaction that causes a database update. If the length
of time exceeds this time-out, the server rolls back the transaction that was
in progress and ends the client session. The amount of time it takes for a
client to respond depends on the speed and processor load for the client
and the network load.
IDLETIMEOUT
Specifies how many minutes the server waits for a client to initiate
communication. If the client does not initiate communication with the
server within the time specified, the server ends the client session. For
example, the server prompts the client for a scheduled backup operation
but the client node is not started. Another example can be that the client
program is idle while waiting for the user to choose an action to perform
(for example, backup archive, restore, or retrieve files). If a user starts the
client session and does not choose an action to perform, the session will
time out. The client program automatically reconnects to the server when
the user chooses an action that requires server processing. A large number
of idle sessions can inadvertently prevent other users from connecting to
the server.
THROUGHPUTDATATHRESHOLD
Specifies a throughput threshold, in kilobytes per second, a client session
must achieve to prevent being cancelled after the time threshold is reached.
Throughput is computed by adding send and receive byte counts and
dividing by the length of the session. The length does not include time
spent waiting for media mounts, and starts at the time a client sends data to the server.
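The following dsmserv.opt entries are a sketch only; the values are illustrative and are not recommendations for your environment. THROUGHPUTTIMETHRESHOLD sets the time threshold, in minutes, that is mentioned above:
COMMTIMEOUT 120
IDLETIMEOUT 30
THROUGHPUTTIMETHRESHOLD 90
THROUGHPUTDATATHRESHOLD 150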
This command does not cancel sessions currently in progress or system processes
such as migration and reclamation.
To disable client node access to the server, issue the following example command:
disable sessions
You can continue to access the server, and current client activities complete unless a
user logs off or an administrator cancels a client session. After the client sessions
have been disabled, you can enable client sessions and resume normal operations
by issuing the following command:
enable sessions
You can issue the QUERY STATUS command to determine if the server is enabled or
disabled.
Related tasks:
Locking and unlocking client nodes on page 444
After a restore operation that comes directly from tape, the Tivoli Storage Manager
server does not release the mount point to IDLE status from INUSE status. The
server does not close the volume to allow additional restore requests to be made to
that volume. However, if there is a request to perform a backup in the same
session, and that mount point is the only one available, then the backup operation
When a restartable restore session is saved in the server database the file space is
locked in server storage. The following rules are in effect during the file space lock:
v Files residing on sequential volumes associated with the file space cannot be
moved.
v Files associated with the restore cannot be backed up. However, files not
associated with the restartable restore session that are in the same file space are
eligible for backup. For example, if you are restoring all files in directory A, you
can still back up files in directory B from the same file space.
To determine which client nodes have eligible restartable restore sessions, issue the
following example command:
query restore
These sessions will automatically expire when the specified restore interval has
passed.
For example:
v How and when files are backed up and archived to server storage
v How space-managed files are migrated to server storage
v The number of copies of a file and the length of time copies are kept in server
storage
IBM Tivoli Storage Manager provides a standard policy that sets rules to provide a
basic amount of protection for data on workstations. If this standard policy meets
your needs, you can begin using Tivoli Storage Manager immediately.
The server process of expiration is one way that the server enforces policies that
you define. Expiration processing determines when files are no longer needed, that
is, when the files are expired. For example, if you have a policy that requires that only
four copies of a file be kept, the fifth and oldest copy is expired. During expiration
processing, the server removes entries for expired files from the database,
effectively deleting the files from server storage.
You might need more flexibility in your policies than the standard policy provides.
To accommodate individual users' needs, you can fine-tune the STANDARD policy or create your own policies. Some types of clients or situations require special policies. For example, you may want to enable clients to restore backed-up files to a specific point in time.
The server manages files based on whether the files are active or inactive. The
most current backup or archived copy of a file is the active version. All other
versions are called inactive versions. An active version of a file becomes inactive
when:
v A new backup is made
v A user deletes that file on the client node and then runs an incremental backup
Policy determines how many inactive versions of files the server keeps, and for
how long. When files exceed the criteria, the files expire. Expiration processing can
then remove the files from the server database.
Related reference:
File expiration and expiration processing on page 481
Running expiration processing to delete expired files on page 514
Reviewing the standard policy
Related reference:
The parts of a policy on page 485
To help users take advantage of IBM Tivoli Storage Manager, you can further tune
the policy environment by performing the following tasks:
v Define sets of client options for the different groups of users.
v Help users with creating the include-exclude list. For example:
Create include-exclude lists to help inexperienced users who have simple file
management needs. One way to do this is to define a basic include-exclude
list as part of a client option set. This also gives the administrator some
control over client usage.
Provide a sample include-exclude list to users who want to specify how the
server manages their files. You can show users who prefer to manage their
own files how to:
- Request information about management classes
- Select a management class that meets backup and archive requirements
- Use include-exclude options to select management classes for their files
For information on the include-exclude list, see the user's guide for the
appropriate client.
v Automate incremental backup procedures by defining schedules for each policy
domain. Then associate schedules with client nodes in each policy domain.
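As a sketch, assuming a policy domain named ENGPOLDOM and client nodes NODE1 and NODE2 (illustrative names), the following commands define a daily incremental backup schedule and associate the nodes with it:
define schedule engpoldom daily_incr action=incremental starttime=21:00 duration=2 durunits=hours period=1 perunits=days
define association engpoldom daily_incr node1,node2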
Related tasks:
Creating client option sets on the server on page 468
Chapter 15, Scheduling operations for client nodes, on page 567
Related reference:
The include-exclude list on page 490
Other situations may also require policy changes. See Policy configuration
scenarios on page 524 for details.
To change policy that you have established in a policy domain, you must replace
the ACTIVE policy set. You replace the ACTIVE policy set by activating another
policy set.
Note: You cannot directly modify the ACTIVE policy set. If you want to make
a small change to the ACTIVE policy set, copy the policy to modify it and
follow the steps here.
2. Make any changes that you need to make to the management classes, backup
copy groups, and archive copy groups in the new policy set.
3. Validate the policy set.
4. Activate the policy set. The contents of your new policy set become the
ACTIVE policy set.
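For example, assuming a policy domain named ENGPOLDOM and a new policy set named TEST (names that are used in examples elsewhere in this chapter), steps 3 and 4 can be completed with the following commands:
validate policyset engpoldom test
activate policyset engpoldom test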
Related tasks:
Defining and updating an archive copy group on page 510
Policy configuration scenarios on page 524
Related reference:
Validating a policy set on page 512
Activating a policy set on page 513
Defining and updating a management class on page 503
Defining and updating a backup copy group on page 504
Important:
The server deletes expired files from the server database only during expiration
processing. After expired files are deleted from the database, the server can reuse
the space in the storage pools that was occupied by expired files. You should
ensure that expiration processing runs periodically to allow the server to reuse
space.
Expiration processing also removes from the database any restartable restore
sessions that exceed the time limit set for such sessions by the RESTOREINTERVAL
server option.
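For example, the following dsmserv.opt entry sets the restore interval; the value, in minutes, is illustrative:
RESTOREINTERVAL 1440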
Related concepts:
Managing client restartable restore sessions on page 474
Deletion hold on page 517
Expiration processing of base files and subfiles on page 555
Related tasks:
Reclaiming space in sequential-access storage pools on page 372
Related reference:
Running expiration processing to delete expired files on page 514
Backup
To guard against the loss of information, the backup-archive client can copy files,
subdirectories, and directories to media controlled by the server. Backups can be
controlled by administrator-defined policies and schedules, or users can request
backups of their own data.
See Backup-Archive Clients Installation and User's Guide for details on backup-archive
clients that can also back up logical volumes. The logical volume must meet some
of the policy requirements that are defined in the backup copy group.
Related reference:
Policy for logical volume backups on page 525
Restore
When a user restores a backup version of a file, the server sends a copy of the file
to the client node. The backup version remains in server storage. Restoring a
logical volume backup works the same way.
If more than one backup version exists, a user can restore the active backup
version or any inactive backup versions.
If policy is properly set up, a user can restore backed-up files to a specific time.
Restriction: If you back up or archive data with a Tivoli Storage Manager V6.3
client, you cannot restore or retrieve that data with a V6.2 or earlier client.
Related reference:
Setting policy to enable point-in-time restore for clients on page 530
When a user retrieves a file, the server sends a copy of the file to the client node.
The archived file remains in server storage.
Tivoli Storage Manager for Space Management frees space for new data and makes
more efficient use of your storage resources. The installed Tivoli Storage Manager
for Space Management product is also called the space manager client or the HSM
client. Files that are migrated and recalled with the HSM client are called
space-managed files.
For details about using Tivoli Storage Manager for Space Management, see Space
Management for UNIX and Linux User's Guide.
Tivoli Storage Manager for Space Management provides selective and automatic
migration. Selective migration lets users migrate files by name. The two types of
automatic migration are:
Threshold
If space usage exceeds a high threshold set at the client node, migration
begins and continues until usage drops to the low threshold also set at the
client node.
Demand
If an out-of-space condition occurs for a client node, migration begins and
continues until usage drops to the low threshold.
To prepare for efficient automatic migration, Tivoli Storage Manager for Space
Management copies a percentage of user files from the client node to the IBM
Tivoli Storage Manager server. The premigration process occurs whenever Tivoli
Storage Manager for Space Management completes an automatic migration. The
next time free space is needed at the client node, the files that have been
pre-migrated to the server can quickly be changed to stub files on the client. The
default premigration percentage is the difference between the high and low
thresholds.
Files are selected for automatic migration and premigration based on the number
of days since the file was last accessed and also on other factors set at the client
node.
Recall
Tivoli Storage Manager for Space Management provides selective and transparent
recall. Selective recall lets users recall files by name. Transparent recall occurs
automatically when a user accesses a migrated file.
When recalling active file versions, the server searches in an active-data storage
pool associated with a FILE device class, if such a pool exists.
Related concepts:
Active-data pools as sources of active file versions for server operations on page
253
Reconciliation
Migration and premigration can create inconsistencies between stub files on the
client node and space-managed files in server storage.
For example, if a user deletes a migrated file from the client node, the copy
remains at the server. At regular intervals set at the client node, IBM Tivoli Storage
Manager compares client node and server storage and reconciles the two by
deleting from the server any outdated files or files that do not exist at the client
node.
Figure 67 shows the parts of a policy and the relationships among the parts.
(Figure 67, not reproduced here, shows a policy domain that contains policy sets; each policy set contains management classes, and each management class can contain a backup copy group and an archive copy group.)
The numbers in the following list correspond to the numbers in the figure.
(Figure 68, not reproduced here, shows the numbered relationships among clients, the policy objects in a policy domain, and server storage: a disk storage pool with its volumes and DISK device class, and migration to a tape storage pool with its volumes, device class, library, and drives.)
Figure 68. How clients, server storage, and policy work together
1 When clients are registered, they are associated with a policy domain.
Within the policy domain are the policy set, management class, and copy
groups.
2, 3
When a client backs up, archives, or migrates a file, it is bound to a
management class. A management class and the backup and archive copy
groups within it specify where files are stored and how they are managed
when they are backed up, archived, or migrated from the client.
Figure 68 on page 487 summarizes the relationships among the physical device
environment, IBM Tivoli Storage Manager storage and policy objects, and clients.
The management classes specify whether client files are migrated to storage pools
(hierarchical storage management). The copy groups in these management classes
specify the number of backup versions retained in server storage and the length of
time to retain backup versions and archive copies.
For example, if a group of users needs only one backup version of their files, you
can create a policy domain that contains only one management class whose backup
copy group allows only one backup version. Then you can assign the client nodes
for these users to the policy domain.
Related tasks:
Registering nodes with the server on page 422
Related reference:
Contents of a management class
Default management classes on page 489
The include-exclude list on page 490
How files and directories are associated with a management class on page 491
For clients using the server for backup and archive, you can choose what a
management class contains from the following options:
Other management classes can contain copy groups tailored either for the needs of
special sets of users or for the needs of most users under special circumstances.
The options also include how the server controls symbolic links and processing
such as image, compression and encryption.
If a user does not create an include-exclude list, the following default conditions
apply:
v All files belonging to the user are eligible for backup and archive services.
v The default management class governs backup, archive, and space-management
policies.
exclude /.../core
exclude /home/ssteiner/*
include /home/ssteiner/options.scr
include /home/ssteiner/driver5/.../* mcengbk2
IBM Tivoli Storage Manager processes the include-exclude list from the bottom up,
and stops when it finds an include or exclude statement that matches the file it is
processing. Therefore, the order in which the include and exclude options are listed
affects which files are included and excluded. For example, suppose you switch the
order of two lines in the example, as follows:
include /home/ssteiner/options.scr
exclude /home/ssteiner/*
The exclude statement comes last, and excludes all files in the following directory:
v /home/ssteiner
When IBM Tivoli Storage Manager is processing the include-exclude list for the
options.scr file, it finds the exclude statement first. This time, the options.scr file
is excluded.
Some options are evaluated after the more basic include and exclude options. For
example, options that exclude or include files for compression are evaluated after
the program determines which files are eligible for the process being run.
You can create include-exclude lists as part of client options sets that you define
for clients.
For detailed information on the include and exclude options, see the user's guide
for the appropriate client.
Related tasks:
Creating client option sets on the server on page 468
The default management class is the management class identified as the default in
the active policy set.
A management class specified with a simple include option can apply to one or
more processes on the client. More specific include options (such as
include.archive) allow the user to specify different management classes. Some
examples of how this works:
v If a client backs up, archives, and migrates a file to the same server, and uses
only a single include option, the management class specified for the file applies
to all three operations (backup, archive, and migrate).
v If a client backs up and archives a file to one server, and migrates the file to a
different server, the client can specify one management class for the file for
backup and archive operations, and a different management class for migrating.
v Clients can specify a management class for archiving that is different from the
management class for backup.
See the user's guide for the appropriate client for more details.
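As a sketch, the following client include-exclude entries assume hypothetical management class names MCPROJ and MCPROJARCH; the include.archive option binds archive copies to a different management class than the one used for backup versions:
include /home/proj/.../* mcproj
include.archive /home/proj/.../* mcprojarch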
Backup versions
The server rebinds backup versions of files and logical volume images in some
cases.
The following list highlights the cases when a server rebinds backup versions of
files:
v The user changes the management class specified in the include-exclude list and
does a backup.
v An administrator activates a policy set in the same policy domain as the client
node, and the policy set does not contain a management class with the same
name as the management class to which a file is currently bound.
v An administrator assigns a client node to a different policy domain, and the
active policy set in that policy domain does not have a management class with
the same name.
Backup versions of a directory can be rebound when the user specifies a different
management class using the DIRMC option in the client option file, and when the
directory gets backed up.
The most recently backed up files are active backup versions. Older copies of your
backed up files are inactive backup versions. You can configure management classes
to save a predetermined number of copies of a file. If a management class is saving
five backup copies, there would be one active copy saved and four inactive copies
saved. If a file from one management class is bound to a different management
class that retains a lesser number of files, inactive files are deleted.
If a file is bound to a management class that no longer exists, the server uses the
default management class to manage the backup versions. When the user does
another backup, the server rebinds the file and any backup versions to the default
management class. If the default management class does not have a backup copy
group, the server uses the backup retention grace period specified for the policy
domain.
Archive copies
Archive copies are never rebound because each archive operation creates a
different archive copy. Archive copies remain bound to the management class
name specified when the user archived them.
If the default management class does not contain an archive copy group, the server
uses the archive retention grace period specified for the policy domain.
Incremental backup
Backup-archive clients can choose to back up their files using full or partial
incremental backup. A full incremental backup ensures that clients' backed-up files
are always managed according to policies. Clients are urged to use full incremental
backup whenever possible.
If the amount of time for backup is limited, clients may sometimes need to use
partial incremental backup. A partial incremental backup should complete more
quickly and require less memory. When a client uses partial incremental backup,
only files that have changed since the last incremental backup are backed up.
Attributes in the management class that would cause a file to be backed up when
doing a full incremental backup are ignored. For example, unchanged files are not
backed up even when they are assigned to a management class that specifies
absolute mode and the minimum days between backups (frequency) has passed.
The server also does less processing for a partial incremental backup. For example,
the server does not expire files or rebind management classes to files during a
partial incremental backup.
If clients must use partial incremental backups, they should periodically perform
full incremental backups to ensure that complete backups are done and backup
files are stored according to policies. For example, clients can do partial
incremental backups every night during the week, and a full incremental backup
on the weekend.
IBM Tivoli Storage Manager performs the following checks:
1. Checks each file against the user's include-exclude list:
v Files that are excluded are not eligible for backup.
v If files are not excluded and a management class is specified with the
INCLUDE option, IBM Tivoli Storage Manager uses that management class.
v If files are not excluded but a management class is not specified with the
INCLUDE option, IBM Tivoli Storage Manager uses the default management
class.
v If no include-exclude list exists, all files in the client domain are eligible for
backup, and IBM Tivoli Storage Manager uses the default management class.
Selective backup
When a user requests a selective backup, IBM Tivoli Storage Manager determines
its eligibility.
IBM Tivoli Storage Manager performs the following checks:
1. Checks the file against any include or exclude statements contained in the user
include-exclude list:
v Files that are not excluded are eligible for backup. If a management class is
specified with the INCLUDE option, IBM Tivoli Storage Manager uses that
management class.
v If no include-exclude list exists, the files selected are eligible for backup, and
IBM Tivoli Storage Manager uses the default management class.
2. Checks the management class of each included file:
v If the management class contains a backup copy group and the serialization
requirement is met, the file is backed up. Serialization specifies how files are
handled if they are modified while being backed up and what happens if
modification occurs.
v If the management class does not contain a backup copy group, the file is
not eligible for backup.
IBM Tivoli Storage Manager performs the following checks:
1. Checks the specification of the logical volume against any include or exclude
statements contained in the user include-exclude list:
v If no include-exclude list exists, the logical volumes selected are eligible for
backup, and IBM Tivoli Storage Manager uses the default management class.
v Logical volumes that are not excluded are eligible for backup. If the
include-exclude list has an INCLUDE option for the volume with a
management class specified, IBM Tivoli Storage Manager uses that
management class. Otherwise, the default management class is used.
Archive
When a user requests the archiving of a file or a group of files, IBM Tivoli
Storage Manager determines their eligibility.
IBM Tivoli Storage Manager performs the following checks:
1. Checks the files against the user's include-exclude list to see if any
management classes are specified:
v IBM Tivoli Storage Manager uses the default management class for files that
are not bound to a management class.
v If no include-exclude list exists, IBM Tivoli Storage Manager uses the default
management class unless the user specifies another management class. See
the user's guide for the appropriate client for details.
2. Checks the management class for each file to be archived.
v If the management class contains an archive copy group and the serialization
requirement is met, the file is archived. Serialization specifies how files are
handled if they are modified while being archived and what happens if
modification occurs.
v If the management class does not contain an archive copy group, the file is
not archived.
If you need to frequently create archives for the same data, consider using instant
archive (backup sets) instead. Frequent archive operations can create a large
amount of metadata in the server database resulting in increased database growth
and decreased performance for server operations such as expiration. Frequently,
you can achieve the same objectives with incremental backup or backup sets.
Although the archive function is a powerful way to store inactive data with fixed
retention, it should not be used on a frequent and large scale basis as the primary
backup method.
Related concepts:
Creating and using client backup sets on page 545
The criteria for a file to be eligible for automatic migration from an HSM client are
displayed in the following list:
v It resides on a node on which the root user has added and activated hierarchical
storage management. It must also reside in a local file system to which the root
user has added space management, and not in the root (/) or /tmp file system.
v It is not excluded from migration in the include-exclude list.
v It meets management class requirements for migration:
Note: The situation described is valid only when Space Management is installed
and configured. You can perform automatic migration only when using the Space
Management client.
For example, if the file has not been accessed for at least 30 days and a backup
version exists, the file is migrated. You can also define a management class that
allows users to selectively migrate whether or not a backup version exists. Users
can also choose to archive files that have been migrated. IBM Tivoli Storage
Manager manages the following situations:
v If the file is backed up or archived to the server to which it was migrated, the
server copies the file from the migration storage pool to the backup or archive
storage pool. For a tape-to-tape operation, each storage pool must have a tape
drive.
v If the file is backed up or archived to a different server, Tivoli Storage Manager
accesses the file by using the migrate-on-close recall mode. The file resides on
the client node only until the server stores the backup version or the archived
copy in a storage pool.
When a client restores a backup version of a migrated file, the server deletes the
migrated copy of the file from server storage the next time reconciliation is run.
When a client archives a file that is migrated and does not specify that the file is to
be erased after it is archived, the migrated copy of the file remains in server
storage. When a client archives a file that is migrated and specifies that the file is
to be erased, the server deletes the migrated file from server storage the next time
reconciliation is run.
The Tivoli Storage Manager default management class specifies that a backup
version of a file must exist before the file is eligible for migration.
Table 50 shows that an advantage of copying existing policy parts is that some
associated parts are copied in a single operation.
Table 50. Cause and effect of copying existing policy parts
If you copy this... Then you create this...
Policy Domain A new policy domain with:
v A copy of each policy set from the original domain
v A copy of each management class in each original policy set
v A copy of each copy group in each original management class
Policy Set A new policy set in the same policy domain with:
v A copy of each management class in the original policy set
v A copy of each copy group in the original management class
Management Class A new management class in the same policy set and a copy of each
copy group in the management class
The domain contains two policy sets that are named STANDARD and TEST. The
administrator activated the policy set that is named STANDARD. When you
activate a policy set, the server makes a copy of the policy set and names it
ACTIVE. Only one policy set can be active at a time.
The ACTIVE policy set contains two management classes: MCENG and
STANDARD. The default management class is STANDARD.
Related tasks:
Defining and updating an archive copy group on page 510
Related reference:
Defining and updating a policy domain
Defining and updating a policy set on page 502
Defining and updating a management class on page 503
Defining and updating a backup copy group on page 504
Assigning a default management class on page 512
Activating a policy set on page 513
Running expiration processing to delete expired files on page 514
When you copy an existing domain, you also copy any associated policy sets,
management classes, and copy groups.
For example, perform the following steps to copy and update an existing domain:
1. Copy the STANDARD policy domain to the ENGPOLDOM policy domain by
entering the following command:
copy domain standard engpoldom
ENGPOLDOM now contains the standard policy set, management class,
backup copy group, and archive copy group.
2. Update the policy domain ENGPOLDOM so that the backup retention grace
period is extended to 90 days and the archive retention grace period is
extended to two years. Specify an active-data pool as the destination for active
versions of backup data belonging to nodes assigned to the domain. Use
engactivedata as the name of the active-data pool, as in the following example:
update domain engpoldom description=Engineering Policy Domain
backretention=90 archretention=730 activedestination=engactivedata
The policies in the new policy set do not take effect unless you make the new set
the ACTIVE policy set.
Related reference:
Activating a policy set on page 513
To create the TEST policy set in the ENGPOLDOM policy domain, the
administrator performs the following steps:
1. Copy the STANDARD policy set and name the new policy set TEST:
copy policyset engpoldom standard test
Note: When you copy an existing policy set, you also copy any associated
management classes and copy groups.
2. Update the description of the policy set named TEST:
update policyset engpoldom test
description=Policy set for testing
The following four parameters apply only to HSM clients (Tivoli Storage Manager
for Space Management):
Whether space management is allowed
Specifies that the files are eligible for both automatic and selective
migration, only selective migration, or no migration.
How frequently files can be migrated
Specifies the minimum number of days that must elapse since a file was
last accessed before it is eligible for automatic migration.
Whether backup is required
Specifies whether a backup version of a file must exist before the file can
be migrated.
Where migrated files are to be stored
Specifies the name of the storage pool in which migrated files are stored.
Your choice could depend on factors such as:
v The number of client nodes migrating to the storage pool. When many
user files are stored in the same storage pool, volume contention can
occur as users try to migrate files to or recall files from the storage pool.
v How quickly the files must be recalled. If users need immediate access
to migrated versions, you can specify a disk storage pool as the
destination.
Attention: You cannot specify a copy storage pool or an active-data pool as the
destination.
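The following command is a sketch only; the management class name MCHSM, the storage pool name SMPOOL, and the parameter values are assumptions for illustration:
define mgmtclass engpoldom standard mchsm spacemgtechnique=automatic automignonuse=30 migrequiresbkup=yes migdestination=smpool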
This attribute can be one of four values: STATIC, SHRSTATIC (shared static),
DYNAMIC, or SHRDYNAMIC (shared dynamic).
The value you choose depends on how you want IBM Tivoli Storage Manager to
manage files that are modified while they are being backed up.
Do not back up files that are modified during the backup
You will want to prevent the server from backing up a file while it is being
modified. Use one of the following values:
STATIC
Specifies that IBM Tivoli Storage Manager will attempt to back up
the file only once. If the file or directory is modified during a
backup, the server does not back it up.
SHRSTATIC (Shared static)
Specifies that if the file or directory is modified during a backup,
the server retries the backup as many times as specified by the
CHANGINGRETRIES option in the client options file. If the file is
modified during the last attempt, the file or directory is not backed
up.
Back up files that are modified during the backup
Some files are in constant use, such as an error log. Consequently, these
Attention:
v If a file is modified during backup and DYNAMIC or SHRDYNAMIC is
specified, then the backup may not contain all the changes and may not
be usable. For example, the backup version may contain a truncated
record. Under some circumstances, it may be acceptable to capture a
dynamic or fuzzy backup of a file (the file was changed during the
backup). For example, a dynamic backup of an error log file that is
continuously appended may be acceptable. However, a dynamic backup
of a database file may not be acceptable, since restoring such a backup
could result in an unusable database. Carefully consider dynamic
backups of files as well as possible problems that may result from
restoring potentially fuzzy backups.
v When certain users or processes open files, they may deny any other
access, including read access, to the files by any other user or process.
When this happens, even with serialization set to DYNAMIC or
SHRDYNAMIC, IBM Tivoli Storage Manager will not be able to open
the file at all, so the server cannot back up the file.
The server considers both parameters to determine how frequently files can be
backed up. For example, if frequency is 3 and mode is Modified, a file or directory
is backed up only if it has been changed and if three days have passed since the
last backup. If frequency is 3 and mode is Absolute, a file or directory is backed up
after three days have passed whether or not the file has changed.
Use the Modified mode when you want to ensure that the server retains multiple,
different backup versions. If you set the mode to Absolute, users may find that
they have three identical backup versions, rather than three different backup
versions.
Absolute mode can be useful for forcing a full backup. It can also be useful for
ensuring that extended attribute files are backed up, because Tivoli Storage
Manager does not detect changes if the size of the extended attribute file remains
the same.
When you set the mode to Absolute, set the frequency to 0 if you want to ensure
that a file is backed up each time full incremental backups are scheduled for or
initiated by a client.
These parameters interact to determine the backup versions that the server retains.
When the number of inactive backup versions exceeds the number of versions
allowed (Versions Data Exists and Versions Data Deleted), the oldest version
expires and the server deletes the file from the database the next time expiration
processing runs. How many inactive versions the server keeps is also related to the
parameter for how long inactive versions are kept (Retain Extra Versions).
Important: A base file is not eligible for expiration until all its dependent subfiles
have been expired.
For example, see Table 51 and Figure 71. A client node has backed up the file
REPORT.TXT four times in one month, from March 23 to April 23. The settings in the
backup copy group of the management class to which REPORT.TXT is bound
determine how the server treats these backup versions. Table 52 on page 508 shows
some examples of how different copy group settings would affect the versions. The
examples show the effects as of April 24 (one day after the file was last backed
up).
Table 51. Status of REPORT.TXT as of April 24
Version       Date Created    Days the Version Has Been Inactive
Active        April 23        (not applicable)
Inactive 1    April 13        1 (since April 23)
Inactive 2    March 31        11 (since April 13)
Inactive 3    March 23        24 (since March 31)
(Figure 71, not reproduced here, shows the backup versions of REPORT.TXT created on Wednesday March 31, Tuesday April 13, and Friday April 23, bound to the backup copy group of the default management class; the April 23 version is the active version and the earlier versions are inactive versions.)
If the user deletes the REPORT.TXT file from the client node, the
server notes the deletion at the next full incremental backup of the
client node. From that point, the Versions Data Deleted and
Retain Only Version parameters also have an effect. All versions
are now inactive. Two of the four versions expire immediately (the
March 23 and March 31 versions expire). The April 13 version
expires when it has been inactive for 60 days (on June 23). The
server keeps the last remaining inactive version, the April 23
version, for 180 days after it becomes inactive.
Versions Data Exists NOLIMIT, Versions Data Deleted 2 versions, Retain Extra Versions 60 days, Retain Only Version 180 days:
Retain Extra Versions controls expiration of the versions. The
inactive versions (other than the last remaining version) are
expired when they have been inactive for 60 days.
If the user deletes the REPORT.TXT file from the client node, the
server notes the deletion at the next full incremental backup of the
client node. From that point, the Versions Data Deleted and
Retain Only Version parameters also have an effect. All versions
are now inactive. Two of the four versions expire immediately (the
March 23 and March 31 versions expire) because only two versions
are allowed. The April 13 version expires when it has been
inactive for 60 days (on June 22). The server keeps the last
remaining inactive version, the April 23 version, for 180 days after
it becomes inactive.
Versions Data Exists NOLIMIT, Versions Data Deleted NOLIMIT, Retain Extra Versions 60 days, Retain Only Version 180 days:
Retain Extra Versions controls expiration of the versions. The
server does not expire inactive versions based on the maximum
number of backup copies. The inactive versions (other than the
last remaining version) are expired when they have been inactive
for 60 days.
If the user deletes the REPORT.TXT file from the client node, the
server notes the deletion at the next full incremental backup of the
client node. From that point, the Retain Only Version parameter
also has an effect. All versions are now inactive. Three of the four versions expire after each of them has been inactive for 60
days. The server keeps the last remaining inactive version, the
April 23 version, for 180 days after it becomes inactive.
Versions Data Exists 4 versions, Versions Data Deleted 2 versions, Retain Extra Versions NOLIMIT, Retain Only Version NOLIMIT:
Versions Data Exists controls the expiration of the versions until
a user deletes the file from the client node. The server does not
expire inactive versions based on age.
If the user deletes the REPORT.TXT file from the client node, the
server notes the deletion at the next full incremental backup of the
client node. From that point, the Versions Data Deleted parameter
controls expiration. All versions are now inactive. Two of the four
versions expire immediately (the March 23 and March 31 versions
expire) because only two versions are allowed. The server keeps
the two remaining inactive versions indefinitely.
This new copy group must be able to complete the following tasks:
v Let users back up changed files, regardless of how much time has elapsed since
the last backup, using the default value 0 for the Frequency parameter
(frequency parameter not specified)
v Retain up to four inactive backup versions when the original file resides on the
user workstation, using the Versions Data Exists parameter (verexists=5)
v Retain up to four inactive backup versions when the original file is deleted from
the user workstation, using the Versions Data Deleted parameter
(verdeleted=4)
v Retain inactive backup versions for no more than 90 days, using the Retain
Extra Versions parameter (retextra=90)
v If there is only one backup version, retain it for 600 days after the original is
deleted from the workstation, using the Retain Only Version parameter
(retonly=600)
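A backup copy group that meets these requirements might be defined as follows. This is a sketch that assumes the ENGPOLDOM policy domain, the STANDARD policy set and management class, and a destination storage pool named BACKUPPOOL; the frequency parameter is omitted so that the default value 0 is used:
define copygroup engpoldom standard standard type=backup destination=backuppool verexists=5 verdeleted=4 retextra=90 retonly=600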
Note: When certain users or processes open files, they deny read access to the
files for any other user or process. When this happens, even with serialization
set to dynamic or shared dynamic, the server does not back up the file.
3. How long to retain an archived copy specifies the number of days to retain an
archived copy in storage. When the time elapses, the archived copy expires and
the server deletes the file the next time expiration processing runs.
When a user archives directories, the server uses the default management class
unless the user specifies otherwise. If the default management class does not
have an archive copy group, the server binds the directory to the management
class that currently has the shortest retention time for archive. When you
change the retention time for an archive copy group, you may also be changing
the retention time for any directories that were archived using that copy group.
The user can change the archive characteristics by using Archive Options in the
interface or by using the ARCHMC option on the command.
4. The RETMIN parameter in archive copy groups specifies the minimum number of
days an object will be retained after the object is archived. For objects that are
managed by event-based retention policy, this parameter ensures that objects
are retained for a minimum time period regardless of when an event triggers
retention
After you have defined an archive copy group, using the RETMIN=n parameter,
ensure that the appropriate archive data will be bound to the management class
with this archive copy group. You can do this either by using the default
management class or by modifying the client options file to specify the
management class for the appropriate archive data.
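As a sketch, the following command defines an event-based archive copy group; the management class name MCEVENT, the storage pool name ARCHIVEPOOL, and the retention values are assumptions for illustration:
define copygroup engpoldom standard mcevent type=archive destination=archivepool retinit=event retver=365 retmin=730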
Placing a deletion hold on an object does not extend its retention period. For
example, if an object is thirty days away from the end of its retention period
and it is placed on hold for ninety days, it will be eligible for expiration
immediately upon the hold being released.
Related concepts:
Deletion hold on page 517
Related tasks:
Using virtual volumes to store data on another server on page 737
The STANDARD management class was copied from the STANDARD policy set to
the TEST policy set. Before the new default management class takes effect, you
must activate the policy set.
Related tasks:
Example: defining a policy set on page 502
Validation fails if the policy set does not contain a default management class.
Validation results in warning messages if any of the following conditions exist.
Related reference:
How files and directories are associated with a management class on page 491
Defining and updating a policy domain on page 500
When you activate a policy set, the server performs a final validation of the
contents of the policy set and copies the original policy set to the ACTIVE policy
set.
You cannot update the ACTIVE policy set; the original and the ACTIVE policy sets
are two separate objects. For example, updating the original policy set has no effect
on the ACTIVE policy set. To change the contents of the ACTIVE policy set, you
must create or change another policy set and then activate that policy set.
If data retention protection is active, the following rules apply during policy set
validation and activation. The server can be a managed server and receive policy
definitions via enterprise configuration, but it will not be possible to activate
propagated policy sets if these rules are not satisfied.
v All management classes in the policy set to be validated and activated must
contain an archive copy group.
v If a management class exists in the active policy set, a management class with
the same name must exist in the policy set to be validated and activated.
v If an archive copy group exists in the active policy set, the corresponding copy
group in the policy set to be validated and activated must have RETVER and
RETMIN values at least as large as the corresponding values in the active copy
group.
Note:
1. A base file is not eligible for expiration until all of its dependent subfiles have
been expired.
2. An archive file is not eligible for expiration if there is a deletion hold on it. If a
file is not held, it will be handled according to existing expiration processing.
Related concepts:
Expiration processing of base files and subfiles on page 555
Deletion hold on page 517
You can set the options by editing the dsmserv.opt file (see the Administrator's
Reference).
If you use the server options file to control automatic expiration, the server runs
expiration processing each time you start the server. After that, the server runs
expiration processing at the interval you specified with the option, measured from
the start time of the server.
After issuing EXPIRE INVENTORY, expired files are deleted from the database
according to how you specify parameters on the command.
You can control how long the expiration process runs by using the DURATION
parameter with the EXPIRE INVENTORY command. You can run several (up to 40)
expiration processes in parallel by specifying RESOURCE=x, where x equals the
number of nodes that you want to process. Inventory expiration can also be
distributed across more than one resource on a file space level to help distribute
the workload for nodes with many file spaces.
You can use the DEFINE SCHEDULE command to set a specific schedule for this
command. This automatically starts inventory expiration processing. If you
schedule the EXPIRE INVENTORY command, set the expiration interval to 0 (zero) in
the server options so that the server does not run expiration processing when you
start the server.
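For example, the following sketch turns off interval-driven expiration in the server options file and schedules expiration as an administrative command instead; the schedule name and the parameter values are illustrative:
In dsmserv.opt:
EXPINTERVAL 0
From the administrative command line:
define schedule expire_nightly type=administrative cmd="expire inventory duration=120 resource=4" active=yes starttime=23:00 period=1 perunits=days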
When expiration processing runs, the server normally sends detailed messages
about policy changes made since the last time expiration processing ran. You can
reduce those messages by using the QUIET=YES parameter with the EXPIRE
INVENTORY command, or the following options:
v The EXPQUIET server option
When you use the quiet option or parameter, the server issues messages about
policy changes during expiration processing only when files are deleted, and either
the default management class or retention grace period for the domain has been
used to expire the files.
For example, securities brokers and other regulated institutions enforce retention
requirements for certain records, including electronic mail, customer statements,
trade settlements, check images and new account forms. Data retention protection
prevents deliberate or accidental deletion of data until its specified retention
criterion is met.
Retention protection can only be activated on a new server that does not already
have stored objects (backup, archive, or space-managed). Activating retention
protection applies to all archive objects subsequently stored on that server. After
retention protection has been set, the server cannot store backup objects,
space-managed objects, or backupsets. Retention protection cannot be added for an
object that was previously stored on a Tivoli Storage Manager server. After an
object is stored with retention protection, retention protection cannot be removed.
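For example, on a new server that has no stored objects, you might activate retention protection and then verify the setting; this is a sketch only:
set archiveretentionprotection on
query status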
Retention protection is based on the retention criterion for each object, which is
determined by the RETVER parameter of the archive copy group of the management
class to which the object is bound. If an object uses event-based retention, the
object does not expire until whichever date is later: the date the object was
archived plus the number of days in the RETMIN parameter, or the date the event
was signalled plus the number of days specified in the RETVER parameter. On
servers that have retention protection enabled, the following operations do not
delete objects whose retention criterion has not been satisfied:
v Requests from the client to delete an archive object
v DELETE FILESPACE (from either a client or administrative command)
v DELETE VOLUME DISCARDDATA=YES
v AUDIT VOLUME FIX=YES
Important: A cached copy of data can be deleted, but data in primary storage
pools, copy storage pools, and active-data pools can only be marked damaged
and is never deleted.
If your server has data retention protection activated, additional restrictions apply
to server operations and to how stored objects can be deleted.
The server does not send a retention value to an EMC Centera storage device if
retention protection is not enabled. In that case, you can use a Centera storage
device as a standard device from which archive and backup files can be deleted.
Related tasks:
Chapter 33, Protecting and recovering the server infrastructure and client data,
on page 917
Deletion hold
If a hold is placed on an object through the client API, the object is not deleted
until the hold is released.
See the Backup-Archive Clients Installation and User's Guide for more information.
There is no limit to how often you alternate holding and releasing an object. An
object can have only one hold on it at a time, so if you attempt to hold an object
that is already held, you will get an error message.
If an object with event-based policy is on hold, an event can still be signalled. The
hold will not extend the retention period for an object. If the retention period
specified in the RETVER and RETMIN parameters expires while the object is on hold,
the object will be eligible for deletion whenever the hold is released.
If an object is held, it will not be deleted whether or not data retention protection
is active. If an object is not held, it is handled according to existing processing such
as normal expiration, data retention protection, or event-based retention. Data that
is in deletion hold status can be exported. The hold status will be preserved when
the data is imported to another system.
Note: A cached copy of data can be deleted, but data in primary storage pools,
copy storage pools, and active-data pools can only be marked damaged and is
never deleted.
Data stored with a retention date cannot be deleted from the file system before the
retention period expires. The SnapLock feature can only be used by Tivoli Storage
Manager servers that have data retention protection enabled.
Data archived by data retention protection servers and stored to NetApp NAS file
servers is stored as Tivoli Storage Manager FILE volumes. At the end of a write
transaction, a retention date is set for the FILE volume, through the SnapLock
interface. This date is calculated by using the RETVER and RETMIN parameters of the
archive copy group used when archiving the data. Having a retention date
associated with the FILE volume gives it a characteristic of WORM media by not
allowing the data to be destroyed or overwritten until the retention date has
passed. These FILE volumes are referred to as WORM FILE volumes. After a
retention date has been set, the WORM FILE volume cannot be deleted until the
retention date has passed. System Storage Archive Manager combined with
WORM FILE volume reclamation ensures protection for the life of the data.
Storage pools can be managed either by threshold or by data retention period. The
RECLAMATIONTYPE storage pool parameter indicates that a storage pool is managed
based on a data retention period. When a traditional storage pool is queried with
the FORMAT=DETAILED parameter, this output is displayed:
Reclamation Type: THRESHOLD
Tivoli Storage Manager servers that have data retention protection enabled through
System Storage Archive Manager and have access to a NetApp filer with the
SnapLock licensed feature can define a storage pool with RECLAMATIONTYPE set
to SNAPLOCK. This means that data created on volumes in this storage pool is
managed by retention date. When a SnapLock storage pool is queried with the
FORMAT=DETAILED parameter, the output displayed indicates that the storage
pools are managed by data retention period.
Reclamation Type: SNAPLOCK
See the NetApp document Data ONTAP Storage Management Guide, which is available
from NetApp, for details on the SnapLock filer.
Attention: It is not recommended that you use this feature to protect data with a
retention period of less than three months.
Related concepts:
Data retention protection on page 516
The reclamation of a WORM FILE volume to another WORM FILE volume before
the retention date expiration ensures that data is always protected by the SnapLock
feature.
Because this protection is at a Tivoli Storage Manager volume level, the data on the
volumes can be managed by Tivoli Storage Manager policy without consideration
of where the data is stored. Data stored on WORM FILE volumes is protected both
by data retention protection and by the retention period stored with the physical
file on the SnapLock volume. If a Tivoli Storage Manager administrator issues a
command to delete the data, the command fails. If someone attempts to delete the
file through a series of network file system calls, the SnapLock feature prevents the
data from being deleted.
During reclamation processing, if the Tivoli Storage Manager server cannot move
data from an expiring SnapLock volume to a new SnapLock volume, a warning
message is issued.
Retention periods
Tivoli Storage Manager policies manage the retention time for the WORM FILE
volume. The retention of some files might exceed the retention time for the WORM
FILE volume they were stored on. This could require moving them to another
volume to ensure that the files are stored on WORM media.
Some objects on the volume might need to be retained longer than other objects on
the volume for the following reasons:
v They are bound to management classes with different retention times.
v They cannot be removed because of a deletion hold.
v They are waiting for an event to occur before expiring.
v The retention period for a copy group is increased, requiring a longer retention
time than that specified in the SnapLock feature when the WORM FILE volume
was committed.
Use the DEFINE STGPOOL command to set up a storage pool for use with the
SnapLock feature. Selecting RECLAMATIONTYPE=SNAPLOCK enables Tivoli
Storage Manager to manage FILE volumes by a retention date. After a storage pool
has been set up as a SnapLock storage pool, the RECLAMATIONTYPE parameter
cannot be updated to THRESHOLD. When a SnapLock storage pool is defined, a
check is made to ensure that the directories specified in the device class are
SnapLock WORM volumes. When a FILE device class is defined and storage pools are
created with the reclamation type of SNAPLOCK, all volumes must be WORM
volumes or the operation fails. If a device class is updated to contain additional
directories and there are SnapLock storage pools assigned to it, the same check is
made to ensure all directories are SnapLock WORM volumes.
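For example, assuming a FILE device class named SNAPCLASS whose directories reside
on SnapLock volumes, a retention-managed storage pool might be defined as follows (the
names and scratch value are illustrative):
define stgpool snappool snapclass reclamationtype=snaplock maxscratch=50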
There are three retention periods available in the NetApp SnapLock feature. These
must be configured correctly so that the Tivoli Storage Manager server can
properly manage WORM data stored in SnapLock volumes. The Tivoli Storage
Manager server sets the retention period for data being stored on NetApp
SnapLock volumes based on the values in the copy group for the data being
archived. The NetApp filer should not conflict with the ability of the Tivoli Storage
Manager server to set the retention period. The following settings are the Tivoli
Storage Manager recommendations for retention periods in the NetApp filer:
1. Minimum Retention Period. Set this to the higher of these values: 30 days, or
the minimum number of days that any copy group that stores data on NetApp
SnapLock volumes specifies for the data retention period.
2. Maximum Retention Period. Leave the default of 30 years. This allows the Tivoli
Storage Manager server to set the actual volume retention period based on the
settings in the archive copy group.
3. Default Retention Period. Set to 30 days. If you do not set this value and you do
not set the maximum retention period, each volume's retention period is set to 30
years. If this occurs, the Tivoli Storage Manager server cannot effectively manage
expiration and reuse of NetApp SnapLock volumes because no volume can be
reused for 30 years.
With the NetApp SnapLock retention periods appropriately set, Tivoli Storage
Manager can manage the data in SnapLock storage pools with maximum efficiency.
For each volume that is in a SNAPLOCK storage pool, a Tivoli Storage Manager
reclamation period is created. The Tivoli Storage Manager reclamation period has a
start date, BEGIN RECLAIM PERIOD, and an end date, END RECLAIM PERIOD.
View these dates by issuing the QUERY VOLUME command with the
FORMAT=DETAILED parameter on a SnapLock volume. For example:
Begin Reclaim Period: 09/05/2010
End Reclaim Period: 10/06/2010
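These dates might be produced by a command like the following, where the volume name
is illustrative:
query volume /snaplock/vol001 format=detailed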
When Tivoli Storage Manager archives files to a SnapLock volume, it keeps track
of the latest expiration date of those files, and the BEGIN RECLAIM PERIOD is set
to that latest expiration date. When files with a later expiration date are added to
the SnapLock volume, the start date is moved forward to match; the start date is
therefore always the latest expiration date of any file on the volume. The
expectation is that all files on the volume will have expired by the start date, so
that on the following day no valid data remains on the volume.
The END RECLAIM PERIOD is set to a month later than the BEGIN RECLAIM
PERIOD. The retention date set in the NetApp filer for that volume is set to the
END RECLAIM PERIOD date. This means the NetApp filer will prevent any
deletion of that volume until the END RECLAIM PERIOD date has passed. This is
approximately a month after the data has actually expired in the Tivoli Storage
Manager server. If an END RECLAIM PERIOD date is calculated by the Tivoli
Storage Manager server for a volume, and the date is later than the current END
RECLAIM PERIOD, the new date will be reset in the NetApp filer for that volume
to the later date. This guarantees that the Tivoli Storage Manager WORM FILE
volume will not be deleted until all data on the volume has expired, or the data
has been moved to another SnapLock volume.
The Tivoli Storage Manager reclamation period is the amount of time between the
begin date and the end date. It is also the time period which the Tivoli Storage
Manager server has to delete volumes on which all the data has expired, or to
move files which have not expired on expiring SnapLock volumes to new
SnapLock volumes with new dates. This month is critical to how the server safely
and efficiently manages the data on WORM FILE volumes. Data on a SnapLock
volume typically expires by the time the beginning date arrives, and the volume
can then be deleted.
However, some events may occur which mean that there is still valid data on a
SnapLock volume:
1. Expiration processing in the Tivoli Storage Manager server for that volume may
have been delayed or has not completed yet.
2. The retention parameters on the copy group or associated management classes
may have been altered for a file after it was archived, and that file is not going
to expire for some period of time.
3. A deletion hold may have been placed on one or more of the files on the
volume.
4. Reclamation processing has either been disabled or is encountering errors
moving data to new SnapLock volumes on a SnapLock storage pool.
5. A file is waiting for an event to occur before the Tivoli Storage Manager server
can begin the expiration of the file.
If there are files which have not expired on a SnapLock volume when the
beginning date arrives, they must be moved to a new SnapLock volume with a
new begin and end date. This will properly protect that data. However, if
expiration processing on the Tivoli Storage Manager server has been delayed, and
those files will expire as soon as expiration processing on the Tivoli Storage
Manager server runs, it is inefficient to move those files to a new SnapLock
volume. To ensure that unnecessary data movement does not occur for files which
are due to expire, movement of files on expiring SnapLock volumes will be
delayed some small number of days after the BEGIN RECLAIM PERIOD date.
Since the data is protected in the SnapLock filer until the END RECLAIM PERIOD
date, there is no risk to the data in delaying this movement. This allows Tivoli
Storage Manager expiration processing to complete. After that number of days, if
there is still valid data on an expiring SnapLock volume, it will be moved to a new
SnapLock volume, thus continuing the protection of the data.
Since the data was initially archived, there may have been changes in the retention
parameters for that data (for example, changes in the management class or copy
group parameters) or there may be a deletion hold on that data. However, the data
on that volume will only be protected by SnapLock until the END RECLAIM
PERIOD date. Data that has not expired is moved to new SnapLock volumes
during the Tivoli Storage Manager reclamation period. If errors occur moving data
to a new SnapLock volume, a distinct warning message is issued indicating that
the data will soon be unprotected. If the error persists, it is recommended that you
issue a MOVE DATA command for the problem volume.
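For example, assuming the problem volume is named /snaplock/vol001 (an illustrative
name), you might issue:
move data /snaplock/vol001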
You can avoid this situation by using the RETENTIONEXTENSION server option. This
option allows the server to set or extend the retention date of a SnapLock volume.
You can specify from 30 to 9999 days. The default is 365 days.
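For example, the option might be set in the dsmserv.opt file as follows (the value
shown is the default):
retentionextension 365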
When selecting volumes in a SnapLock storage pool for reclamation, the server
checks if the volume is within the reclamation period.
v If the volume is not within the reclamation period, no action is taken. The
volume is not reclaimed, and the retention date is unchanged.
v If the volume is within the reclamation period, the server checks if the percent of
reclaimable space on the volume is greater than the reclamation threshold of the
storage pool or of the threshold percentage passed in on the THRESHOLD
parameter of a RECLAIM STGPOOL command.
If the reclaimable space is greater than the threshold, the server reclaims the
volume and sets the retention date of the target volume to the greater of
these values:
- The remaining retention time of the data plus 30 days for the reclamation
period.
- The RETENTIONEXTENSION value plus 30 days for the reclamation period.
If the reclaimable space is not greater than the threshold, the server resets the
retention date of the volume by the amount specified in the
RETENTIONEXTENSION option. The new retention period is calculated by adding
the number of days specified to the current date.
The Tivoli Storage Manager server allows data to be moved off WORM FILE volumes.
However, if data is moved from a WORM FILE volume to another type of media, the data may no
longer be protected from inadvertent or malicious deletion. If this data is on
WORM volumes to meet data retention and protection requirements for certain
legal purposes and is moved to other media, the data may no longer meet those
requirements. You should configure your storage pools so this type of data is kept
in storage pools which consist of SnapLock WORM volumes during the entire data
retention period.
When you configure the storage pools this way, you ensure that your data is
properly protected. If you define a next, reclaim, copy storage pool, or active-data
pool without selecting the RECLAMATIONTYPE=SNAPLOCK option, you will not have a
protected storage pool. The command succeeds, but a warning message is issued.
Complete the following steps to set up a SnapLock volume for use as a Tivoli
Storage Manager WORM FILE volume:
1. Install and set up SnapLock on the NetApp filer. See NetApp documentation
for more information.
2. Properly configure the minimum, maximum, and default retention periods. If
these retention periods are not configured properly, Tivoli Storage Manager will
not be able to properly manage the data and volumes.
3. Install and configure a Tivoli Storage Manager server with data retention
protection. Ensure that archive retention protection is activated with the SET
ARCHIVERETENTIONPROTECTION command.
4. Set up policy by using the DEFINE COPYGROUP command. Select RETVER and
RETMIN values in the archive copy group that meet your requirements for
protecting this data in WORM storage. If the RETVER or RETMIN values are not
set, the default management class values are used. A combined example of steps 4
through 7 follows this procedure.
5. Set up storage by using the DEFINE DEVCLASS command.
v Use the FILE device class.
v Specify the DIRECTORY parameter to point to the directory or directories on
the SnapLock volumes.
6. Define a storage pool using the device class you defined above.
v Specify RECLAMATIONTYPE=SNAPLOCK.
7. Update the copy group to point to the storage pool you just defined.
8. Use the Tivoli Storage Manager API to archive your objects into the SnapLock
storage pool. This feature is not available on standard Tivoli Storage Manager
backup-archive clients.
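The following commands are a minimal sketch of steps 4 through 7, with storage defined
before policy so that the copy group can point directly to the SnapLock storage pool.
All names (SNAPDOM, SNAPSET, SNAPMC, SNAPCLASS, SNAPPOOL) and the 2555-day retention
value are illustrative:
define devclass snapclass devtype=file directory=/snaplock/dir1,/snaplock/dir2
define stgpool snappool snapclass reclamationtype=snaplock maxscratch=50
define domain snapdom
define policyset snapdom snapset
define mgmtclass snapdom snapset snapmc
define copygroup snapdom snapset snapmc type=archive destination=snappool retver=2555
assign defmgmtclass snapdom snapset snapmc
activate policyset snapdom snapset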
If you back up directly to tape, the number of clients that can back up data at the
same time is equal to the number of drives available to the storage pool (through
the mount limit of the device class). For example, if you have one drive, only one
client at a time can back up data.
The direct-to-tape backup eliminates the need to migrate data from disk to tape.
However, performance of tape drives is often lower when backing up directly to
tape than when backing up to disk and then migrating to tape. Backing up data
directly to tape usually means more starting and stopping of the tape drive.
Backing up to disk then migrating to tape usually means the tape drive moves
more continuously, meaning better performance.
To use the Tivoli Storage Manager Console, complete the following steps:
1. Double-click the desktop icon for the Tivoli Storage Manager Console.
2. Expand the tree until the Tivoli Storage Manager server you want to work with
is displayed. Expand the server and click Wizards. The list of wizards appears
in the right pane.
3. Select the Client Node Configuration wizard and click Start. The Client Node
Configuration wizard appears.
4. Progress through the wizard to the Define Tivoli Storage Manager client nodes
and policy page.
5. By default, client nodes are associated with BACKUPPOOL. This storage pool
is set to immediately migrate any data it receives. Drag BACKUPPOOL and
drop it on a tape storage pool.
Note: You can also select a client, click Edit > New to create a new policy
domain that will send client data directly to any storage pool.
6. Finish the wizard.
1. Copy an existing policy domain, for example by using the COPY DOMAIN
command to copy the STANDARD policy domain to a new domain named
DIR2TAPE. This creates the DIR2TAPE policy domain, which contains a default
policy set, management class, and backup and archive copy group, each named
STANDARD.
2. Update the backup or archive copy group in the DIR2TAPE policy domain to
specify the destination to be a tape storage pool. For example, to use a tape
storage pool named TAPEPOOL for backup, issue the following command:
update copygroup dir2tape standard standard destination=tapepool
To use a tape storage pool named TAPEPOOL for archive, issue the following
command:
update copygroup dir2tape standard standard type=archive
destination=tapepool
3. Activate the changed policy set.
activate policyset dir2tape standard
4. Assign client nodes to the DIR2TAPE policy domain. For example, to assign a
client node named TAPEUSER1 to the DIR2TAPE policy domain, issue the
following command:
update node tapeuser1 domain=dir2tape
The Versions Data Exists, Versions Data Deleted, and Retain Extra Versions
parameters work together to determine over what time period a client can restore a
logical volume image and reconcile later file backups. Also, you may have server
storage constraints that require you to control the number of backup versions
allowed for logical volumes. The server handles logical volume backups the same
as regular incremental or selective backups. Logical volume backups differ from
selective, incremental, or archive operations in that each file space that is backed
up is treated as a single large file.
For example, a user backs up a logical volume, and the following week deletes one
or more files from the volume. At the next incremental backup, the server records
in its database that the files were deleted from the client. When the user restores
the logical volume, the program can recognize that files have been deleted since
the backup was created. The program can delete the files as part of the restore
process. To ensure that users can use the capability to reconcile later incremental
backups with a restored logical volume, you need to ensure that you coordinate
policy for incremental backups with policy for backups for logical volumes.
For example, you decide to ensure that clients can choose to restore files and
logical volumes from any time in the previous 60 days. You can create two
management classes, one for files and one for logical volumes. Table 53 shows the
relevant parameters. In the backup copy group of both management classes, set the
Retain Extra Versions parameter to 60 days.
In the management class for files, set the parameters so that the server keeps
versions based on age rather than how many versions exist. More than one backup
version of a file may be stored per day if clients perform selective backups or if
clients perform incremental backups more than once a day. The Versions Data
Exists parameter and the Versions Data Deleted parameter control how many of
these versions are kept by the server. To ensure that any number of backup
versions are kept for the required 60 days, set both the Versions Data Exists
parameter and the Versions Data Deleted parameter to NOLIMIT for the
management class for files. This means that the server retains backup versions
based on how old the versions are, instead of how many backup versions of the
same file exist.
For logical volume backups, the server ignores the frequency attribute in the
backup copy group.
Table 53. Example of backup policy for files and logical volumes
Parameter (backup copy group      Management Class      Management Class for
in the management class)          for Files             Logical Volumes
Versions Data Exists              NOLIMIT               3 versions
Versions Data Deleted             NOLIMIT               1
Retain Extra Versions             60 days               60 days
Retain Only Version               120 days              120 days
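A sketch of backup copy group definitions that correspond to Table 53, assuming a
hypothetical policy domain RESTOREDOM with policy set STANDARD, management classes
FILESMC and VOLSMC, and a destination storage pool named BACKUPPOOL:
define copygroup restoredom standard filesmc type=backup destination=backuppool
verexists=nolimit verdeleted=nolimit retextra=60 retonly=120
define copygroup restoredom standard volsmc type=backup destination=backuppool
verexists=3 verdeleted=1 retextra=60 retonly=120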
The Tivoli Storage Manager server initiates the backup, allocates a drive, and
selects and mounts the media. The NAS file server then transfers the data to tape.
Because the NAS file server performs the backup, the data is stored in its own
format. For most NAS file servers, the data is stored in the NDMPDUMP data
format. For NetApp file servers, the data is stored in the NETAPPDUMP data
format. For EMC file servers, the data is stored in the CELERRADUMP data
format. To manage NAS file server image backups, copy groups for NAS nodes
must point to a storage pool that has a data format of NDMPDUMP,
NETAPPDUMP, or CELERRADUMP.
The following backup copy group attributes are ignored for NAS images:
v Frequency
v Mode
v Retain Only Versions
v Serialization
v Versions Data Deleted
To set up the required policy for NAS nodes, you can define a new, separate policy
domain.
Backups for NAS nodes can be initiated from the server, or from a client that has
at least client owner authority over the NAS node. For client-initiated backups, you
can use client option sets that contain include and exclude statements to bind NAS
file system or directory images to a specific management class. The valid options
that can be used for a NAS node are: include.fs.nas, exclude.fs.nas, and
domain.nas. NAS backups initiated from the Tivoli Storage Manager server with
the BACKUP NODE command ignore client options specified in option files or client
option sets. For details on the options see the Backup-Archive Clients Installation and
User's Guide for your particular client platform.
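For example, a client option of the following form (the NAS node name NASNODE1, file
system /vol/vol1, and management class NASMC are hypothetical) binds backups of that
file system to a specific management class:
include.fs.nas nasnode1/vol/vol1 nasmc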
When the Tivoli Storage Manager server creates a table of contents (TOC), you can
view a collection of individual files and directories backed up via NDMP and
select which to restore. To establish where to send data and store the table of
contents, policy should be set so that:
v Image backup data is sent to a storage pool with an NDMPDUMP,
NETAPPDUMP, or CELERRADUMP format.
v The table of contents is sent to a storage pool with either NATIVE or
NONBLOCK format.
Related tasks:
Creating client option sets on the server on page 468
Related reference:
Chapter 9, Using NDMP for operations with NAS file servers, on page 215
The storage agent transfers data between the client and the storage device. See
Storage Agent User's Guide for details. See the Web site for details on clients that
support the feature: http://www.ibm.com/support/entry/portal/Overview/
Software/Tivoli/Tivoli_Storage_Manager.
One task in configuring your systems to use this feature is to set up policy for the
clients. Copy groups for these clients must point to the storage pool that is
associated with the SAN devices. If you have defined a path from the client to a
drive on the SAN, drives in this storage pool can then use the SAN to send data
directly to the device for backup, archive, restore, and retrieve.
To set up the required policy, either define a new, separate policy domain, or
define a new management class in an existing policy domain.
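As a sketch of the second approach, assuming a hypothetical storage pool SANPOOL that
is associated with the SAN devices and a new management class named SANCLIENTMC in
the STANDARD domain, you might issue:
define mgmtclass standard standard sanclientmc
define copygroup standard standard sanclientmc type=backup destination=sanpool
activate policyset standard standard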
Related tasks:
Define a new policy domain
Configuring IBM Tivoli Storage Manager for LAN-free data movement on page
134
Related reference:
Define a new management class in an existing policy domain on page 529
Because the new management class is not the default for the policy domain, you
must add an include statement to each client options file to bind objects to that
management class.
For example, suppose sanclientmc is the name of the management class that you
defined for clients that are using devices on a SAN. You want the client to be able
to use the SAN for backing up any file on the c drive. Put the following line at the
end of the client's include-exclude list:
include c:* sanclientmc
For details on the include-exclude list, see Backup-Archive Clients Installation and
User's Guide.
In the default management class, the destination for the archive copy group
determines where the target server stores data for the source server. Other policy
specifications, such as how long to retain the data, do not apply to data stored for
a source server.
Related tasks:
Using virtual volumes to store data on another server on page 737
For example, you decide to ensure that clients can choose to restore files from
any time in the previous 60 days. In the backup copy group, set the Retain Extra
Versions parameter to 60 days. More than one backup version of a file may be
stored per day if clients perform selective backups or if clients perform incremental
backups more than once a day. The Versions Data Exists parameter and the
Versions Data Deleted parameter control how many of these versions are kept by
the server. To ensure that any number of backup versions are kept for the required
60 days, set both the Versions Data Exists parameter and the Versions Data
Deleted parameter to NOLIMIT. This means that the server essentially determines
the backup versions to keep based on how old the versions are, instead of how
many backup versions of the same file exist.
Keeping backed-up versions of files long enough to allow clients to restore their
data to a point in time can mean increased resource costs. Requirements for server
storage increase because more file versions are kept, and the size of the server
database increases to track all of the file versions. Because of these increased costs,
you may want to choose carefully which clients can use the policy that allows for
point-in-time restore operations.
Clients need to run full incremental backup operations frequently enough so that
IBM Tivoli Storage Manager can detect files that have been deleted on the client
file system. Only a full incremental backup can detect whether files have been
deleted since the last backup. If full incremental backup is not done often enough,
clients who restore to a specific time may find that many files that had actually
been deleted from the workstation get restored. As a result, a client's file system
may run out of space during a restore process.
Important: The server will not attempt to retrieve client files from an active-data
pool during a point-in-time restore. Point-in-time restores require both active and
inactive file versions. Active-data pools contain only active file versions. For
optimal efficiency during point-in-time restores and to avoid switching between
active-data pools and primary or copy storage pools, the server retrieves both
active and inactive versions from the same storage pool and volumes.
To distribute policy, you associate a policy domain with a profile. Managed servers
that subscribe to the profile then receive the following definitions:
v The policy domain itself
v Policy sets in that domain, except for the ACTIVE policy set
v Management classes in the policy sets
v Backup and archive copy groups in the management classes
v Client schedules associated with the policy domain
The names of client nodes and client-schedule associations are not distributed. The
ACTIVE policy set is also not distributed.
The distributed policy becomes managed objects (policy domain, policy sets,
management classes, and so on) defined in the database of each managed server.
To use the managed policy, you must activate a policy set on each managed server.
If storage pools specified as destinations in the policy do not exist on the managed
server, you receive messages pointing out the problem when you activate the
policy set. You can create new storage pools to match the names in the policy set,
or you can rename existing storage pools.
On the managed server you also must associate client nodes with the managed
policy domain and associate client nodes with schedules.
Related tasks:
Setting up enterprise configurations on page 709
Querying policy
You can request information about the contents of policy objects. You might want
to do this before creating new objects or when helping users to choose policies that
fit their needs.
You can specify the output of a query in either standard or detailed format. The
examples in this section are in standard format.
On a managed server, you can see whether the definitions are managed objects.
Request the detailed format in the query and check the contents of the Last
update by (administrator) field. For managed objects, this field contains the string
$$CONFIG_MANAGER$$.
Issue the following command to request information about the backup copy group
(the default) in the ENGPOLDOM engineering policy domain:
query copygroup engpoldom * *
The output shows that the ACTIVE policy set contains two backup copy groups
that belong to the MCENG and STANDARD management classes.
A similar query of the management classes in the domain shows that the ACTIVE
policy set contains the MCENG and STANDARD management classes.
Issue the following command to request information about policy sets in the
ENGPOLDOM engineering policy domain:
query policyset engpoldom *
The output shows an ACTIVE policy set and two inactive policy sets, STANDARD
and TEST.
Issue the following command to request information about a policy domain (for
example, to determine if any client nodes are registered to that policy domain):
query domain *
The output shows that both the ENGPOLDOM and STANDARD policy domains
have client nodes assigned to them.
Deleting policy
When you delete a policy object, you also delete any objects belonging to it. For
example, when you delete a management class, you also delete the copy groups in
it.
You cannot delete the ACTIVE policy set or objects that are part of that policy set.
You can delete the policy objects named STANDARD that come with the server.
However, all STANDARD policy objects are restored whenever you reinstall the
server.
Related concepts:
Protection and expiration of archive data on page 516
For example, to delete the backup and archive copy groups belonging to the
MCENG and STANDARD management classes in the STANDARD policy set,
enter:
delete copygroup engpoldom standard mceng type=backup
delete copygroup engpoldom standard standard type=backup
delete copygroup engpoldom standard mceng type=archive
delete copygroup engpoldom standard standard type=archive
For example, to delete the MCENG and STANDARD management classes from the
STANDARD policy set, enter:
delete mgmtclass engpoldom standard mceng
delete mgmtclass engpoldom standard standard
When you delete a management class from a policy set, the server deletes the
management class and all copy groups that belong to the management class in the
specified policy domain.
For example, to delete the TEST policy set from the ENGPOLDOM policy domain,
enter:
delete policyset engpoldom test
When you delete a policy set, the server deletes all management classes and copy
groups that belong to the policy set within the specified policy domain.
The ACTIVE policy set in a policy domain cannot be deleted. You can replace the
contents of the ACTIVE policy set by activating a different policy set. Otherwise,
the only way to remove the ACTIVE policy set is to delete the policy domain that
contains the policy set.
Before you delete a policy domain, move any client nodes to another policy
domain, or delete the nodes.
When you delete a policy domain, the server deletes the policy domain and all
policy sets (including the ACTIVE policy set), management classes, and copy
groups that belong to the policy domain.
Related reference:
How files and directories are associated with a management class on page 491
Tasks:
Validating a node's data during a client session on page 538
Securing communications on page 885
Encrypting data on tape on page 538
Setting up shredding on page 542
Generating client backup sets on the server on page 546
Restoring backup sets from a backup-archive client on page 550
Moving backup sets to other servers on page 550
Managing client backup sets on page 551
Enabling clients to use subfile backup on page 554
Optimizing restore operations for clients on page 556
Managing storage usage for archives on page 564
Concepts:
Performance considerations for data validation on page 538
Securing sensitive client data on page 541
Creating and using client backup sets on page 545
Cyclic redundancy checking is performed at the client when the client requests
services from the server. For example, the client issues a query, backup, or archive
request. The server also performs a CRC operation on the data sent by the client
and compares its value with the value calculated by the client. If the CRC values
do not match, the server will issue an error message once per session. Depending
on the operation, the client may attempt to automatically retry the operation.
After Tivoli Storage Manager completes the data validation, the client and server
discard the CRC values generated in the current session.
Data validation can be enabled for one or all of the following items:
v Tivoli Storage Manager client nodes.
v Tivoli Storage Manager storage agents. For details, refer to the Storage Agent
User's Guide for your particular operating system.
Methods for enabling data validation for a node include choosing data validation
for individual nodes, specifying a set of nodes by using a wildcard search string,
or specifying a group of nodes in a policy domain.
For example, to enable data validation for an existing node, ED, you can issue an
UPDATE NODE command. This user backs up the company payroll records weekly,
and you have decided that it is necessary to validate all of the user's data: the data
itself and the metadata.
update node ed validateprotocol=all
Later, the network has proven to be stable and no data corruption has been
identified when user ED has processed backups. You can then disable data
validation to minimize the performance impact of validating all of ED's data
during a client session. For example:
update node ed validateprotocol=no
IBM tape technology supports different methods of drive encryption for the
following devices:
v IBM 3592 generation 2 and generation 3
v IBM linear tape open (LTO) generation 4 and generation 5
Application encryption
Encryption keys are managed by the application, in this case, Tivoli
Storage Manager. Tivoli Storage Manager generates and stores the keys in
the server database. Data is encrypted during WRITE operations, when the
encryption key is passed from the server to the drive. Data is decrypted for
READ operations.
The methods of drive encryption that you can use with Tivoli Storage Manager are
set up at the hardware level. Tivoli Storage Manager cannot control or change
which encryption method is used in the hardware configuration. If the hardware is
set up for the application encryption method, Tivoli Storage Manager can turn
encryption on or off depending on the DRIVEENCRYPTION value on the device
class. For more information about specifying this parameter, see the following
topics:
v Encrypting data with drives that are 3592 generation 2 and later on page 198
v Encrypting data using LTO generation 4 tape drives on page 205
v Enabling ECARTRIDGE drive encryption on page 208 and Disabling
ECARTRIDGE drive encryption on page 208
This method allows Tivoli Storage Manager to manage the encryption keys. When
using Application encryption, you must take extra care to secure database backups
since the encryption keys are stored in the server database. Without access to
database backups and matching encryption keys, you will not be able to restore
your data.
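For example, to let the server manage encryption keys for LTO generation 4 drives
through the application method, the DRIVEENCRYPTION parameter can be set on the
device class; the device class and library names here are illustrative:
define devclass lto4class devtype=lto library=lib1 format=ultrium4c driveencryption=on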
If you want to encrypt all of your data in a particular logical library or encrypt
data on more than just storage pool volumes, the System or Library method can be
used.
Library managed encryption allows you to control which volumes are encrypted
through the use of their serial numbers. You can specify a range or set of volumes
to encrypt. With Application managed encryption, you can create dedicated storage
pools that only contain encrypted volumes. This way, you can use storage pool
hierarchies and policies to manage the way data is encrypted.
The Library and System methods of encryption can share the same encryption key
manager, which allows the two modes to be interchanged. However, this can only
occur if the encryption key manager is set up to share keys. Tivoli Storage
Manager cannot currently verify if encryption key managers for both methods are
the same. Neither can Tivoli Storage Manager share or use encryption keys
between the application method and either library or system methods of
encryption.
To determine whether or not a volume is encrypted and which method was used,
you can issue the QUERY VOLUME command with FORMAT=DETAILED. For more
information on data encryption using the backup-archive client, see the
Backup-Archive Clients Installation and User's Guide.
For example, if you currently have Application managed encryption enabled, and
you decide that you don't want encryption enabled at all, only empty volumes will
be impacted by the change. Filling volumes will continue to be encrypted while
new volumes will not. If you do not want currently filling volumes to continue
being encrypted, the volume status should be changed to READONLY. This will
ensure that Tivoli Storage Manager does not append any more encrypted data to
the volumes. You can use the MOVE DATA command to transfer the data to a new
volume after the update of the DRIVEENCRYPTION parameter. The data will then
be available in an unencrypted format.
When migrating from one hardware configuration to another, you will need to
move your data from the old volumes to new volumes with new encryption keys
and key managers. You can do this by setting up two logical libraries and storage
pools (each with a different encryption method) and migrating the data from the
old volumes to the new volumes. This will eliminate volumes that were encrypted
using the original method. Assume that you have volumes that were encrypted
using the Library method and you want to migrate to the Application method.
Tivoli Storage Manager will be unable to determine which encryption keys are
needed for data on these volumes because the library's encryption key manager
stores these keys and Tivoli Storage Manager does not have access to them.
Table 54 on page 541 illustrates considerations for changing your hardware
encryption method.
Restriction: If encryption is enabled for a device class, and the device class is
associated with a storage pool, the storage pool should not share a scratch pool
with other device classes that cannot be encrypted. If a tape is encrypted, and you
plan to use it on a drive that cannot be encrypted, you must manually relabel the
tape before it can be used on that drive.
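For example, one way to relabel such a tape in an automated library might be the
LABEL LIBVOLUME command; the library and volume names shown are illustrative:
label libvolume lib1 vol001 overwrite=yes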
This process increases the difficulty of discovering and reconstructing the data
later. Tivoli Storage Manager performs shredding only on data in random-access
disk storage pools. You can configure the server to ensure that sensitive data is
stored only in storage pools in which shredding is enforced (shred pools).
Shredding occurs only after a data deletion commits, but it is not necessarily
completed immediately after the deletion. The space occupied by the data to be
shredded remains occupied while the shredding takes place, and is not available as
free space for new data until the shredding is complete. When sensitive data is
written to server storage and the write operation fails, the data that was already
written is shredded.
Shredding can be done either automatically after the data is deleted or manually
by command. The advantage of automatic shredding is that it is performed
without administrator intervention whenever deletion of data occurs. This limits
the time that sensitive data might be compromised. Automatic shredding also
limits the time that the space used by deleted data is occupied. The advantage of
manual shredding is that it can be performed when it will not interfere with other
server operations.
Setting up shredding
You must configure Tivoli Storage Manager so that data identified as sensitive is
stored only in storage pools that will enforce shredding after that data is deleted.
1. Specify whether data is to be shredded automatically after it is deleted or
manually by an administrator, by setting the SHREDDING server option in the
server options file. You can also set the shredding option dynamically by using the
SETOPT command.
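For example, the option might appear in the dsmserv.opt file as shredding automatic,
or be set dynamically with:
setopt shredding automatic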
2. Set up one or more random access disk storage pool hierarchies that will
enforce shredding and specify how many times the data is to be overwritten
after deletion. For example,
define stgpool shred2 disk shred=5
define stgpool shred1 disk nextstgpool=shred2 shred=5
3. Define volumes to those pools, and specify disks for which write caching can
be disabled.
define volume shred1 /var/storage/bf.dsm formatsize=100
define volume shred2 /var/storage/bg.dsm formatsize=100
4. Define and activate a policy for the sensitive data. The policy will bind the data
to a management class whose copy groups specify shred storage pools.
define domain shreddom
define policyset shreddom shredpol
define mgmtclass shreddom shredpol shredclass
define copygroup shreddom shredpol shredclass type=backup
destination=shred1
define copygroup shreddom shredpol shredclass type=archive
destination=shred1
activate policyset shreddom shredpol
5. Identify those client nodes whose data should be shredded after deletion, and
assign them to the new domain.
update node engineering12 domain=shreddom
If you have specified manual shredding with the SHREDDING server option, you can
start the shredding process by issuing the SHRED DATA command. This command
lets you specify how long the process will run before it is canceled and how the
process responds to an I/O error during shredding. For objects that cannot be
shredded, the server reports each object.
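For example, the following command (the duration value is illustrative) runs manual
shredding for at most 60 minutes:
shred data duration=60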
To see the status and amount of data waiting to be shredded, you can issue the
QUERY SHREDSTATUS command. The server reports a summary of the number and
size of objects waiting to be shredded. To display detailed information about data
shredding on the server, issue the following command:
query shredstatus format=detailed
When data shredding completes, a message is issued that reports the amount of
data that was successfully shredded and the amount of data that was skipped, if
any.
Some changes to objects and some server operations involving the moving or
copying of data could result in sensitive data that cannot be shredded. This would
compromise the intent and value of shredding.
Currently, the backup object types supported for backup sets include directories,
files, and image data. If you are upgrading from Tivoli Storage Manager Express,
backup sets can also contain data from Data Protection for Microsoft SQL and Data
Protection for Microsoft Exchange servers. The backup set process is also called
instant archive.
You can generate backup sets on the server for individual client nodes or for
groups of nodes. A node group is a group of client nodes that are acted upon as a
single entity. If you specify one or more node groups, the server generates a
backup set for each node and places all of the backup sets together on a single set
of output volumes. To create a node group, use the DEFINE NODEGROUP
command, and then use the DEFINE NODEGROUPMEMBER command to add nodes to
the group. For details, see the Administrator's Reference. The client node for which a
backup set is generated must be registered to the server.
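For example, the following commands (all names are hypothetical) create a node group,
add two member nodes, and generate backup sets for the group onto a single set of
output volumes:
define nodegroup payrollgrp
define nodegroupmember payrollgrp node1,node2
generate backupset payrollgrp payrollset * devclass=tapeclass scratch=yes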
The media may be directly readable by a device such as the following:
v A CD-ROM, JAZ, or ZIP drive attached to a client's computer.
While an administrator can generate a backup set from any client's backed up files,
backup sets can only be used by a backup-archive client.
You cannot generate a backup set with files that were backed up to Tivoli Storage
Manager using NDMP. However, you can create a backup set with files that were
backed up using NetApp SnapShot Difference.
When generating backup sets, the server searches for active file versions in an
active-data storage pool associated with a FILE device class, if such a pool exists.
For details about the complete storage-pool search-and-selection order, see
Active-data pools as sources of active file versions for server operations on page
253.
Data from a shred storage pool will not be included in a backup set unless you
explicitly permit it by setting the ALLOWSHREDDABLE parameter to YES in the
GENERATE BACKUPSET command. If this value is specified, and the client node data
includes data from shred pools, that data cannot be shredded. The server will not
issue a warning if the backup set operation includes data from shred pools. See
Securing sensitive client data on page 541 for more information about shredding.
For details about creating and using backup sets, see the following sections:
v Generating client backup sets on the server on page 546
Generate backup set processing attempts to process all available objects onto the
backup set media. However, objects may be skipped due to being unavailable on
the server or other errors (I/O, media, hardware) that can occur at the time of
backup set generation. Some errors may lead to termination of processing before
all available data can be processed. For example, if the source data for a backup set
is on multiple sequential volumes and the second or subsequent segment of an
object spanning volumes is on a volume that is unavailable, processing is
terminated.
If objects are skipped or other problems occur to terminate processing, review all
of the messages associated with the process to determine whether or not it should
be run again. To obtain a complete backup set, correct any problems that are
indicated and reissue the GENERATE BACKUPSET command.
To improve performance when generating backup sets, you can do one or both of
the following tasks:
v Collocate the primary storage pool in which the client node data is stored. If a
primary storage pool is collocated, client node data is likely to be on fewer tape
volumes than it would be if the storage pool were not collocated. With
collocation, less time is spent searching database entries, and fewer mount
operations are required.
v Store active backup data in an active-data pool associated with a FILE device
class. When generating a backup set, the server will search this type of
active-data pool for active file versions before searching other possible sources.
You can write backup sets to sequential media: sequential tape and device class
FILE. The tape volumes containing the backup set are not associated with storage
pools and, therefore, are not migrated through the storage pool hierarchy.
For device class FILE, the server creates each backup set with a file extension of
OST. You can copy FILE device class volumes to removable media that is
associated with CD-ROM, JAZ, or ZIP devices, by using the REMOVABLEFILE
device type.
You can determine whether to use scratch volumes when you generate a backup
set. If you do not use specific volumes, the server uses scratch volumes for the
backup set.
You can use specific volumes for the backup set. If there is not enough space to
store the backup set on the volumes, the server uses scratch volumes to store the
remainder of the backup set.
Consider the following items when you select a device class for writing the backup
set:
v Generate the backup set on any sequential access devices whose device types are
supported on both the client and server. If you do not have access to compatible
devices, you will need to define a device class for a device type that is
supported on both the client and server.
v Ensure that the media type and recording format used for generating the backup
set is supported by the device that will be reading the backup set.
v Backup sets that are written to more than one volume and generated to a
REMOVABLEFILE device must be restored by using the IBM Tivoli Storage
Manager server. Issue the RESTORE BACKUPSET command and specify
-location=server to indicate that the backup set is on the Tivoli Storage Manager
server.
For more information, see Removable file device configuration on page 128.
To later display information about this backup set, you can include a wildcard
character with the name, such as mybackupset*, or you can specify the fully
qualified name, such as mybackupset.3099.
Backup sets are retained on the server for 365 days if you do not specify a value.
The server uses the retention period to determine when to expire the volumes on
which the backup set resides.
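For example, this command (the node, backup set, and device class names are
illustrative) generates a backup set that is retained for 180 days:
generate backupset jane engdata * devclass=file retention=180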
Backup sets are generated to a point-in-time by using one of two date and time
specifications: the date and time specified on the GENERATE BACKUPSET command, or
the date and time that the GENERATE BACKUPSET command was issued.
Point-in-time backup set generation works best if a recent date and time are
specified. Files that have expired, or are marked as expire-immediately cannot be
included in the backup set.
You can use the DATATYPE parameter to limit the backup set to only one data type.
For example, you might do this if you don't want to store redundant data on the
backup set media. Alternatively, you can specify that both file and image backup
data be included from a machine in order to reduce the number of tapes that must
be included in your off-site tape rotation.
Image backup sets include the image and all files and directories changed or
deleted since the image was backed up so that all backup sets on the media
represent the same point in time. Tables of contents are automatically generated for
any backup sets that contain image or application data. If the GENERATE BACKUPSET
command cannot generate a table of contents for one of these backup sets, then it
will fail.
For file level backup sets, the table of contents generation is optional. By default,
the command attempts to create a table of contents for file level backup sets, but it
will not fail if a table of contents is not created. You can control the table of
contents option by specifying the TOC parameter.
A separate backup set is generated for each specified node, but all of the backup
sets will be stored together on the same set of output volumes. The backup set for
each node has its own entry in the database. The QUERY BACKUPSET command will
display information about all backup sets, whether they are on their own tape or
stacked together with other backup sets onto one tape.
On the DEFINE BACKUPSET command, you can also specify multiple nodes or node
groups, and you can use wildcards with node names. DEFINE BACKUPSET
determines what backup sets are on the set of tapes and defines any that match the
specified nodes. Specifying only a single wildcard character ('*') for the node name
has the effect of defining all the backup sets on the set of tapes. Conversely, you
can define only those backup sets belonging to a particular node by specifying just
the name of that node. Backup sets on tapes belonging to nodes that are not
specified on the command are not defined. They will still exist on the tape, but
cannot be accessed.
The QUERY, UPDATE, and DELETE BACKUPSET commands also allow the specification of
node group names in addition to node names. When you delete backup sets, the
volumes on which the backup sets are stored are not returned to scratch as long as
any backup set on the volumes remains active.
Backup sets can only be used by a backup-archive client, and only if the files in the
backup set originated from a backup-archive client.
For more information about restoring backup sets, see the Backup-Archive Clients
Installation and User's Guide for your particular operating system.
In order to query the contents of a backup set and choose files to restore, tables of
contents need to be loaded into the server database. The backup-archive client can
specify more than one backup set table of contents to be loaded to the server at the
beginning of a restore session.
Image backups and restores require a table of contents when generating a backup
set for image data. If the table of contents existed but was deleted for some reason
then the image backup set cannot be restored until the table of contents is
regenerated with the GENERATE BACKUPSETTOC command.
The level of the server defining the backup set must be equal to or greater than the
level of the server that generated the backup set.
Using the example described in Example: generating a client backup set on page
548, you can make the backup set that was copied to the CD-ROM available to
another server by issuing the following command:
define backupset johnson project devclass=cdrom volumes=BK1,BK2,BK3
description="backup set copied to CD-ROM"
If you have multiple servers connecting to different clients, the DEFINE BACKUPSET
command makes it possible for you to take a previously generated backup set and
make it available to other servers. The purpose is to allow the user flexibility in
moving backup sets to different servers, thus allowing the user the ability to
restore their data from a server other than the one on which the backup set was
created.
Important:
1. Devclass=cdrom specifies a device class of type REMOVABLEFILE that points
to your CD-ROM drive. CD-ROMs have a maximum capacity of 650MB.
2. Volumes=BK1,BK2,BK3 specifies the names of the volumes containing the
backup set. The volume label of these CD-ROMs must match the name of the
file on the volume exactly.
Tables of contents:
v Reside on the server even if the backup set's media has been moved off-site.
v Can be generated for existing backup sets that do not contain a table of contents.
v Can be re-generated when a backup set is defined on a new server, or if using a
user-generated copy on a different medium.
Backup set tables of contents are stored in the storage pool identified by the
TOCDESTINATION attribute of the backup copy group associated with the
management class to which the backup set is bound. The management class to
which the backup set is bound will either be the default management class in the
policy domain in which the backup set's node is registered, or the management
class specified by the TOCMGmtclass parameter of the GENERATE BACKUPSET,
GENERATE BACKUPSETTOC, or DEFINE BACKUPSET command. Tables of contents for
backup sets are retained until the backup set with which they are associated
expires or is deleted. They are not subject to the policy associated with their
management class. You can issue the QUERY BACKUPSET command to show whether
a given backup set has a table of contents or not. Output from the QUERY BACKUPSET
command can be filtered based on the existence of a table of contents. This allows
you to determine which backup sets may need to have a new table of contents
created, or conversely, which backup sets could be used with the client's file-level
restore.
The following figure shows the report that is displayed after you enter:
query backupset f=d
The FORMAT=DETAILED parameter on the QUERY BACKUPSET command lists the client file
spaces contained in the backup set and the volumes on which the backup set is stored.
The server displays information about the files and directories that are contained in
a backup set. For example, after you issue the query backupsetcontents jane engdata.3099
command, the contents of that backup set are displayed.
Tip: To display the contents of an image backup set, specify DATATYPE=IMAGE on the
QUERY BACKUPSETCONTENTS command.
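For example, assuming the same node and backup set names that are used earlier in
this section, the command might look like this:
query backupsetcontents jane engdata.3099 datatype=image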
| File space names and file names that can be in a different code page or locale than
| the server do not display correctly in the Operations Center, the Administration
| Center, or the administrative command-line interface. The data itself is backed up
| and can be restored properly, but the file space or file name may display with a
| combination of invalid characters or blank spaces.
If the file space name is Unicode enabled, the name is converted to the server's
code page for display. The results of the conversion for characters not supported
by the current code page depends on the operating system. For names that Tivoli
Storage Manager is able to partially convert, you may see question marks (??),
blanks, unprintable characters, or .... These characters indicate to the
administrator that files do exist. If the conversion is not successful, the name is
displayed as "...". Conversion can fail if the string includes characters that are not
available in the server code page, or if the server has a problem accessing system
conversion routines.
To delete all backup sets belonging to client node JANE that were created before
11:59 p.m. on March 18, 1999, enter:
delete backupset jane * enddate=03/18/1999 endtime=23:59
When a backup set's retention date passes, the server automatically deletes the
backup set when expiration processing runs. However, you can also manually delete
the client's backup set from the server before it is scheduled to expire by using
the DELETE BACKUPSET command.
To help reduce the amount of data that is sent over limited-bandwidth connections,
you can use subfile backups. When a client's file has been previously backed up,
any subsequent backups are typically made of the portion of the client's file that
has changed (a subfile), rather than the entire file. A base file is represented by
a backup of the entire file and is the file on which subfiles are dependent. If the
changes to a file are extensive, a user can request a backup of the entire file. A
new base file is established on which subsequent subfile backups are dependent.
This type of backup makes it possible for mobile users to reduce connection time,
network traffic, and the time it takes to do a backup.
To enable this type of backup, see Setting up clients to use subfile backup on
page 555.
Subfile backups
The following table describes how Tivoli Storage Manager manages backups of a
file, using a file named CUST.TXT as an example.
Version   Day of subsequent backup   What Tivoli Storage Manager backs up
One       Monday                     The entire CUST.TXT file (the base file)
Two       Tuesday                    A subfile of CUST.TXT. The server compares the file
                                     backed up on Monday with the file that needs to be
                                     backed up on Tuesday. A subfile containing the
                                     changes between the two files is sent to the server
                                     for the backup.
Three     Wednesday                  A subfile of CUST.TXT. Tivoli Storage Manager
                                     compares the file backed up on Monday with the file
                                     that needs to be backed up on Wednesday. A subfile
                                     containing the changes between the two files is sent
                                     to the server for the backup.
Related reference:
Setting policy to enable point-in-time restore for clients on page 530
Policy for logical volume backups on page 525
Restoring subfiles
When a client issues a request to restore subfiles, Tivoli Storage Manager restores
subfiles along with the corresponding base file back to the client. This process is
transparent to the client. That is, the client does not have to determine whether all
subfiles and corresponding base file were restored during the restore operation.
You can define (move) a backup set that contains subfiles to an earlier version of a
server that is not enabled for subfile backup. That server can restore the backup set
containing the subfiles to a client not able to restore subfiles. However, this process
is not recommended as it could result in a data integrity problem.
If import processing is canceled while a base file and its dependent subfiles are
being imported from the volumes to a target server, the server automatically
deletes any incomplete base files and subfiles that were already stored on the
target server.
For example, when expiration processing runs, Tivoli Storage Manager recognizes a
base file as eligible for expiration but does not delete the file until all its dependent
subfiles have expired. For more information on how the server manages file
expiration, see Running expiration processing to delete expired files on page 514.
If the base file and dependent subfiles are stored on separate volumes when a
backup set is created, additional volume mounts may be required to create the
backup set.
When you optimize restore operations, the performance depends on the type of
media that you use. Reference Table 56 for information about the media that you
can use to restore data.
Table 56. Advantages and disadvantages of the different device types for restore operations
Random access disk
   Advantages:    v Quick access to files
                  v No mount point needed
   Disadvantages: v No reclamation of unused space in aggregates
                  v No deduplication of data

Sequential access disk (FILE)
   Advantages:    v Reclamation of unused space in aggregates
                  v Quick access to files (disk based)
                  v Allows deduplication of data
   Disadvantages: v Requires a mount point, but not as severe an impact as real tape

Virtual tape library
   Advantages:    v Quick access to files because of disk-based media
                  v Existing applications that were written for real tape do not
                    have to be rewritten
   Disadvantages: v Requires a mount point, but not as severe an impact as real tape
                  v No deduplication of data
The following tasks can help you balance the costs against the need for optimized
restore operations:
v Identify systems that are most critical to your business. Consider where your
most important data is, what is most critical to restore, and what needs the
fastest restore. Identify which systems and applications you want to focus on and
optimize for restore operations.
v Identify your goals and order the goals by priority. The following list has some
goals to consider:
Disaster recovery or recovery from hardware crashes, requiring file system
restores
Recovery from loss or deletion of individual files or groups of files
Recovery for database applications (specific to the API)
Point-in-time recovery of groups of files
The importance of each goal can vary for the different client systems that you
identified as being most critical.
For more information about restore operations for clients, see Concepts for client
restore operations on page 560.
Environment considerations
Tivoli Storage Manager performance depends upon the environment.
| You can also use active-data pools to store active versions of client backup data.
| Archive and space-managed data is not allowed in active-data pools. Inactive files
| are removed from the active-data pool during expiration processing. Active-data
| pools that are associated with a FILE device class do not require tape mounts, and
| the server does not have to position past inactive files. In addition, FILE volumes
| can be accessed concurrently by multiple client sessions or server processes. You
| can also create active-data pools that use tape or optical media, which can be
| moved off-site, but which require tape mounts.
| If you do not use FILE or active-data pools, consider how restore performance is
| affected by the layout of data across single or multiple tape volumes. You can have
| multiple simultaneous sessions when you use FILE to restore, and mount overhead
| is skipped with FILE volumes. Major causes of performance problems are excessive
| tape mounts and needing to skip over expired or inactive data on a tape. After a
| long series of incremental backups, perhaps over years, the active data for a single
| file space can be spread across many tape volumes. A single tape volume can have
| active data that is mixed with inactive and expired data.
Consider the following information when you run file system restore operations:
v Combine image backups with progressive incremental backups for the file
system to allow for full restore to an arbitrary point-in-time.
v To minimize disruption to the client during backup, use either hardware-based
or software-based snapshot techniques for the file system.
v Perform image backups infrequently. More frequent image backups give better
point-in-time granularity, but there is a cost. The frequent backups affect the tape
usage, there is an interruption of the client system during backup, and there is
greater network bandwidth needed.
As a guideline, run an image backup after a certain percentage of the data in the
file system has changed since the last image backup.
Image backup is not available for all clients. If image backup is not available for
your client, use file-level restore as an alternative.
For more information about collocation, see Keeping client files together using
collocation on page 363.
For information about data protection for databases, see the Tivoli Storage
Manager information center.
If you also schedule incremental backups regularly, you might have greater
granularity in restoring to a discrete point-in-time. However, keeping many
versions can degrade restore operation performance. Setting policy to keep many
versions also has costs, in terms of database space and storage pool space. Your
policies might have overall performance implications.
If you cannot afford the resource costs of keeping the large numbers of file
versions and must restore to a point-in-time, consider the following options:
v Use backup sets
v Export the client data
v Use an archive
| v Take a volume image, including virtual machine backups
You can restore to the point-in-time when the backup set was generated, the export
was run, or the archive was created. Remember, when you restore the data, your
selection is limited to the time at which you created the backup set, export, or
archive.
Tip: If you use the archive function, create a monthly or yearly archive. Do not
use archive as a primary backup method because frequent archives with large
amounts of data can affect server and client performance.
The no-query restore requires less interaction between the client and the server,
and the client can use multiple sessions for the restore operation. The no-query
restore operation is useful when you restore large file systems on a client with
limited memory. The advantage is that no-query restore avoids some processing
that can affect the performance of other client applications. In addition, it can
achieve a high degree of parallelism by restoring with multiple sessions from the
server and storage agent simultaneously.
With no-query restore operations, the client sends a single restore request to the
server instead of querying the server for each object to be restored. The server
returns the files and directories to the client without further action by the client.
The client accepts the data that comes from the server and restores it to the
destination named on the restore command.
The no-query restore operation is used by the client only when the restore request
meets both of the following criteria:
v You enter the restore command with a source file specification that has an
unrestricted wildcard.
An example of a source file specification with an unrestricted wildcard is:
/home/mydocs/2002/*
An example of a source file specification with a restricted wildcard is:
/home/mydocs/2002/sales.*
v You do not specify any of the following client options:
inactive
latest
pick
fromdate
todate
To force classic restore operations, use ?* in the source file specification rather than
*. For example:
/home/mydocs/2002/?*
For more information about restore processes, see the Backup-Archive Clients
Installation and User's Guide.
You can issue the commands one after another in a single session or window, or
issue them at the same time from different command windows.
When you enter multiple commands to restore files from a single file space, specify
a unique part of the file space in each restore command. Be sure that you do not
use any overlapping file specifications in the commands. To display a list of the
directories in a file space, issue the QUERY BACKUP command on the client. For
example:
dsmc query backup -dirsonly -subdir=no /usr/
For more information, see the Backup-Archive Clients Installation and User's Guide.
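For example, after you review the directory list, you might issue two non-overlapping
restore commands similar to the following sketch; the /usr/doc and /usr/bin paths are
illustrative and must be replaced with directories from your own file space:
dsmc restore "/usr/doc/*" -subdir=yes
dsmc restore "/usr/bin/*" -subdir=yes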
Set the client option for resource utilization to one greater than the number of
sessions that you want. The number of sessions should correspond to the number of
drives that you want that single client to use. The client option can be included
in a client option set.
At the client, the option for resource utilization also affects how many drives
(sessions) the client can use. The client option, resource utilization, can be included
in a client option set. If the number specified in the MAXNUMMP parameter is too low
and there are not enough mount points for each of the sessions, it might not be
possible to achieve the benefit of the multiple sessions that are specified in the
resource utilization client option.
Archiving data
Managing archive data on the server becomes important when you have client
nodes that archive large numbers (hundreds or thousands) of files every day.
If you archive files with automated tools that start the command-line client or API,
you might encounter large numbers. If performance degrades over time during an
archive operation, or you have a large amount of storage that is used by archives,
consider advanced techniques. See Archive operations overview and Managing
storage usage for archives on page 564.
All files that are archived with the same description become members of the same
archive package. If the user does not specify a description when archiving, the
client program provides a default description with each archive request. The
default description includes the date.
When files are archived, the client program archives the paths (directories) to those
files to preserve access permissions which are specific to the operating system.
Directories are also included in archive packages. If the same directory is archived
with different descriptions, the directory is stored once with each package. If a
command line user issues a QUERY ARCHIVE command, multiple entries for the same
directory may appear. Closer inspection shows that each entry has a different
description.
The GUI and Web client programs allow a user to navigate through a client node's
archives by first displaying all descriptions (the package identifiers), then the
directories, and finally the files. Users can retrieve or delete individual files or all
files in a directory. Command line client and API users can specify a description
when they archive files, or when they send requests to query, retrieve or delete
archived files.
When retrieving files, the server searches for the most current file versions. It will
search in an active-data storage pool associated with a FILE device class, if such a
pool exists.
Consider the following two actions that you can take to minimize the storage
usage:
Minimize the number of unique descriptions
You can reduce storage usage by archiving more files into fewer packages
(by reducing the number of unique descriptions). The amount of storage
used for directories is also affected by the number of packages. If you
archive a file three different times using three different descriptions, the
server stores both the file and the directory three times, once in each
package. If you archive the same file three different times using just one
description, the server stores the file three times, but the directory is stored
just one time. An illustrative client command is shown after these two actions.
Archive directories only if needed
Archiving directories might be necessary if the directories are needed to
group files for query or retrieve, or if the directory-level access permission
information needs to be archived.
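For example, a command-line user can keep many archived files in a single package
by reusing one description, as in the following illustrative client command; the
path and description text are examples only:
dsmc archive "/home/proj/*" -subdir=yes -description="Project files - monthly archive"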
The users of the GUI and Web client programs need descriptions to aid in
navigation, to find archived files. You can minimize storage usage for archives by
reducing the number of packages. For client nodes that are always accessed via the
command-line interface, you can also use some other techniques.
If the user follows these guidelines, the client node will have one or a limited
number of archive packages. Because of the small number of packages, there are
only small numbers of copies of each directory entry. The savings in storage space
that result are noticeable when files with the same path specification are archived
multiple times over multiple days.
See the Backup-Archive Clients Installation and User's Guide for details about archive
operations and client options.
Do not run the UPDATE ARCHIVE command while any other processing for the node
is running. If this command is issued for a node with any other object insertion or
deletion activity occurring at the same time, locking contention may occur. This
may result in processes and sessions hanging until the resource timeout is reached
and the processes and sessions terminate.
When you update archives for a node, you have two choices for the action to take:
Delete directory entries in all archive packages
This action preserves the archive packages, but removes directory entries
for all packages, reducing the amount of storage used for archives. Do this
only when directory entries that include access permissions are not needed
in the archive packages, and the paths are not needed to query or retrieve
a group of files. The amount of reduction depends on the number of
packages and the number of directory entries. For example, to remove
directory entries for the client node SNOOPY, enter this command:
update archive snoopy deletedirs
Attention: After you delete the directory entries, the directory entries
cannot be recreated in the archive packages. Do not use this option if users
of the client node need to archive access permissions for directories.
Reduce the number of archive packages to a single package for the node
This action removes all unique descriptions, thereby reducing the number
of archive packages to one for the client node. Do this only when the
descriptions are not needed and are causing large use of storage. This
action also removes directory entries in the archive packages. Because there
is now one package, there is one entry for each directory. For example, to
reduce the archive packages to one for the client node SNOOPY, enter this
command:
update archive snoopy resetdescriptions
After updating the archives for a node in this way, keep the archive
package count to a minimum.
Attention: You cannot recreate the packages after the descriptions have
been deleted. Do not use this option if users of the client node manage
archives by packages, or if the client node is accessed via the GUI or Web
client interface.
See Backup-Archive Clients Installation and User's Guide for details about the option.
Tip: The GUI and Web client programs use the directories to allow users to
navigate to the archived files. This option is not recommended for GUI or Web
client interface users.
Tasks:
Scheduling a client operation on page 568
Defining client schedules on page 568
Associating client nodes with schedules on page 569
Starting the scheduler on the clients on page 569
Displaying information about schedules on page 574
Creating schedules for running command files on page 571
Updating the client options file to automatically generate a new password on page 572
When you define a schedule, you assign it to a specific policy domain. You can
define more than one schedule for each policy domain.
You can modify, copy, and delete any schedule you create. See Chapter 16,
Managing schedules for client nodes, on page 573 for more information.
To define a schedule for daily incremental backups, use the DEFINE SCHEDULE
command. You must specify the policy domain to which the schedule belongs and
the name of the schedule (the policy domain must already be defined). For
example:
define schedule engpoldom daily_backup starttime=21:00
duration=2 durunits=hours
You must have system privilege, unrestricted policy, or restricted policy (for the
policy domain to which the schedule belongs) to associate client nodes with
schedules. Issue the DEFINE ASSOCIATION command to associate client nodes with a
schedule.
Complete the following step to associate the ENGNODE client node with the
WEEKLY_BACKUP schedule, both of which belong to the ENGPOLDOM policy
domain:
define association engpoldom weekly_backup engnode
After a client schedule is defined, you can associate client nodes with it by
identifying the following information:
v Policy domain to which the schedule belongs
v List of client nodes to associate with the schedule
Administrators must ensure that users start the Tivoli Storage Manager scheduler
on the client or application client directory, and that the scheduler is running at the
schedule start time. After the client scheduler starts, it continues to run and
initiates scheduled events until it is stopped.
The way that users start the Tivoli Storage Manager scheduler varies, depending
on the operating system that the machine is running. The user can choose to start
the client scheduler automatically when the operating system is started, or can
start it manually at any time. The user can also have the client acceptor manage
the scheduler, starting the scheduler only when needed. For instructions on these
tasks, see the Backup-Archive Clients Installation and User's Guide.
The client and the Tivoli Storage Manager server can be set up to allow all sessions
to be initiated by the server. See Server-initiated sessions on page 433 for
instructions.
Note: Tivoli Storage Manager does not recognize changes that you made to the
client options file while the scheduler is running. For Tivoli Storage Manager to
use the new values immediately, you must stop the scheduler and restart it.
The following output shows an example of a report for a classic schedule that is
displayed after you enter:
query schedule engpoldom
Domain * Schedule Name Action Start Date/Time Duration Period Day
------------ - ---------------- ------ -------------------- -------- ------ ---
ENGPOLDOM MONTHLY_BACKUP Inc Bk 09/04/2002 12:45:14 2 H 2 Mo Sat
ENGPOLDOM WEEKLY_BACKUP Inc Bk 09/04/2002 12:46:21 4 H 1 W Sat
For enhanced schedules, the standard schedule format displays a blank period
column and an asterisk in the day of week column. Issue FORMAT=DETAILED to
display complete information about an enhanced schedule. Refer to the
Administrator's Reference for command details. The following output shows an
example of a report for an enhanced schedule that is displayed after you enter:
query schedule engpoldom
Domain * Schedule Name Action Start Date/Time Duration Period Day
------------ - ---------------- ------ -------------------- -------- ------ ---
ENGPOLDOM MONTHLY_BACKUP Inc Bk 09/04/2002 12:45:14 2 H 2 Mo Sat
ENGPOLDOM WEEKLY_BACKUP Inc Bk 09/04/2002 12:46:21 4 H (*)
Associate the client with the schedule and ensure that the scheduler is started on
the client or application client directory. The schedule runs the file called
c:\incr.cmd once a day between 6:00 p.m. and 6:05 p.m., every day of the week.
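Such a schedule might be defined with a command similar to the following sketch;
the domain name, schedule name, and start time are illustrative and must match
your environment:
define schedule engpoldom daily_incr_cmd action=command objects="c:\incr.cmd"
starttime=18:00 duration=5 durunits=minutes period=1 perunits=days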
If a password expires and is not updated, scheduled operations fail. You can
prevent failed operations by allowing Tivoli Storage Manager to generate a new
password when the current password expires. If you set the PASSWORDACCESS
option to GENERATE in the Tivoli Storage Manager client options file, dsm.opt,
Tivoli Storage Manager automatically generates a new password for your client
node each time it expires, encrypts and stores the password in a file, and retrieves
the password from that file during scheduled operations. You are not prompted for
the password.
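For example, the client options file might contain the following line; this is a
sketch, so see the Backup-Archive Clients Installation and User's Guide for the
option syntax and file location on your platform:
passwordaccess generate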
Tasks:
Managing node associations with schedules on page 575
Specifying one-time actions for client nodes on page 585
Managing event records on page 576
Managing the throughput of scheduled operations on page 579
Managing IBM Tivoli Storage Manager schedules
For a description of what Tivoli Storage Manager views as client nodes, see
Chapter 11, Adding client nodes, on page 421. For information about the
scheduler and creating schedules, see Chapter 15, Scheduling operations for client
nodes, on page 567
You can add new Tivoli Storage Manager schedules by using the DEFINE
SCHEDULE command.
After you add a new schedule, associate the node with the schedule. For more
information, see Defining client schedules on page 568.
Client node associations are not copied to the new schedule. You must associate
client nodes with the new schedule before it can be used. The associations for the
old schedule are not changed.
To copy the WINTER schedule from policy domain DOMAIN1 to DOMAIN2 and
name the new schedule WINTERCOPY, enter:
copy schedule domain1 winter domain2 wintercopy
For information, see Associating client nodes with schedules on page 569.
Modifying schedules
You can modify existing schedules by issuing the UPDATE SCHEDULE command.
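For example, a command of the following form changes the start time of an existing
schedule; the domain and schedule names here are illustrative:
update schedule engpoldom weekly_backup starttime=22:00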
Deleting schedules
When you delete a schedule, Tivoli Storage Manager deletes all client node
associations for that schedule.
Rather than delete a schedule, you may want to remove all nodes from the
schedule and save the schedule for future use. For information, see Removing
nodes from schedules on page 576.
See Associating client nodes with schedules on page 569 for more information.
For enhanced schedules, the standard schedule format displays a blank period
column and an asterisk in the day of week column. Issue FORMAT=DETAILED to
display complete information about an enhanced schedule. Refer to the
Administrator's Reference for command details. The following output shows an
example of a report for an enhanced schedule that is displayed after you enter:
query schedule engpoldom
Domain * Schedule Name Action Start Date/Time Duration Period Day
------------ - ---------------- ------ -------------------- -------- ------ ---
ENGPOLDOM MONTHLY_BACKUP Inc Bk 09/04/2002 12:45:14 2 H 2 Mo Sat
ENGPOLDOM WEEKLY_BACKUP Inc Bk 09/04/2002 12:46:21 4 H (*)
You can perform the following activities to manage associations of client nodes
with schedules.
To associate client nodes with a schedule, you can use one of the following
methods:
v Issue the DEFINE ASSOCIATION command from the command-line interface.
v Use the Administration Center to associate a node with a schedule.
For more information, see Associating client nodes with schedules on page 569.
For example, you should query an association before deleting a client schedule.
To delete the association of the ENGNOD client with the ENGWEEKLY schedule,
in the policy domain named ENGPOLDOM, enter:
delete association engpoldom engweekly engnod
Instead of deleting a schedule, you may want to delete all associations to it and
save the schedule for possible reuse in the future.
You can also find information about scheduled events by checking the log file
described in Checking the schedule log file on page 578.
For example, you can issue the following command to find out which events were
missed in the previous 24 hours, for the DAILY_BACKUP schedule in the
STANDARD policy domain:
query event standard daily_backup begindate=-1 begintime=now
enddate=today endtime=now exceptionsonly=yes
Figure 75 on page 578 shows an example of the results of this query. To find out
why a schedule was missed or failed, you may need to check the schedule log on
the client node itself. For example, a schedule can be missed because the scheduler
was not started on the client node.
Such events are displayed with a status of Uncertain, indicating that complete
information is not available because the event records have been deleted. To
determine if event records have been deleted, check the message that is issued
after the DELETE EVENT command is processed.
The default name for the schedule log file is dsmsched.log. The file is located in
the directory where the Tivoli Storage Manager backup-archive client is installed.
You can override this file name and location by specifying the SCHEDLOGNAME option
in the client options file. See the Backup-Archive Clients Installation and User's
Guide for more information.
You can specify how long event records stay in the database before the server
automatically deletes them by using the SET EVENTRETENTION command. You
can also manually delete event records from the database, if database space is
required.
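To keep event records for 15 days (an illustrative value) before the server
automatically removes them, you might enter:
set eventretention 15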
For example, to delete all event records written prior to 11:59 p.m. on June 30,
2002, enter:
delete event 06/30/2002 23:59
With client-polling mode, client nodes poll the server for the next scheduled event.
With server-prompted mode, the server contacts the nodes at the scheduled start
time. By default, the server permits both scheduling modes. The default (ANY)
allows nodes to specify either scheduling mode in their client options files. You can
modify this scheduling mode.
If you modify the default server setting to permit only one scheduling mode, all
client nodes must specify the same scheduling mode in their client options file.
Clients that do not have a matching scheduling mode will not process the
scheduled operations. The default mode for client nodes is client-polling.
The scheduler must be started on the client node's machine before a schedule can
run in either scheduling mode.
For more information about modes, see Overview of scheduling modes on page
580.
You can instead prevent clients from starting sessions, and allow only the server to
start sessions with clients.
To limit the start of backup-archive client sessions to the server only, complete the
following steps for each node:
1. Use the REGISTER NODE command or the UPDATE NODE command to change the
value of the SESSIONINITIATION parameter to SERVERONLY. Also specify the
high-level address and low-level address options, as shown in the sketch after
this list. These options must match what the client is using; otherwise, the
server will not know how to contact the client.
2. Set the scheduling mode to server-prompted. All sessions must be started by
server-prompted scheduling on the port that was defined for the client with the
REGISTER NODE or the UPDATE NODE commands.
3. Ensure that the scheduler on the client is started. You cannot use the client
acceptor (dsmcad) to start the scheduler when SESSIONINITIATION is set to
SERVERONLY.
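The following command is a sketch of step 1 for an existing node; the node name,
address, and port number are illustrative and must match the values that the
client actually uses:
update node engnode sessioninitiation=serveronly hladdress=192.0.2.15 lladdress=1501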
See Table 58 on page 581 and Table 57 for the advantages and disadvantages of
client-polling and server-prompted modes.
Table 57. Client-Polling mode

How the mode works:
1. A client node queries the server at prescribed time intervals to obtain a
   schedule. This interval is set with a client option, QUERYSCHEDPERIOD. For
   information about client options, refer to the appropriate Backup-Archive
   Clients Installation and User's Guide.
2. At the scheduled start time, the client node performs the scheduled operation.
3. When the operation completes, the client sends the results to the server.
4. The client node queries the server for its next scheduled operation.

Advantages and disadvantages:
v Useful when a high percentage of clients start the scheduler manually on a
  daily basis, for example when their workstations are powered off nightly.
v Supports randomization, which is the random distribution of scheduled start
  times. The administrator can control randomization. By randomizing the start
  times, Tivoli Storage Manager prevents all clients from attempting to start
  the schedule at the same time, which could overwhelm server resources.
v Valid with all communication methods.

Table 58. Server-Prompted mode

How the mode works:
1. The server contacts the client node when scheduled operations need to be
   performed and a server session is available.
2. When contacted, the client node queries the server for the operation,
   performs the operation, and sends the results to the server.

Advantages and disadvantages:
v Useful if you change the schedule start time frequently. The new start time
  is implemented without any action required from the client node.
v Useful when a high percentage of clients are running the scheduler and are
  waiting for work.
v Useful if you want to restrict sessions to server-initiated.
v Does not allow for randomization of scheduled start times.
v Valid only with client nodes that use TCP/IP to communicate with the server.
Client-Polling Scheduling Mode: To have clients poll the server for scheduled
operations, enter:
set schedmodes polling
Ensure that client nodes specify the same mode in their client options files.
Server-Prompted Scheduling Mode: To have the server contact client nodes when
scheduled operations must be performed, enter:
set schedmodes prompted
Ensure that client nodes specify the same mode in their client options files.
Any Scheduling Mode: To return to the default scheduling mode so that the
server supports both client-polling and server-prompted scheduling modes, enter:
set schedmodes any
For more information, refer to the appropriate Backup-Archive Clients Installation and
User's Guide.
When you define a schedule, you specify the startup window and the length of time
between processing of the schedule. Consider how these settings interact to ensure
that the clients get the backup coverage that you intend.
To enable the server to complete all schedules for clients, you may need to use trial
and error to control the workload. To estimate how long client operations take, test
schedules on several representative client nodes. Keep in mind, for example, that
the first incremental backup for a client node takes longer than subsequent
incremental backups.
Of these sessions, you can set a maximum percentage to be available for processing
scheduled operations. Limiting the number of sessions available for scheduled
operations ensures that sessions are available when users initiate any unscheduled
operations, such as restoring or retrieving files.
If the number of sessions for scheduled operations is insufficient, you can increase
either the total number of sessions or the maximum percentage of scheduled
sessions. However, increasing the total number of sessions can adversely affect
server performance. Increasing the maximum percentage of scheduled sessions can
reduce the server availability to process unscheduled operations.
For example, assume that the maximum number of sessions between client nodes
and the server is 80. If you want 25% of these sessions (that is, up to 20
sessions) to be used for scheduled operations, enter:
set maxschedsessions 25
The following table shows the trade-offs of using either the SET
MAXSCHEDSESSIONS command or the MAXSESSIONS server option.
A startup window is defined by the start time and duration during which a
schedule must be initiated. For example, if the start time is 1:00 a.m. and the
duration is 4 hours, the startup window is 1:00 a.m. to 5:00 a.m. For the
client-polling scheduling mode, specify the percentage of the startup window that
the server can use to randomize start times for different client nodes that are
associated with a schedule.
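For example, to have the server randomize start times over 25% of each startup
window (an illustrative percentage), you might enter:
set randomize 25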
The settings for randomization and the maximum percentage of scheduled sessions
can affect whether schedules are successfully completed for client nodes. Users
receive a message if all sessions are in use when they attempt to process a
schedule. If this happens, you can increase randomization and the percentage of
scheduled sessions that are allowed to make sure that the server can handle the
workload. The maximum percentage of randomization that is allowed is 50%. This
limit ensures that half of the startup window remains available for retrying
scheduled commands that have failed.
It is possible, especially after a client node or the server has been restarted, that a
client node may not poll the server until after the beginning of the startup window
in which the next scheduled event is to start. In this case, the starting time is
randomized over the specified percentage of the remaining duration of the startup
window.
The result is that the nine client nodes that polled the server before the beginning
of the startup window are assigned randomly selected starting times between 8:00
and 8:30. The client node that polled at 8:30 receives a randomly selected starting
time within the specified percentage of the remaining duration of the startup window.
A larger startup window gives the client node more time to attempt initiation of a
session with the server.
Users can also set these values in their client user options files. (Root users on
UNIX and Linux systems set the values in client system options files.) However,
user values are overridden by the values that the administrator specifies on the
server.
The communication paths from client node to server can vary widely with regard
to response time or the number of gateways. In such cases, you can choose not to
set these values so that users can tailor them for their own needs.
Related tasks:
Setting how often clients query the server
Setting the number of command retry attempts on page 585
Setting the amount of time between retry attempts on page 585
For the client-polling scheduling mode, you can specify the maximum number of
hours that the scheduler on a client node waits between attempts to contact the
server to obtain a schedule. You can set this period to correspond to the frequency
with which the schedule changes are being made. If client nodes poll more
frequently for schedules, changes to scheduling information (through administrator
commands) are propagated more quickly to client nodes.
If you want to have all clients using polling mode contact the server every 24
hours, enter:
set queryschedperiod 24
This setting has no effect on clients that use the server-prompted scheduling mode.
The clients also have a QUERYSCHEDPERIOD option that can be set on each
client. The server value overrides the client value once the client successfully
contacts the server.
The maximum number of command retry attempts does not limit the number of
times that the client node can contact the server to obtain a schedule. The client
node never gives up when trying to query the server for the next schedule.
Be sure not to specify so many retry attempts that the total retry time is longer
than the average startup window.
If you want to have all client schedulers retry a failed attempt to process a
scheduled command up to two times, enter:
set maxcmdretries 2
Maximum command retries can also be set on each client with a client option,
MAXCMDRETRIES. The server value overrides the client value once the client
successfully contacts the server.
Typically, this setting is effective when set to half of the estimated time it takes to
process an average schedule. If you want to have the client scheduler retry every
15 minutes any failed attempts to either contact the server or process scheduled
commands, enter:
set retryperiod 15
You can use this setting in conjunction with the SET MAXCMDRETRIES command
(number of command retry attempts) to control when a client node contacts the
server to process a failed command. See Setting the number of command retry
attempts.
The retry period can also be set on each client with a client option, RETRYPERIOD.
The server value overrides the client value once the client successfully contacts the
server.
If the scheduling mode is set to prompted, the client performs the action within 3
to 10 minutes. If the scheduling mode is set to polling, the client processes the
command at its prescribed time interval. The time interval is set by the
QUERYSCHEDPERIOD client option. The DEFINE CLIENTACTION command
causes Tivoli Storage Manager to automatically define a schedule and associate
client nodes with that schedule. With the schedule name provided, you can later
query or delete the schedule and associated nodes. The names of one-time client
action schedules can be identified by a special character followed by numerals, for
example @1.
For example, you can issue a DEFINE CLIENTACTION command that specifies an
incremental backup command for client node HERMIONE in domain
ENGPOLDOM:
define clientaction hermione domain=engpoldom action=incremental
Tivoli Storage Manager defines a schedule and associates client node HERMIONE
with the schedule. The server assigns the schedule priority 1, sets the period units
(PERUNITS) to ONETIME, and determines the number of days to keep the
schedule active based on the value set with SET CLIENTACTDURATION
command.
For a list of valid actions, see the DEFINE CLIENTACTION command in the
Administrator's Reference. You can optionally include the OPTIONS and OBJECTS
parameters.
If the duration of client actions is set to zero, the server sets the DURUNITS
parameter (duration units) as indefinite for schedules defined with DEFINE
CLIENTACTION command. The indefinite setting for DURUNITS means that the
schedules are not deleted from the database.
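For example, to keep the schedules that are created by the DEFINE CLIENTACTION
command for five days (an illustrative value), you might enter:
set clientactduration 5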
| You can use the Operations Center to identify potential issues at a glance, manage
| alerts, and access the Tivoli Storage Manager command line.
| The Administration Center interface is also available, but the Operations Center is
| the preferred monitoring interface.
| Related concepts:
| Chapter 26, Alert monitoring, on page 807
| Related tasks:
| Chapter 27, Sending alerts by email, on page 809
|
| Opening the Operations Center
| You can open the Operations Center with a web browser.
| You can open the Operations Center by using any supported web browser. For a
| list of supported web browsers, see the chapter about web browser requirements in
| the Installation Guide.
| Configuring the hub server: If you are connecting to the Operations Center for
| the first time, you are redirected to the initial configuration wizard. In that wizard,
| you must provide the following information:
| v Connection information for the Tivoli Storage Manager server that you designate
| as a hub server
| v Login credentials for an administrator who is defined to that Tivoli Storage
| Manager server
| If the event-record retention period of the Tivoli Storage Manager server is less
| than 14 days, the value automatically increases to 14 days when you configure the
| server as a hub server.
| If you have multiple Tivoli Storage Manager servers in your environment, add the
| other Tivoli Storage Manager servers as spoke servers to the hub server, as
| described in Adding spoke servers on page 593.
| v To view help for the current page, hover your mouse pointer over the Help
| icon ( ? ) in the Operations Center menu bar and click the page name.
| To view general help for the Operations Center, including message help and
| conceptual and task topics, click Documentation.
| v To open the command-line interface, hover your mouse pointer over the Global
| Settings icon in the Operations Center menu bar, and click Command
| Line.
| In the command-line interface, you can run commands to manage Tivoli Storage
| Manager servers that are configured as hub or spoke servers.
| v To log out, click the administrator name in the menu bar, and click Log Out.
|
| Viewing the Operations Center on a mobile device
| You can view the Overview page of the Operations Center in the web browser of a
| mobile device to remotely monitor your storage environment. The Operations
| Center supports the Apple Safari web browser on the iPad. Other mobile devices
| can also be used.
| Open a web browser on your mobile device, and enter the web address of the
| Operations Center. See Opening the Operations Center on page 589.
|
| Administrator IDs and passwords
| An administrator must have a valid ID and password on the hub server to log in
| to the Tivoli Storage Manager Operations Center. An administrator ID is also
| assigned to the Operations Center so that the Operations Center can monitor
| servers.
| The following Tivoli Storage Manager administrator IDs are required to use the
| Operations Center:
| Operations Center administrator IDs
| Any administrator ID that is registered on the hub server can be used to
| log in to the Operations Center. The authority level of the ID determines
| which tasks can be completed. You can create new administrator IDs by
| using the REGISTER ADMIN command. For information about this command,
| see the Administrator's Reference.
| The Operations Center shows you a consolidated view of alerts and status
| information for the hub server and any spoke servers.
| You can install the Operations Center on the same computer as a Tivoli Storage
| Manager server or on a different computer.
| When you open the Operations Center for the first time, you connect it to one
| Tivoli Storage Manager server instance, which becomes the dedicated hub server.
| You can then connect more Tivoli Storage Manager servers as spoke servers.
| Tip: If you use library sharing, and the library manager server meets the
| Operations Center system requirements, consider designating this server as the
| hub server. Few, if any, Tivoli Storage Manager clients are typically registered to
| the library manager server. The smaller client workload of this server can make it a
| good candidate to take on the additional processing requirements of a hub server.
| Performance
| As a rule, a hub server can support 10-20 spoke servers. This number can vary,
| depending on your configuration.
| The following factors have the most significant impact on system performance:
| v The number of Tivoli Storage Manager clients or virtual machine file systems
| that are managed by the hub and spoke servers.
| v The frequency at which data is refreshed in the Operations Center.
| Consider grouping hub and spoke servers by geographic location. For example,
| managing a set of hub and spoke servers within the same data center can help
| prevent issues that can be caused by firewalls or the lack of appropriate network
| bandwidth between different locations.
| If necessary, you can further divide servers according to one or more of the
| following characteristics:
| v The administrator who manages the servers
| v The organizational entity that funds the servers
| v Server operating systems
| You can manage a hub server and multiple spoke servers from the same instance
| of the Operations Center.
| If you have more than 10-20 spoke servers, or if resource limitations require the
| environment to be partitioned, you can configure multiple hub servers and connect
| a subset of the spoke servers to each hub server.
| Restrictions:
| v A single server cannot be both a hub server and a spoke server.
| v Each spoke server can be assigned to only one hub server.
| v Each hub server requires a separate instance of the Operations Center, each of
| which has a separate web address.
| Tip: In the table on the TSM Servers page, a server might have a status of
| Unmonitored. An unmonitored server is a server that an administrator defined
| to the hub server by using the DEFINE SERVER command, but which is not yet
| configured as a spoke server.
| 2. Complete one of the following steps:
| v Click the server to highlight it, and from the table menu bar, click Monitor
| Spoke.
| v If the server that you want to add is not shown in the table, click
| Connect Spoke in the table menu bar.
| 3. Provide the necessary information, and complete the steps in the spoke
| configuration wizard.
| Note: If the event-record retention period of the server is less than 14 days, the
| value automatically increases to 14 days when you configure the server as a
| spoke server.
|
| You are not required to complete this procedure to change the following settings:
| v The frequency at which status data is refreshed
| v The duration for which alerts remain active, inactive, or closed
| v The conditions for which clients are shown as being at risk
| To change those settings, use the Settings page in the Operations Center.
| To restart the initial configuration wizard, you must delete a properties file. When
| you delete the file, you delete information about the hub server connection.
| However, any alerting, monitoring, at-risk, or multiserver settings that were
| configured for the hub server are not deleted. These settings are used as the
| default settings in the configuration wizard when the wizard restarts.
| 1. Stop the web server of the Operations Center. For instructions, see Stopping
| and starting the web server on page 595.
| 2. On the computer where the Operations Center is installed, go to the following
| directory:
| v AIX and Linux systems: installation_dir/ui/Liberty/usr/servers/
| guiServer
| v Windows systems: installation_dir\ui\Liberty\usr\servers\guiServer
| where installation_dir represents the directory in which the Operations
| Center is installed. For example:
| v AIX and Linux systems: /opt/tivoli/tsm/ui/Liberty/usr/servers/
| guiServer
| v Windows systems: c:\Program Files\Tivoli\TSM\ui\Liberty\usr\servers\
| guiServer
| 3. In the guiServer directory, delete the serverConnection.properties file.
| 4. Start the web server of the Operations Center.
| 5. Open the Operations Center. Start a web browser, and enter the following
| address: https://hostname:secure_port/oc, where hostname represents the
| name of the computer where the Operations Center is installed, and secure_port
| represents the port number that the Operations Center uses for HTTPS
| communication on that computer.
| 6. Use the configuration wizard to reconfigure the Operations Center. Specify a
| new password for the monitoring administrator ID.
| 7. Update the password for the monitoring administrator ID on any spoke servers
| that were previously connected to the hub server. Issue the following command
| from the Tivoli Storage Manager command-line interface:
| UPDATE ADMIN IBM-OC-hub_server_name new_password
| Restriction: Do not change any other settings for this administrator ID. After
| you specify the initial password, it is managed automatically by the Operations
| Center.
| If you must stop and start the web server for the Operations Center, for example,
| to restart the initial configuration wizard, use the following methods:
| From the /installation_dir/ui/utils directory, where installation_dir
| represents the directory where the Operations Center is installed, run the following
| programs:
| v To stop the server:
| ./stopserver.sh
| v To start the server:
| ./startserver.sh
| Tip: Consider using the new Operations Center interface to monitor your storage
| management environment, complete some administrative tasks, and access the
| Tivoli Storage Manager command-line interface. For additional information, see
| Chapter 17, Managing servers with the Operations Center, on page 589.
Basic items (for example, server maintenance, storage devices, and so on) are listed
in the navigation tree on the Tivoli Integrated Portal. When you click on an item, a
work page containing a portlet (for example, the Servers portlet) is displayed in a
work area. You use portlets to perform individual tasks, such as creating storage
pools.
When you click an item in the navigation tree, a new portlet populates the work
page, taking the place of the most recent portlet. To open multiple portlets, select
Open Page in New Tab from the Select Action menu. A tab is created with the
same portlet content as the original tab. To navigate among open items or to close
a specific page, use the tabs in the page bar.
Many portlets contain tables. The tables display objects like servers, policy
domains, or reports. To work with any table object, complete the following actions:
1. Click its radio button or check box in the Select column.
2. Click Select Action to display the table action list.
3. Select the action that you would like performed.
For some table objects, you can also click the object name to open a portlet or work
page pertaining to it. In most cases, a properties notebook portlet is opened. This
provides a fast way to work with table objects.
If you want more space in the work area, you can hide the navigation tree.
Do not use the Back, Forward and Refresh buttons in your browser. Doing so can
cause unexpected results. Using your keyboard's Enter key can also cause
unexpected results. Use the controls in the Administration Center interface instead.
The following task will help familiarize you with Administration Center controls.
Suppose you want to create a new client node and add it to the STANDARD
policy domain associated with a particular server.
1. If you have not already done so, access the Administration Center by entering
one of the following addresses in a supported web browser:
v http://workstation_name:16310/ibm/console
v https://workstation_name:16311/ibm/console
The workstation_name is the network name or IP address of the workstation on
which you installed the Administration Center. The default web administration
port (HTTP) is 16310. The default web administration port (HTTPS) is 16311. To
get started, log on using the Tivoli Integrated Portal user ID and password that
you created during the installation. Save this password in a safe location
because you need it not only to log on but also to uninstall the Administration
Center.
2. Click Tivoli Storage Manager, and then click Policy Domains in the navigation
tree. The Policy Domains work page is displayed with a table that lists the
servers that are accessible from the Administration Center. The table also lists
the policy domains defined for each server:
3. In the Server Name column of the Policy Domains table, click the name of the
server with the STANDARD domain to which you want to add a client node. A
portlet is displayed with a table that lists the policy domains created for that
server:
6. In the client nodes table, click Select Action, and then select Create a Client
Node. The Create Client Node wizard is displayed:
In the following task descriptions, TIP_HOME is the root directory for your Tivoli
Integrated Portal installation and tip_admin and tip_pw are a valid Tivoli Integrated
Portal user ID and password.
The following table shows commands that are supported with some restrictions or
that are supported only by the command line in the Administration Center.
Command                                                    Supported only by command line
ACCEPT DATE                                                Yes
AUDIT LICENSES                                             Yes
BEGIN EVENTLOGGING                                         Yes
CANCEL EXPIRATION                                          Yes
CANCEL MOUNT                                               Yes
CANCEL RESTORE                                             Yes
CONVERT ARCHIVE                                            Yes
COPY DOMAIN                                                Yes
COPY MGMTCLASS                                             Yes
COPY POLICYSET                                             Yes
COPY PROFILE                                               Yes
COPY SCHEDULE                                              Yes
COPY SCRIPT                                                Yes
COPY SERVERGROUP                                           Yes
DEFINE DEVCLASS for the z/OS media server                  Yes
DEFINE EVENTSERVER                                         Yes
DEFINE LIBRARY for the ZOSMEDIA library type               Yes
DEFINE PATH where the destination is a ZOSMEDIA library    Yes
DEFINE STGPOOL                                             Supported in the user interface except
                                                           for the RECLAMATIONTYPE parameter,
                                                           which is needed only for EMC Centera
                                                           devices.
DELETE DATAMOVER                                           Yes
DELETE DEVCLASS for the z/OS media server                  Yes
DELETE DISK                                                Yes
DELETE EVENT                                               Yes
DELETE EVENTSERVER                                         Yes
DELETE LIBRARY for the ZOSMEDIA library type
DELETE PATH where the destination is a ZOSMEDIA library
DELETE SUBSCRIBER                                          Yes
DISABLE EVENTS                                             Yes
DISMOUNT DEVICE                                            Yes
DISPLAY OBJNAME                                            Yes
ENABLE EVENTS                                              Yes
Event logging commands (BEGIN EVENTLOGGING,                Yes. Some SNMP options can be viewed
END EVENTLOGGING, ENABLE EVENTS, DISABLE EVENTS)           in the user interface, in the
                                                           properties notebook of a server.
MOVE GRPMEMBER                                             Yes
For more information about backup operations, see the Backup-Archive Client
Installation and User's Guide.
In the following task description, TIP_HOME is the root directory for your Tivoli
Integrated Portal installation.
Tasks:
Licensing IBM Tivoli Storage Manager
Starting the Tivoli Storage Manager server on page 614
Moving the Tivoli Storage Manager server to another system on page 623
Date and time on the server on page 624
Managing server processes on page 624
Preemption of client or server operations on page 626
Setting the server name on page 627
Adding or updating server options on page 629
Getting help on commands and error messages on page 631
For current information about supported clients and devices, visit the IBM Tivoli
Storage Manager home page at http://www.ibm.com/support/entry/portal/
Overview/Software/Tivoli/Tivoli_Storage_Manager.
The base IBM Tivoli Storage Manager feature includes the following support:
To register a license, you must issue the REGISTER LICENSE command. The
command registers new licenses for server components, including Tivoli Storage
Manager (base), Tivoli Storage Manager Extended Edition, and System Storage
Archive Manager. You must specify the name of the enrollment certificate file
containing the license to be registered when you issue the REGISTER LICENSE
command. To unregister licenses, erase the NODELOCK file found in the server
instance directory and reregister the licenses.
The file specification can contain a wildcard character (*). The following are
possible certificate file names:
tsmbasic.lic
Registers IBM Tivoli Storage Manager base edition.
tsmee.lic
Registers IBM Tivoli Storage Manager Extended Edition. This includes the
disaster recovery manager, large libraries, and NDMP.
dataret.lic
Registers the System Storage Archive Manager. This is required to enable
Data Retention Protection and Expiration and Deletion Suspension
(Deletion Hold).
*.lic Registers all IBM Tivoli Storage Manager licenses for server components.
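For example, to register the Extended Edition license, assuming that you run the
command from the directory that contains the certificate file, you might enter:
register license file=tsmee.lic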
Notes:
v The NODELOCK file name is case-sensitive and must be entered in all uppercase
letters.
v You cannot register licenses for components that are licensed on the basis of
processors, for example, Tivoli Storage Manager for Mail, Tivoli Storage
Manager for Databases, Tivoli Storage Manager for Enterprise Resource
Planning, Tivoli Storage Manager for Hardware, and Tivoli Storage Manager for
Space Management.
Monitoring licenses
When license terms change (for example, a new license is specified for the server),
the server conducts an audit to determine if the current server configuration
conforms to the license terms. The server also periodically audits compliance with
license terms. The results of an audit are used to check and enforce license terms.
If 30 days have elapsed since the previous license audit, the administrator cannot
cancel the audit. If an IBM Tivoli Storage Manager system exceeds the terms of its
license agreement, one of the following occurs:
v The server issues a warning message indicating that it is not in compliance with
the licensing terms.
v If you are running in Try Buy mode, operations fail because the server is not
licensed for specific features.
You must contact your IBM Tivoli Storage Manager account representative to
modify your agreement.
Note: During a license audit, the server calculates, by node, the amount of
backup, archive, and space management storage in use. This calculation
can take a great deal of CPU time and can stall other server activity. Use
the AUDITSTORAGE server option to specify that storage is not to be
calculated as part of a license audit.
Displaying license information
Use the QUERY LICENSE command to display details of your current
licenses and determine licensing compliance.
Scheduling automatic license audits
Use the SET LICENSEAUDITPERIOD command to specify the number of
days between automatic audits.
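For example, to display the current license details and then schedule an automatic
license audit every 30 days (the 30-day interval is shown only as an illustration),
you might enter:
query license
set licenseauditperiod 30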
Important: The PVU calculations that are provided by Tivoli Storage Manager are
considered estimates and are not legally binding. The PVU information reported by
Tivoli Storage Manager is not considered an acceptable substitute for the IBM
License Metric Tool.
Metrics used to calculate processor value units (PVUs):
query pvuestimate
select * from pvuestimate_details
Device classification
For purposes of PVU calculation, you can classify devices, such as workstations
and servers, as client nodes, server nodes, or other. By default, devices are
classified as client or server:
Client Backup-archive clients that run on Microsoft Windows 7, Microsoft
Windows XP Professional, and Apple systems are classified as client
devices.
Server Backup-archive clients that run on all platforms except for Microsoft
Windows 7, Microsoft Windows XP Professional, and Apple systems are
classified as server devices. All other node types are also classified as
server devices. The server on which Tivoli Storage Manager is running is
classified as a server device.
You can change the node classification to reflect how the device is used in the
system. For example, if a node is classified as a server, but functions as a client,
you can reclassify it as a client. If a node is not used in the system, you can
reclassify it as other.
When you assign a classification, consider the services that are associated with the
device. For example, a Microsoft Windows XP Professional notebook might be a
In a Tivoli Storage Manager system, you can assign multiple client node names to
the same physical workstation. For example, a clustering solution can have several
node names that are defined in the Tivoli Storage Manager server environment to
provide protection if a failover occurs. Redundant node names, or node names that
manage data for physical workstations that no longer exist, should not be counted
for licensing purposes. In this case, you might classify the node as other by using
the UPDATE NODE command.
Limitations
The PVU calculations are estimates because the software cannot determine all of
the factors that are required for a final number. The following factors affect the
accuracy of the calculations:
v PVU estimates are provided only for Tivoli Storage Manager V6.3 server devices
that have established a connection with the Tivoli Storage Manager server since
the installation of or upgrade to Tivoli Storage Manager V6.3.
v The default classification of nodes is based on assumptions, as described in
Device classification on page 609.
v The PVU estimate might not reflect the actual number of processors or processor
cores in use.
v The PVU estimate might not reflect cluster configurations.
v The PVU estimate might not reflect virtualization, including VMware and AIX
LPAR and WPAR.
v Common Inventory Technology might not be able to identify some processors,
and some processors might not have corresponding entries in the PVU table.
To calculate the total PVUs, sum the PVUs for all nodes.
Related information
Table 59. Information about PVUs and licensing
Information type   Location
IBM PVU table      ftp://public.dhe.ibm.com/software/tivoli_support/misc/CandO/PVUTable/
PVU calculator     https://www.ibm.com/software/howtobuy/passportadvantage/valueunitcalculator/vucalc.wss
Before you begin, review the information about how PVUs are estimated and what
the limitations are. For more information, see Role of processor value units in
assessing licensing requirements on page 608. Tivoli Storage Manager offers
several options for viewing PVU information. Select the option that best meets
your needs. To export the PVU estimates to a spreadsheet, use the SELECT * FROM
PVUESTIMATE_DETAILS command or export the data from the Administration Center.
Important: The PVU calculations that are provided by Tivoli Storage Manager are
considered estimates and are not legally binding.
5. To obtain a more accurate PVU estimate, you might want to change the
classifications of nodes. To change node classifications, issue the UPDATE NODE
command or update the role in the node notebook of the Administration
Center. For more information about the UPDATE NODE command, see the Tivoli
Storage Manager Administrator's Reference.
6. To calculate the PVUs for a node, use the following formula: PVUs = number of
processors on the node * processor type (core count) * PVU value. To
calculate the total PVUs, sum the PVUs for all nodes. A worked example is
shown after this list. For more information about the PVU estimation formula,
see Formula for PVU estimation.
7. After you generate a PVU report, additional analysis might include removing
redundancies, deleting obsolete information from the report, and accounting for
known systems that have not logged in to and connected to the server.
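As a purely hypothetical illustration of the formula in step 6, consider a node that
has 2 processors of a processor type that counts 4 cores each, with a PVU value of
50 per core. The estimate for that node is 2 * 4 * 50 = 400 PVUs. If a second node
is estimated at 200 PVUs, the total estimate for the system is 400 + 200 = 600 PVUs.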
Tip: If you cannot obtain PVU information from a client node that is running
on a Linux operating system, ensure that Common Inventory Technology is
installed on that client node. After you install Common Inventory Technology,
obtain a new PVU estimate.
Tip: You can copy the files to any location on the host operating system, but
ensure that all files are copied to the same directory.
5. Ensure that guest virtual machines are running. This step is necessary to ensure
that the guest virtual machines are detected during the hardware scan.
6. To collect PVU information, issue the following command:
retrieve -v
If you restart the host machine or change the configuration, run the retrieve
command again to ensure that current information is retrieved.
Tip: When the IBM Tivoli Storage Manager for Virtual Environments license file is
installed on a VMware vStorage backup server, the platform string that is stored
on the Tivoli Storage Manager server is set to TDP VMware for any node name
that is used on the server. The reason is that the server is licensed for Tivoli
Storage Manager for Virtual Environments. The TDP VMware platform string can
be used for PVU calculations. If a node is used to back up the server with standard
backup-archive client functions, such as file-level and image backup, interpret the
TDP VMware platform string as a backup-archive client for PVU calculations.
The following events occur when you start or restart the IBM Tivoli Storage
Manager server:
v The server invokes the communication methods specified in the server options
file.
The date and time check occurs when the server is started and once each hour
thereafter. The following factors cause a date to be invalid:
v Earlier than the server installation date and time
v More than one hour earlier than the last time the date was checked
v More than 30 days later than the last time the date was checked
The standard way to start the server is by using the instance user ID. By using the
instance user ID, you simplify the setup process and avoid potential issues.
However, in some cases, it might be necessary to use another user ID to start the
server. For example, you might want to use the root user ID to ensure that the
server can access specific devices. To allow a user other than the instance user ID
to start the server, the user ID must have sufficient authority to issue the start
command for the server and database manager, and the user ID must belong to the
SYSADM_GROUP group. The user ID must have authority to access the server
database and to use all files, directories, and devices required by the server. Before
starting the server, explicitly grant server database authority to the user ID and
verify all other authorities for the user ID.
Tip:
When you start the Tivoli Storage Manager server, the server attempts to change
certain ulimit values to unlimited. In general, this helps to ensure optimal
performance and to assist in debugging. If you are a non-root user when you start
the server, attempts to change the ulimit values might fail. To ensure proper server
operation if you are running as a non-root user, make sure that you set the ulimit
values as high as possible, preferably to unlimited, before you start the server.
This task includes setting DB2 user limits as high as possible. DB2 relies on private
data memory for sort memory allocations during SQL processing. Insufficient
shared heap memory can lead to Tivoli Storage Manager server failures when
interacting with DB2. For more information about setting the appropriate operating
system values, see the following technote:
http://www.ibm.com/support/docview.wss?uid=swg21212174
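As a minimal sketch (the exact limits that your environment permits might differ),
a non-root instance user might raise the limits in the shell before starting the
server:
ulimit -c unlimited
ulimit -d unlimited
ulimit -f unlimited
ulimit -n 65536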
Instead of using the rc.dsmserv script, you can use the dsmserv.rc script to
automatically start the server.
If the server is installed on a Linux operating system, you must use the dsmserv.rc
script to automatically start the server.
Tip: If you used either the upgrade wizard or the configuration wizard, you had
the choice of starting the upgraded server automatically when the system is
restarted. If you selected that choice, an entry for the server was added to the
/etc/inittab file.
If you did not use a wizard to configure the Tivoli Storage Manager server, add an
entry to the /etc/inittab file for each server that you want to automatically start:
v Set the run level to the value that corresponds to multiuser mode, with
networking enabled. Typically, the run level to use is 2, 3, or 5, depending on
the operating system and its configuration. Ensure that the run level in the
/etc/inittab file matches the run level of the operating system. For details
about run levels, consult the documentation for your operating system.
v On the rc.dsmserv command, specify the instance owner name with the -u
option, and the location of the server instance directory with the -i option.
Verify the correct syntax for the entry by consulting the documentation for your
operating system.
Example: Automatically starting a server instance
In this example, the instance owner is tsminst1; the server instance
directory is /home/tsminst1/tsminst1; the run level is 3; and the process ID
is tsm1. Add the following entry to /etc/inittab file, on one line:
tsm1:3:once:/opt/tivoli/tsm/server/bin/rc.dsmserv -u tsminst1
-i /home/tsminst1/tsminst1 -q >/dev/console 2>&1
Example: Automatically starting several server instances
If you have more than one server instance that you want to run, add an
entry for each server instance. This example uses the following instance
owner IDs:
v tsminst1
v tsminst2
This example uses the following instance directories:
v /home/tsminst1/tsminst1
v /home/tsminst2/tsminst2
This example uses the following process IDs:
v tsm1
v tsm2
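Modeled on the single-instance example above, and assuming run level 3, the two
/etc/inittab entries might look like the following, each on one line:
tsm1:3:once:/opt/tivoli/tsm/server/bin/rc.dsmserv -u tsminst1
-i /home/tsminst1/tsminst1 -q >/dev/console 2>&1
tsm2:3:once:/opt/tivoli/tsm/server/bin/rc.dsmserv -u tsminst2
-i /home/tsminst2/tsminst2 -q >/dev/console 2>&1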
For more information about the rc.dsmserv script, see the Administrator's Reference.
Here are some examples of operations that require starting the server in
stand-alone mode:
v Verifying the Tivoli Storage Manager server operations after completing a server
upgrade.
v Verifying the Tivoli Storage Manager server operations after performing one of
the following operations:
Restoring the server database by using the DSMSERV RESTORE DB
command.
Dumping, reinitializing, and reloading the server database if a catastrophic
error occurs (recovery log corruption, for example), and if the DSMSERV
RESTORE DB command cannot be used.
v Running Tivoli Storage Manager recovery utilities when asked by IBM Customer
Support.
To perform these tasks, you should disable the following server activities:
v All administrative sessions
v All client sessions
v All scheduled operations
v HSM client migration
v Storage pool migration
v Storage pool reclamation
v Client file expiration
Note: You can continue to access the server. Any current client activities
complete unless a user logs off or you cancel a client session.
4. You can perform maintenance, reconfiguration, or recovery operations, and
then halt the server.
To restart the server after completing the operations, follow this procedure:
1. To return the server options to their original settings, edit the dsmserv.opt file.
2. Start the server as described in Starting the Tivoli Storage Manager server on
page 614.
3. Enable client sessions, administrative sessions, and server-to-server sessions by
issuing the following command:
enable sessions all
Important: The standard way to start the server is by using the instance user ID.
By using the instance user ID, you simplify the setup process and avoid potential
issues. However, in some cases, it might be necessary to use another user ID to
start the server. For example, you might want to use the root user ID to ensure that
the server can access specific devices. To allow a user other than the instance user
ID to start the server, the user ID must have sufficient authority to issue the start
command for the server and database manager, and the user ID must belong to the
SYSADM_GROUP group. The user ID must have authority to access the server
database and to use all files, directories, and devices required by the server. Before
starting the server, explicitly grant server database authority to the user ID and
verify all other authorities for the user ID.
Log in, connect to the database, and issue the DB2 GRANT command. For example:
1. Log in as root or as the instance owner.
# su - tsmuser1
2. Start DB2.
$ db2start
3. Connect to the TSMDB1 database.
$ db2 connect to tsmdb1
4. Grant database authority to the user ID that is to start the server. In this
example, DBADM authority is granted to the user ID tsmuser1.
$ db2 grant dbadm on database to user tsmuser1
The root user must belong to the SYSADM_GROUP group, and must be
granted DBADM authority by the instance user.
5. Ensure that the root user has access to all files, directories, and devices required
by the server.
To start a Tivoli Storage Manager server instance from the root user ID, ensure that
you have authorized the root user ID to the database. Also ensure that the root
user ID has access to all files, directories, and devices required by the server. For
details, see Authorizing root users to start the server.
To start the server automatically by using the root user ID, use the rc.dsmserv
script in your system startup and set the -U option in the script. For information
about configuring the server for automatic startup, see the topic describing the
automatic startup of servers in the IBM Tivoli Storage Manager Upgrade and
Migration Guide for V5 Servers. For information about the rc.dsmserv startup script,
see the IBM Tivoli Storage Manager Administrator's Reference.
To start the server by using the root user ID, complete the following steps:
1. To set access rights, add the root user ID to the primary group of the user ID
that owns the instance.
2. Change the .profile file for the root user to run the db2profile script for the
instance user ID, by using the following command. For example, if the instance
name is tsminst1, the root user ID must run /home/tsminst1/sqllib/db2profile
to set the database environment variables and library.
# . ~tsminst1/sqllib/db2profile
Restriction: If you are running a Bourne shell, enter the fully qualified home
directory for the instance user ID.
# . <home_directory>/sqllib/db2profile
where home_directory is the fully qualified home directory for the instance user
ID.
3. Change to the instance directory. For example, for the server instance named
tsminst1:
# cd /tsminst1
4. Start the server instance.
v To start the tsminst1 server by using the root user ID and run it as the
instance owner, use the -u option.
# nohup /opt/tivoli/tsm/server/bin/dsmserv -u tsminst1 -q &
Important: The database and log files are written by the instance user ID,
not the root user ID. Ensure that the permissions on the database and log
directories are set to allow read and write access by the instance user ID.
Attention: Before running the server in the background, ensure the following
conditions exist:
1. An administrative node has been registered and granted system authority. See
Registering administrator IDs on page 898.
2. The administrative client options file has been updated with the correct
SERVERNAME and TCPPORT options.
3. The administrative client can access the Tivoli Storage Manager server.
If you do not follow these steps, you cannot control the server. When this occurs,
you can only stop the server by canceling the process, using the process number
displayed at startup. You may not be able to take down the server cleanly without
this process number.
To start the server running in the background, make sure you are running as the
instance ID for your Tivoli Storage Manager instance. Go to the instance directory
and enter the following:
dsmserv -q &
Each server instance requires a unique user ID that is the instance owner.
As part of server configuration, you create a directory to store the files for the
server instance. The following files are stored in the instance directory:
v The server options file, dsmserv.opt
v The device configuration file, if the DEVCONFIG server option does not specify a
fully qualified name
v The volume history file, if the VOLUMEHISTORY server option does not specify a
fully qualified name
v Volumes for DEVTYPE=FILE storage pools, if the directory for the device class
is not fully specified, or not fully qualified
v The dsmserv.v6lock file
v User exits
v Trace output (if not fully qualified)
Database and recovery log files are stored in separate directories, not in the
instance directory.
To manage the system memory that is used by each server on a system, use the
DBMEMPERCENT server option to limit the percentage of system memory that can be
used by the database manager of each server. If all servers are equally important,
use the same value for each server. If one server is a production server and other
servers are test servers, set the value for the production server to a higher value
than the test servers.
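As an illustration (the percentages shown are assumptions, not recommendations),
the dsmserv.opt file for the production server might contain the line
dbmempercent 50
while the dsmserv.opt file for each test server on the same system might contain
dbmempercent 25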
For example, to run two server instances, tsminst1 and tsminst2, create instance
directories such as /tsminst1 and /tsminst2. In each directory, place the
dsmserv.opt file for that server. Each dsmserv.opt file must specify a different port
for the server to use.
To automatically start the two server instances, you can use the script, rc.dsmserv.
When you halt the server, all processes are abruptly stopped and client sessions are
canceled, even if they are not complete. Any in-progress transactions are rolled
back when the server is restarted. Administrator activity is not possible.
If possible, halt the server only after current administrative and client node
sessions have completed or canceled. To shut down the server without severely
impacting administrative and client node activity with the server, you must:
1. Disable the server to prevent new client node sessions from starting by issuing
the DISABLE SESSIONS command. This command does not cancel sessions
currently in progress or system processes like migration and reclamation.
Note: If the process you want to cancel is currently waiting for a tape volume
to be mounted (for example, a process initiated by EXPORT, IMPORT, or
MOVE DATA commands), the mount request is automatically cancelled. If a
volume associated with the process is currently being mounted by an automated
library, the cancel may not take effect until the mount is complete.
5. Halt the server to shut down all server operations by using the HALT
command.
Note:
1. You can define an alias for the HALT command by using the ALIASHALT server
option. The server option allows you to define a term other than HALT that
performs the same function. The HALT command still functions; the server
option simply provides an additional method for issuing the HALT command.
2. In order for the administrative client to recognize an alias for the HALT
command, the client must be started with the CHECKALIASHALT option
specified. See the Administrator's Reference for more information.
If you cannot connect to the server with an administrative client and you want to
stop the server, cancel the process by using these steps:
1. Ensure that you know the correct process ID for the IBM Tivoli Storage
Manager server. If you do not know the process ID, the information is in the
dsmserv.v6lock file, which is found in the directory from which the server is
running. To display the file enter:
cat /instance_dir/dsmserv.v6lock
2. Issue the KILL command with the process ID number.
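For example, if the process ID reported in the dsmserv.v6lock file is 1234, a
minimal illustration is:
kill 1234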
These are the prerequisites to back up the database from one server and restore it
to another server:
v The same operating system must be running on both servers.
v The sequential storage pool that you use to back up the server database must be
accessible from both servers. Only manual and SCSI library types are supported
for the restore operation.
v The restore operation must be done by a Tivoli Storage Manager server at a code
level that is the same as that on the server that was backed up.
The sequential storage pool that you use to back up the server database must
be accessible from both servers.
3. Halt the server.
4. Move any libraries and devices from the original server to the new server, or
ensure that they are accessible through a storage area network.
5. Move copies of the volume history file, device configuration file, and server
options file to the target server.
6. Restore the backed up database on the target server. Ensure that you issue the
following commands as the instance user. For example:
v To maintain the current directory structure on the target server, issue this
command:
dsmserv restore db
v To change the current directory structure on the target server, create a file
(for example dbdir.txt), list the directories that are to be restored on separate
lines, and issue this command:
dsmserv restore db on=dbdir.txt
7. Start the target server.
Related tasks:
Moving the database and recovery log on a server on page 687
Every time the server is started and for each hour thereafter, a date and time check
occurs. An invalid date can be one of the following:
v Earlier than the server installation date and time.
v More than one hour earlier than the last time the date was checked.
v More than 30 days later than the last time the date was checked.
Most processes occur quickly and are run in the foreground, but others that take
longer to complete run as background processes.
The server assigns each background process an ID number and displays the
process ID when the operation starts. This process ID number is used for tracking
purposes. For example, if you issue an EXPORT NODE command, the server
displays a message similar to the following:
EXPORT NODE started as Process 10
Some of these processes can also be run in the foreground by using the WAIT=YES
parameter when you issue the command from an administrative client. See
Administrator's Reference for details.
If you do not know the process ID, you can display information about all
background processes by entering:
query process
The following figure shows a server background process report after a DELETE
FILESPACE command was issued. The report displays a process ID number, a
description, and a completion status for each background process.
To find the process number, issue the QUERY PROCESS command. For details,
see Requesting information about server processes.
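For example, to cancel the background process that was started as process 10 in
the earlier EXPORT NODE example, you can enter:
cancel process 10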
Note:
1. To list open mount requests, issue the QUERY REQUEST command. You can
also query the activity log to determine if a given process has a pending
mount request.
2. A mount request indicates that a volume is needed for the current process.
However, the volume might not be available in the library. If the volume is
not available, the reason might be that you either issued the MOVE MEDIA
command or CHECKOUT LIBVOLUME command, or that you manually
removed the volume from the library.
The following operations can be preempted and are listed in order of priority. The
server selects the lowest priority operation to preempt, for example, reclamation.
1. Move data
2. Migration from disk to sequential media
3. Backup, archive, or HSM migration
4. Migration from sequential media to sequential media
5. Reclamation
To disable preemption, specify NOPREEMPT in the server options file. If you specify
this option, the BACKUP DB command and the export and import commands are the
only operations that can preempt other operations.
The following high priority operations can preempt operations for access to a
specific volume:
v HSM recall
v Node replication
v Restore
v Retrieve
The following operations can be preempted, and are listed in order of priority. The
server preempts the lowest priority operation, for example reclamation.
1. Move data
2. Migration from disk to sequential media
3. Backup, archive, or HSM migration
4. Migration from sequential media
5. Reclamation
To disable preemption, specify NOPREEMPT in the server options file. If you specify
this option, the BACKUP DB command and the export and import commands are the
only operations that can preempt other operations.
You can issue the QUERY STATUS command to see the name of the server.
To specify the server name you must have system privileges. For example, to
change the server name to WELLS_DESIGN_DEPT., enter the following:
set servername wells_design_dept.
Attention:
v If this is a source server for a virtual volume operation, changing its name can
impact its ability to access and manage the data it has stored on the
corresponding target server.
v To prevent problems related to volume ownership, do not change the name of a
server if it is a library client.
You can change the server name with the SET SERVERNAME command. However,
depending on the platform, the change can have undesirable results. Some
examples to be aware of are:
v Passwords might be invalidated. For example, Windows clients use the server
name to identify which passwords belong to which servers. Changing the server
name after Windows backup-archive clients are connected forces clients to
re-enter the passwords.
v Device information might be affected.
v Registry information on Windows platforms might change.
This command displays all configuration settings that are used by the database.
5. In the instance_directory/sqllib directory, locate the db2nodes.cfg file. The
file contains an entry that shows the previous host name, for example:
0 tsmmon TSMMON 0
a. Update the entry with the new host name. The entry is similar to the
following entry:
0 tsmnew newhostname 0
b. Save and close the changed file.
You can add or update server options by editing the dsmserv.opt file or by using
the SETOPT command.
For information about editing the server options file, refer to the Administrator's
Reference.
You can update existing server options by issuing the SETOPT command. For
example, to update the existing server option value for MAXSESSIONS to 20,
enter:
setopt maxsessions 20
The contents of the volume history file are created by using the volume history
table in the server database. When opening a volume, the server might check the
table to determine whether the volume is already used. If the table is large, it can
take a long time to search. Other sessions or processes, such as backups and other
processes that use multiple sequential volumes, can be delayed due to locking.
For example, if you keep backups for seven days, information older than seven
days is not needed. If information about database backup volumes or export
volumes is deleted, the volumes return to scratch status. For scratch volumes of
device type FILE, the files are deleted. When information about storage pools
volumes is deleted, the volumes themselves are not affected.
To delete volume history, issue the DELETE VOLHISTORY command. For example, to
delete volume history that is seven days old or older, issue the following
command:
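A representative form of such a command (shown here as a sketch; TYPE=ALL
selects all entry types and TODATE specifies the cutoff date) is:
delete volhistory type=all todate=today-7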
When deleting information about volume history, keep in mind the following
guidelines:
v Ensure that you delete volume history entries such as STGNEW, STGDELETE,
and STGREUSE that are older than the oldest database backup that is required
to perform a point-in-time database restore. If necessary, you can delete other
types of entries.
v Existing volume history files are not automatically updated with the DELETE
VOLHISTORY command.
v Do not delete information about sequential volumes until you no longer need
that information. For example, do not delete information about the reuse of
storage volumes unless you backed up the database after the time that was
specified for the delete operation.
v Do not delete the volume history for database backup or export volumes that
are stored in automated libraries unless you want to return the volumes to
scratch status. When the DELETE VOLHISTORY command removes information for
such volumes, the volumes automatically return to scratch status. The volumes
are then available for reuse by the server and the information stored on them
can be overwritten.
v To ensure that you have a backup from which to recover, you cannot remove the
most current database snapshot entry by deleting volume history. Even if a more
current, standard database backup exists, the latest database snapshot is not
deleted.
v To display volume history, issue the QUERY VOLHISTORY command. For example,
to display volume history up to yesterday, issue the following command:
query volhistory enddate=today-1
DRM: DRM automatically expires database backup series and deletes the volume history
entries.
To enable the server to use the AIX Asynchronous I/O support, set the
AIXASYNCIO option to YES. To disable AIXASYNCIO support, set the option to
NO. The default is for AIXASYNCIO to be disabled. It is recommended that you
refer to the AIX documentation for additional information regarding the AIO
subsystem and requirements.
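For example, to enable asynchronous I/O, you might add the following line to the
dsmserv.opt file and restart the server:
aixasyncio yes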
You can issue the HELP command with no operands to display a menu of help
selections. You also can issue the HELP command with operands that specify help
menu numbers, commands, or message numbers.
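For example, to display the help menu, or help for a specific command, you might
enter:
help
help define schedule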
Tivoli Storage Manager includes a central scheduling component that allows the
automatic processing of administrative commands during a specific time period
when the schedule is activated. Schedules that are started by the scheduler can run
in parallel. You can process scheduled commands sequentially by using scripts that
contain a sequence of commands with WAIT=YES. You can also use an external
scheduler to invoke the administrative client to start one or more administrative
commands.
Each scheduled administrative command is called an event. The server tracks and
records each scheduled event in the database. You can delete event records as
needed to recover database space.
Concepts:
Automating a basic administrative command schedule on page 634
Tailoring schedules on page 635
Copying schedules on page 638
Deleting schedules on page 638
Notes:
1. Scheduled administrative command output is directed to the activity log. This
output cannot be redirected. For information about the length of time activity
log information is retained in the database, see Using the Tivoli Storage
Manager activity log on page 803.
2. You cannot schedule MACRO or QUERY ACTLOG commands.
To later update or tailor your schedules, see Tailoring schedules on page 635.
Include the following parameters when you define a schedule with the DEFINE
SCHEDULE command:
v Specify the administrative command to be issued (CMD= ).
v Specify whether the schedule is activated (ACTIVE= ).
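For example, a schedule named BACKUP_ARCHIVEPOOL, similar to the one that
is queried in the next example, might be defined as follows, entered on one line
(the command text, start time, and period shown here are illustrative
assumptions):
define schedule backup_archivepool type=administrative
cmd="backup stgpool archivepool recoverypool" active=yes
starttime=20:00 period=2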
The following figure shows an example of a report that is displayed after you
enter:
query schedule backup_archivepool type=administrative
Note: The asterisk (*) in the first column specifies whether the corresponding
schedule has expired. If there is an asterisk in this column, the schedule has
expired.
You can check when the schedule is projected to run and whether it ran
successfully by using the QUERY EVENT command. For information about
querying events, see Querying events on page 639.
Tailoring schedules
To control your schedules more precisely, specify values for the schedule
parameters instead of accepting the default settings when you define or update
schedules.
You can specify the following values when you issue the DEFINE SCHEDULE or
UPDATE SCHEDULE command:
Schedule name
All schedules must have a unique name, which can be up to 30 characters.
Schedule style
You can specify either classic or enhanced scheduling. With classic
scheduling, you can define the interval between the startup windows for a
schedule. With enhanced scheduling, you can choose the days of the week,
days of the month, weeks of the month, and months the startup window
can begin on.
Initial start date, initial start time, and start day
You can specify a past date, the current date, or a future date for the initial
start date for a schedule with the STARTDATE parameter.
You can specify a start time, such as 6 p.m. with the STARTTIME parameter.
Copying schedules
You can create a new schedule by copying an existing administrative schedule.
When you copy a schedule, Tivoli Storage Manager copies the following
information:
v A description of the schedule
v All parameter values from the original schedule
You can then update the new schedule to meet your needs.
Deleting schedules
To delete the administrative schedule ENGBKUP, enter:
delete schedule engbkup type=administrative
All scheduled events, including their status, are tracked by the server. An event
record is created in the server database whenever processing of a scheduled
command is started or missed.
To minimize the processing time when querying events, minimize the time range.
To query an event for an administrative command schedule, you must specify the
TYPE=ADMINISTRATIVE parameter. Figure 77 shows an example of the results of
the following command:
query event * type=administrative
If you issue a query for events, past events may display even if the event records
have been deleted. The events displayed with a status of Uncertain indicate that
complete information is not available because the event records have been deleted.
To determine if event records have been deleted, check the message that is issued
after the DELETE EVENT command is processed.
Event records are automatically removed from the database after both of the
following conditions are met:
v The specified retention period has passed
v The startup window for the event has elapsed
You can change the retention period from the default of 10 days by using the SET
EVENTRETENTION command.
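For example, to keep event records for 15 days, you can enter:
set eventretention 15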
Use the DELETE EVENT command to manually remove event records. For example,
to delete all event records written prior to 11:59 p.m. on June 30, 2002, enter:
delete event type=administrative 06/30/2002 23:59
The administrator can run the script from the Administration Center, or schedule
the script for processing using the administrative command scheduler on the
server.
You can define a script with the DEFINE SCRIPT command. You can initially
define the first line of the script with this command. For example:
define script qaixc "select node_name from nodes where platform=aix"
desc='Display AIX clients'
To define additional lines, use the UPDATE SCRIPT command. For example, to
add a QUERY SESSION command, enter:
update script qaixc "query session *"
You can also easily define and update scripts using the Administration Center
where you can also use local workstation cut and paste functions.
Note: The Administration Center only supports ASCII characters for input. If you
need to enter characters that are not ASCII, do not use the Administration Center.
Issue the DEFINE SCRIPT and UPDATE SCRIPT commands from the server
console.
You can specify a WAIT parameter with the DEFINE CLIENTACTION command.
This allows the client action to complete before processing the next step in a
command script or macro. To determine where a problem is within a command in
a script, use the ISSUE MESSAGE command.
Restriction: You cannot redirect the output of a command within a Tivoli Storage
Manager script. Instead, run the script and then specify command redirection. For
example, to direct the output of script1 to the file c:\temp\test.out, run the
script and specify command redirection as in the following example:
run script1 > c:\temp\test.out
For example, to define a script whose command lines are read in from the file
BKUP12.MAC, issue:
define script admin1 file=bkup12.mac
The script is defined as ADMIN1, and the contents of the script have been read in
from the file BKUP12.MAC.
Note: The file must reside on the server and must be readable by the server.
You must schedule the maintenance script to run. The script typically includes
commands to back up, copy, and delete data. You can automate your server
maintenance by creating a maintenance script, and running it when your server is
not in heavy use.
A custom maintenance script can be created using the maintenance script editor or
by converting a predefined maintenance script.
When you click Server Maintenance in the navigation tree, a list of servers is
displayed in the Maintenance Script table with either None, Custom, or
Predefined noted in the Maintenance Script column.
Perform the following steps to create a custom maintenance script using the
maintenance script editor:
1. Select a server.
2. Click Select Action > Create Custom Maintenance Script.
3. Click Select an Action and construct your maintenance script by adding a
command to the script. The following actions are available:
v Back Up Server Database
v Back Up Storage Pool
v Copy Active Data to Active-data Pool
v Create Recovery Plan File
v Insert Comment
v Delete Volume History
v Delete Expired Data
v Migrate Stored Data
v Move Disaster Recovery Media
v Run Script Commands in Parallel
v Run Script Commands Serially
v Reclaim Primary Storage Pool
v Reclaim Copy Storage Pool
To edit your custom script after it is created and saved, click Server Maintenance
in the navigation tree, select the server with the custom script and click Select
Action > Modify Maintenance Script. Your custom maintenance script opens in
the script editor where you can add, remove, or change the order of the
commands.
You can produce a predefined maintenance script using the maintenance script
wizard.
When you click Server Maintenance in the navigation tree, a list of servers is
displayed in the Maintenance Script table with either None, Custom, or
Predefined noted in the Maintenance Script column.
Perform the following steps to create a maintenance script using the maintenance
script wizard:
1. Select a server that requires a maintenance script to be defined (None is
specified in the Maintenance Script column).
2. Click Select Action > Create Maintenance Script.
3. Follow the steps in the wizard.
After completing the steps in the wizard, you can convert your predefined
maintenance script into a custom maintenance script. If you choose to convert your
script into a custom script, select the server and click Select Action > Convert to
Custom Maintenance Script. Your predefined maintenance script is converted and
opened in the maintenance script editor where you can modify the schedule and
the maintenance actions.
Running commands serially in a script ensures that any preceding commands are
complete before proceeding and ensures that any following commands are run
serially. When a script starts, all commands are run serially until a PARALLEL
command is encountered. Multiple commands running in parallel and accessing
common resources, such as tape drives, can run serially.
Script return codes remain the same before and after a PARALLEL command is run.
When a SERIAL command is encountered, the script return code is set to the
maximum return code from any previous commands run in parallel.
When using server commands that support the WAIT parameter after a PARALLEL
command, the behavior is as follows:
v If you specify (or use the default) WAIT=NO, a script does not wait for the
completion of the command when a subsequent SERIAL command is
encountered. The return code from that command reflects processing only up to
In most cases, you can use WAIT=YES on commands that are run in parallel.
The following example illustrates how the PARALLEL command is used to back up,
migrate, and reclaim storage pools.
/*run multiple commands in parallel and wait for
them to complete before proceeding*/
PARALLEL
/*back up four storage pools simultaneously*/
BACKUP STGPOOL PRIMPOOL1 COPYPOOL1 WAIT=YES
BACKUP STGPOOL PRIMPOOL2 COPYPOOL2 WAIT=YES
BACKUP STGPOOL PRIMPOOL3 COPYPOOL3 WAIT=YES
BACKUP STGPOOL PRIMPOOL4 COPYPOOL4 WAIT=YES
/*wait for all previous commands to finish*/
SERIAL
/*after the backups complete, migrate stgpools
simultaneously*/
PARALLEL
MIGRATE STGPOOL PRIMPOOL1 DURATION=90 WAIT=YES
MIGRATE STGPOOL PRIMPOOL2 DURATION=90 WAIT=YES
MIGRATE STGPOOL PRIMPOOL3 DURATION=90 WAIT=YES
MIGRATE STGPOOL PRIMPOOL4 DURATION=90 WAIT=YES
/*wait for all previous commands to finish*/
SERIAL
/*after migration completes, reclaim storage
pools simultaneously*/
PARALLEL
RECLAIM STGPOOL PRIMPOOL1 DURATION=120 WAIT=YES
RECLAIM STGPOOL PRIMPOOL2 DURATION=120 WAIT=YES
RECLAIM STGPOOL PRIMPOOL3 DURATION=120 WAIT=YES
RECLAIM STGPOOL PRIMPOOL4 DURATION=120 WAIT=YES
When you run the script you must specify two values, one for $1 and one for $2.
For example:
run sqlsample node_name aix
The command that is processed when the SQLSAMPLE script is run is:
select node_name from nodes where platform=aix
As each command is processed in a script, the return code is saved for possible
evaluation before the next command is processed. The return code can be one of
three severities: OK, WARNING, or ERROR. Refer to Administrator's Reference for a
list of valid return codes and severity levels.
You can use the IF clause at the beginning of a command line to determine how
processing of the script should proceed based on the current return code value. In
the IF clause you specify a return code symbolic value or severity.
The server initially sets the return code at the beginning of the script to RC_OK.
The return code is updated by each processed command. If the current return code
from the processed command is equal to any of the return codes or severities in
the IF clause, the remainder of the line is processed. If the current return code is
not equal to one of the listed values, the line is skipped.
The following script example backs up the BACKUPPOOL storage pool only if
there are no sessions currently accessing the server. The backup proceeds only if a
return code of RC_NOTFOUND is received:
/* Backup storage pools if clients are not accessing the server */
select * from sessions
/* There are no sessions if rc_notfound is received */
if(rc_notfound) backup stg backuppool copypool
The following script example backs up the BACKUPPOOL storage pool if a return
code with a severity of warning is encountered:
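A minimal sketch of such a script follows; the command that precedes the IF
clause is an arbitrary assumption, and any preceding command whose return code
has a severity of warning triggers the backup:
query stgpool
if(warning) backup stg backuppool copypool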
The following example uses the IF clause together with RC_OK to determine if
clients are accessing the server. If a RC_OK return code is received, this indicates
that client sessions are accessing the server. The script proceeds with the exit
statement, and the backup does not start.
/* Back up storage pools if clients are not accessing the server */
select * from sessions
/* There are sessions if rc_ok is received */
if(rc_ok) exit
backup stg backuppool copypool
The GOTO statement is used in conjunction with a label statement. The label
statement is the target of the GOTO statement. The GOTO statement directs script
processing to the line that contains the label statement to resume processing from
that point.
The label statement always ends with a colon (:); the rest of the line after the colon can be blank.
The following example uses the GOTO statement to back up the storage pool only
if there are no sessions currently accessing the server. In this example, the return
code of RC_OK indicates that clients are accessing the server. The GOTO statement
directs processing to the done: label which contains the EXIT statement that ends
the script processing:
/* Back up storage pools if clients are not accessing the server */
select * from sessions
/* There are sessions if rc_ok is received */
if(rc_ok) goto done
backup stg backuppool copypool
done:exit
The following is an example of the QSTATUS script. The script has lines 001, 005,
and 010 as follows:
001 /* This is the QSTATUS script */
005 QUERY STATUS
010 QUERY PROCESS
To append the QUERY SESSION command at the end of the script, issue the
following:
update script qstatus "query session"
The QUERY SESSION command is assigned a command line number of 015 and
the updated script is as follows:
001 /* This is the QSTATUS script */
005 QUERY STATUS
010 QUERY PROCESS
015 QUERY SESSION
You can change an existing command line by specifying the LINE= parameter.
Line number 010 in the QSTATUS script contains a QUERY PROCESS command.
To replace the QUERY PROCESS command with the QUERY STGPOOL command,
specify the LINE= parameter as follows:
update script qstatus "query stgpool" line=10
To add the SET REGISTRATION OPEN command as the new line 007 in the
QSTATUS script, issue the following:
update script qstatus "set registration open" line=7
The QUERY1 command script now contains the same command lines as the
QSTATUS command script.
The various formats you can use to query scripts are as follows:
Format Description
Standard Displays the script name and description. This is the default.
Detailed Displays commands in the script and their line numbers, date of
last update, and update administrator for each command line in the
script.
Lines Displays the name of the script, the line numbers of the commands,
comment lines, and the commands.
Raw Outputs only the commands contained in the script without all
other attributes. You can use this format to direct the script to a file
so that it can be loaded into another server with the DEFINE SCRIPT
command specifying the FILE= parameter.
You can create additional server scripts by querying a script and specifying the
FORMAT=RAW and OUTPUTFILE parameters. You can use the resulting output as
input into another script without having to create a script line by line.
The following is an example of querying the SRTL2 script and directing the output
to newscript.script:
query script srtl2 format=raw outputfile=newscript.script
You can then edit the newscript.script with an editor that is available to you on
your system. To create a new script using the edited output from your query, issue:
define script srtnew file=newscript.script
For example, to delete the 007 command line from the QSTATUS script, issue:
delete script qstatus line=7
Note: There is no Tivoli Storage Manager command that can cancel a script after it
starts. To stop a script, an administrator must halt the server.
You can preview the command lines of a script without actually executing the
commands by using the PREVIEW=YES parameter with the RUN command. If the
script contains substitution variables, the command lines are displayed with the
substituted variables. This is useful for evaluating a script before you run it.
Enter:
run qaixc preview=yes
Using macros
Tivoli Storage Manager supports macros on the administrative client. A macro is a
file that contains one or more administrative client commands. You can only run a
macro from the administrative client in batch or interactive modes. Macros are
stored as a file on the administrative client. Macros are not distributed across
servers and cannot be scheduled on the server.
The name for a macro must follow the naming conventions of the administrative
client running on your operating system. For more information about file naming
conventions, refer to the Administrator's Reference.
In macros that contain several commands, use the COMMIT and ROLLBACK
commands to control command processing within the macro. For more information
about using these commands, see Command processing in a macro on page 653.
You can include the MACRO command within a macro file to invoke other macros
up to ten levels deep. A macro invoked from the Tivoli Storage Manager
administrative client command prompt is called a high-level macro. Any macros
invoked from within the high-level macro are called nested macros.
The administrative client ignores any blank lines included in your macro.
However, a completely blank line terminates a command that is continued (with a
continuation character).
The following is an example of a macro called REG.MAC that registers and grants
authority to a new administrator:
register admin pease mypasswd -
contact=david pease, x1234
grant authority pease -
classes=policy,storage -
domains=domain1,domain2 -
stgpools=stgpool1,stgpool2
This example uses continuation characters in the macro file. For more information
on continuation characters, see Using continuation characters on page 652.
After you create a macro file, you can update the information that it contains and
use it again. You can also copy the macro file, make changes to the copy, and then
run the copy. Refer to the Administrator's Reference for more information on how
commands are entered and the general rules for entering administrative
commands.
To write a comment:
v Write a slash and an asterisk (/*) to indicate the beginning of the comment.
v Write the comment.
v Write an asterisk and a slash (*/) to indicate the end of the comment.
You can put a comment on a line by itself, or you can put it on a line that contains
a command or part of a command.
For example, to use a comment to identify the purpose of a macro, write the
following:
/* auth.mac-register new nodes */
Comments cannot be nested and cannot span lines. Every line of a comment must
contain the comment delimiters.
To use a continuation character, enter a dash or a back slash at the end of the line
that you want to continue. With continuation characters, you can do the following:
v Continue a command. For example:
register admin pease mypasswd -
contact="david, ext1234"
v Continue a list of values by entering a dash or a back slash, with no preceding
blank spaces, after the last comma of the list that you enter on the first line.
Then, enter the remaining items in the list on the next line with no preceding
blank spaces. For example:
stgpools=stg1,stg2,stg3,-
stg4,stg5,stg6
v Continue a string of values enclosed in quotation marks by entering the first
part of the string enclosed in quotation marks, followed by a dash or a back
slash at the end of the line. Then, enter the remainder of the string on the next
line enclosed in the same type of quotation marks. For example:
contact="david pease, bldg. 100, room 2b, san jose,"-
"ext. 1234, alternate contact-norm pass,ext 2345"
Tivoli Storage Manager concatenates the two strings with no intervening blanks.
You must use only this method to continue a quoted string of values across more
than one line.
For example, to create a macro named AUTH.MAC to register new nodes, write it
as follows:
/* register new nodes */
register node %1 %2 - /* userid password */
contact=%3 - /* name, phone number */
domain=%4 /* policy domain */
Then, when you run the macro, you enter the values you want to pass to the
server to process the command.
For example, to register the node named DAVID with a password of DAVIDPW,
with his name and phone number included as contact information, and assign him
to the DOMAIN1 policy domain, enter:
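A command of the following form might be used; the contact string shown is an
illustrative assumption:
macro auth.mac david davidpw "david, ext. 1234" domain1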
If your system uses the percent sign as a wildcard character, the administrative
client interprets a pattern-matching expression in a macro where the percent sign is
immediately followed by a numeric digit as a substitution variable.
Running a macro
Use the MACRO command when you want to run a macro. You can enter the
MACRO command in batch or interactive mode.
If the macro does not contain substitution variables (such as the REG.MAC macro
described in the Writing commands in a macro on page 651), run the macro by
entering the MACRO command with the name of the macro file. For example:
macro reg.mac
If you enter fewer values than there are substitution variables in the macro, the
administrative client replaces the remaining variables with null strings.
If you want to omit one or more values between values, enter a null string ("") for
each omitted value. For example, if you omit the contact information in the
previous example, you must enter:
macro auth.mac pease mypasswd "" domain1
If an error occurs in any command in the macro or in any nested macro, the server
terminates processing and rolls back any changes caused by all previous
commands.
If you specify the ITEMCOMMIT option when you enter the DSMADMC
command, the server commits each command in a script or a macro individually,
after successfully completing processing for each command. If an error occurs, the
server continues processing and only rolls back changes caused by the failed
command.
You can control precisely when commands are committed with the COMMIT
command. If an error occurs while processing the commands in a macro, the server
terminates processing of the macro and rolls back any uncommitted changes.
Uncommitted changes are commands that have been processed since the last
COMMIT. Make sure that your administrative client session is not running with the
ITEMCOMMIT option if you want to control command processing with the
COMMIT command.
Note: Commands that start background processes cannot be rolled back. For a list
of commands that can generate background processes, see Managing server
processes on page 624.
You can test a macro before implementing it by using the ROLLBACK command.
You can enter the commands (except the COMMIT command) you want to issue in
the macro, and enter ROLLBACK as the last command. Then, you can run the
macro to verify that all the commands process successfully. Any changes to the
database caused by the commands are rolled back by the ROLLBACK command
you have included at the end. Remember to remove the ROLLBACK command
before you make the macro available for actual use. Also, make sure your
administrative client session is not running with the ITEMCOMMIT option if you
want to control command processing with the ROLLBACK command.
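For example, a test version of a registration macro might look like the following
sketch; the node name, password, and contact value are placeholders:
/* test the registration commands without committing them */
register node testnode testpw contact="test only" domain=domain1
rollback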
If you have a series of commands that process successfully via the command line,
but are unsuccessful when issued within a macro, there are probably dependencies
between commands. It is possible that a command issued within a macro cannot
be processed successfully until a previous command that is issued within the same
macro is committed. Either of the following actions allow successful processing of
these commands within a macro:
v Insert a COMMIT command before the command dependent on a previous
command. For example, if COMMAND C is dependent upon COMMAND B,
you would insert a COMMIT command before COMMAND C. An example of
this macro is:
command a
command b
commit
command c
v Start the administrative client session using the ITEMCOMMIT option. This
causes each command within a macro to be committed before the next command
is processed.
The following sections provide detailed concept and task information about the
database and recovery log.
Concepts:
Database and recovery log overview
Tasks:
Estimating database space requirements on page 663
Estimating recovery log space requirements on page 667
Monitoring the database and recovery log on page 682
Increasing the size of the database on page 683
Reducing the size of the database on page 684
Increasing the size of the active log on page 686
Step 4: Running database backups on page 922
Restoring the database on page 946
Moving the database and recovery log on a server on page 687
Adding optional logs after server initialization on page 692
Transaction processing on page 692
Tivoli Storage Manager version 6.3 is installed with the IBM DB2 database
application. Users who are experienced DB2 administrators can choose to perform
advanced SQL queries and use DB2 tools to monitor the database. However, do
not use DB2 tools to change DB2 configuration settings from those settings that are
preset by Tivoli Storage Manager. Do not alter the DB2 environment for Tivoli
Storage Manager in other ways, such as with other products. The Tivoli Storage
Manager Version 6.3 server was built and tested with the data definition language
(DDL) and database configuration that Tivoli Storage Manager deploys.
Database: Overview
The database does not store client data; it points to the locations of the client files
in the storage pools. The Tivoli Storage Manager database contains information
about the Tivoli Storage Manager server. The database also contains information
about the data that is managed by the Tivoli Storage Manager server.
The database cannot be mirrored through Tivoli Storage Manager, but it can be
mirrored by using hardware mirroring, such as Redundant Array of Independent
Disks (RAID) 5.
The database manager manages database volumes, and there is no need to format
them. Some advantages of the database manager are:
Automatic backups
When the server is started for the first time, a full backup begins
Using TCP/IP to communicate with DB2 can greatly extend the number of
concurrent connections. The TCP/IP connection is part of the default configuration.
When the Tivoli Storage Manager V6.3 server is started for the first time, it
inspects the current configuration of the DB2 instance. It then makes any necessary
changes to ensure that both IPC and TCP/IP can be used to communicate with the
database manager. Any changes are made only as needed. For example, if the
TCP/IP node exists and has the correct configuration, it is not changed. If the node
was cataloged but has an incorrect IP address or port, it is deleted and replaced by
a node having the correct configuration.
When cataloging the remote database, the Tivoli Storage Manager server generates
a unique alias name based on the name of the local database. By default, a remote
database alias of TSMAL001 is created to go with the default database name of
TSMDB1.
Tip: Tivoli Storage Manager disables the TCP/IP connections if it cannot find an
alias in the range TSMAL001-TSMAL999 that is not already in use.
By default, the Tivoli Storage Manager server uses IPC to establish connections for
the first two connection pools, with a maximum of 480 connections for each pool.
After the first 960 connections are established, the Tivoli Storage Manager server
uses TCP/IP for any additional connections.
You can use the DBMTCPPORT server option to specify the port on which the TCP/IP
communication driver for the database manager waits for requests for client
sessions. The port number must be reserved for use by the database manager.
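For example, you might add a line like the following to the dsmserv.opt file; the
port number is illustrative, and it must be a port that is reserved for the database
manager on your system:
DBMTCPPORT 51500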
If Tivoli Storage Manager cannot connect to the database by using TCP/IP, it issues
an error message and halts. The administrator must determine the cause of the
problem and correct it before restarting the server. The server verifies that it can
connect by using TCP/IP at startup even if it is configured to initially favor IPC
connections over TCP/IP connections.
Recovery log
The recovery log helps to ensure that a failure (such as a system power outage or
application error) does not leave the database in an inconsistent state. The recovery
log is essential when you restart the Tivoli Storage Manager server or the database, and is
required if you must restore the database.
When you issue a command that makes changes, the changes are committed to the
database when the transaction completes. A committed change is permanent and cannot be rolled
back. If a failure occurs, the changes that were made but not committed are rolled
back. Then all committed transactions, which might not have been physically
written to disk, are reapplied and committed again.
During the installation process, you specify the directory location, the size of the
active log, and the location of the archive logs. You can also specify the directory
location of a log mirror if you want the additional protection of mirroring the
active log. The amount of space for the archive logs is not limited, which improves
the capacity of the server for concurrent operations compared to previous versions.
The space that you designate for the recovery log is managed automatically by the
database manager program. Space is used as needed, up to the capacity of the
defined log directories. You do not need to create and format volumes for the
recovery log.
Ensure that the recovery log has enough space. Monitor the space usage for the
recovery log to prevent problems.
Attention: To protect your data, locate the database directories and all the log
directories on separate physical disks.
Related concepts:
Transaction processing on page 692
Active log
Changes to the database are recorded in the recovery log to maintain a consistent
database image. You can restore the server to the latest time possible, by using the
active and archive log files, which are included in database backups.
To help ensure that the required log information is available for restoring the
database, you can specify that the active log is mirrored to another file system
location. For the best availability, locate the active log mirror on a different
physical device.
Active log
The active log files record transactions that are in progress on the server.
The active log stores all the transactions that have not yet been committed. The
active log always contains the most recent log records. If a failure occurs, the
changes that were made but not committed are rolled back, and all committed
transactions, which might not have been physically written to disk, are reapplied
and committed again.
The location and size of the active log are set during initial configuration of a new
or upgraded server. You can also set these values by specifying the
ACTIVELOGDIRECTORY and the ACTIVELOGSIZE parameters of the DSMSERV FORMAT or
DSMSERV LOADFORMAT utilities. Both the location and size can be changed later. To
change the size of the active log, see Increasing the size of the active log on page
686. To change the location of the active log directory, see Moving only the active
log, archive log, or archive failover log on page 689.
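As a sketch, a DSMSERV FORMAT command that sets the size and location of the active
log might look like the following example; the directory names and the 32 GB log
size are illustrative:
dsmserv format dbdir=/tsmdb001 activelogsize=32768
  activelogdirectory=/tsmlog archlogdirectory=/tsmarchlog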
Mirroring the active log can protect the database when a hardware failure occurs
on the device where the active log is stored. Mirroring the active log provides
another level of protection in addition to placing the active log on hardware that
has high-availability features. Creating a log mirror is optional but recommended.
Place the active log directory and the log mirror directory on different physical
devices. If you increase the size of the active log, the log mirror size is increased
automatically.
Mirroring the log can affect performance, because of the doubled I/O activity that
is required to maintain the mirror. The additional space that the log mirror requires
is another factor to consider.
You can create the log mirror during initial configuration of a new or upgraded
server. If you use the DSMSERV LOADFORMAT utility instead of the wizard to configure
the server, specify the MIRRORLOGDIRECTORY parameter. If the log mirror directory is
not created at that time, you can create it later by specifying the
MIRRORLOGDIRECTORY option in the server options file, dsmserv.opt.
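For example, to create a log mirror after initial configuration, you might add an
entry like the following to dsmserv.opt; the directory name is illustrative:
MIRRORLOGDIRECTORY /tsmlogmirror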
Archive log
The archive log contains copies of closed log files that had been in the active log.
The archive log is not needed for normal processing, but it is typically needed for
recovery of the database.
To provide roll-forward recovery of the database to the current point in time, all
logs since the last full database backup must be available for the restore
operation. These log files are stored in the archive log. The archive log files are
included in database backups and are used for roll-forward recovery of the
database.
Archived log files are saved until they are included in a full database backup. The
pruning of the archive log files is based on full database backups: the archive log
files that are included in a database backup are automatically pruned after a full
database backup cycle has been completed. The amount of space for the archive
log is not limited.
Archive log files are automatically deleted as part of the full backup processes and
must not be deleted manually. Monitor both the active and archive logs. If the
active log is close to filling, check the archive log. If the archive log is full or close
to full, run one or more full database backups.
If the file systems or drives where the archive log directory and the archive
failover log directory are located become full, the archived logs are stored in the
active log directory. Those archived logs are returned to the archive log directory
when the space problem is resolved, or when a full database backup is run.
Specifying an archive failover log directory can prevent problems that occur if the
archive log runs out of space. Place the archive log directory and the archive
failover log directory on different physical drives.
You can specify the location of the failover log directory during initial
configuration of a new or upgraded server. You can also specify its location with
the ARCHFAILOVERLOGDIRECTORY parameter of the DSMSERV FORMAT or DSMSERV
LOADFORMAT utility. If it is not created through the utilities, it can be created later by
specifying the ARCHFAILOVERLOGDIRECTORY option in the server options file,
dsmserv.opt. See Adding optional logs after server initialization on page 692 for
details.
For information about the space required for the log, see Archive failover log
space on page 680.
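For example, to add a failover location after initial configuration, a dsmserv.opt
entry might look like the following; the directory name is illustrative:
ARCHFAILOVERLOGDIRECTORY /tsmarchfailover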
The active log files contain information about in-progress transactions. This
information is needed to restart the server and database after a disaster.
Transactions are stored in the log files of the active log, and a transaction can span
multiple log files.
When all transactions that are part of an active log file complete, that log file is
copied from the active log to the archive log. Transactions continue to be written to
the active log files while the completed active log files are copied to the archive
log. If a transaction spans all the active log files, and the files are filled before the
transaction is committed, the Tivoli Storage Manager server halts.
When an active log file is full, and there are no active transactions referring to it,
the file is copied to the archive log directory. An active log file cannot be deleted
until all transactions in the log file are either committed or discontinued.
If the archive log is full and there is no failover archive log, the log files remain in
the active log. If the active log then becomes full and there are in-progress
transactions, the Tivoli Storage Manager server halts. If there is an archive failover
log, it is used only if the archive log fills. It is important to monitor the archive log
directory to ensure that there is space in the active log.
When the database is backed up, the database manager deletes the archive log files
that are no longer needed for future database backups or restores.
The archive log is included in database backups and is used for roll-forward
recovery of the database. The archive log files that are included in a database
backup are automatically pruned after a full database backup cycle has completed.
Therefore, ensure that the archive log has enough space to store the log files for the
database backups.
The user data limit that is displayed when you issue the ulimit -d command is the
soft user data limit. It is not necessary to set the hard user data limit for DB2. The
default soft user data limit is 128 MB. This is equivalent to the value of 262,144
512-byte units as set in the /etc/security/limits file, or 131,072 KB units as
displayed by the ulimit -d command. This setting limits private memory usage to
about one half of what is available in the 256 MB private memory segment
available for a 32-bit process on AIX.
Note: A DB2 server instance cannot make use of the Large Address Space or of
very large address space AIX 32-bit memory models due to shared memory
requirements. On some systems, for example those requiring large amounts of sort
memory for performance, it is best to increase the user data limit to allow DB2 to
allocate more than 128 MB of memory in a single process.
You can set the user data memory limit to "unlimited" (a value of "-1"). This setting
is not recommended for 32-bit DB2 because it allows the data region to overwrite
the stack, which grows downward from the top of the 256 MB private memory
segment. The result would typically be to cause the database to end abnormally. It
is, however, an acceptable setting for 64-bit DB2 because the data region and stack
are allocated in separate areas of the very large address space available to 64-bit
AIX processes.
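For example, on AIX you might raise or remove the soft data limit for the server
instance user with the chuser command; the instance user name tsminst1 is an
assumption, and the unlimited setting is appropriate only for 64-bit DB2:
# display the current soft data limit, in KB units, for the current user
ulimit -d
# set the data limit to unlimited for the instance user (64-bit DB2 only)
chuser data=-1 tsminst1
The change takes effect the next time the instance user logs in.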
Disk space requirements for the server database and recovery log
The drives or file systems on which you locate the database and log directories are
important to the proper operation of your IBM Tivoli Storage Manager server.
Placing each database and recovery log directory on a separate disk provides the
best performance and the best disaster protection.
For the optimal database performance, choose the fastest and most reliable disks
that are configured for random access I/O, such as Redundant Array of
Independent Disks (RAID) hardware. The internal disks included by default in
most servers and consumer grade Parallel Advanced Technology Attachment
(PATA) disks and Serial Advanced Technology Attachment (SATA) disks are too
slow.
It is best to use multiple directories for the database, with four to eight directories
for a large Tivoli Storage Manager database. Locate each database directory on a
disk volume that uses separate physical disks from other database directories. The
Tivoli Storage Manager server database I/O workload is spread over all
directories, thus increasing the read and write I/O performance. Having many
small capacity physical disks is better than having a few large capacity physical
disks with the same rotation speed.
Locate the active log, mirror log, and archive log directories also on high-speed,
reliable disks. The failover archive log can be on slower disks, assuming that the
archive log is sufficiently large and that the failover log is used infrequently.
The access pattern for the active log is always sequential. Physical placement on
the disk is important. It is best to isolate the active log from the database and from
the disk storage pools. If they cannot be isolated, then place the active log with
storage pools and not with the database.
Enable read cache for the database and recovery log, and enable write cache if the
disk subsystems support it.
Restriction: You cannot use raw logical volumes for the database. To reuse space
on the disk where raw logical volumes were located for an earlier version of the
server, create file systems on the disk first.
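For example, commands along these lines create and mount a JFS2 file system for a
database directory on AIX; the volume group name, size, and mount point are
assumptions:
# create a JFS2 file system in volume group tsmvg and mount it
crfs -v jfs2 -g tsmvg -m /tsmdb001 -A yes -a size=50G
mount /tsmdb001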
Capacity planning
Capacity planning for Tivoli Storage Manager includes managing resources such as
the database and recovery log. To maximize resources as part of capacity planning,
you must estimate space requirements for the database and the recovery log.
| For information about the benefits of deduplication and guidance on how to make
| effective use of the Tivoli Storage Manager deduplication feature, see Optimizing
| Performance.
Consider using at least 25 GB for the initial database space. Provision file system
space appropriately. A database size of 25 GB is adequate for a test environment or
a library-manager-only environment. For a production server supporting client
workloads, the database size is expected to be larger. If you use random-access
disk (DISK) storage pools, more database and log storage space is needed than for
sequential-access storage pools.
| Restriction: The guideline does not include space that is used during data
| deduplication.
| v 100 - 200 bytes for each cached file, copy storage pool file, active-data pool file,
| and deduplicated file.
| v Additional space is required for database optimization to support varying
| data-access patterns and to support server back-end processing of the data. The
| amount of extra space is equal to 50% of the estimate for the total number of
| bytes for file objects.
| In the following example for a single client, the calculations are based on the
| maximum values in the preceding guidelines. The examples do not take into
| account that you might use file aggregation. In general, when you aggregate small
| files, it reduces the amount of required database space. File aggregation does not
| affect space-managed files.
| 1. Calculate the number of file versions. Add each of the following values to
| obtain the number of file versions:
| a. Calculate the number of backed-up files. For example, as many as 500,000
| client files might be backed up at a time. In this example, storage policies
| are set to keep up to three copies of backed up files:
| 500,000 files * 3 copies = 1,500,000 files
| b. Calculate the number of archive files. For example, as many as 100,000
| client files might be archived copies.
| c. Calculate the number of space-managed files. For example, as many as
| 200,000 client files might be migrated from client workstations.
| Using 1000 bytes per file, the total amount of database space that is required
| for the files that belong to the client is 1.8 GB:
| (1,500,000 + 100,000 + 200,000) * 1000 = 1.8 GB
| 2. Calculate the number of cached files, copy storage-pool files, active-data pool
| files, and deduplicated files:
| a. Calculate the number of cached copies. For example, caching is enabled in a
| 5 GB disk storage pool. The high migration threshold of the pool is 90%
| and the low migration threshold of the pool is 70%. Thus, 20% of the disk
| pool, or 1 GB, is occupied by cached files.
| If the average file size is about 10 KB, approximately 100,000 files are in
| cache at any one time:
| 100,000 files * 200 bytes = 19 MB
| b. Calculate the number of copy storage-pool files. All primary storage pools
| are backed up to the copy storage pool:
| (1,500,000 + 100,000 + 200,000) * 200 bytes = 343 MB
Tip: In the preceding examples, the results are estimates. The actual size of the
database might differ from the estimate because of factors such as the number of
directories and the length of the path and file names. Periodically monitor your
database and adjust its size as necessary.
During normal operations, the Tivoli Storage Manager server might require
temporary database space. This space is needed for the following reasons:
v To hold the results of sorting or ordering that are not already being kept and
optimized in the database directly. The results are temporarily held in the
database for processing.
v To give administrative access to the database through one of the following
methods:
A DB2 open database connectivity (ODBC) client
An Oracle Java database connectivity (JDBC) client
Structured Query Language (SQL) to the server from an administrative-client
command line
| Consider using an extra 50 GB of temporary space for every 500 GB of space for
| file objects and optimization. See the guidelines in the following table. In the
| example that is used in the preceding step, a total of 1.7 TB of database space is
| required for file objects and optimization for 500 clients. Based on that calculation,
| 200 GB is required for temporary space. The total amount of required database
| space is 1.9 TB.
For example, expiration processing can use a large amount of database space. If
there is not enough system memory available to the database to store the files identified for
expiration, some of the data is allocated to temporary disk space. During
expiration processing, if a node or file space is selected that is too large to process,
the database manager cannot sort the data.
To run database operations, consider adding more database space for the following
scenarios:
v The database has a small amount of space and the server operation that requires
temporary space uses the remaining free space.
v The file spaces are large, or the file spaces have a policy assigned to them that
creates many file versions.
v The Tivoli Storage Manager server must run with limited memory.
v An out of database space error is displayed when you deploy a Tivoli Storage
Manager V6 server.
Attention: Do not alter the DB2 software that is installed with IBM Tivoli
Monitoring for Tivoli Storage Manager installation packages and fix packs. Do not
install or upgrade to a different version, release, or fix pack of DB2 software
because doing so can damage the database.
The database manager sorts data in a specific sequence, according to the SQL
statement that you issue to request the data. Depending on the workload on the
server, if there is more data than the database manager can process in memory,
the data (that is ordered in sequence) is allocated to temporary disk space. Data is
allocated to temporary disk space when a result set is too large to be held in
memory.
For example, expiration processing can produce a large result set. If there is not
enough system memory available to the database to store the result set, some of the data is
allocated to temporary disk space. During expiration processing, if a node or file
space are selected that are too large to process, the database manager does not
have enough memory to sort the data.
To run database operations, consider adding more database space for the following
scenarios:
v The database has a small amount of space and the server operation that requires
temporary space uses the remaining free space.
v The file spaces are large, or the file spaces have a policy assigned to them that
creates many file versions.
v The Tivoli Storage Manager server must run with limited memory. The database
uses the Tivoli Storage Manager server main memory to run database
operations. However, if there is insufficient memory available, the Tivoli Storage
Manager server allocates temporary space on disk to the database. For example,
if 10 GB of memory is available and database operations require 12 GB of
memory, the database uses temporary space.
v An out of database space error is displayed when you deploy a Tivoli Storage
Manager V6 server. Monitor the server activity log for messages related to
database space.
Important: Do not change the DB2 software that is installed with the Tivoli
Storage Manager installation packages and fix packs. Do not install or upgrade to a
different version, release, or fix pack of DB2 software, because doing so can
damage the database.
In Tivoli Storage Manager servers V6.1 and later, the active log can be a maximum
size of 128 GB. The archive log size is limited to the size of the file system that it is
installed on.
Use the following general guidelines when you estimate the size of the active log:
v The suggested starting size for the active log is 16 GB.
| v Ensure that the active log is at least large enough for the amount of concurrent
| activity that the server typically handles. As a precaution, try to anticipate the
| largest amount of work that the server manages at one time. Provision the active
| log with extra space that can be used if needed. Consider using 20% of extra
| space.
The archive log directory must be large enough to contain the log files that are
generated since the previous full backup. For example, if you perform a full
backup of the database every day, the archive log directory must be large enough
to hold the log files for all the client activity that occurs during 24 hours. To
recover space, the server deletes obsolete archive log files after a full backup of the
database. If the archive log directory becomes full and a directory for archive
failover logs does not exist, log files remain in the active log directory. This
condition can cause the active log directory to fill up and stop the server. When the
server restarts, some of the existing active-log space is released.
| After the server is installed, you can monitor archive log utilization and the space
| in the archive log directory. If the space in the archive log directory fills up, it can
| cause the following problems:
| v The server is unable to perform full database backups. Investigate and resolve
| this problem.
| v Other applications write to the archive log directory, exhausting the space that is
| required by the archive log. Do not share archive log space with other
| applications including other Tivoli Storage Manager servers. Ensure that each
| server has a separate storage location that is owned and managed by that
| specific server.
| For guidance about the layout and tuning of the active log and archive log, see
| Optimizing Performance.
Related tasks:
Increasing the size of the active log on page 686
Example: Estimating active and archive log sizes for basic client-store
operations:
Basic client-store operations include backup, archive, and space management. Log
space must be sufficient to handle all store transactions that are in progress at one
time.
To determine the sizes of the active and archive logs for basic client-store
operations, use the following calculation:
number of clients x files stored during each transaction
x log space needed for each file
Active log: Suggested size 19.5 GB (1)
  3.5 + 16 = 19.5 GB
Archive log: Suggested size 58.5 GB (1)
  Because of the requirement to be able to store archive logs across three server
  database-backup cycles, multiply the estimate for the active log by 3 to estimate
  the total archive log requirement:
  3.5 x 3 = 10.5 GB
  10.5 + 48 = 58.5 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that does
not use deduplication, 16 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that does
not use deduplication is 48 GB. If you substitute values from your environment
and the results are larger than 16 GB and 48 GB, use your results to size the active
log and archive log.
If the client option RESOURCEUTILIZATION is set to a value that is greater than the
default, the concurrent workload for the server increases.
To determine the sizes of the active and archive logs when clients use multiple
sessions, use the following calculation:
number of clients x sessions for each client x files stored
during each transaction x log space needed for each file
Active log: Suggested sizes 26.5 GB and 51 GB (1)
  10.5 + 16 = 26.5 GB
  35 + 16 = 51 GB
Archive log: Suggested sizes 79.5 GB and 153 GB (1)
  10.5 x 3 = 31.5 GB; 31.5 + 48 = 79.5 GB
  35 x 3 = 105 GB; 105 + 48 = 153 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that does
not use deduplication, 16 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that does
not use deduplication is 48 GB. If you substitute values from your environment
and the results are larger than 16 GB and 48 GB, use your results to size the active
log and archive log.
Example: Estimating active and archive log sizes for simultaneous write
operations:
If client backup operations use storage pools that are configured for simultaneous
write, the amount of log space that is required for each file increases.
The log space that is required for each file increases by about 200 bytes for each
copy storage pool that is used for a simultaneous write operation. In the example
in the following table, data is stored to two copy storage pools in addition to a
primary storage pool. The estimated log size increases by 400 bytes for each file. If
you use the suggested value of 3053 bytes of log space for each file, the total
number of required bytes is 3453.
Active log: Suggested size 20 GB (1)
  4 + 16 = 20 GB
Archive log: Suggested size 60 GB (1)
  Because of the requirement to be able to store archive logs across three server
  database-backup cycles, multiply the estimate for the active log by 3 to estimate
  the archive log requirement:
  4 GB x 3 = 12 GB
  12 + 48 = 60 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that does
not use deduplication, 16 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that does
not use deduplication is 48 GB. If you substitute values from your environment
and the results are larger than 16 GB and 48 GB, use your results to size the active
log and archive log.
Example: Estimating active and archive log sizes for basic client store operations
and server operations:
For example, migration of files from the random-access (DISK) storage pool to a
sequential-access disk (FILE) storage pool uses approximately 110 bytes of log
space for each file that is migrated. For example, suppose that you have 300
Add this value to the estimate for the size of the active log that you calculated for
basic client store operations.
Example: Estimating active and archive log sizes under conditions of extreme
variation:
Problems with running out of active log space can occur if you have many
transactions that complete quickly and some transactions that take much longer to
complete. A typical case occurs when many workstation or file-server backup
sessions are active and a few very large database server-backup sessions are active.
If this situation applies to your environment, you might need to increase the size
of the active log so that the work completes successfully.
The Tivoli Storage Manager server deletes unnecessary files from the archive log
only when a full database backup occurs. Consequently, when you estimate the
space that is required for the archive log, you must also consider the frequency of
full database backups.
For example, if a full database backup occurs once a week, the archive log space
must be able to contain the information in the archive log for a full week.
The difference in archive log size for daily and full database backups is shown in
the example in the following table.
Table 63. Full database backups
Maximum number of client nodes that back up, archive, or migrate files
concurrently at any time: 300
  The number of client nodes that back up, archive, or migrate files every night.
Files stored during each transaction: 4096
  The default value of the server option TXNGROUPMAX is 4096.
Log space that is required for each file: 3453 bytes
  3053 bytes for each file plus 200 bytes for each copy storage pool.
Active log: Suggested size 20 GB (1)
  4 + 16 = 20 GB
Archive log: Suggested size with a full database backup every day, 60 GB (1)
  Because of the requirement to be able to store archive logs across three backup
  cycles, multiply the estimate for the active log by 3 to estimate the total archive
  log requirement:
  4 GB x 3 = 12 GB
  12 + 48 = 60 GB
Archive log: Suggested size with a full database backup every week, 132 GB (1)
  Because of the requirement to be able to store archive logs across three server
  database-backup cycles, multiply the estimate for the active log by 3 to estimate
  the total archive log requirement. Multiply the result by the number of days
  between full database backups:
  (4 GB x 3) x 7 = 84 GB
  84 + 48 = 132 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that does
not use deduplication, 16 GB is the suggested minimum size for an active log. The
suggested starting size for an archive log in a production environment that does
not use deduplication is 48 GB. If you substitute values from your environment
and the results are larger than 16 GB and 48 GB, use your results to size the active
log and archive log.
Example: Estimating active and archive log sizes for data deduplication
operations:
If you deduplicate data, you must consider its effects on space requirements for
active and archive logs.
The following factors affect requirements for active and archive log space:
The amount of deduplicated data
The effect of data deduplication on the active log and archive log space
depends on the percentage of data that is eligible for deduplication. If the
percentage of data that can be deduplicated is relatively high, more log
space is required.
The size and number of extents
Approximately 1,500 bytes of active log space are required for each extent
Active log: Suggested sizes 66 GB and 79.8 GB (1)
  50 + 16 = 66 GB
  63.8 + 16 = 79.8 GB
Archive log: Suggested sizes 198 GB and 239.4 GB (1)
  50 GB x 3 = 150 GB; 150 + 48 = 198 GB
  63.8 GB x 3 = 191.4 GB; 191.4 + 48 = 239.4 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that uses
deduplication, 32 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that uses
deduplication is 96 GB. If you substitute values from your environment and the
results are larger than 32 GB and 96 GB, use your results to size the active log and
archive log.
Active log: Suggested sizes 71.6 GB and 109.4 GB (1)
  55.6 + 16 = 71.6 GB
  93.4 + 16 = 109.4 GB
Archive log: Suggested sizes 214.8 GB and 328.2 GB (1)
  The estimated size of the active log multiplied by a factor of 3:
  55.6 GB x 3 = 166.8 GB; 166.8 + 48 = 214.8 GB
  93.4 GB x 3 = 280.2 GB; 280.2 + 48 = 328.2 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that uses
deduplication, 32 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that uses
deduplication is 96 GB. If you substitute values from your environment and the
results are larger than 32 GB and 96 GB, use your results to size the active log and
archive log.
Clustering indexes are prone to filling up the index pages, causing index splits and
merges that must also be logged. A number of the tables implemented by the
server have more than one index. A table that has four indexes would require 16
index log records for each row that is moved for the reorganization.
The server monitors characteristics of the database, the active log, and the archive
log to determine if a database backup is needed. For example, during an online
table reorganization, if the file system for the archive log space begins to fill up,
the server triggers a database backup. When a database backup is started, any
online table reorganization in progress is paused so that the database backup can
operate without contending for resources with the reorganization.
Creating a log mirror is a suggested option. If you increase the size of the active
log, the log mirror size is increased automatically. Mirroring the log can affect
performance because of the doubled I/O activity that is required to maintain the
mirror. The additional space that the log mirror requires is another factor to
consider when deciding whether to create a log mirror.
If the mirror log directory becomes full, the server issues error messages to the
activity log and to the db2diag.log. Server activity continues.
Specifying an archive failover log directory is optional, but it can prevent problems
that occur if the archive log runs out of space. If both the archive log directory and
the drive or file system where the archive failover log directory is located become
full, the data remains in the active log directory. This condition can cause the
active log to fill up, which causes the server to halt. If you use an archive failover
log directory, place the archive log directory and the archive failover log directory
on different physical drives.
Important: Maintain adequate space for the archive log directory, and consider
using an archive failover log directory. For example, suppose the drive or file
system where the archive log directory is located becomes full and the archive
failover log directory does not exist or is full. If this situation occurs, the log files
that are ready to be moved to the archive log remain in the active log directory. If
the active log becomes full, the server stops.
By monitoring the usage of the archive failover log, you can determine whether
additional space is needed for the archive log. The goal is to minimize the need to
use the archive failover log by ensuring that the archive log has adequate space.
The locations of the archive log and the archive failover log are set during initial
configuration. If you use the DSMSERV LOADFORMAT utility instead of the wizard to
configure the server, you specify the ARCHLOGDIRECTORY parameter for the archive
log directory. In addition, you specify the ARCHFAILOVERLOGDIRECTORY parameter for
the archive failover log directory. If the archive failover log is not created at initial
configuration, you can create it by specifying the ARCHFAILOVERLOGDIRECTORY option
in the server options file.
Active log
If the amount of available active log space is too low, the following messages are
displayed in the activity log:
ANR4531I: IC_AUTOBACKUP_LOG_USED_SINCE_LAST_BACKUP_TRIGGER
This message is displayed when the active log space exceeds the maximum
specified size. The Tivoli Storage Manager server starts a full database
backup.
To change the maximum log size, halt the server. Open the dsmserv.opt
file, and specify a new value for the ACTIVELOGSIZE option. When you are
finished, restart the server.
ANR0297I: IC_BACKUP_NEEDED_LOG_USED_SINCE_LAST_BACKUP
This message is displayed when the active log space exceeds the maximum
specified size. You must back up the database manually.
Archive log
If the amount of available archive log space is too low, the following message is
displayed in the activity log:
ANR0299I: IC_BACKUP_NEEDED_ARCHLOG_USED
The ratio of used archive-log space to available archive-log space exceeds
the log utilization threshold. The Tivoli Storage Manager server starts a full
automatic database backup.
Database
If the amount of space available for database activities is too low, the following
messages are displayed in the activity log:
ANR2992W: IC_LOG_FILE_SYSTEM_UTILIZATION_WARNING_2
The used database space exceeds the threshold for database space
utilization. To increase the space for the database, use the EXTEND DBSPACE
command or the DSMSERV FORMAT utility with the DBDIR parameter.
ANR1546W: FILESYSTEM_DBPATH_LESS_1GB
The available space in the directory where the server database files are
located is less than 1 GB.
When a Tivoli Storage Manager server is created with the DSMSERV
FORMAT utility or with the configuration wizard, a server database and
recovery log are also created. In addition, files are created to hold database
information used by the database manager. The path specified in this
message indicates the location of the database information used by the
database manager. If space is unavailable in the path, the server can no
longer function.
You must add space to the file system or make space available on the file
system or disk.
You can monitor the database and recovery log space whether the server is online
or offline.
v When the Tivoli Storage Manager server is online, you can issue the QUERY
DBSPACE command to view the total space, used space, and free space for the file
systems or drives where your database is located. To view the same information
when the server is offline, issue the DSMSERV DISPLAY DBSPACE command. The
following example shows the output of this command:
Location: /tsmdb001
Total Space (MB): 46,080.00
Used Space (MB): 20,993.12
Free Space (MB): 25,086.88
Location: /tsmdb002
Total Space (MB): 46,080.00
Used Space (MB): 20,992.15
Free Space (MB): 25,087.85
Location: /tsmdb003
Total Space (MB): 46,080.00
Used Space (MB): 20,993.16
Free Space (MB): 25,086.84
Location: /tsmdb004
Total Space (MB): 46,080.00
Used Space (MB): 20,992.51
Free Space (MB): 25,087.49
v To view more detailed information about the database when the server is online,
issue the QUERY DB command. The following example shows the output of this
command if you specify FORMAT=DETAILED:
Database Name: TSMDB1
Total Size of File System (MB): 184,320
Space Used by Database (MB): 83,936
Free Space Available (MB): 100,349
Total Pages: 6,139,995
Usable Pages: 6,139,451
Used Pages: 6,135,323
Free Pages: 4,128
Buffer Pool Hit Ratio: 100.0
Total Buffer Requests: 97,694,823,985
Sort Overflows: 0
Package Cache Hit Ratio: 100.0
Last Database Reorganization: 06/25/2009 01:33:11
Full Device Class Name: LTO1_CLASS
Incrementals Since Last Full: 0
Last Complete Backup Date/Time: 06/06/2009 14:01:30
v When the Tivoli Storage Manager server is online, issue the QUERY LOG
FORMAT=DETAILED command to display the total space, used space, and free space
for the active log, and the locations of all the logs. To display the same
information when the Tivoli Storage Manager server is offline, issue the DSMSERV
DISPLAY LOG command.
v You can view information about the database on the server console and in the
activity log. You can set the level of database information by using the SET
DBREPORTMODE command. Specify that no diagnostic information is displayed
(NONE), that all diagnostic information is displayed (FULL), or that the only
events that are displayed are those that are exceptions and might represent
errors (PARTIAL). The default is PARTIAL.
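For example, to display all diagnostic information about the database in the
activity log, you can issue the following command:
set dbreportmode full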
The server can use all the space that is available to the drives or file systems where
the database directories are located. To ensure that database space is always
available, monitor the space in use by the server and the file systems where the
directories are located.
The QUERY DB command displays the number of free pages in the table space and
the free space available to the database. If the number of free pages is low and
there is a lot of free space available, the database allocates additional space.
However, if free space is low, it might not be possible to expand the database.
For example, to add two directories to the storage space for the database, issue the
following command:
extend dbspace /tsmdb005,/tsmdb006
After a directory is added to a Tivoli Storage Manager server, the directory might
not be used to its full extent. Some Tivoli Storage Manager operations cause the
added database space to be used over time. For example, table reorganizations or
temporary database transactions, such as long-running SELECT statements,
gradually begin to use the added space. The
database space redistribution among all directories can require a few days or
weeks. If the existing database directories are nearly full when the directory is
added, the server might encounter an out-of-space condition, as reported in the
db2diag.log.
If this condition occurs, halt and restart the server. If the restart does not correct
the condition, remove the database and then restore it to the same or new
directories.
Reorganization of table data can be initiated by the Tivoli Storage Manager server
or by DB2. If server-initiated reorganization is enabled, the server analyzes
selected database tables and indexes based on table activity, and determines when
reorganization is required. The database manager runs a reorganization while
server operations continue. If reorganization by DB2 is enabled, DB2 controls the
reorganization process. Reorganization by DB2 is not recommended.
The best time to start a reorganization is when server activity is low and when
access to the database is optimal. Schedule table reorganization for databases on
servers that are not running deduplication. Schedule table and index
reorganization on servers that are running deduplication.
Important: Ensure that the system on which the Tivoli Storage Manager server is
running has sufficient memory and processor resources. To assess how busy the
system is over time, use operating system tools to assess the load on the system.
You can also review the db2diag.log file and the server activity log. If the system
does not have sufficient resources, reorganization processing might be incomplete,
or it might degrade or destabilize the system.
Table reorganization
Index reorganization
| If you set only the REORGBEGINTIME option, reorganization is enabled for an entire
| day. If you do not specify the REORGBEGINTIME option, but you specify a value for
| the REORGDURATION option, the reorganization interval starts at 6:00 a.m. and runs
| for the specified number of hours.
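As a sketch, server options that enable server-initiated table and index
reorganization within a nightly window might look like the following dsmserv.opt
entries; the times shown are assumptions:
allowreorgtable yes
allowreorgindex yes
reorgbegintime 20:00
reorgduration 8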
To increase the size of the active log while the server is halted, complete the
following steps:
1. Issue the DSMSERV DISPLAY LOG offline utility to display the size of the active
log.
2. Ensure that the location for the active log has enough space for the increased
log size. If a log mirror exists, its location must also have enough space for the
increased log size.
3. Halt the server.
4. In the dsmserv.opt file, update the ACTIVELOGSIZE option to the new maximum
size of the active log, in megabytes. For example, to change the active log to its
maximum size of 128 GB, enter the following server option:
activelogsize 131072
5. If you plan to use a new active log directory, update the directory name
specified in the ACTIVELOGDIRECTORY server option. The new directory must be
empty and must be accessible to the user ID of the database manager.
6. Restart the server.
| If you have too much active log space, you can reduce the size of the active log by
| completing the following steps:
| 1. Stop the Tivoli Storage Manager server.
| 2. In the dsmserv.opt file, change the ACTIVELOGSIZE option to the new size of the
| active log, in megabytes. For example, to reduce the active log to approximately
| 8 GB, enter the following server option:
| activelogsize 8000
| 3. Restart the server.
| When you reduce the size of the active log, you must restart the Tivoli Storage
| Manager server twice. The first restart changes the DB2 parameters. The second
| restart removes the log files that are no longer required on the disk.
You might want to move the database and logs to take advantage of a larger or
faster disk. You have the following options:
v Moving both the database and recovery log
v Moving only the database on page 688
v Moving only the active log, archive log, or archive failover log on page 689
For information about moving a Tivoli Storage Manager server to another machine,
see Moving the Tivoli Storage Manager server to another system on page 623.
To move the database from one location on the server to another location, follow
this procedure:
1. Back up the database by issuing the following command:
backup db type=full devclass=files
2. Halt the server.
3. Create directories for the database. The directories must be accessible to the
user ID of the database manager. For example:
mkdir /tsmdb005
mkdir /tsmdb006
mkdir /tsmdb007
mkdir /tsmdb008
4. Create a file that lists the locations of the database directories. This file will be
used if the database must be restored. Enter each location on a separate line.
For example, here are the contents of the dbdirs.txt file:
/tsmdb005
/tsmdb006
/tsmdb007
/tsmdb008
5. Remove the database instance by issuing the following command:
dsmserv removedb TSMDB1
6. Issue the DSMSERV RESTORE DB utility to move the database to the new
directories. For example:
dsmserv restore db todate=today on=dbdirs.txt
7. Start the server.
| To specify alternative locations for the database log files, complete the following
| steps:
| 1. To specify the location of subdirectories RstDbLog and failarch, use the
| ARCHFAILOVERLOGDIRECTORY server option. The Tivoli Storage Manager server
| creates the RstDbLog and failarch subdirectories in the directory that is
| specified by the server option.
| Restriction: If you do not specify the location of the subdirectories, the Tivoli
| Storage Manager server automatically creates the two subdirectories under the
| archive log directory.
| If the archive log directory becomes full, it can limit the amount of space that is
| available for archived log files. If you must use the archive log directory, you
| can increase its size to accommodate both the RstDbLog and failarch
| directories.
| 2. Use a file system that is different from the file system that is specified by the
| ACTIVELOGDIRECTORY and ARCHLOGDIRECTORY parameters.
| Tip: If you do not set the ARCHFAILOVERLOGDIRECTORY option, the Tivoli Storage
| Manager server creates the RstDbLog and failarch subdirectories automatically
| in the directory that is specified for the ARCHLOGDIRECTORY parameter on the
| DSMSERV FORMAT or DSMSERV LOADFORMAT command. You must specify the
| ARCHLOGDIRECTORY parameter for these commands.
| 3. For a database restore operation, you can specify the location of the RstDbLog
| subdirectory, but not the failarch subdirectory, by using the RECOVERYDIR
| parameter on the DSMSERV RESTORE DB command. Consider allocating a
| relatively large amount of temporary disk space for the restore operation.
| Because database restore operations occur relatively infrequently, the RstDbLog
| subdirectory can contain many logs from backup volumes that are stored in
| preparation for pending roll-forward-restore processing.
The server also updates the DB2 parameter OVERFLOWLOGPATH, which points to the
RstDbLog subdirectory, and the DB2 parameter FAILARCHPATH, which points to the
failarch subdirectory. For details about these parameters, see the DB2 information
center at http://pic.dhe.ibm.com/infocenter/db2luw/v9r7.
For example, suppose that you specify archlogfailover as the value of the
ARCHFAILOVERLOGDIRECTORY parameter on the DSMSERV FORMAT command:
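The command might look like the following sketch; the database, active log, and
archive log values are illustrative, and only the failover directory value matters for
this example:
dsmserv format
dbdir=/tsmdb001
activelogdirectory=/home/tsminst1/inst1/activelog
archlogdirectory=/home/tsminst1/inst1/archlog
archfailoverlogdirectory=/home/tsminst1/inst1/archlogfailover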
The server creates the subdirectories RstDbLog and failarch in the parent directory
archlogfailover. The server also updates the following DB2 parameters:
OVERFLOWLOGPATH=/home/tsminst1/inst1/archlogfailover/RstDbLog
FAILARCHPATH=/home/tsminst1/inst1/archlogfailover/failarch
The server also updates the value of the ARCHFAILOVERLOGDIRECTORY option in the
server options file, dsmserv.opt:
ARCHFAILOVERLOGDIRECTORY /home/tsminst1/inst1/archlogfailover
For details about these parameters, see the DB2 Information Center at
http://pic.dhe.ibm.com/infocenter/db2luw/v9r7.
For example, suppose that you specify a value of archlog for the ARCHLOGDIRECTORY
parameter in a DSMSERV FORMAT command. You do not specify the
ARCHFAILOVERLOGDIRECTORY parameter:
dsmserv format
dbdir=/tsmdb001
activelogdirectory=/home/tsminst1/inst1/activelog
archlogdirectory=/home/tsminst1/inst1/archlog
The Tivoli Storage Manager server creates the subdirectories RstDbLog and
failarch under the archlog parent directory. The server also updates the following
DB2 parameters:
OVERFLOWLOGPATH=/home/tsminst1/inst1/archlog/RstDbLog
FAILARCHPATH=/home/tsminst1/inst1/archlog/failarch
The server also updates the value of the ARCHLOGDIRECTORY option in the server
options file, dsmserv.opt:
ARCHLOGDIRECTORY /home/tsminst1/inst1/archlog
The server also updates the DB2 parameter, OVERFLOWLOGPATH, that points to
RstDbLog. For details about this parameter, see the DB2 Information Center at
http://pic.dhe.ibm.com/infocenter/db2luw/v9r7.
For example, for a point-in-time database restore, you can issue the following
command:
dsmserv restore db
todate=5/12/2011
totime=14:45
recoverydir=/home/tsminst1/inst1/recovery
The server creates the RstDbLog subdirectory in the parent recovery directory. The
server also updates the OVERFLOWLOGPATH parameter:
OVERFLOWLOGPATH=/home/tsminst1/inst1/recovery/RstDbLog
After the database is restored, the RstDbLog subdirectory reverts to its location as
specified by the server option ARCHFAILOVERLOGDIRECTORY or ARCHLOGDIRECTORY in
the server options file, dsmserv.opt.
Transaction processing
A transaction is the unit of work exchanged between the client and server.
The log records for a given transaction are moved into stable storage when the
transaction is committed. The database information that is stored on disk remains
consistent because the server ensures that the recovery log records, which represent
the updates to these database pages, are written to disk.
During restart-recovery, the server uses the active and archive log information to
maintain the consistency of the server by redoing and, if necessary, undoing
ongoing transactions from the time that the server was halted. The transaction is
then committed to the database.
Transaction commit is a function of all the log records for that transaction being
written to the recovery log. This function ensures that the necessary redo and undo
information is available to replay these transaction changes against the database
information.
If you increase the value of TXNGROUPMAX by a large amount, monitor the effects on
the recovery log. A larger value for the TXNGROUPMAX option can have the following
impact:
v Affect the performance of client backup, archive, restore, and retrieve operations.
v Increase utilization of the recovery log, as well as increase the length of time for
a transaction to commit.
Also consider the number of concurrent sessions to be run. It might be possible to
run with a higher TXNGROUPMAX value with a few clients running. However, if there
are hundreds of clients running concurrently, you might need to reduce the
TXNGROUPMAX to help manage the recovery log usage and support this number of
concurrent clients. If the performance effects are severe, they might affect server
operations. See Monitoring the database and recovery log on page 682 for more
information.
The following examples show how the TXNGROUPMAX option can affect performance
throughput for operations to tape and the recovery log.
v The TXNGROUPMAX option is set to 20. The MAXSESSIONS option, which specifies the
maximum number of concurrent client/server sessions, is set to 5. Five
concurrent sessions are processing, and each file in the transaction requires 10
logged database operations. This would be a concurrent load of:
20*10*5=1000
This represents 1000 log records in the recovery log. Each time a transaction
commits the data, the server can free 200 log records.
v The TXNGROUPMAX option is set to 2000. The MAXSESSIONS option is set to 5. Five
concurrent sessions are processing, and each file in the transaction requires 10
logged database operations, resulting in a concurrent load of:
2000*10*5=100 000
This represents 100 000 log records in the recovery log. Each time a transaction
commits the data, the server can free 20 000 log records.
Remember: Over time, as transactions end, the recovery log can release the
space that is used by the oldest transactions. Until those transactions complete,
log space usage continues to increase.
You can use several server options to tune server performance and reduce the risk
of running out of recovery log space:
v Use the THROUGHPUTTIMETHRESHOLD and THROUGHPUTDATATHRESHOLD options with
the TXNGROUPMAX option to prevent a slower performing node from holding a
transaction open for extended periods.
v Increase the size of the recovery log when you increase the TXNGROUPMAX setting.
Evaluate the performance and characteristics of each node before increasing the
TXNGROUPMAX setting. Nodes that have only a few larger objects to transfer do not
benefit as much as nodes that have multiple, smaller objects to transfer. For
example, a file server benefits more from a higher TXNGROUPMAX setting than does a
database server that has one or two large objects. Other node operations can
consume the recovery log at a faster rate. Be careful when increasing the
TXNGROUPMAX settings for nodes that often perform high log-usage operations. The
raw or physical performance of the disk drives that are holding the database and
recovery log can become an issue with an increased TXNGROUPMAX setting. The
drives must handle higher transfer rates to handle the increased load on the
recovery log and database.
You can set the TXNGROUPMAX option as a global server option value, or you can set
it for a single node. For optimal performance, specify a lower TXNGROUPMAX value
(between 4 and 512). Select higher values for individual nodes that can benefit
from the increased transaction size.
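For example, one possible sketch of the two approaches, using hypothetical values
and a hypothetical node name:
setopt txngroupmax 256
update node fileserver1 txngroupmax=4096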
Refer to the REGISTER NODE command and the server options in the Administrator's
Reference.
An administrator working at one Tivoli Storage Manager server can work with
Tivoli Storage Manager servers at other locations around the world.
Concepts:
Concepts for managing server networks
Enterprise configuration on page 696
Tasks:
Setting up communications among servers on page 700
Setting up communications for enterprise configuration and enterprise event logging on
page 700
Setting up communications for command routing with multiple source servers on page
705
Completing tasks on multiple servers on page 731
Using virtual volumes to store data on another server on page 737
To manage a network of servers, you can use the following Tivoli Storage Manager
capabilities:
v Configure and manage multiple servers with enterprise configuration.
Distribute a consistent configuration for Tivoli Storage Manager servers through
a configuration manager to managed servers. By having consistent
configurations, you can simplify the management of a large number of servers
and clients.
v Perform tasks on multiple servers by using command routing, enterprise logon,
and enterprise console.
v Send server and client events to another server for logging.
v Monitor many servers and clients from a single server.
v Store data on another server by using virtual volumes.
In the descriptions for working with a network of servers, when a server sends
data, that server is sometimes referred to as a source server, and when a server
receives data, it is sometimes referred to as a target server. In other words, one
server can act as a source server for one operation and as a target server for another.
For details, see Licensing IBM Tivoli Storage Manager on page 605.
Enterprise configuration
The Tivoli Storage Manager enterprise configuration functions make it easier to
consistently set up and manage a network of Tivoli Storage Manager servers. You
can set up configurations on one server and distribute the configurations to other
servers. You can make changes to configurations and have the changes
automatically distributed.
On each server that is to receive the configuration information, identify the server
as a managed server by defining a subscription to one or more profiles owned by the
configuration manager. All the definitions associated with the profiles are then
copied into the managed server's database. Things defined to the managed server
in this way are managed objects that cannot be changed by the managed server.
From then on, the managed server gets any changes to the managed objects from
the configuration manager via the profiles. Managed servers receive changes to
configuration information at time intervals set by the servers, or by command.
The figure shows a configuration manager whose profiles contain administrators,
schedules, and scripts, and managed servers that subscribe to the profiles and store
the definitions as managed objects.
Command routing
Use the command-line interface to route commands to other servers.
The other servers must be defined to the server to which you are connected. You
must also be registered on the other servers as an administrator with the
administrative authority that is required for the command. To make routing
commands easier, you can define a server group that has servers as members.
Commands that you route to a server group are sent to all servers in the group.
For details, see Setting up server groups on page 735 and Routing commands
on page 732.
The following methods are ways in which you can centrally monitor activities:
v Enterprise event logging, in which events are sent from one or more servers
to be logged at an event server.
For a description of the function, see Enterprise event logging: logging events
to another server on page 875. For information about communications setup,
see Setting up communications for enterprise configuration and enterprise
event logging on page 700.
v Use the Operations Center to view server status and alerts. See Monitoring
operations daily using the Operations Center on page 791 for more information.
v Allowing designated administrators to log in to any of the servers in the
network with a single user ID and password.
The data that is stored on another server as virtual volumes can also be a
recovery plan file created by using disaster recovery
manager (DRM). The source server is a client of the target server, and the data for
the source server is managed only by the source server. In other words, the source
server controls the expiration and deletion of the files that comprise the virtual
volumes on the target server.
To use virtual volumes to store database and storage pool backups and recovery
plan files, you must have the disaster recovery manager function. For details, see
Licensing IBM Tivoli Storage Manager on page 605.
For information about using virtual volumes with DRM, see Chapter 35, Disaster
recovery manager, on page 1029.
Here are two scenarios to give you some ideas about how you can use the
functions:
v Setting up and managing Tivoli Storage Manager servers primarily from one
location. For example, an administrator at one location controls and monitors
servers at several locations.
v Setting up a group of Tivoli Storage Manager servers from one location, and
then managing the servers from any of the servers. For example, several
administrators are responsible for maintaining a group of servers. One
administrator defines the configuration information on one server for
distributing to servers in the network. Administrators on the individual servers
in the network manage and monitor the servers.
For example, suppose that you are an administrator who is responsible for Tivoli
Storage Manager servers at your own location, plus servers at branch office
locations. Servers at each location have similar storage resources and client
requirements. You can set up the environment as follows:
v Set up an existing or new Tivoli Storage Manager server as a configuration
manager.
After you complete the setup, you can manage many servers as if there was just
one. You can perform any of the following tasks:
v Have administrators that can manage the group of servers from anywhere in the
network by using the enterprise console, an interface available through a Web
browser.
v Have consistent policies, schedules, and client option sets on all servers.
v Make changes to configurations and have the changes automatically distributed
to all servers. Allow local administrators to monitor and tune their own servers.
v Perform tasks on any server or all servers by using command routing from the
enterprise console.
v Back up the databases of the managed servers on the automated tape library
that is attached to the server that is the configuration manager. You use virtual
volumes to accomplish this.
v Log on to individual servers from the enterprise console without having to
re-enter your password, if your administrator ID and password are the same on
each server.
For example, suppose that you are an administrator responsible for servers located
in different departments on a college campus. The servers have some requirements
in common, but also have many unique client requirements. You can set up the
environment as follows:
v Set up an existing or new Tivoli Storage Manager server as a configuration
manager.
v Set up communications so that commands can be sent from any server to any
other server.
v Define any configuration that you want to distribute by defining policy
domains, schedules, and so on, on the configuration manager. Associate the
configuration information with profiles.
v Have the managed servers subscribe to profiles as needed.
v Activate policies and set up storage pools as needed on the managed servers.
v Set up enterprise monitoring by setting up one server as an event server. The
event server can be the same server as the configuration manager or a different
server.
After setting up in this way, you can manage the servers from any server. You can
do any of the following tasks:
v Use enterprise console to monitor all the servers in your network.
Enterprise-administration planning
To take full advantage of the functions of enterprise administration, you should
decide on the servers you want to include in the enterprise network, the server
from which you want to manage the network, and other important issues.
The examples shown here apply to both enterprise configuration and enterprise
event logging. If you are set up for one, you
are set up for the other. However, be aware that the configuration manager and
event server are not defined simply by setting up communications. You must
identify a server as a configuration manager (SET CONFIGMANAGER command)
or an event server (DEFINE EVENTSERVER command). Furthermore, a
configuration manager and an event server can be the same server or different
servers.
Enterprise configuration
Each managed server must be defined to the configuration manager, and
the configuration manager must be defined to each managed server.
Figure 79 on page 702 shows the servers and the commands issued on each.
Figure 80 on page 703 shows the servers and the commands issued on each.
Note: Issuing the SET SERVERNAME command can affect scheduled backups
until a password is re-entered. Windows clients use the server name to identify
which passwords belong to which servers. Changing the server name after the
clients are connected forces the clients to re-enter the passwords. On a network
where clients connect to multiple servers, it is recommended that all of the servers
have unique names. See the Administrator's Reference for more details.
Communication security
Security for this communication configuration is enforced through the exchange of
passwords (which are encrypted) and, in the case of enterprise configuration only,
verification keys.
Communication among servers, which is through TCP/IP, requires that the servers
verify server passwords (and verification keys). For example, assume that
HEADQUARTERS begins a session with MUNICH:
1. HEADQUARTERS, the source server, identifies itself by sending its name to
MUNICH.
2. The two servers exchange verification keys (enterprise configuration only).
3. HEADQUARTERS sends its password to MUNICH, which verifies it against
the password stored in its database.
4. If MUNICH verifies the password, it sends its password to HEADQUARTERS,
which, in turn, performs password verification.
Note: You must be registered as an administrator with the same name and
password on the source server and all target servers. The privilege classes do not
need to be the same on all servers. However, to successfully route a command to
another server, an administrator must have the minimum required privilege class
for that command on the server from which the command is being issued.
For command routing in which one server will always be the sender, you would
only define the target servers to the source server. If commands can be routed from
any server to any other server, each server must be defined to all the others.
The example provided shows you how you can set up communications for
administrator HQ on the server HEADQUARTERS who will route commands to
the servers MUNICH and STRASBOURG. Administrator HQ has the password
SECRET and has system privilege class.
The procedure for setting up communications for command routing with one
source server is shown in the following list:
v On HEADQUARTERS: register administrator HQ and specify the server names
and addresses of MUNICH and STRASBOURG:
register admin hq secret
grant authority hq classes=system
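The server definitions for this step are not shown in the preceding commands; a
minimal sketch, using hypothetical TCP/IP addresses and port numbers:
define server munich hladdress=192.0.2.11 lladdress=1500
define server strasbourg hladdress=192.0.2.12 lladdress=1500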
Note: Command routing uses the ID and password of the Administrator. It does
not use the password or server password set in the server definition.
v On MUNICH and STRASBOURG Register administrator HQ with the required
privilege class on each server:
register admin hq secret
grant authority hq classes=system
Note: If your server network is using enterprise configuration, you can automate
the preceding operations. You can distribute the administrator and server lists to
MUNICH and STRASBOURG. In addition, all server definitions and server groups
are distributed by default to a managed server when it first subscribes to any
profile on a configuration manager. Therefore, it receives all the server definitions
that exist on the configuration manager, thus enabling command routing among
the servers.
The examples provided below show you how to set up communications if the
administrator, HQ, can route commands from any of the three servers to any of the
other servers. You can separately define each server to each of the other servers, or
you can cross define the servers. In cross definition, defining MUNICH to
HEADQUARTERS also results in automatically defining HEADQUARTERS to
MUNICH.
When setting up communications for command routing, you can define each
server to each of the other servers.
Figure 81 on page 706 shows the servers and the commands issued on each.
When setting up communications for command routing, you can cross-define the
other servers.
Note: If your server network is using enterprise configuration, you can automate
the preceding operations. You can distribute the administrator lists and server lists
to MUNICH and STRASBOURG. In addition, all server definitions and server
groups are distributed by default to a managed server when it first subscribes to
any profile on a configuration manager. Therefore, it receives all the server
definitions that exist on the configuration manager, thus enabling command
routing among the servers.
Figure 82 on page 708 shows the servers and the commands issued on each.
You can update a server definition by issuing the UPDATE SERVER command.
v For server-to-server virtual volumes:
If you update the node name, you must also update the password.
If you update the password but not the node name, the node name defaults
to the server name specified by the SET SERVERNAME command.
v For enterprise configuration and enterprise event logging: If you update the
server password, it must match the password specified by the SET
SERVERPASSWORD command at the target server.
v For enterprise configuration: When a server is first defined at a managed server,
that definition cannot be replaced by a server definition from a configuration
manager.
You can delete a server definition by issuing the DELETE SERVER command. For
example, to delete the server named NEWYORK, enter the following:
delete server newyork
The deleted server is also deleted from any server groups of which it is a member.
You cannot delete a server if any of the following conditions are true:
v The server is defined as an event server.
You must first issue the DELETE EVENTSERVER command.
v The server is a target server for virtual volumes.
A target server is named in a DEFINE DEVCLASS (DEVTYPE=SERVER)
command. You must first change the server name in the device class or delete
the device class.
v The server is named in a device class definition whose device type is SERVER.
v The server has paths defined to a file drive.
v The server has an open connection to or from another server.
You can find an open connection to a server by issuing the QUERY SESSION
command.
See Setting up server groups on page 735 for information about server groups.
Each managed server stores the distributed information as managed objects in its
database. Managed servers receive periodic updates of the configuration
information from the configuration manager, or an administrator can trigger an
update by command.
If you use an LDAP directory server to authenticate passwords, any target servers
must be configured for LDAP passwords. Data that is replicated from a node that
authenticates with an LDAP directory server is inaccessible if the target server is
not properly configured. Replicated data from an LDAP-authenticated node can
still be stored on a target server that is not properly configured, but the target
server must be configured to use LDAP before you can access that data.
Enterprise configuration scenario gives you an overview of the steps to take for
one possible implementation of enterprise configuration. Sections that follow give
more details on each step. For details on the attributes that are distributed with
these objects, see Associating configuration information with a profile on page
715. After you set up server communication as described in Setting up
communications for enterprise configuration and enterprise event logging on page
700, you set up the configuration manager and its profiles.
The figure shows a configuration manager at the headquarters site and managed
servers in London, Munich, New York, Santiago, Delhi, and Tokyo.
The following sections give you an overview of the steps to take to complete this
setup. For details on each step, see the section referenced.
Figure 84 illustrates the commands that you must issue to set up one Tivoli Storage
Manager server as a configuration manager. The following procedure gives you an
overview of the steps required to set up a server as a configuration manager.
The commands shown in the figure, issued at the headquarters configuration
manager, are SET CONFIGMANAGER ON, DEFINE PROFILE, and DEFINE
PROFASSOCIATION.
1. Decide whether to use the existing Tivoli Storage Manager server in the
headquarters office as the configuration manager or to install a new Tivoli
Storage Manager server on a system.
2. Set up the communications among the servers.
3. Identify the server as a configuration manager.
Use the following command:
set configmanager on
This command automatically creates a profile named DEFAULT_PROFILE. The
default profile includes all the server and server group definitions on the
configuration manager. As you define new servers and server groups, they are
also associated with the default profile.
4. Create the configuration to distribute.
The tasks that might be involved include:
v Register administrators and grant authorities to those that you want to be
able to work with all the servers.
v Define policy objects and client schedules
v Define administrative schedules
v Define Tivoli Storage Manager server scripts
v Define client option sets
v Define servers
v Define server groups
Example 1: You need a shorthand way to send commands to different groups
of managed servers. You can define server groups. For example, you can define
a server group named AMERICAS for the servers in the offices in North
America and South America.
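A minimal sketch of such a group definition, using hypothetical member server
names:
define servergroup americas
define grpmember americas newyork,santiago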
Note: You must set up the storage pool itself (and associated device class) on
each managed server, either locally or by using command routing. If a
managed server already has a storage pool associated with the automated
tape library, you can rename the pool to TAPEPOOL.
Example 4: You want to ensure that client data is consistently backed up and
managed on all servers. You want all clients to be able to store three backup
versions of their files. You can do the following:
v Verify or define client schedules in the policy domain so that clients are
backed up on a consistent schedule.
v In the policy domain that you will point to in the profile, update the backup
copy group so that three versions of backups are allowed.
v Define client option sets so that basic settings are consistent for clients as
they are added.
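A sketch of the copy-group update described in the second item of the preceding
list, assuming the STANDARD domain, policy set, and management class:
update copygroup standard standard standard type=backup verexists=3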
5. Define one or more profiles.
For example, you can define one profile named ALLOFFICES that points to all
the configuration information (policy domain, administrators, scripts, and so
on). You can also define profiles for each type of information, so that you have
one profile that points to policy domains, and another profile that points to
administrators, for example.
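A minimal sketch of such a profile definition; the description is hypothetical and
the domain names match the example in a later section:
define profile alloffices description="Configuration for all offices"
define profassociation alloffices domains=standard,engdomain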
See Setting up communications among servers on page 700 for details. For
more information, see Creating the default profile on a configuration
manager on page 714. See Defining a server group and members of a server
group on page 735 for details. For details, see Creating and changing
configuration profiles on page 714.
Figure 85 on page 713 shows the specific commands needed to set up one Tivoli
Storage Manager server as a managed server. The following procedure gives you
an overview of the steps required to set up a server as a managed server.
A server becomes a managed server when that server first subscribes to a profile
on a configuration manager.
1. Query the server to look for potential conflicts.
Look for definitions of objects on the managed server that have the same name
as those defined on the configuration manager. With some exceptions, these
objects will be overwritten when the managed server first subscribes to the
profile on the configuration manager.
If the managed server is a new server and you have not defined anything, the
only objects you will find are the defaults (for example, the STANDARD policy
domain).
2. Subscribe to one or more profiles.
A managed server can only subscribe to profiles on one configuration manager.
If you receive error messages during the configuration refresh, such as a local
object that could not be replaced, resolve the conflict and refresh the
configuration again. You can either wait for the automatic refresh period to be
reached, or initiate a refresh by issuing the SET CONFIGREFRESH command,
setting or resetting the interval.
3. If the profile included policy domain information, activate a policy set in the
policy domain, add or move clients to the domain, and associate any required
schedules with the clients.
You may receive warning messages about storage pools that do not exist, but
that are needed for the active policy set. Define any storage pools needed by
the active policy set, or rename existing storage pools.
4. If the profile included administrative schedules, make the schedules active.
Administrative schedules are not active when they are distributed by a
configuration manager. The schedules do not run on the managed server until
you make them active on the managed server. See Tailoring schedules on
page 635.
5. Set how often the managed server contacts the configuration manager to
update the configuration information associated with the profiles.
The initial setting for refreshing the configuration information is 60 minutes.
For more information, see the following topics:
v Associating configuration information with a profile on page 715
v Defining storage pools on page 255
v Getting information about profiles on page 722
v Refreshing configuration information on page 728
v Renaming storage pools on page 411
v Subscribing to a profile on page 724
After you define the profile and its associations, a managed server can subscribe to
the profile and obtain the configuration information.
After you define a profile and associate information with the profile, you can
change the information later. While you make changes, you can lock the profiles to
prevent managed servers from refreshing their configuration information. To
distribute the changed information associated with a profile, you can unlock the
profile.
Before you can associate specific configuration information with a profile, the
definitions must exist on the configuration manager. For example, to associate a
policy domain named ENGDOMAIN with a profile, you must have already
defined the ENGDOMAIN policy domain on the configuration manager.
Suppose you want the ALLOFFICES profile to distribute policy information from
the STANDARD and ENGDOMAIN policy domains on the configuration manager.
Enter the following command:
define profassociation alloffices domains=standard,engdomain
You can make the association more dynamic by specifying the special character, *
(asterisk), by itself. When you specify the *, you can associate all existing objects
with a profile without specifically naming them. If you later add more objects of
the same type, the new objects are automatically distributed via the profile. For
example, suppose that you want the ADMINISTRATORS profile to distribute all
administrators registered to the configuration manager. Enter the following
commands on the configuration manager:
define profile administrators
description="Profile to distribute administrator IDs"
define profassociation administrators admins=*
The administrator with the name SERVER_CONSOLE is never distributed from the
configuration manager to a managed server.
For administrator definitions that have node authority, the configuration manager
only distributes information such as password and contact information. Node
authority for the managed administrator can be controlled on the managed server
using the GRANT AUTHORITY and REVOKE AUTHORITY commands specifying
the CLASS=NODE parameter.
A subscribing managed server may already have a policy domain with the same
name as the domain associated with the profile. The configuration refresh
overwrites the domain defined on the managed server unless client nodes are
already assigned to the domain. Once the domain becomes a managed object on
the managed server, you can associate clients with the managed domain. Future
configuration refreshes can then update the managed domain.
If nodes are assigned to a domain with the same name as a domain being
distributed, the domain is not replaced. This safeguard prevents inadvertent
replacement of policy that could lead to loss of data. To replace an existing policy
domain with a managed domain of the same name, perform the following steps on
the managed server:
1. Copy the domain.
2. Move all clients assigned to the original domain to the copied domain.
3. Trigger a configuration refresh.
4. Activate the appropriate policy set in the new, managed policy domain.
5. Move all clients back to the original domain, which is now managed.
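A minimal command sketch of steps 1 through 5, assuming a domain named
ENGDOMAIN, a single client node NODE1, and a policy set named STANDARD:
copy domain engdomain engdomain_local
update node node1 domain=engdomain_local
set configrefresh 60
activate policyset engdomain standard
update node node1 domain=engdomain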
Any servers and server groups that you define later are associated automatically
with the default profile and the configuration manager distributes the definitions at
the next refresh. For a server definition, the following attributes are distributed:
v Communication method
v TCP/IP address (high-level address), Version 4 or Version 6
v Port number (low-level address)
v Server password
v Server URL
v The description
When server definitions are distributed, the attribute for allowing replacement is
always set to YES. You can set other attributes, such as the server's node name, on
the managed server by updating the server definition.
A managed server may already have a server defined with the same name as a
server associated with the profile. The configuration refresh does not overwrite the
local definition unless the managed server allows replacement of that definition.
On a managed server, you allow a server definition to be replaced by updating the
local definition. For example:
update server santiago allowreplace=yes
A configuration refresh does not replace or remove any local schedules that are
active on a managed server. However, a refresh can update an active schedule that
is already managed by a configuration manager.
Changing a profile
You can change a profile and its associated configuration information.
For example, if you want to add a policy domain named FILESERVERS to objects
already associated with the ALLOFFICES profile, enter the following command:
define profassociation alloffices domains=fileservers
You can also delete associated configuration information, which results in removal
of configuration from the managed server. Use the DELETE PROFASSOCIATION
command.
You can change the description of the profile. Enter the following command:
update profile alloffices
description=Configuration for all offices with file servers
See Removing configuration information from managed servers on page 720 for
details.
For example, to lock the ALLOFFICES profile for two hours (120 minutes), enter
the following command:
lock profile alloffices 120
You can let the lock expire after two hours, or unlock the profile with the following
command:
unlock profile alloffices
From the configuration manager, to notify all servers that are subscribers to the
ALLOFFICES profile, enter the following command:
notify subscribers profile=alloffices
The managed servers then refresh their configuration information, even if the time
period for refreshing the configuration has not passed.
See Refreshing configuration information on page 728 for how to set this period.
On the configuration manager, you can delete the association of objects with a
profile. For example, you may want to remove some of the administrators that are
associated with the ADMINISTRATORS profile. With an earlier command, you had
included all administrators defined on the configuration manager (by specifying
ADMINS=*). To change the administrators included in the profile you must first
delete the association of all administrators, then associate just the administrators
that you want to include. Do the following:
1. Before you make these changes, you may want to prevent any servers from
refreshing their configuration until you are done. Enter the following
command:
lock profile administrators
2. Now make the change by entering the following commands:
delete profassociation administrators admins=*
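The commands that complete this step are not shown in the preceding example; a
sketch, using hypothetical administrator IDs:
define profassociation administrators admins=admin1,admin2
unlock profile administrators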
When you delete the association of an object with a profile, the configuration
manager no longer distributes that object via the profile. Any managed server
subscribing to the profile deletes the object from its database when it next contacts
the configuration manager to refresh configuration information. However, a
managed server does not delete the following objects:
v An object that is associated with another profile to which the server subscribes.
v A policy domain that has client nodes still assigned to it. To delete the domain,
you must assign the affected client nodes to another policy domain on the
managed server.
v An administrator that currently has a session open with the server.
v An administrator that is the last administrator with system authority on the
managed server.
Also the managed server does not change the authority of an administrator if
doing so would leave the managed server without any administrators having
the system privilege class.
You can avoid both problems by ensuring that you have locally defined at least
one administrator with system privilege on each managed server.
Deleting profiles
You can delete a profile from a configuration manager. Before deleting a profile,
you should ensure that no managed server still has a subscription to the profile. If
the profile still has some subscribers, delete the subscriptions on each managed
server first.
When you delete subscriptions, consider whether you want the managed objects to
be deleted on the managed server at the same time. For example, to delete the
subscription to profile ALLOFFICES from managed server SANTIAGO without
deleting the managed objects, log on to the SANTIAGO server and enter the
following command:
delete subscription alloffices
Note: You can use command routing to issue the DELETE SUBSCRIPTION
command for all managed servers.
If you try to delete a profile that still has subscriptions, the command fails unless
you force the operation:
delete profile alloffices force=yes
If you do force the operation, managed servers that still subscribe to the deleted
profile will later contact the configuration manager to try to get updates to the
deleted profile. The managed servers will continue to do this until their
subscriptions to the profile are deleted. A message will be issued on the managed
server alerting the administrator of this condition.
See Deleting subscriptions on page 727 for more details about deleting
subscriptions on a managed server.
For example, from a configuration manager, you can display information about
profiles defined on that server or on another configuration manager. From a
managed server, you can display information about any profiles on the
configuration manager to which the server subscribes. You can also get profile
information from any other configuration manager defined to the managed server,
even though the managed server does not subscribe to any of the profiles.
You may need to get detailed information about profiles and the objects associated
with them, especially before subscribing to a profile. You can get the names of the
objects associated with a profile by entering the following command:
query profile server=headquarters format=detailed
If the server from which you issue the query is already a managed server
(subscribed to one or more profiles on the configuration manager being queried),
by default the query returns profile information as it is known to the managed
server. Therefore the information is accurate as of the last configuration refresh
done by the managed server. You may want to ensure that you see the latest
version of profiles as they currently exist on the configuration manager. Enter the
following command:
query profile uselocal=no format=detailed
To get more than the names of the objects associated with a profile, you can do one
of the following:
v If command routing is set up between servers, you can route query commands
from the server to the configuration manager. For example, to get details on the
ENGDOMAIN policy domain on the HEADQUARTERS server, enter this
command:
headquarters: query domain engdomain format=detailed
Subscribing to a profile
After an administrator at a configuration manager has created profiles and
associated objects with them, managed servers can subscribe to one or more of the
profiles.
Note:
v Unless otherwise noted, the commands in this section would be run on a
managed server:
v An administrator at the managed server could issue the commands.
v You could log in from the enterprise console and issue them.
v If command routing is set up, you could route them from the server that you are
logged in to.
Before a managed server subscribes to a profile, be aware that if you have defined
any object with the same name and type as an object associated with the profile
that you are subscribing to, those objects will be overwritten. You can check for
such occurrences by querying the profile before subscribing to it.
Note: Although a managed server can subscribe to more than one profile on a
configuration manager, it cannot subscribe to profiles on more than one
configuration manager at a time.
Subscription scenario
The scenario that is documented is a typical one, where a server subscribes to a
profile on a configuration manager, in this case HEADQUARTERS.
You might want to get detailed information on some of the objects by issuing
specific query commands on either your server or the configuration manager.
Note: If any object name matches and you subscribe to a profile containing an
object with the matching name, the object on your server will be replaced, with
the following exceptions:
v A policy domain is not replaced if the domain has client nodes assigned to it.
v An administrator with system authority is not replaced by an administrator
with a lower authority level if the replacement would leave the server
without a system administrator.
v The definition of a server is not replaced unless the server definition on the
managed server allows replacement.
v A server with the same name as a server group is not replaced.
v A locally defined, active administrative schedule is not replaced
2. Subscribe to the ADMINISTRATORS and ENGINEERING profiles.
After the initial subscription, you do not have to specify the server name on the
DEFINE SUBSCRIPTION commands. If at least one profile subscription already
exists, any additional subscriptions are automatically directed to the same
configuration manager. Issue these commands:
define subscription administrators server=headquarters
define subscription engineering
The object definitions in these profiles are now stored on your database. In
addition to ADMINISTRATORS and ENGINEERING, the server is also
subscribed by default to DEFAULT_PROFILE. This means that all the server
and server group definitions on HEADQUARTERS are now also stored in your
database.
3. Set the time interval for obtaining refreshed configuration information from the
configuration manager.
If you do not perform this step, your server checks for updates to the profiles
at start up and every 60 minutes after that. Set up your server to check
HEADQUARTERS for updates once a day (every 1440 minutes). If there is an
update, HEADQUARTERS sends it to the managed server automatically when
the server checks for updates.
set configrefresh 1440
Note: You can initiate a configuration refresh from a managed server at any time.
To initiate a refresh, simply reissue the SET CONFIGREFRESH with any value
greater than 0. The simplest approach is to use the current setting:
set configrefresh 1440
Querying subscriptions
From time to time you might want to view the profiles to which a server is
subscribed. You might also want to view the last time that the configuration
associated with that profile was successfully refreshed on your server.
The QUERY SUBSCRIPTION command gives you this information. You can name
a specific profile or use a wildcard character to display all or a subset of profiles to
which the server is subscribed. For example, the following command displays
ADMINISTRATORS and any other profiles that begin with the string ADMIN:
query subscription admin*
To see what objects the ADMINISTRATORS profile contains, use the following
command:
query profile administrators uselocal=no format=detailed
The field Managing profile shows the profile to which the managed server
subscribes to get the definition of this object.
Deleting subscriptions
If you decide that a server no longer needs to subscribe to a profile, you can delete
the subscription.
When you delete a subscription to a profile, you can choose to discard the objects
that came with the profile or keep them in your database. For example, to request
that your subscription to PROFILEC be deleted and to keep the objects that came
with that profile, issue the following command:
delete subscription profilec discardobjects=no
After the subscription is deleted on the managed server, the managed server issues
a configuration refresh request to inform the configuration manager that the
subscription is deleted. The configuration manager updates its database with the
new information.
When you choose to delete objects when deleting the subscription, the server may
not be able to delete some objects. For example, the server cannot delete a
managed policy domain if the domain still has client nodes registered to it. The
server skips objects it cannot delete, but does not delete the subscription itself. If
you take no action after an unsuccessful subscription deletion, at the next
configuration refresh the configuration manager will again send all the objects
associated with the subscription. To successfully delete the subscription, do one of
the following:
v Fix the reason that the objects were skipped. For example, reassign clients in the
managed policy domain to another policy domain. After handling the skipped
objects, delete the subscription again.
v Delete the subscription again, except this time do not discard the managed
objects. The server can then successfully delete the subscription. However, the
objects that were created because of the subscription remain.
By issuing the SET CONFIGREFRESH command with a value greater than zero, you cause the managed
server to immediately start the refresh process.
At the configuration manager, you can cause managed servers to refresh their
configuration information by notifying the servers. For example, to notify
subscribers to all profiles, enter the following command:
notify subscribers profile=*
The managed servers then start to refresh configuration information to which they
are subscribed through profiles.
The configuration manager sends the objects that it can distribute to the managed
server. The configuration manager skips (does not send) objects that conflict with
local objects. If the configuration manager cannot send all objects that are
associated with the profile, the managed server does not record the configuration
refresh as complete. The objects that the configuration manager successfully sent
are left as local instead of managed objects in the database of the managed server.
The local objects left as a result of an unsuccessful configuration refresh become
managed objects at the next successful configuration refresh of the same profile
subscription.
See Associating configuration information with a profile on page 715 for details
on when objects cannot be distributed.
To return managed objects to local control from the configuration manager, you do not simply delete the
association of the object from the profile, because that would cause the object to be
deleted from subscribing managed servers. To ensure the object remains in the
databases of the managed servers as a locally managed object, you can copy the
current profile, make the deletion, and change the subscriptions of the managed
servers to the new profile.
For example, servers are currently subscribed to the ENGINEERING profile. The
ENGDOMAIN policy domain is associated with this profile. You want to return
control of the ENGDOMAIN policy domain to the managed servers. You can do
the following:
1. Copy the ENGINEERING profile to a new profile, ENGINEERING_B:
copy profile engineering engineering_b
2. Delete the association of the ENGDOMAIN policy domain from
ENGINEERING_B:
delete profassociation engineering_b domains=engdomain
3. Use command routing to delete subscriptions to the ENGINEERING profile:
americas,europe,asia: delete subscription engineering
discardobjects=no
4. Delete the ENGINEERING profile:
delete profile engineering
5. Use command routing to define subscriptions to the new ENGINEERING_B
profile:
americas,europe,asia: define subscription engineering_b
To return objects to local control when working on a managed server, you can
delete the subscription to one or more profiles. When you delete a subscription,
you can choose whether to delete the objects associated with the profile. To return
objects to local control, you do not delete the objects. For example, use the
following command on a managed server:
delete subscription engineering discardobjects=no
To ensure passwords stay valid for as long as expected on all servers, set the
password expiration period to the same time on all servers. One way to do this is
to route a SET PASSEXP command from one server to all of the others.
Ensure that you have at least one administrator that is defined locally on each
managed server with system authority. This avoids an error on configuration
refresh when all administrators for a server would be removed as a result of a
change to a profile on the configuration manager.
It might appear that the configuration information is more recent on the managed
server than on the configuration manager. This could occur in the following
situations:
v The database on the configuration manager has been restored to an earlier time
and now has configuration information from profiles that appear to be older
than what the managed server has obtained.
v On the configuration manager, an administrator deleted a profile, forcing the
deletion even though one or more managed servers still subscribed to the
profile. The administrator redefined the profile (using the same name) before the
managed server refreshed its configuration information.
If the configuration manager still has a record of the managed server's subscription
to the profile, the configuration manager does not send its profile information at
the next request for refreshed configuration information. The configuration
manager informs the managed server that the profiles are not synchronized. The
managed server then issues a message indicating this condition so that an
administrator can take appropriate action. The administrator can perform the
following steps:
1. If the configuration manager's database has been restored to an earlier point in
time, the administrator may want to query the profile and associated objects on
the managed server and then manually update the configuration manager with
that information.
2. Use the DELETE SUBSCRIPTION command on the managed server to delete
subscriptions to the profile that is not synchronized. If desired, you can also
delete definitions of the associated objects, then define the subscription again.
It is possible that the configuration manager may not have a record of the
managed server's subscription. In this case, no action is necessary. When the
managed server requests a refresh of configuration information, the configuration
manager sends current profile information and the managed server updates its
database with that information.
When you issue the DELETE SUBSCRIPTION command, the managed server
automatically notifies the configuration manager of the deletion by refreshing its
configuration information. As part of the refresh process, the configuration
manager is informed of the profiles to which the managed server subscribes and to
which it does not subscribe. If the configuration manager cannot be contacted
immediately for a refresh, the configuration manager will find out that the
subscription was deleted the next time the managed server refreshes configuration
information.
See Setting the server name on page 627 for more information before using the
SET SERVERNAME command.
You can use the Operations Center to view status and alerts for multiple Tivoli
Storage Manager servers, to issue commands to those servers, and to access web
clients.
You can also use the Administration Center and access all of the Tivoli Storage
Manager servers and web clients for which you have administrative authority.
For more information, see Chapter 17, Managing servers with the Operations
Center, on page 589, and Chapter 18, Managing servers with the Administration
Center, on page 597.
Routing commands
Command routing enables an administrator to send commands for processing to
one or more servers at the same time. The output is collected and displayed at the
server that issued the routed commands.
You can route commands to one server, multiple servers, servers defined to a
named group, or a combination of these servers. A routed command cannot be
further routed to other servers; only one level of routing is allowed.
Each server that you identify as the target of a routed command must first be
defined with the DEFINE SERVER command. If a server has not been defined, that
server is skipped and the command routing proceeds to the next server in the
route list.
Tivoli Storage Manager does not run a routed command on the server from which
you issue the command unless you also specify that server. To be able to specify
the server on a routed command, you must define the server just as you did any
other server.
Routed commands run independently on each server to which you send them. The
success or failure of the command on one server does not affect the outcome on
any of the other servers to which the command was sent.
The return codes for command routing can be one of three severities: 0, ERROR, or
WARNING. See Administrator's Reference for a list of valid return codes and
severity levels.
To route a command to a single server, enter the defined server's name, a colon,
and then the command to be processed.
For example, to route a QUERY STGPOOL command to the server that is named
ADMIN1, enter:
admin1: query stgpool
The colon after the server name indicates the end of the routing information. This
is also called the server prefix. Another way to indicate the server routing
information is to use parentheses around the server name, as follows:
(admin1) query stgpool
Note: When writing scripts, you must use the parentheses for server routing
information.
To route a command to more than one server, separate the server names with a
comma. For example, to route a QUERY OCCUPANCY command to three servers named
ADMIN1, GEO2, and TRADE5 enter:
admin1,geo2,trade5: query occupancy
or
(admin1,geo2,trade5) query occupancy
The routed command output of each server is displayed in its entirety at the server
that initiated command routing. In the previous example, output for ADMIN1
would be displayed, followed by the output of GEO2, and then the output of
TRADE5.
Processing of a command on one server does not depend upon completion of the
command processing on any other servers in the route list. For example, if GEO2
server does not successfully complete the command, the TRADE5 server continues
processing the command independently.
A server group is a named group of servers. After you set up the groups, you can
route commands to the groups.
For example, to route a QUERY STGPOOL command to the server group
WEST_COMPLEX, enter:
west_complex: query stgpool
or
(west_complex) query stgpool
The QUERY STGPOOL command is sent for processing to servers BLD12 and
BLD13, which are members of group WEST_COMPLEX.
To route a QUERY STGPOOL command to the server groups WEST_COMPLEX
and NORTH_COMPLEX, enter:
west_complex,north_complex: query stgpool
or
(west_complex,north_complex) query stgpool
The QUERY STGPOOL command is sent for processing to servers BLD12 and
BLD13, which are members of group WEST_COMPLEX, and servers NE12 and
NW13, which are members of group NORTH_COMPLEX.
See Setting up server groups on page 735 for how to set up a server group.
You can route commands to multiple single servers and to server groups at the
same time.
For example, to route the QUERY DB command to servers HQSRV, REGSRV, and
groups WEST_COMPLEX and NORTH_COMPLEX, enter:
hqsrv,regsrv,west_complex,north_complex: query db
or
(hqsrv,regsrv,west_complex,north_complex) query db
After you have the server groups set up, you can manage the groups and group
members.
To route commands to a server group you must perform the following steps:
1. Define the server with the DEFINE SERVER command if it is not already
defined.
2. Define a new server group with the DEFINE SERVERGROUP command. Server
group names must be unique because both groups and server names are
allowed for the routing information.
3. Define servers as members of a server group with the DEFINE GRPMEMBER
command.
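A minimal sketch of these steps for the WEST_COMPLEX group used in the
routing examples, assuming that servers BLD12 and BLD13 are already defined:
define servergroup west_complex
define grpmember west_complex bld12,bld13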
You can obtain information about server groups using the QUERY SERVERGROUP
command.
You can copy a server group using the COPY SERVERGROUP command.
This command creates the new group. If the new group already exists, the
command fails.
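For example, to copy the members of WEST_COMPLEX to a new group named
NEWWEST (the group referenced in a later example), you might enter:
copy servergroup west_complex newwest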
You can rename a server group using the RENAME SERVERGROUP command.
You can update a server group using the UPDATE SERVERGROUP command.
You can delete a server group using the DELETE SERVERGROUP command.
To delete the WEST_COMPLEX server group from the Tivoli Storage Manager server,
enter:
delete servergroup west_complex
This command removes all members from the server group. The server definition
for each group member is not affected. If the deleted server group is a member of
other server groups, the deleted group is removed from the other groups.
You can move group members to another group using the MOVE GRPMEMBER
command.
You can delete group members from a group using the DELETE GRPMEMBER
command.
To delete group member BLD12 from the NEWWEST server group, enter:
delete grpmember newwest bld12
When you delete a server, the deleted server is removed from any server groups of
which it was a member.
The PING SERVER command uses the user ID and password of the administrative
ID that issued the command. If the administrator is not defined on the server
being pinged, the ping fails even if the server is running.
Tivoli Storage Manager allows a server (a source server) to store these items on
another server (a target server):
v database backups
v export operations
v storage pool operations
v DRM PREPARE command
The data is stored as virtual volumes, which appear to be sequential media volumes
on the source server, but which are actually stored as archive files on a target
server. Virtual volumes can contain any of the items in the preceding list.
The source server is a client of the target server, and the data for the source server
is managed only by the source server. In other words, the source server controls
the expiration and deletion of the files that comprise the virtual volumes on the
target server. You cannot use virtual volumes when the source server and the
target server are the same Tivoli Storage Manager server.
At the target server, the virtual volumes from the source server are seen as archive
data. The source server is registered as a client node (of TYPE=SERVER) at the
target server and is assigned to a policy domain. The archive copy group of the
default management class of that domain specifies the storage pool for the data
from the source server.
Note: If the default management class does not include an archive copy group,
data cannot be stored on the target server.
You can benefit from the use of virtual volumes in the following ways:
v Smaller Tivoli Storage Manager source servers can use the storage pools and
tape devices of larger Tivoli Storage Manager servers.
v For incremental database backups, virtual volumes can decrease wasted space on
volumes and under-utilization of high-end tape drives.
v The source server can use the target server as an electronic vault for recovery
from a disaster.
For details, see Reconciling virtual volumes and archive files on page 743.
Related concepts:
Performance limitations for virtual volume operations on page 740
Related tasks:
Setting up source and target servers for virtual volumes
In the following example (illustrated in Figure 86 on page 740), the source server is
named TUCSON and the target server is named MADERA.
v At Tucson site:
1. Define the target server:
MADERA has a TCP/IP address of 127.0.0.1:1845
Assign the password CALCITE to MADERA.
Assign TUCSON as the node name by which the source server TUCSON
will be known by the target server. If no node name is assigned, the server
name of the source server is used. To see the server name, you can issue
the QUERY STATUS command.
2. Define a device class for the data to be sent to the target server. The device
type for this device class must be SERVER, and the definition must include
the name of the target server.
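The commands for these two steps are not shown in the preceding list; a sketch
based on the values given, assuming the device class name TARGETCLASS that is
used in later examples:
define server madera serverpassword=calcite hladdress=127.0.0.1 lladdress=1845
nodename=tucson
define devclass targetclass devtype=server servername=madera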
v At Madera site:
Register the source server as a client node. The target server can use an existing
policy domain and storage pool for the data from the source server. However,
you can define a separate management policy and storage pool for the source
server. Doing so can provide more control over storage pool resources.
1. Use the REGISTER NODE command to define the source server as a node of
TYPE=SERVER. The policy domain to which the node is assigned determines
where the data from the source server is stored. Data from the source server
is stored in the storage pool specified in the archive copy group of the
default management class of that domain.
2. You can set up a separate policy and storage pool for the source server.
a. Define a storage pool named SOURCEPOOL:
define stgpool sourcepool autotapeclass maxscratch=20
b. Copy an existing policy domain STANDARD to a new domain named
SOURCEDOMAIN:
copy domain standard sourcedomain
c. Assign SOURCEPOOL as the archive copy group destination in the
default management class of SOURCEDOMAIN, and then activate the
policy set. (See the sketch that follows this procedure.)
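The commands for the steps at the Madera site are not shown above; a minimal
sketch, assuming that the copied domain keeps the STANDARD policy set,
management class, and copy group names:
register node tucson calcite domain=sourcedomain type=server
update copygroup sourcedomain standard standard standard type=archive
destination=sourcepool
activate policyset sourcedomain standard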
Related tasks:
Changing policy on page 481
Some of the factors that can affect volume performance when using virtual
volumes are:
v Distance between locations
v Network infrastructure and bandwidth between locations
v Network configuration
v Data size and distribution
v Data read and write patterns
Use the server-to-server virtual volumes feature to share a single tape library with
multiple servers. Although there are other situations that can use this feature, such
as cross-server or off-site vaulting, this feature is not optimized for long distances.
Avoid moving large amounts of data between the servers, which might slow down
communications significantly, depending on the network bandwidth and
availability.
In the device class definition (DEVTYPE=SERVER), specify how often and for how
long you want the source server to attempt to contact the target
server. Keep in mind that frequent attempts to contact the target server over an
extended period can affect your communications.
To minimize mount wait times, set the total mount limit for all server definitions
that specify the target server to a value that does not exceed the mount limit at
the target server. For example, if a source server has two device classes that each
specify a mount limit of 2, but the target server has only two tape drives, mount
requests from the source server can exceed the number of drives that are available
on the target server.
For example, to perform an incremental backup of the source server and send the
volumes to the target server, issue the following command:
backup db type=incremental devclass=targetclass
See Moving copy storage pool and active-data pool volumes on-site on page
1048 for more information.
For example, a primary storage pool named TAPEPOOL is on the source server.
You can define a copy storage pool named TARGETCOPYPOOL, also on the
source server. TARGETCOPYPOOL must have an associated device class whose
device type is SERVER. When you back up TAPEPOOL to TARGETCOPYPOOL,
the backup is sent to the target server. To accomplish this, issue the following
commands:
define stgpool targetcopypool targetclass pooltype=copy
maxscratch=20
backup stgpool tapepool targetcopypool
To configure your system, ensure that the management policy for those nodes
specifies a storage pool that has a device class whose device type is SERVER. For
example, the following command defines the storage pool named TARGETPOOL.
define stgpool targetpool targetclass maxscratch=20
reclaim=100
For details about storage pool reclamation and how to begin it manually, see
Reclaiming space in sequential-access storage pools on page 372.
For example, storage pool TAPEPOOL is on the source server. The TAPEPOOL
definition specifies NEXTSTGPOOL=TARGETPOOL. TARGETPOOL has been
defined on the source server as a storage pool of device type SERVER. When data
is migrated from TAPEPOOL, it is sent to the target server.
define stgpool tapepool tapeclass nextstgpool=targetpool
maxscratch=20
For example, to copy server information directly to a target server, issue the
following command:
export server devclass=targetclass
If data has been exported from a source server to a target server, you can import
that data from the target server to a third server. The server that will import the
data uses the node ID and password of the source server to open a session with
the target server. That session is in read-only mode because the third server does
not have the proper verification code.
For example, to import server information from a target server, issue the following
command:
import server devclass=targetclass
Two methods are available to perform the export and import operation:
v Export directly to another server on the network. This results in an immediate
import process without the need for compatible sequential device types between
the two servers.
v Export to sequential media. Later, you can use the media to import the
information to another server that has a compatible device type.
This chapter takes you through the export and import tasks. See the following
sections:
Concepts:
Reviewing data that can be exported and imported
Tasks for Exporting Directly to Another Server:
Exporting data directly to another server on page 748
Preparing to export to another server for immediate import on page 752
Monitoring the server-to-server export process on page 754
Tasks for Exporting to Sequential Media:
Exporting and importing data using sequential media volumes on page 756
Exporting tasks on page 758
Importing data from sequential media volumes on page 761
Exporting restrictions
The export function does have some limitations and restrictions. One restriction is
that you can export information from an earlier version and release of Tivoli
Storage Manager to a later version and release, but not from a later version and
release to an earlier version and release.
For example, you can export from a V6.1 server to a V6.2 server, but you cannot
export from a V6.2 server to a V6.1 server.
Important:
1. Because results could be unpredictable, ensure that expiration, migration,
backup, or archive processes are not running when the EXPORT NODE command
is issued.
2. The EXPORT NODE and EXPORT SERVER commands will not export data from shred
pools unless you explicitly permit it by setting the ALLOWSHREDDABLE parameter
to YES. If this value is specified and the exported data includes data from
shred pools, that data can no longer be shredded.
Related concepts:
Securing sensitive client data on page 541
When you export to sequential media, administrators or users might modify data
shortly after it is exported; the information copied to tape might then not be
consistent with the data stored on the source server. If you want to export an exact
point-in-time copy of server control information, you can prevent administrative
and other client nodes from accessing the server.
When you export directly to another server, administrators or users may modify
data shortly after it has been exported. You can decide to merge file spaces, use
incremental export, or prevent administrative and other client nodes from
accessing the server.
Related concepts:
Preventing administrative clients from accessing the server on page 748
Related tasks:
Preventing client nodes from accessing the server on page 748
Related reference:
Options to consider before exporting on page 748
To prevent users from accessing the server during export operations, cancel
existing client sessions. Then you can perform one of the following steps:
1. Disable server access to prevent client nodes from accessing the server.
This option is useful when you export all client node information from the
source server and want to prevent all client nodes from accessing the server.
2. Lock out particular client nodes from server access.
This option is useful when you export a subset of client node information from
the source server and want to prevent particular client nodes from accessing
the server until the export operation is complete.
After the export operation is complete, allow client nodes to access the server
again by:
v Enabling the server
v Unlocking client nodes
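For example, you might first cancel existing client sessions and then either
disable all client sessions or lock particular nodes. NODE1 is a hypothetical node
name:
cancel session all
disable sessions
lock node node1
After the export operation completes, restore access:
enable sessions
unlock node node1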
If you do not want to merge file spaces, see the topic on how duplicate file spaces
are managed.
Choosing to merge file spaces allows you to restart a cancelled import operation
because files that were previously imported can be skipped in the subsequent
import operation. This option is available when you issue an EXPORT SERVER or
EXPORT NODE command.
When you merge file spaces, the server performs versioning of the imported
objects based on the policy bound to the files. An import operation may leave the
target file space with more versions than policy permits. Files are versioned to
maintain the policy intent for the files, especially when incremental export (using
the FROMDATE and FROMTIME parameters) is used to maintain duplicate client file
copies on two or more servers.
The following definitions show how the server merges imported files, based on the
type of object, when you specify MERGEFILESPACES=YES.
Archive Objects
If an archive object for the imported node that has the same TCP/IP
address, TCP/IP port, name, insert date, and description already exists
on the target server, the imported object is skipped. Otherwise, the
archive object is imported.
Backup Objects
If a backup object that already exists on the target server for the
imported node has the same TCP/IP address, TCP/IP port, insert date, and
description as the imported backup object, the imported object is
skipped. When backup objects are merged into
existing file spaces, versioning will be done according to policy just as it
occurs when backup objects are sent from the client during a backup
operation. Setting their insert dates to zero (0) will mark excessive file
versions for expiration.
Otherwise, the server performs the following tasks:
v If the imported backup object has a later (more recent) insert date than
an active version of an object on the target server with the same node,
file space, TCP/IP address, and TCP/IP port, then the imported backup
object becomes the new active copy, and the active copy on the target
server is made inactive. Tivoli Storage Manager expires this inactive
version based on the number of versions that are allowed in policy.
v If the imported backup object has an earlier (less recent) insert date than
an active copy of an object on the target server with the same node, file
space, TCP/IP address, and TCP/IP port, then the imported backup object is
inserted as an inactive version.
v If there are no active versions of an object with the same node, file
space, TCP/IP address, and TCP/IP port on the target server, and the
imported object has the same node, file space, TCP/IP address, and
TCP/IP port as the versions, then:
An imported active object with a later insert date than the most recent
inactive copy will become the active version of the file.
The number of objects imported and skipped is displayed with the final statistics
for the import operation.
Related concepts:
Managing duplicate file spaces on page 769
Related tasks:
Querying the activity log for export or import information on page 774
You can use the FROMDATE and FROMTIME parameters to export data based on the
date and time the file was originally stored in the server. The FROMDATE and
FROMTIME parameters only apply to client user file data; these parameters have no
effect on other exported information such as policy. If clients continue to back up
to the originating server while their data is moving to a new server, you can move
the backup data that was stored on the originating server after the export
operation was initiated. This option is available when you issue an EXPORT SERVER
or EXPORT NODE command.
You can use the TODATE and TOTIME parameters to further limit the time you specify
for your export operation.
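For example, a hypothetical incremental export of client node NODE1 directly to
server SERVERB, limited to data stored since noon on 10/25/2007, might look like
the following command:
export node node1 filedata=all toserver=serverb fromdate=10/25/2007
fromtime=12:00:00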
Alternatively, you can have the server skip duplicate definitions. This option is
available when you issue any of the EXPORT commands.
Related concepts:
Determining whether to replace existing definitions on page 763
The resumed export continues at a point where the suspension took place.
Therefore, data that has already been exported is not exported again and only the
data that was not sent is included in the restarted export. Issue the QUERY EXPORT
command to view all running and suspended restartable export operations, the
RESTART EXPORT command to restart an export operation, or the SUSPEND EXPORT
command to suspend a running server-to-server EXPORT NODE or EXPORT SERVER
process.
Suspended server-to-server export operations are not affected by a server restart.
Note: Do not issue the CANCEL PROCESS command if you want to restart the
operation at a later time. CANCEL PROCESS ends the export process and deletes all
saved status.
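For example, to list running and suspended restartable export operations and
their identifiers, issue the following command:
query export
The export identifier that is reported can then be supplied to the SUSPEND EXPORT
or RESTART EXPORT command.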
If an export operation fails prior to identifying all eligible files, when the export
operation is restarted it continues to identify eligible files and may export files that
were backed up while the operation was suspended.
A restarted export operation will export only the data that was identified. During a
suspension, some files or nodes identified for export might be deleted or might
expire. To ensure that all data is exported, restart the export operation at the
earliest time and restrict operations on the selected data.
At any given time, a restartable export operation will be in one of the following
states:
Running - Not Suspendible
This state directly corresponds to phase 1 of a restartable export, Creating
definitions on target server.
Attention: Ensure that the target server's Tivoli Storage Manager level is the
same as or newer than the source server's level. If you suspend export operations and
upgrade the source server's database, the target server may stop the export
operation if the new source server's Tivoli Storage Manager level is incompatible
with the target server's level.
To determine how much space is required to export all server data, issue the
following command:
export server filedata=all previewimport=yes
After you issue the EXPORT SERVER command, a message similar to the following
message is issued when the server starts a background process:
EXPORT SERVER started as Process 4
You can view the preview results by querying the activity log or by viewing the
server console.
Related tasks:
Requesting information about an export or import process on page 772
Canceling server processes on page 625
You can direct import messages to an output file to capture any error messages
that are detected during the import process. Do this by starting an administrative
client session in console mode before you invoke the import command.
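For example, you might start the administrative client as follows before you issue
the import command. The output file name is only an illustration:
dsmadmc -consolemode -outfile=import_messages.out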
If you want to view the status of any server-to-server exports that can be
suspended, issue the QUERY EXPORT command. The QUERY EXPORT command lists all
running or suspended operations.
If a process completes, you can query the activity log for status information from
an administrative client running in batch or interactive mode.
You can also query the activity log for status information from the server console.
The process first builds a list of what is to be exported. The process can therefore
be running for some time before any data is transferred. The connection between
the servers might time out. You might need to adjust the COMMTIMEOUT and
IDLETIMEOUT server options on one or both servers.
You can specify a list of administrator names, or you can export all administrator
names.
Issue the following command to export all the administrator definitions to the
target server defined as OTHERSERVER.
export admin * toserver=otherserver previewimport=yes
This lets you preview the export without actually exporting the data for immediate
import.
You can also specify whether to export file data. File data includes file space
definitions and authorization rules. You can request that file data be exported in
any of the following groupings of files:
v Active and inactive versions of backed up files, archive copies of files, and
space-managed files
v Active versions of backed up files, archive copies of files, and space-managed
files
v Active and inactive versions of backed up files
v Active versions of backed up files
v Archive copies of files
v Space-managed files
To export client node information and all client files for NODE1 directly to
SERVERB, issue the following example command:
export node node1 filedata=all toserver=serverb
Important: When you specify a list of node names or node patterns, the server
will not report the node names or patterns that do not match any entries in the
database. Check the summary statistics in the activity log to verify that the server
exported all intended nodes.
To export server data to another server on the network, merge the file spaces
with any existing file spaces on the target server, replace definitions on the
target server, and limit the exported data to data that was inserted in the
originating server on or after 10/25/2007, issue the following command:
export server toserver=serv23 fromdate=10/25/2007 filedata=all
mergefilespaces=yes dates=relative
You can view the preview results by querying the activity log or by viewing the
server console.
You can request information about the background process. If necessary, you
can cancel an export or import process.
Related tasks:
Requesting information about an export or import process on page 772
Canceling server processes on page 625
Note:
a. If the mount limit for the device class selected is reached when you request
an export (that is, if all the drives are busy), the server automatically cancels
lower priority operations, such as reclamation, to make a mount point
available for the export.
b. You can export data to a storage pool on another server by specifying a
device class whose device type is SERVER.
2. Estimate the number of removable media volumes to label.
To estimate the number of removable media volumes to label, divide the
number of bytes to be moved by the estimated capacity of a volume.
You can estimate the following forms of removable media volumes:
v The number of tapes or optical disks needed to store export data
For example, cartridge system tape volumes used with 3490 tape devices have
an estimated capacity of 360 MB. If the preview shows that you need to
transfer 720 MB of data, label at least two tape volumes before you export the
data.
3. Use scratch media. The server allows you to use scratch media to ensure that
you have sufficient space to store all export data. If you use scratch media,
record the label names and the order in which they were mounted.
Or, use the USEDVOLUMELIST parameter on the export command to create a file
containing the list of volumes used.
Exporting tasks
You can export all server control information or a subset of server control
information.
When you export data, you must specify the device class to which export data will
be written. You must also list the volumes in the order in which they are to be
mounted when the data is imported.
You can specify the USEDVOLUMELIST parameter to indicate the name of a file where
a list of volumes used in a successful export operation will be stored. If the
specified file is created without errors, it can be used as input to the IMPORT
command on the VOLUMENAMES=FILE:filename parameter. This file will contain
comment lines with the date and time the export was done, and the command
issued to create the export.
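For example, a hypothetical export of node NODE1 that records the volumes that
are used, followed by an import that reads the volume list from that file, might
look like the following commands. The file name is only an illustration:
export node node1 filedata=all devclass=tapeclass
usedvolumelist=/tmp/node1volumes
import node node1 filedata=all devclass=tapeclass
volumenames=file:/tmp/node1volumes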
Note: An export operation will not overwrite an existing file. If you perform an
export operation and then try the same operation again with the same volume
name, the file is skipped, and a scratch file is allocated. To use the same volume
name, delete the volume entry from the volume history file.
Related tasks:
Planning for sequential media used to export data on page 757
You can specify a list of administrator names, or you can export all administrator
names.
Issue the following command to export definitions for the DAVEHIL and PENNER
administrator IDs to the DSM001 tape volume, which the TAPECLASS device class
supports, and to not allow any scratch media to be used during this export
process:
export admin davehil,penner devclass=tapeclass
volumenames=dsm001 scratch=no
You can also specify whether to export file data. File data includes file space
definitions and authorization rules. You can request that file data be exported in
any of the following groupings of files:
v Active and inactive versions of backed up files, archive copies of files, and
space-managed files
v Active versions of backed up files, archive copies of files, and space-managed
files
v Active and inactive versions of backed up files
v Active versions of backed up files
v Archive copies of files
v Space-managed files
When exporting active versions of client backup data, the server searches for active
file versions in an active-data pool associated with a FILE device class, if such a
pool exists. This process minimizes the number of mounts that are required during
the export process.
If you do not specify that you want to export file data, then the server only exports
client node definitions.
When you issue the EXPORT POLICY command, the server exports the following
information belonging to each specified policy domain:
v Policy domain definitions
v Policy set definitions, including the active policy set
v Management class definitions, including the default management class
v Backup copy group and archive copy group definitions
v Schedule definitions
v Associations between client nodes and schedules
For example, suppose that you want to export policy and scheduling definitions
from the policy domain named ENGPOLDOM. You want to use tape volumes
DSM001 and DSM002, which belong to the TAPECLASS device class, but allow the
server to use scratch tape volumes if necessary.
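A command for this scenario might look like the following sketch; verify the
parameters before you use them:
export policy engpoldom devclass=tapeclass volumenames=dsm001,dsm002 scratch=yes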
For example, you want to export server data to four defined tape cartridges, which
the TAPECLASS device class supports. You want the server to use scratch volumes
if the four volumes are not enough, and so you use the default of SCRATCH=YES.
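A command for this scenario might look like the following sketch. The volume
names are carried over from the earlier examples:
export server devclass=tapeclass filedata=all
volumenames=dsm001,dsm002,dsm003,dsm004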
After Tivoli Storage Manager is installed and set up on the target server, a system
administrator can import all server control information or a subset of server
control information by specifying one or more of the following import commands:
v IMPORT ADMIN
v IMPORT NODE
v IMPORT POLICY
v IMPORT SERVER
You can merge imported client backup, archive, and space-managed files into
existing file spaces, and automatically skip duplicate files that may exist in the
target file space on the server. Optionally, you can have new file spaces created.
If you do not want to merge file spaces, look into how duplicate file spaces are
managed. Choosing to merge file spaces allows you to restart a cancelled import
operation since files that were previously imported can be skipped in the
subsequent import operation.
When you merge file spaces, the server performs versioning of the imported
objects based on the policy bound to the files. An import operation may leave the
target file space with more versions than policy permits. Files are versioned to
maintain the policy intent for the files, especially when incremental export (using
the FROMDATE and FROMTIME parameters) is used to maintain duplicate client file
copies on two or more servers.
The following definitions show how the server merges imported files, based on the
type of object, when you specify MERGEFILESPACES=YES.
Archive Objects
If an archive object for the imported node that has the same TCP/IP
address, TCP/IP port, insert date, and description already exists on the
target server, the imported object is skipped. Otherwise, the archive
object is imported.
Backup Objects
If a backup object that already exists on the target server for the
imported node has the same TCP/IP address, TCP/IP port, insert date, and
description as the imported backup object, the imported object is
skipped. When backup objects are merged into
existing file spaces, versioning will be done according to policy just as it
occurs when backup objects are sent from the client during a backup
operation. Setting their insert dates to zero (0) will mark excessive file
versions for expiration.
Otherwise, the server performs the following tasks:
v If the imported backup object has a later (more recent) insert date than
an active version of an object on the target server with the same node,
file space, TCP/IP address, and TCP/IP port, then the imported backup
object becomes the new active copy. The active copy on the target server
is made inactive. Tivoli Storage Manager expires this inactive version
based on the number of versions that are allowed in policy.
The number of objects imported and skipped is displayed with the final statistics
for the import operation.
Related concepts:
Managing duplicate file spaces on page 769
Related tasks:
Querying the activity log for export or import information on page 774
By using the REPLACEDEFS parameter with the IMPORT command, you can specify
whether to replace existing definitions on the target server when Tivoli Storage
Manager encounters an object with the same name during the import process.
For example, if a definition exists for the ENGPOLDOM policy domain on the
target server before you import policy definitions, then you must specify
REPLACEDEFS=YES to replace the existing definition with the data from the
export tape.
When you import file data, you can keep the original creation date for backup
versions and archive copies, or you can specify that the server use an adjusted
date.
If you want to keep the original dates set for backup versions and archive copies,
use DATES=ABSOLUTE, which is the default. If you use the absolute value, any
files whose retention period has passed will be expired shortly after they are
imported to the target server.
When you specify a relative date, the dates of the file versions are adjusted to the
date of import on the target server. This is helpful when data is imported a long
time after it was exported, because the adjusted dates prevent files from being
expired immediately.
When you set PREVIEW=YES, tape operators must mount export tape volumes so
that the target server can calculate the statistics for the preview.
Issue the following command to preview information for the IMPORT SERVER
operation:
import server devclass=tapeclass preview=yes
volumenames=dsm001,dsm002,dsm003,dsm004
Figure 87 on page 765 shows an example of the messages sent to the activity log
and to the server console.
Figure 87. Sample report created by issuing preview for an import server command
Use the value reported for the total number of bytes copied to estimate storage
pool space needed to store imported file data.
For example, Figure 87 shows that 8,856,358 bytes of data will be imported.
Ensure that you have at least 8,856,358 bytes of available space in the backup
storage pools defined to the server. You can issue the QUERY STGPOOL and QUERY
VOLUME commands to determine how much space is available in the server storage
hierarchy.
In addition, the preview report shows that 0 archive files and 462 backup files will
be imported. Because backup data is being imported, ensure that you have
sufficient space in the backup storage pools used to store this backup data.
Importing definitions
After you preview the information and before you import file data, import the
server control information. This includes administrator definitions, client node
definitions, policy domain, policy set, management class, and copy group
definitions, schedule definitions, and client node associations.
However, do not import file data at this time, because some storage pools named
in the copy group definitions may not exist yet on the target server.
Before you import server control information, perform the following tasks:
1. Read the following topics:
v Determining whether to replace existing definitions on page 763
v Determining how the server imports active policy sets
2. Start an administrative client session in console mode to capture import
messages to an output file.
3. Import the server control information from specified tape volumes.
Related tasks:
Directing import messages to an output file on page 767
Importing server control information on page 767
When the server imports policy definitions, several objects are imported to the
target server.
If the server encounters a policy set named ACTIVE on the tape volume during the
import process, it uses a temporary policy set named $$ACTIVE$$ to import the
active policy set.
After each $$ACTIVE$$ policy set has been activated, the server deletes that
$$ACTIVE$$ policy set from the target server. To view information about active
policy on the target server, you can use the following commands:
v QUERY COPYGROUP
v QUERY DOMAIN
v QUERY MGMTCLASS
v QUERY POLICYSET
Results from issuing the QUERY DOMAIN command show the activated policy set as
$$ACTIVE$$. The $$ACTIVE$$ name shows you that the policy set which is
currently activated for this domain is the policy set that was active at the time the
export was performed.
The information generated by the validation process can help you define a storage
hierarchy that supports the storage destinations currently defined in the import
data.
You can direct import messages to an output file to capture any error messages
that are detected during the import process. Do this by starting an administrative
client session in console mode before you invoke the import command.
If you have completed the prerequisite steps, you might be ready to import the
server control information.
Based on the information generated during the preview operation, you know that
all definition information has been stored on the first tape volume named DSM001.
Specify that this tape volume can be read by a device belonging to the
TAPECLASS device class.
You can issue the command from an administrative client session or from the
server console.
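A command for this step might look like the following sketch, which uses the
DSM001 volume and the TAPECLASS device class from the preview and imports
definitions only:
import server filedata=none devclass=tapeclass volumenames=dsm001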
To tailor server storage definitions on the target server, complete the following
steps:
1. Identify any storage destinations specified in copy groups and management
classes that do not match defined storage pools:
v If the policy definitions you imported included an ACTIVE policy set, that
policy set is validated and activated on the target server. Error messages
generated during validation include whether any management classes or
copy groups refer to storage pools that do not exist on the target server. You
have a copy of these messages in a file if you directed console messages to
an output file.
v Query management class and copy group definitions to compare the storage
destinations specified with the names of existing storage pools on the target
server.
To request detailed reports for all management classes, backup copy groups,
and archive copy groups in the ACTIVE policy set, enter these commands:
query mgmtclass * active * format=detailed
query copygroup * active * standard type=backup format=detailed
query copygroup * active * standard type=archive format=detailed
2. If storage destinations for management classes and copy groups in the ACTIVE
policy set refer to storage pools that are not defined, perform one of the
following tasks:
v Define storage pools that match the storage destination names for the
management classes and copy groups.
v Change the storage destinations for the management classes and copy
groups. To do so, perform the following steps:
a. Copy the ACTIVE policy set to another policy set
b. Modify the storage destinations of management classes and copy groups
in that policy set, as required
c. Activate the new policy set
Depending on the amount of client file data that you expect to import, you may
want to examine the storage hierarchy to ensure that sufficient storage space is
available. Storage pools specified as storage destinations by management classes
and copy groups may fill up with data. For example, you may need to define
additional storage pools to which data can migrate from the initial storage
destinations.
Related tasks:
Directing import messages to an output file on page 767
Defining storage pools on page 255
Related reference:
Defining and updating a policy set on page 502
You can request that file data be imported in any of the following groupings:
v Active and inactive versions of backed up files, archive copies of files, and
space-managed files
v Active versions of backed up files, archive copies of files, and space-managed
files
v Active and inactive versions of backed up files
v Active versions of backed up files
v Archive copies of files
v Space-managed files
Data being imported will not be stored in active-data pools. Use the COPY
ACTIVEDATA command to store newly imported data into an active-data pool.
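For example, assuming a hypothetical primary storage pool named IMPORTPOOL and
an active-data pool named ACTIVEPOOL, you might issue the following command:
copy activedata importpool activepool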
When the server imports file data information, it imports any file spaces belonging
to each specified client node. If a file space definition already exists on the target
server for the node, the server does not replace the existing file space name.
If the server encounters duplicate file space names when it imports file data
information, it creates a new file space name for the imported definition by
replacing the final character or characters with a number. A message showing the
old and new file space names is written to the activity log and to the server
console.
For example, if the C_DRIVE and D_DRIVE file space names reside on the target
server for node FRED and on the tape volume for FRED, then the server imports
the C_DRIVE file space as C_DRIV1 file space and the D_DRIVE file space as
D_DRIV1 file space, both assigned to node FRED.
When you import file data, you can keep the original creation date for backup
versions and archive copies, or you can specify that the server use an adjusted
date.
Because tape volumes containing exported data might not be used for some time,
the original dates defined for backup versions and archive copies may be old
enough that files are expired immediately when the data is imported to the target
server.
For example, assume that data exported to tape includes an archive copy archived
five days prior to the export operation. If the tape volume resides on the shelf for
six months before the data is imported to the target server, the server resets the
archival date to five days prior to the import operation.
If you want to keep the original dates set for backup versions and archive copies,
use DATES=ABSOLUTE, which is the default. If you use the absolute value, any
files whose retention period has passed will be expired shortly after they are
imported to the target server.
You can import file data, either by issuing the IMPORT SERVER or IMPORT NODE
command. When you issue either of these commands, you can specify which type
of files should be imported for all client nodes specified and found on the export
tapes.
You can specify any of the following values to import file data:
All Specifies that all active and inactive versions of backed up files, archive
copies of files, and space-managed files for specified client nodes are
imported to the target server
None Specifies that no files are imported to the target server; only client node
definitions are imported
Archive
Specifies that only archive copies of files are imported to the target server
Backup
Specifies that only backup copies of files, whether active or inactive, are
imported to the target server
Backupactive
Specifies that only active versions of backed up files are imported to the
target server
Allactive
Specifies that only active versions of backed up files, archive copies of files,
and space-managed files are imported to the target server
Spacemanaged
Specifies that only files that have been migrated from a user's local file
system (space-managed files) are imported
For example, suppose you want to import all backup versions of files, archive
copies of files, and space-managed files to the target server. You do not want to
replace any existing server control information during this import operation.
Specify the four tape volumes that were identified during the preview operation.
These tape volumes can be read by any device in the TAPECLASS device class. To
issue this command, enter:
import server filedata=all replacedefs=no
devclass=tapeclass volumenames=dsm001,dsm002,dsm003,dsm004
If the ENGDOM policy domain exists on the target server, the imported nodes are
assigned to that domain. If ENGDOM does not exist on the target server, the
imported nodes are assigned to the STANDARD policy domain.
If you do not specify a domain on the IMPORT NODE command, the imported node
is assigned to the STANDARD policy domain.
While the server allows you to issue any import command, data cannot be
imported to the server if it has not been exported to tape. For example, if a tape is
created with the EXPORT POLICY command, an IMPORT NODE command will not find
any data on the tape because node information is not a subset of policy
information.
See Table 72 for the commands that you can use to import a subset of exported
information to a target server.
Table 72. Importing a subset of information from tapes
If tapes were created with    You can issue this import   You cannot issue this import
this export command:          command:                    command:
EXPORT SERVER                 IMPORT SERVER               Not applicable.
                              IMPORT ADMIN
                              IMPORT NODE
                              IMPORT POLICY
EXPORT NODE                   IMPORT NODE                 IMPORT ADMIN
                              IMPORT SERVER               IMPORT POLICY
EXPORT ADMIN                  IMPORT ADMIN                IMPORT NODE
                              IMPORT SERVER               IMPORT POLICY
EXPORT POLICY                 IMPORT POLICY               IMPORT ADMIN
                              IMPORT SERVER               IMPORT NODE
If invalid data is encountered during an import operation, the server uses the
default value for the new object's definition. If the object already exists, the existing
parameter is not changed.
During import and export operations, the server reports on the affected objects to
the activity log and also to the server console.
You should query these objects when the import process is complete to see if they
reflect information that is acceptable.
A file space definition may already exist on the target server for the node. If so, an
administrator with system privilege can issue the DELETE FILESPACE command to
remove file spaces that are corrupted or no longer needed. For more information
on the DELETE FILESPACE command, refer to the Administrator's Reference.
Related concepts:
Managing duplicate file spaces on page 769
An imported file space can have the same name as a file space that already exists
on a client node. In this case, the server does not overlay the existing file space,
and the imported file space is given a new system generated file space name.
This new name may match file space names that have not been backed up and are
unknown to the server. In this case, you can use the RENAME FILESPACE command
to rename the imported file space to the naming convention used for the client
node.
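For example, to rename the imported file space C_DRIV1 for node FRED (from the
earlier example) to a name that fits the node's conventions, you might issue a
command similar to the following. The new name is only an illustration:
rename filespace fred C_DRIV1 C_DRIVE_IMPORTED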
You can use the following two ways to monitor export or import processes:
v You can view information about a process that is running on the server console
or from an administrative client running in console mode.
v After a process has completed, you can query the activity log for status
information from an administrative client running in batch or interactive mode.
Watch for mount messages, because the server might request mounts of volumes
that are not in the library. The process first builds a list of what is to be exported.
The process can therefore be running for some time before any data is transferred.
You can query an export or import process by specifying the process ID number.
For example, to request information about the EXPORT SERVER operation, which
started as process 4, enter:
query process 4
If you issue a preview version of an EXPORT or IMPORT command and then query
the process, the server reports the types of objects to be copied, the number of
objects to be copied, and the number of bytes to be copied.
When you export or import data and then query the process, the server displays
the number and types of objects that have been copied so far, and the total number
of bytes that have been copied.
To minimize processing time when querying the activity log for export or import
information, restrict the search by specifying EXPORT or IMPORT in the SEARCH
parameter of the QUERY ACTLOG command.
To determine how much data will be moved after issuing the preview version of
the EXPORT SERVER command, query the activity log by issuing the following
command:
query actlog search=export
Tasks:
Chapter 25, Basic monitoring methods, on page 793
Chapter 24, Daily monitoring tasks, on page 779
Using IBM Tivoli Storage Manager queries to display information on page 793
Using SQL to query the IBM Tivoli Storage Manager database on page 798
Using the Tivoli Storage Manager activity log on page 803
Chapter 31, Logging IBM Tivoli Storage Manager events to receivers, on page 861
Chapter 28, Monitoring Tivoli Storage Manager accounting records, on page 811
Tivoli Monitoring for Tivoli Storage Manager
Cognos Business Intelligence on page 820
Backing up and restoring Tivoli Monitoring for Tivoli Storage Manager
You can complete the monitoring tasks by using the command-line interface (CLI).
A subset of tasks can also be completed by using the Operations Center, the
Administration Center, or IBM Tivoli Monitoring for Tivoli Storage Manager.
The following list describes some of the items that are important to monitor daily.
Instructions for monitoring these items, and other monitoring tasks can be found
in the topics in this section. Not all of these tasks apply to all environments.
v Verify that the database file system has enough space.
v Examine the database percent utilization, available free space, and free-pages.
v Verify that there is enough disk space in the file systems that contain these log
files.
Active log
Archive log
Mirror log
Archive failover log
v Verify that the instance directory file system has enough space.
v Verify that the database backups completed successfully, and that they are
running frequently enough.
v Check the database and recovery log statistics.
v Verify that you have current backup files for device configuration and volume
history information. You can find the file names for the backups by looking in
the dsmserv.opt file for the DEVCONFIG and VOLUMEHISTORY options. Ensure that
file systems where the files are stored have sufficient space.
v Search the summary table for failed processes.
v Search the activity log for error messages.
v For storage pools that have deduplication enabled, ensure that processes are
completing successfully.
v Check the status of your storage pools to ensure that there is enough space
available.
v Check for any failed storage pool migrations.
v Check the status of sequential access storage pools.
v Check how many scratch volumes are available.
v Determine whether any tape drives, or the paths to them, are offline.
v Determine whether any libraries, or the paths to them, are offline.
v Verify that all of the tapes have the appropriate write-access.
v Verify the status and settings for disaster recovery manager (DRM).
v Check for failed or missed schedules.
v Check the summary table for scheduled client operations such as backup,
restore, archive, and retrieve.
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The examples used here are based on a 24-hour period, but your values can differ
depending on the time frame you specify.
The following steps describe the commands that you can use to monitor server
processes:
1. Search the summary table for any server processes that failed within the
previous 24-hour period:
select activity as process, number as processnum from summary where
activity in ('EXPIRATION','RECLAMATION','MIGRATION','STGPOOL BACKUP',
'FULL_DBBACKUP','INCR_DBBACKUP','REPLICATION') and successful='NO'
and end_time> (current_timestamp - interval 24 hours)
This example output indicates that backup storage pool process number 7
failed:
PROCESS: STGPOOL BACKUP
PROCESSNUM: 7
2. Search the activity log for the messages associated with the failed process
number that was indicated in the output of the command in Step 1.
select message from actlog where process=7 and date_time>(current_timestamp
- interval 24 hours) and severity in ('W','E','S')
Example output:
MESSAGE: ANR1221E BACKUP STGPOOL: Process 7 terminated - insufficient space in
target storage pool FILECOPYPOOL. (SESSION: 1, PROCESS: 7)
Example output:
FREQUENCY
------------
3
Example output:
ACTIVITY: IDENTIFY
NUMBER: 5
FILESPROCESSED: 12946
DUPLICATEEXTENTS: 10504
DUPLICATEBYTES: 127364341
SUCCESSFUL: YES
Related tasks:
Monitoring your database daily
Monitoring disk storage pools daily on page 784
Monitoring sequential access storage pools daily on page 785
Monitoring scheduled operations daily on page 788
Monitoring operations daily with Tivoli Monitoring for Tivoli Storage Manager
on page 789
Monitoring operations daily using the Operations Center on page 791
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The following steps describe the commands that you can use to monitor the
database:
1. Use the QUERY DBSPACE command, and then examine the file system information
reported through the query to ensure that the file system has adequate space.
Examine the total, used, and free space.
query dbspace
Example output:
2. Examine the file systems where the database is located, using the appropriate
operating system commands for the following:
v Ensure that the file systems are not approaching full.
v Ensure that other applications, or unexpected users of the file system space
are not storing data in the server database directories.
v Check the operating system and device error logs for any early signs or
indications of device failures.
3. Query the database to ensure that the percent utilization is acceptable, and that
the remaining space is sufficient for the next few days or weeks of expected
activity. This includes examining the free space available, and the free-pages
values. If you find that you are approaching your space limits, take action to
ensure that you get additional space provisioned to avoid any potential
problems.
query db format=detailed
Example output:
Database Name: mgsA2
Total Size of File System (MB): 253,952
Space Used by Database(MB): 544
Free Space Available (MB): 191,821
Total Pages: 40,964
Usable Pages: 40,828
Used Pages: 33,116
Free Pages: 7,712
Buffer Pool Hit Ratio: 97.7
Total Buffer Requests: 102,279
Sort Overflows: 0
Package Cache Hit Ratio: 78.9
Last Database Reorganization: 08/24/2011 17:28:28
Full Device Class Name: FILECLASS
Incrementals Since Last Full: 1
Last Complete Backup Date/Time: 08/25/2011 15:02:31
4. Monitor the file systems to ensure that they are not running out of space. Verify
that there is enough disk space in the file systems that contain these log files:
v Active log
v Archive log
v Mirror log
v Archive failover log
If the archive log directory fills up, it overflows to the active log directory. If
you see the archive log file systems filling up, it might be an indication that a
database backup is not being run, or is not being run often enough. It might
also be an indication that the space is shared with other applications that are
contending for the same space.
Issue this command to look at the total space used, free space, and so on.
query log format=detailed
Example output:
5. Examine the instance directory to ensure that it has enough space. If there is
insufficient space in this directory, the Tivoli Storage Manager server fails to
start.
You should also examine the instance_dir/sqllib/db2dump directory and delete
*.trap.txt and *.dump.bin files regularly.
V6.1 servers:
v Servers that are running version 6.1 must periodically delete the db2diag.log
file.
6. Verify that the database backups completed successfully, and examine the
details to determine if there are any problems:
select * from summary where end_time>(current_timestamp - interval
24 hours) and activity in ('FULL_DBBACKUP','INCR_DBBACKUP')
If there are no results to this select command, then there were no database
backups in the previous 24-hour period.
a. Issue the QUERY PROCESS command to look at current status of an active
backup:
query process
Example output:
Process Process Description Status
Number
-------- -------------------- -------------------------------------------------
5 Database Backup TYPE=FULL in progress. 62,914,560 bytes
backed up to volume /fvt/kolty/srv/Storage/143-
12072.DSS .
7. Check to ensure that the DEVCONFIG and VOLUMEHISTORY files configured in the
dsmserv.opt file are current and up-to-date. Ensure that the file systems where
these files are being written to are not running out of space. If there are old or
unnecessary volume history entries, consider pruning the old entries using the
DELETE VOLHISTORY command.
Important: Save the volume history file to multiple locations. Ensure that these
different locations represent different underlying disks and file systems.
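For example, to prune volume history entries that are more than 90 days old, you
might issue a command similar to the following. Choose the type and the date
carefully for your environment, because deleted entries cannot be recovered:
delete volhistory type=all todate=today-90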
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The following steps describe the commands that you can use to monitor disk
storage pools:
1. Check the status of storage pools, and ensure that there is enough space
available.
v Examine the percent utilization to ensure that the amount of space is
sufficient for ingestion rates.
v The high and low migration thresholds should be set to values that will
allow for proper migration cycles.
v If the storage pool is set to CACHE=YES, the percent migration should be
approaching zero, which indicates that items are being cleared out of the pool
appropriately.
Issue the QUERY STGPOOL command to display information about one or more
storage pools.
query stgpool
Example output:
Storage     Device     Estimated  Pct   Pct   High Low  Next
Pool        Class      Capacity   Util  Migr  Mig  Mig  Storage-
Name        Name                              Pct  Pct  Pool
----------- ---------- ---------- ----- ----- ---- ---  -----------
ARCHIVEPOOL DISK        1,000.0 M   0.0   0.0   90   70 storage_pool
BACKUPPOOL  DISK        1,000.0 M   0.0   0.0    5    1 storage_pool
2. Check the status of the disk volumes. Issue the SELECT command and specify a
particular device class name:
select volume_name, status from volumes
where devclass_name='DEVICE_CLASS_NAME'
Example output:
VOLUME_NAME: /fvt/kolty/srv/Storage/ar1
STATUS: ONLINE
VOLUME_NAME: /fvt/kolty/srv/Storage/bk1
STATUS: ONLINE
Example output:
START_TIME: 2011-08-23 14:53:37.000000
END_TIME: 2011-08-23 14:53:38.000000
PROCESS: MIGRATION
PROCESSNUM: 7
POOLNAME: storage_pool_example
Related tasks:
Monitoring your server processes daily on page 780
Monitoring your database daily on page 781
Monitoring sequential access storage pools daily
Monitoring scheduled operations daily on page 788
Monitoring operations daily with Tivoli Monitoring for Tivoli Storage Manager
on page 789
Monitoring operations daily using the Operations Center on page 791
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The following steps describe the commands that you can use to monitor sequential
access storage pools:
1. Check the status of your storage pools, and ensure that there is enough space
available. Examine the percent utilization to ensure that the amount of space is
sufficient for the amount of data that is being taken in. Set the high and low
migration thresholds to values that will allow for proper migration cycles.
Issue the QUERY STGPOOL command to display information about one or more
storage pools.
query stgpool
Example output:
Storage     Device     Estimated  Pct   Pct   High Low  Next Storage-
Pool Name   Class Name Capacity   Util  Migr  Mig  Mig  Pool
                                              Pct  Pct
----------- ---------- ---------- ----- ----- ---- ---  -----------
ARCHIVEPOOL DISK        1,000.0 M   0.0   0.0   90   70 storage_pool
BACKUPPOOL  DISK        1,000.0 M   0.0   0.0    5    1 storage_pool
2. Check the status of the sequential access storage pool volumes with this SELECT
command:
select volume_name,status,access,write_errors,read_errors,
error_state from volumes where stgpool_name='STORAGE_POOL_NAME'
The select statement can be modified to limit the results based on error state,
read-write errors, or current-access state. Example output:
VOLUME_NAME: /fvt/kolty/srv/Storage/00000153.BFS
STATUS: FULL
ACCESS: READWRITE
WRITE_ERRORS: 0
READ_ERRORS: 0
ERROR_STATE: NO
3. Verify that all of the tapes have the appropriate write-access by issuing this
command:
select volume_name,access from volumes
where stgpool_name='TAPEPOOL' and access!='READWRITE'
For example, this output indicates that the following volumes are not available
for use:
VOLUME_NAME: A00011L4
ACCESS: DESTROYED
VOLUME_NAME: KP0033L3
ACCESS: UNAVAILABLE
4. Use the QUERY DIRSPACE command to display information about free space in
the directories that are associated with a device class with a device type of
FILE.
query dirspace
Example output:
Device Class Directory Estimated Estimated
Name Capacity Available
------------ ------------------------------------ --------- ---------
FILECLASS /fvt/kolty/srv/Storage 253,952 M 185,616 M
Tip: Ensure that the amount of available space is higher than the total capacity
of all storage pools assigned to the device class or classes using that directory.
5. Determine how many scratch volumes are available in tape libraries with this
SELECT command:
select library_name,count(*) "Scratch volumes" from libvolumes
where status='Scratch' group by library_name
Example output:
LIBRARY_NAME Scratch volumes
------------------------- ----------------
TS3310 6
6. Determine how many scratch volumes can be potentially allocated out of the
storage pools using those tape libraries.
select stgpool_name,(maxscratch-numscratchused)
as "Num Scratch Allocatable" from stgpools
where devclass='DEVICE_CLASS_NAME'
Example output:
Tip: Ensure that the number of allocatable scratch volumes is equal to the
number of available scratch library volumes in the assigned tape library.
7. Issue these SELECT commands to determine if there are any tape drives or paths
that are offline:
a. Check to ensure that the drives are online:
select drive_name,online from drives
where online<>'YES'
Example output:
DRIVE_NAME ONLINE
-------------------------------- -----------------------------------------
DRIVEA NO
b. Check to ensure that the paths to the drives are also online. A drive can be
online, while the path is offline.
select library_name,destination_name,online
from paths where online<>'YES' and destination_type='DRIVE'
Example output:
LIBRARY_NAME: TS3310
DESTINATION_NAME: DRIVEA
ONLINE: NO
8. Check to see if there are any library paths that are offline with this SELECT
command:
select destination_name,device,online from paths
where online<>'YES' and destination_type='LIBRARY'
Example output:
DESTINATION_NAME: TS3310
DEVICE: /dev/smc0
ONLINE: NO
9. If you are using the DRM, check the status and settings.
a. Check to see which copy storage pool volumes are onsite:
select stgpool_name,volume_name,upd_date,voltype from drmedia
where state in ('MOUNTABLE','NOTMOUNTABLE')
Example output:
STGPOOL_NAME: COPYPOOL
VOLUME_NAME: CR0000L5
UPD_DATE: 2011-04-17 16:09:47.000000
VOLTYPE: Copy
b. Check the DRM settings, for example by querying the DRMSTATUS table or by
issuing the QUERY DRMSTATUS command. Example output:
PLANPREFIX:
INSTRPREFIX:
PLANVPOSTFIX: @
NONMOUNTNAME: NOTMOUNTABLE
COURIERNAME: COURIER
VAULTNAME: VAULT
DBBEXPIREDAYS: 60
CHECKLABEL: Yes
FILEPROCESS: No
CMDFILENAME:
RPFEXPIREDAYS: 60
Related tasks:
Monitoring your server processes daily on page 780
Monitoring your database daily on page 781
Monitoring disk storage pools daily on page 784
Monitoring scheduled operations daily
Monitoring operations daily with Tivoli Monitoring for Tivoli Storage Manager
on page 789
Monitoring operations daily using the Operations Center on page 791
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The following steps describe the commands that you can use to monitor scheduled
operations:
1. The most valuable command that you can use to check the status of your
scheduled operations is the QUERY EVENT command. Issue this command and
look for any missed or failed scheduled operations that might indicate a
problem:
query event * * type=client
query event * type=admin
All of these steps are completed from the Tivoli Enterprise Portal. For additional
information about logging on to the Tivoli Enterprise Portal, see Monitoring Tivoli
Storage Manager real time.
1. Start the Tivoli Enterprise Portal, log on with your sysadmin ID and password,
and navigate to Tivoli Storage Manager.
2. Many of the items that you want to check on a daily basis are displayed in the
dashboard view when it opens. The dashboard displays a grouping of
commonly viewed items in a single view. Examine these items and look for any
values that might indicate a potential problem:
Node storage space used
Check this graph for disk, storage, and tape space used.
For some commands, you can display the information in either a standard or
detailed format. The standard format presents less information than the detailed
format, and is useful in displaying an overview of many objects. For displaying
more information about a particular object, use the detailed format when
supported by a given command.
For information about creating customized queries of the database, see Using SQL
to query the IBM Tivoli Storage Manager database on page 798.
Most of these definition queries let you request standard format or detailed format.
Standard format limits the information and usually displays it as one line per
object. Use the standard format when you want to query many objects, for
example, all registered client nodes. Detailed format displays the default and
specific definition parameters. Use the detailed format when you want to see all
the information about a limited number of objects.
Here is an example of the standard output for the QUERY NODE command:
Node Name Platform Policy Days Days Locked?
Domain Since Since
Name Last Password
Access Set
---------- -------- --------- ------ -------- -------
CLIENT1 AIX STANDARD 6 6 No
GEORGE Linux86 STANDARD 1 1 No
JANET HPUX STANDARD 1 1 No
JOE2 Mac STANDARD <1 <1 No
TOMC WinNT STANDARD 1 1 No
Here is an example of the detailed output for the QUERY NODE command:
You can use the QUERY SESSION command to request information about client
sessions. Figure 90 shows a sample client session report.
Sess Comm. Sess Wait Bytes Bytes Sess Platform Client Name
Number Method State Time Sent Recvd Type
------ ------ ------ ------ ------- ------- ----- -------- --------------------
3 Tcp/Ip IdleW 9 S 7.8 K 706 Admin WinNT TOMC
5 Tcp/Ip IdleW 0 S 1.2 K 222 Admin AIX GUEST
6 Tcp/Ip Run 0 S 117 130 Admin Mac2 MARIE
Check the wait time to determine the length of time (seconds, minutes, hours) the
server has been in the current state. The session state reports status of the session
and can be one of the following:
Start Connecting with a client session.
Run Running a client request.
End Ending a client session.
Most commands run in the foreground, but others generate background processes.
In some cases, you can specify that a process run in the foreground. Tivoli Storage
Manager issues messages that provide information about the start and end of
processes. In addition, you can request information about active background
processes. If you know the process ID number, you can use the number to limit the
search. However, if you do not know the process ID, you can display information
about all background processes by issuing the QUERY PROCESS command.
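For example, issue the command with no parameters to display all active background processes, or supply a process number to limit the output. The process number 23 below is only an illustration:
query process
query process 23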
This list is not all-inclusive. For a detailed explanation of the QUERY STATUS
command, see the Administrator's Reference.
You can issue the QUERY OPTION command with no operands to display general
information about all defined server options. You also can issue it with a specific
option name or pattern-matching expression to display information on one or more
server options. You can set options by editing the server options file.
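For example, the following commands display all server options, and then only the options whose names match a pattern. The pattern itself is only an illustration:
query option
query option *timeout*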
See the QUERY OPTION command in the Administrator's Reference for more
information.
When you enter the QUERY SYSTEM command, the server issues the following
queries:
QUERY ASSOCIATION
Displays all client nodes that are associated with one or more client
schedules
QUERY COPYGROUP
Displays all backup and archive copy groups (standard format)
IBM Tivoli Storage Manager Versions 6.1 and later use the DB2 open database
connectivity (ODBC) driver to query the database and display the results.
DB2 provides its own ODBC driver, which can also be used to access the Tivoli
Storage Manager server DB2 database. For more information on the DB2 native
ODBC driver, see the DB2 documentation at http://pic.dhe.ibm.com/infocenter/
db2luw/v9r7 and search on "Introduction to DB2 CLI and ODBC".
You can issue the SELECT command from the command line of an administrative
client. You cannot issue this command from the server console.
To help you find what information is available in the database, Tivoli Storage
Manager provides three system catalog tables:
SYSCAT.TABLES
Contains information about all tables that can be queried with the SELECT
command.
SYSCAT.COLUMNS
Describes the columns in each table.
SYSCAT.ENUMTYPES
Defines the valid values for each enumerated type and the order of the
values for each type.
You can issue the SELECT command to query these tables and determine the
location of the information that you want. For example, to get a list of all tables
available for querying in the database TSMDB1 enter the following command:
select tabname from syscat.tables where tabschema='TSMDB1' and type='V'
You can also issue the SELECT command to query columns. For example, to get a
list of columns for querying in the database TSMDB1 and the table name ACTLOG,
enter the following command:
select colname from syscat.columns where tabschema='TSMDB1' and tabname='ACTLOG'
COLNAME: DATE_TIME
COLNAME: DOMAINNAME
COLNAME: MESSAGE
COLNAME: MSGNO
COLNAME: NODENAME
COLNAME: ORIGINATOR
COLNAME: OWNERNAME
COLNAME: PROCESS
COLNAME: SCHEDNAME
COLNAME: SERVERNAME
COLNAME: SESSID
COLNAME: SESSION
COLNAME: SEVERITY
For many more examples of the command, see the Administrator's Reference.
Example 1: Find the number of nodes by type of operating system by issuing the
following command:
select platform_name,count(*) as "Number of Nodes" from nodes
group by platform_name
Example 2: For all active client sessions, determine how long they have been
connected and their effective throughput in bytes per second:
select session_id as "Session", client_name as "Client", state as "State",
current_timestamp-start_time as "Elapsed Time",
(cast(bytes_sent as decimal(18,0)) /
cast(second(current_timestamp-start_time) as decimal(18,0)))
as "Bytes sent/second",
(cast(bytes_received as decimal(18,0)) /
cast(second(current_timestamp-start_time) as decimal(18,0)))
as "Bytes received/second"
from sessions
Session: 24
Client: ALBERT
State: Run
Elapsed Time: 4445.000000
Bytes sent/second: 564321.9302768451
Bytes received/second: 0.0026748857944
Session: 26
Client: MILTON
State: Run
Elapsed Time: 373.000000
Bytes sent/second: 1638.5284210992221
Bytes received/second: 675821.6888561849
For example, a query of the DB table, such as select * from db, returns details about the database:
DATABASE_NAME: mgsA62
TOT_FILE_SYSTEM_MB: 511872
USED_DB_SPACE_MB: 448
FREE_SPACE_MB: 452802
PAGE_SIZE: 16384
TOTAL_PAGES: 32772
USABLE_PAGES: 32636
USED_PAGES: 24952
FREE_PAGES: 768
BUFF_HIT_RATIO: 99.7
TOTAL_BUFF_REQ: 385557
SORT_OVERFLOW: 0
LOCK_ESCALATION: 0
PKG_HIT_RATIO: 99.8
LAST_REORG:
FULL_DEV_CLASS:
NUM_BACKUP_INCR: 0
LAST_BACKUP_DATE:
PHYSICAL_VOLUMES: 1
A script can be run from an administrative client or the server console. You can
also include it in an administrative command schedule to run automatically. See
Tivoli Storage Manager server scripts on page 640 for details.
Tivoli Storage Manager is shipped with a file that contains a number of sample
scripts. The file, scripts.smp, is in the server directory. To create and store the
scripts as objects in your server's database, issue the DSMSERV RUNFILE
command during installation:
> dsmserv runfile scripts.smp
You can also run the file as a macro from an administrative command line client:
macro scripts.smp
The sample scripts file contains Tivoli Storage Manager commands. These
commands first delete any scripts with the same names as those to be defined,
then define the scripts. The majority of the samples create SELECT commands, but
others do such things as back up storage pools. You can also copy and change the
sample scripts file to create your own scripts.
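As a minimal sketch of defining and running your own script, the following commands define a one-line script and then run it. The script name count_nodes is hypothetical:
define script count_nodes "select count(*) from nodes" desc="Count registered client nodes"
run count_nodes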
Some of the client operations recorded to the table are BACKUP, RESTORE,
ARCHIVE and RETRIEVE. Server processes include MIGRATION,
RECLAMATION and EXPIRATION.
To list column names and their descriptions from the activity summary table, enter
the following command:
select colname,remarks from columns where tabname='SUMMARY'
You can determine how long to keep information in the summary table. For
example, to keep the information for 5 days, enter the following command:
set summaryretention 5
Tivoli Storage Manager does not create records in the SQL activity summary table
for manual backups or for successful scheduled backups of 0 bytes. Records are
created in the summary table for successful scheduled backups only if data is
backed up.
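For example, a simple query of the summary table lists the client backup operations that were recorded; this is a sketch, and you can confirm the ENTITY, ACTIVITY, and BYTES column names with the COLUMNS query shown earlier:
select start_time,entity,activity,bytes from summary where activity='BACKUP'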
For details about using command line options and redirecting command output,
see the Administrator's Reference.
You can also query the activity log for client session information. For example,
issue the following command to search the activity log for any messages that were
issued in relation to session 4:
query actlog search="(SESSION:4)"
Any error messages sent to the server console are also stored in the activity log.
Use the following sections to adjust the size of the activity log, set an activity log
retention period, and request information about the activity log.
To minimize processing time when querying the activity log, you can:
v Specify a time period in which messages have been generated. The default for
the QUERY ACTLOG command shows all activities that have occurred in the
previous hour.
v Specify the message number of a specific message or set of messages.
v Specify a string expression to search for specific text in messages.
v Specify the QUERY ACTLOG command from the command line for large
queries instead of using the graphical user interface.
v Specify whether the originator is the server or client. If it is the client, you can
specify the node, owner, schedule, domain, or session number. If you are doing
client event logging to the activity log and are only interested in server events,
then specifying the server as the originator will greatly reduce the size of the
results.
For example, to review messages generated on May 30 between 8 a.m. and 5 p.m.,
enter:
query actlog begindate=05/30/2002 enddate=05/30/2002
begintime=08:00 endtime=17:00
To request information about messages related to the expiration of files from the
server storage inventory, enter:
query actlog msgno=0813
You can also request information only about messages logged by one or all clients.
For example, to search the activity log for messages from the client for node JEE:
query actlog originator=client node=jee
Note: With retention-based management, you lose some control over the amount
of space that the activity log occupies. For more information on size-based activity
log management, see Setting a size limit for the activity log.
The server will periodically remove the oldest activity log records until the activity
log size no longer exceeds the configured maximum size allowed. To manage the
activity log by size, the parameter MGMTSTYLE must be set to the value SIZE. To
change the maximum size of the activity log to 12 MB, for example, enter:
set actlogretention 12 mgmtstyle=size
Note: With size-based management, you lose some control over the length of time
that activity log messages are kept. For more information on retention-based
activity log management, see Setting a retention period for the activity log.
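For example, to switch back to retention-based management and keep activity log records for 10 days, a command of the following form applies; MGMTSTYLE=DATE selects retention-based management:
set actlogretention 10 mgmtstyle=date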
| For a newly installed server or for an upgraded server without defined alerts, a
| default set of messages is defined to trigger alerts. The administrator can add
| messages to, or remove messages from, the default set.
| You can configure alert monitoring and its characteristics, such as defining which
| messages trigger alerts and configuring email notification for administrators about
| alerts.
| To configure alert monitoring, use the following server commands, which are
| grouped according to the general configuration task to which they apply. For more
| information about these commands and about configuring alerts, see the
| Administrator's Reference.
| Activate alert monitoring
| v SET ALERTMONITOR
| v SET ALERTUPDATEINTERVAL
| Define which messages trigger alerts
| v DEFINE ALERTTRIGGER
| v UPDATE ALERTTRIGGER
| v DELETE ALERTTRIGGER
| v QUERY ALERTTRIGGER
| Define the time interval for alerts to be kept in the database
| v SET ALERTACTIVEDURATION
| v SET ALERTINACTIVEDURATION
| v SET ALERTCLOSEDDURATION
| Query existing alerts
| v QUERY ALERTSTATUS
| Update the status of an alert
| v UPDATE ALERTSTATUS
| Configure email notification for administrators about alerts
| v QUERY MONITORSETTINGS
| v SET ALERTEMAIL
| v SET ALERTEMAILFROMADDR
| v SET ALERTEMAILSMTPHOST
| v SET ALERTEMAILSMTPPORT
| v REGISTER ADMIN
| For detailed information about the commands that are mentioned here, see the
| Administrator's Reference.
| An administrator with system privilege can complete the following steps on the
| server to enable alerts to be sent by email:
| 1. Issue the QUERY MONITORSETTINGS command to verify that alert monitoring is set
| to ON. If the monitoring settings output indicates Off, issue the SET
| ALERTMONITOR command to start alert monitoring on the server:
| set alertmonitor on
| Tip: If alert monitoring is on, alerts are displayed in the Operations Center
| even though the alert email feature might not be enabled.
| 2. Enable alerts to be sent by email by issuing the SET ALERTEMAIL command:
| set alertemail on
| 3. Define the SMTP host server that is used to send email by issuing the SET
| ALERTEMAILSMTPHOST command:
| set alertemailsmtphost host_name
| 4. Set the SMTP port by issuing the SET ALERTEMAILSMTPPORT command:
| set alertemailsmtpport port_number
| Tip: You can suspend email alerts for an administrator by using one of the
| following methods:
| v Use the UPDATE ADMIN command, and specify ALERT=no.
| v Use the UPDATE ALERTTRIGGER command, and specify the DELADMIN parameter.
| The following example describes the commands that are used to enable the
| administrators myadmin, djadmin, and csadmin to receive email alerts for
| ANR0175E messages.
| set alertmonitor on
| set alertemail on
| set alertemailsmtphost mymailserver.domain.com
| set alertemailsmtpport 450
| set alertemailfromaddr srvadmin@mydomain.com
| update admin myadmin alert=yes emailaddress=myaddr@example.com
| update admin djadmin alert=yes emailaddress=djaddr@example.com
| update admin csadmin alert=yes emailaddress=csaddr@example.com
| define alerttrigger anr0175e admin=myadmin,djadmin,csadmin
| Related concepts:
| Chapter 26, Alert monitoring, on page 807
| Related tasks:
| Chapter 17, Managing servers with the Operations Center, on page 589
The accounting file contains text records that can be viewed directly or can be read
into a spreadsheet program. The file remains opened while the server is running
and accounting is set to ON. The file continues to grow until you delete it or prune
old records from it. To close the file for pruning, either temporarily set accounting
off or stop the server.
There are 31 fields, which are delimited by commas (,). Each record ends with a
new-line character. Each record contains the following information:
Field Contents
1 Product version
2 Product sublevel
3 Product name, 'ADSM',
4 Date of accounting (mm/dd/yyyy)
5 Time of accounting (hh:mm:ss)
6 Node name of Tivoli Storage Manager client
7 Client owner name (UNIX)
8 Client Platform
9 Authentication method used
10 Communication method used for the session
11 Normal server termination indicator (Normal=X'01', Abnormal=X'00')
12 Number of archive store transactions requested during the session
13 Amount of archived files, in kilobytes, sent by the client to the server
14 Number of archive retrieve transactions requested during the session
15 Amount of space, in kilobytes, retrieved by archived objects
16 Number of backup store transactions requested during the session
17 Amount of backup files, in kilobytes, sent by the client to the server
18 Number of backup retrieve transactions requested during the session
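As a simple illustration of reading the file, the following command extracts the client node name (field 6) and the kilobytes of backup data sent (field 17) from each record; dsmaccnt.log is the default accounting file name, and the path depends on your server instance directory:
cut -d',' -f6,17 dsmaccnt.log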
Tivoli Monitoring for Tivoli Storage Manager also provides reports based on the
historical data retrieved. You can use the existing historical reports provided, or
you can create your own custom reports.
Tivoli Monitoring for Tivoli Storage Manager consists of the following components:
IBM DB2
Stores historical data that is obtained from Tivoli Storage Manager servers
that are monitored by IBM Tivoli Monitoring.
IBM Tivoli Monitoring
Consists of a number of components that accumulate and monitor
historical data for reporting:
v Tivoli Enterprise Portal server
v Tivoli Data Warehouse
v Tivoli Enterprise Monitoring server
v Summarization Pruning agent
v Warehouse Proxy agent
v Tivoli Monitoring for Tivoli Storage Manager agent
The Tivoli Monitoring for Tivoli Storage Manager agent queries and formats data
to be presented to you in the following ways:
v As workspaces from the Tivoli Enterprise Portal
v As reports using the Tivoli Data Warehouse and the reporting portion of Tivoli
Monitoring for Tivoli Storage Manager
The agent is installed on the Tivoli Storage Manager server or the IBM Tivoli
Monitoring server, and is a multi-instance data collection agent.
The Tivoli Monitoring for Tivoli Storage Manager agent communicates with the
Tivoli Storage Manager server to retrieve data from its database and return this
data to the Tivoli Monitoring server.
Tivoli Monitoring for Tivoli Storage Manager reports on the Tivoli Storage
Manager server activities from data that is collected using the Tivoli Storage
Manager monitoring agent. The monitoring feature uses the Tivoli Enterprise
Portal to view the current status of the Tivoli Storage Manager server.
[Figure: Tivoli Monitoring for Tivoli Storage Manager architecture. The figure shows the Tivoli Enterprise Portal server (user ID: sysadmin), the Tivoli Enterprise Monitoring server, the Tivoli Data Warehouse (user ID: itmuser on AIX and Linux, ITMuser on Windows), the DB2 database (user ID: db2inst1 on AIX and Linux, db2admin on Windows), and the Tivoli Monitoring for Tivoli Storage Manager agent instances, together with examples of the monitoring and historical reports that can be generated and mailed to personnel.]
You can create your own custom reports using IBM Cognos 8 Business Intelligence,
or you can install the Business Intelligence and Reporting Tools (BIRT) software.
See the IBM Tivoli Storage Manager Installation Guide for details on installing BIRT
software.
When you open the Tivoli Enterprise Portal and navigate to the Tivoli Storage
Manager view, a dashboard workspace displays commonly viewed information in
a single location. To view more details, click the chain-link icon in the first
column. To return to the dashboard view, click the back arrow in the upper left.
The dashboard workspace can be customized to suit your monitoring needs, but
the default settings display the following information:
v Storage space that is used for each node that is defined on the server
v Storage pool summary details
v Unsuccessful client and server schedules, including all missed or failed
schedules
v Client node activity for all nodes on the server
v Activity log errors, including all severe error messages
Tip: The data in these reports can be sorted by clicking the column that you want
to sort by. To display subworkspaces, select the main workspace, right-click, select
Workspace, and click the subworkspace that you want to view.
Table 73 lists the attribute groups, their workspaces, and descriptions.
Table 73. Tivoli Enterprise Portal workspaces and subworkspaces
Attribute group name Description
Activity log This workspace provides information about activity log messages based on the parameters
selected. The data can be used to generate aggregated reports that are grouped by server,
and subgrouped by client.
Activity summary This workspace provides summarized activity log information about virtual environments.
Agent log This workspace provides trace file information that is produced by the agent without
having to enable tracing. It provides message information such as login successes and
failures, and agent processes.
Availability This workspace provides the status and the performance of the agent that is running for
each of the different workspaces that are listed under the Tivoli Storage Manager agent. It
can help to identify problems with the gathering of historical data.
Client node storage The main workspace displays information about client node storage, disk, and tape usage
data. This data can help you identify the clients that are using the most resources on the
server. Disk and tape usage information is displayed in graph format.
The subworkspaces display data in a tabular format and a graph format. To display the
subworkspaces, select the Client Node Storage workspace, right-click and select
Workspace, and click the subworkspace that you want to view.
Database Total capacity and total space used data is displayed in a bar chart format, and database
details such as percent space used and total space used are displayed in a tabular format.
Drives This workspace provides status about the drives, including drive name, library name,
device type, drive status such as loaded or empty, the volume name, and whether the
drive is online.
Additional subworkspace:
v Drives drill down
Libraries This workspace provides status about libraries, such as the library name, type, if it is
shared or not, LAN-free, auto label, number of available scratch volumes, whether the
path is online, and the serial number.
Node activity The subworkspaces display data in a tabular format and a graph format. To display the
subworkspaces, select the Node Activity workspace, right-click and select Workspace, and
click the subworkspace that you want to view.
Occupancy The subworkspace displays data in a tabular format and a graph format. To display the
subworkspaces, select the Occupancy workspace, right-click and select Workspace, and
click the subworkspace that you want to view.
Additional subworkspace:
v Drives drill down
Processor Value Unit This workspace provides PVU details by product, and PVU details by node. It includes
(PVU) details information such as node name, product, license name, last used date, try buy, release,
and level. If the Tivoli Storage Manager server is not a version 6.3 server, the workspace
is blank.
Replication details This workspace provides byte-by-byte replication details. It describes all of the replication
details such as node name, file space ID and name, version, start and end times, status,
complete stat, incomplete reason, estimated percent complete, estimated time remaining,
and estimated time to completion.
Replication status This workspace provides the replication status for a node without all the details that the
replication details workspace provides. It displays the node name, server, file space type,
name and ID, target server, and the number of files on the source and target servers.
Schedule This workspace provides details about client and server schedules. You can group the
data by node name, schedule name, or status to help in identifying any potential
problems. It displays information such as schedule name, node name, server name,
scheduled start, actual start, and the status of the schedule which can be success, missed,
or failed, along with any error or warning text.
Sessions This workspace provides a view of all the client sessions that are running on the specified
server. This workspace is useful for determining which clients are connected to the Tivoli
Storage Manager server and how much data has been sent or received. The workspace
also shows tape mount information which can give an indication about library and tape
usage.
Note: By default, historical data collection is not enabled for this workspace; it is used
more as a monitoring tool. You can modify the historical collection settings to enable this
data to be stored, but this type of data can cause the WAREHOUS database to grow very
large over time.
Storage pool This workspace provides you with detailed information about your storage pools. Tivoli
Storage Manager can contain multiple storage pools. These storage pools define the
methods and resources that are used to store data being backed up or archived to the
Tivoli Storage Manager server. The data displayed in this workspace includes storage pool
names, server name, device classes, total space, utilized space, total volumes used, percent
space used, disk space used, and deduplication savings. It also displays a graph with the
total space, total usage, and total volumes used.
Server This workspace provides the operational status of the Tivoli Storage Manager server.
These operations are measured in megabytes per operation. After they are reported, the
values are reset to zero; the counts reported for each operation are not cumulative
over time. You can view the following activities or status:
v What activities are taking time to complete?
v As the server migrates data or mounts storage onto devices, what are the possible
problem activities?
v The status of server-only activities.
The data that is displayed includes information such as server name, current disk storage
pool space, tape usage count, current database size, previous days information for client
operations, object count reclamation by byte and duration, migration by byte and
duration, backup by byte and duration.
Bar graphs are also provided to display server operation duration and server operation
byte counts.
Storage device This workspace provides you with the read and write error status of the storage devices.
This status helps you identify possible problems with any of your storage devices. Bar
chart graphs also display read and write error count.
Tape usage This workspace provides you with tape usage data for each client.
Tape volume This workspace provides the status of all tape storage devices. This information can help
you identify any storage devices that are near full capacity.
To view the available Tivoli Storage Manager monitoring workspaces, complete the
following steps:
1. Log in to Tivoli Enterprise Portal with the sysadmin user ID and password
using one of the following methods:
a. Start the Tivoli Enterprise Monitoring Services console:
v Run the CandleManage program by issuing the following commands:
cd /opt/tivoli/tsm/reporting/itm/bin
./CandleManage &
Tip: Some of these attribute groups have sub-workspaces that you can view
when you right-click the main attribute group. See the section on the overview
of the monitoring workspaces to learn more details about using the
workspaces.
6. The details of your selection are displayed in the workspace in the right panel
and in the bottom panel.
Related reference:
Types of information to monitor with Tivoli Enterprise Portal workspaces on
page 815
After you complete the installation and create and configure your Tivoli
Monitoring for Tivoli Storage Manager agent instance, you can view reports from
the Tivoli Integrated Portal.
To run the available Tivoli Storage Manager client and server reports, complete
these steps:
1. Log in to the Tivoli Storage Manager Tivoli Integrated Portal.
a. If the Tivoli Integrated Portal is not running, start it. For additional details,
see Starting and stopping the Tivoli Integrated Portal.
b. Open a supported web browser and enter the following address:
https://hostname:port/ibm/console, where port is the port number
specified when you installed the Tivoli Integrated Portal. The default port is
16311.
If you are using a remote system, you can access the Tivoli Integrated Portal
by entering the IP address or fully qualified host name of the remote
system. If there is a firewall, you must authenticate to the remote system.
c. The Tivoli Integrated Portal window opens. In the User ID field, enter the
Tivoli Integrated Portal user ID that was defined when you installed Tivoli
Monitoring for Tivoli Storage Manager. For example, tipadmin.
d. In the Password field, enter the Tivoli Integrated Portal password that you
defined in the installation wizard and click Log in.
Tip: Create a desktop shortcut, or bookmark in your browser for quick access
to the portal in the future.
2. On the left side of the window, expand and click Reporting > Common
Reporting.
3. In the Work with reports pane, click the Public Folders tab.
4. To work with Cognos reports, select IBM Tivoli Storage Manager Cognos
Reports.
5. To work with BIRT reports, select Tivoli Products > Tivoli Storage Manager.
The report name and descriptions are displayed in the Reports pane. Double-click
the report to open the parameter selections page, or use the icons at the top of the
reports listing. You can view reports in HTML, PDF, Excel, and CSV formats.
Items added from the package to your report are called report items. Report items
display as columns in list reports, and as rows and columns in cross-tab reports. In
charts, report items display as data markers and axis labels.
You can expand the scope of an existing report by inserting additional report
items, or you can focus on specific data by removing unnecessary report items.
If you frequently use items from different query subjects or dimensions in the
same reports, ask your modeler to organize these items into a folder or model
query subject and then to republish the relevant package. For example, if you use
the product code item in sales reports, the modeler can create a folder that contains
the product code item and the sales items you want.
IBM Cognos Business Intelligence includes many components that you can use, but
only the basic report tasks are documented here. For additional information
regarding Cognos you can visit the IBM Cognos Information Center at:
http://publib.boulder.ibm.com/infocenter/c8bi/v8r4m0/topic/
com.ibm.swg.im.cognos.wig_cr.8.4.0.doc/wig_cr_id262gtstd_c8_bi.html.
These Cognos reports are available in HTML, PDF, Microsoft Excel, XML, and CSV
(delimited text) formats. There are limitations when producing reports in Microsoft
Excel formats, such as timestamps not displaying. For a complete list of all
limitations see: Limitations when producing reports in Microsoft Excel format.
| You can customize the data that is displayed in your reports by specifying the
| parameter values that you want to include or exclude. After you run the report,
| the parameter values that you specified are displayed at the bottom.
| This list specifies the client reports that you can generate. The report descriptions
are described in Table 75 on page 822.
v Client activity status
v Client activity success rate
v Client backup currency
v Client backup status
v Client schedule success rate
v Client schedule status
v Client storage pool usage summary
v Client storage summary and details
v Client storage usage trends
v Current client occupancy summary
v Current storage pool summary
v Highest storage space usage
v Server database growth trends
v Server schedule status
v Server storage growth trends
v VE activity status
v VE backup type summary
v VE current occupancy summary
v Yesterday's missed and failed client schedules
Table 74. Report parameters
Parameter Description
Activity type Use this parameter to select the following client activities:
v Archive
v Backup
v Restore
v Retrieve
Date range Use this parameter to specify one of the following date ranges to display.
The default is All.
v All
v Date range (below)
v Today
v Yesterday
v The last 7 days
v The last 30 days
v The last 90 days
v The last 365 days
v The current week
v The current month
v The current year to date
v The last week
v The last month
v The last year
Servers Use this parameter to specify single or multiple servers.
Client node Use this parameter to specify a client from the server to report on. This
name parameter can also accept wildcard characters by using the percent
symbol (%). The default selects all the client nodes.
Summarization type Use this parameter to select how to group or summarize the data. You
can specify daily, hourly, weekly, monthly, quarterly, or yearly. The default is monthly.
Number of clients to display Use this parameter to specify the number of top clients that you
want to display in the report.
Table 75. Cognos status and trend reports (continued)
Report name Description Report folder
Client schedule status (Status reports) This report provides details about the results of client
schedules over a specified time period.
v Details include the node name, schedule name, completion status such as failed, missed,
severed, the scheduled date, and any error or warning messages.
v Data is displayed in a tabular table format.
v Failed schedules are highlighted in red and missed schedules are highlighted in yellow.
Client storage pool usage summary (Status reports) This report displays the amount of storage
pool space a client node is using on the server. For each selected client node, the report
displays the total space used on the server and the total space used per storage pool that
the node is assigned to.
v Ability to select specific servers.
v Ability to select specific client nodes by using the Search and select widget. For advanced
searching, expand the Options list. The default option that is selected is Starts with any of
these keywords. Case insensitive is selected by default. The result of the search shows up
in the Results list. Client nodes added to the Choices list are used when the report is run.
v Details include physical MB, logical MB, and reporting MB.
v Data is displayed in a tabular table format.
Tip: For performance reasons, run the report in the background as it is a long running task.
| Client storage summary and details (Status reports) This report provides details about how
| much storage space client nodes are currently using.
| v Details are grouped by server and domain.
| v Ability to sort on multiple columns in ascending or descending order.
| v Details for the client nodes include storage space used by each type of storage. These
| storage space types include disk and file storage, server storage, and tape storage, where
| server storage are virtual volumes used by the client node.
| v Data is displayed in a tabular table format, with totals at the bottom of every Tivoli
| Storage Manager server.
| Client storage usage trends (Trending reports) This report provides details about the storage
| usage of a client node over a specified time period.
| v Data can be summarized daily, weekly, monthly, quarterly, and yearly.
| v Details for the client nodes include storage space used by each type of storage. These
| storage space types include disk and file storage, server storage, and tape storage, where
| server storage are virtual volumes used by the client node.
| v Report shows one client for one server at a time.
| v Data is displayed in a line chart format, and a tabular table.
Server storage growth trends (Trending reports) This report provides details about server
growth over a specified time period.
v Data can be summarized daily, weekly, monthly, quarterly, and yearly.
v Details include date, disk, and file storage in MB, and tape usage counts.
v Data is displayed in a line chart and tabular table format.
| VE activity status (Status reports) This report provides details about virtual machine guest
| activity (backup, archive, restore, or retrieve) over a specified time period.
| v The report displays information about the success or failure of each activity for data
| center nodes.
| v It includes the data center node name, virtual machine name, number of objects failed,
| total kilobytes transferred, and more.
| v Data is displayed in a tabular table format.
| v Failed activities are highlighted in red.
| Important: Run this report on Tivoli Storage Manager servers that are at version 6.3.3 or
| later.
| VE backup type summary (Status reports) This report shows the number of incremental and
| full backups for each selected client node. The report is useful to determine which client
| node backups might be having problems when the backups are always full instead of
| incremental.
| v It includes the data center node name, virtual machine name, the number of full backups,
| and the number of incremental backups, over the specified amount of time.
| v Data is displayed in a tabular table format.
| Important: Run this report on Tivoli Storage Manager servers that are at version 6.3.3 or
| later.
| VE current occupancy summary (Status reports) This report provides current details about the
| storage occupancy that a VE guest operating system is using on the Tivoli Storage Manager
| server.
| v Details are grouped by data center node and virtual machine name. Details include file
| space information, reporting MB, physical MB, logical MB, and number of files.
| v Data is displayed in a tabular table format.
| v Data center node names are links that provide more details when clicked by linking to
| the VE Node Activity Status report to get current information about the activity of the VE
| on the Tivoli Storage Manager server.
| Important: Run this report on Tivoli Storage Manager servers that are at version 6.3.3 or
| later.
In Report Studio you can view data, create reports, change the appearance of
reports, and then use that data for comparison and analysis purposes.
1. Log in to the Tivoli Storage Manager Tivoli Integrated Portal.
a. If the Tivoli Integrated Portal is not running, start it. For additional details,
see Starting and stopping the Tivoli Integrated Portal.
b. Open a supported web browser and enter the following address:
https://hostname:port/ibm/console, where port is the port number
specified when you installed the Tivoli Integrated Portal. The default port is
16311.
If you are using a remote system, you can access the Tivoli Integrated Portal
by entering the IP address or fully qualified host name of the remote
system. If there is a firewall, you must authenticate to the remote system.
c. The Tivoli Integrated Portal window opens. In the User ID field, enter the
Tivoli Integrated Portal user ID that was defined when you installed Tivoli
Monitoring for Tivoli Storage Manager. For example, tipadmin.
d. In the Password field, enter the Tivoli Integrated Portal password that you
defined in the installation wizard and click Log in.
Tip: Create a desktop shortcut, or bookmark in your browser for quick access
to the portal in the future.
2. On the left side of the window, expand and click Reporting > Common
Reporting.
3. In the upper-right corner, click the Launch icon, and select Report Studio.
4. Select the Tivoli Storage Manager Cognos Reports package as your data
source.
5. Click Allow access to allow data to be written to your clipboard, and Report
Studio to access it.
6. Choose whether you want to create a report or template, or open an existing
report or template. To learn more about creating a custom report, see Creating a
custom Cognos report.
For additional information, visit the IBM Cognos Information Center at:
http://publib.boulder.ibm.com/infocenter/c8bi/v8r4m0/topic/
com.ibm.swg.im.cognos.wig_cr.8.4.0.doc/wig_cr_id262gtstd_c8_bi.html.
Creating a custom Cognos report
You can create custom Cognos reports by inserting items from the data source in to
an empty report. You can open an existing report, modify it, and save it with a
different name.
Complete these example steps to create a simple custom report that displays
details about your IBM Tivoli Storage Manager server databases:
1. Open the Report Studio portal application in a web browser and provide the
logon ID and password if prompted. See Opening the Cognos Report Studio
portal.
2. In the Welcome window, click the Create a new report or template, or from
the main menu, click File > New.
3. Click the blank icon.
4. From the Insertable Objects pane, click the Toolbox tab, and drag in a
container for your report values. For example, drag the list container over to
the report.
5. From the Insertable Objects pane, click the Source tab, and expand Tivoli
Storage Manager Cognos Reports > Consolidation View > Tivoli Storage
Manager Report Data > Key Metrics > Performance > Detailed.
6. A list of attribute groups are displayed. Expand any of the attribute groups to
display attributes you can use to build your report.
7. Drag any of the attributes into the list container to include this data in your
report. For example, from the Database attribute group, click and drag the
Server Name and Total Capacity GB attributes in to the list container
side-by-side.
8. Run the report. Click Run from the main menu, and select the format that you
want your report to display. For example, HTML or PDF.
9. To save the new report, click File > Save as.
Tip: To avoid naming conflicts, save all reports with unique report names.
You can create a folder in the Public Folders directory to store your new
reports. For example, create a folder that is called Server Reports for your
server reports.
10. Inside the directory that you want to save the report to, specify a unique
report name and click Save.
You can view the newly created report in Tivoli Common Reporting. The name of
the report is the name that you saved it as and it is in the folder where you saved
it. For example, if you created a report that is called Server Storage Details in the
Server Reports directory in the Public Folders directory, from Tivoli Common
Reporting, click Server Reports to find your report.
For more information, visit the IBM Cognos Information Center at:
http://publib.boulder.ibm.com/infocenter/c8bi/v8r4m0/topic/
com.ibm.swg.im.cognos.wig_cr.8.4.0.doc/wig_cr_id262gtstd_c8_bi.html.
Tip: You can select Run Options to configure options to run your reports. For
example, you can specify the format, and number of rows per page.
Note: If prompted, specify the fields that you want to display in the report
and click Finish.
4. From the drop-down list in the upper-right corner, click Keep this Version >
Email report.
5. Type the email address or addresses of the people that you want to receive
the report.
6. Optionally, click the Attach the report check box.
7. Click OK to complete the process.
Tip: Click the house icon in the upper-right corner to return to the previous
menu.
v Automatically schedule a report to run and email to recipients
1. Log on to the Tivoli Integrated Portal with the tipadmin ID and password
and click Reporting > Common Reporting > IBM Tivoli Storage Manager
Cognos Reports.
2. Navigate to the report that you want to schedule. For example, select the
Status Reports or Trending Reports folder to display a list of reports.
3. Click the small calendar icon that is located to the right of the report that
you want scheduled.
4. Type in the start dates, end dates, times, the days you want the report to
run, and so on.
5. Check the Override the default values check box to display further options.
6. Select the report format that you want.
7. Click the Send the report and a link to the report by email check box.
8. Click the Edit the options check box.
9. Type the email address or addresses of the people that you want to receive
this report.
10. Optionally, you can also click the Attach the report check box.
11. Click OK to complete the email process.
12. Click OK again to complete the scheduling process.
For additional information, visit the IBM Cognos Information Center at:
http://publib.boulder.ibm.com/infocenter/c8bi/v8r4m0/topic/
com.ibm.swg.im.cognos.wig_cr.8.4.0.doc/wig_cr_id262gtstd_c8_bi.html.
| To share Cognos reports, you must export them from one Administration Center
| instance and import them into another. Alternatively, you can use a stand-alone
| Tivoli Common Reporting instance to export and import Cognos reports.
| After a custom Cognos report is created, the report can be shared and used by
| other Tivoli Common Reporting instances. Tivoli Common Reporting can be a
| stand-alone instance or a component that is installed in the Administration Center.
| To share a report, you must export it to an XML format and then you can import it
| in to another Tivoli Common Reporting instance.
| You are now ready to import the report into any other Tivoli Common Reporting
| instance. For more information, see Importing a Cognos report.
| After you export Cognos reports, you can distribute them to be used by other
| teams and organizations.
| You can import Cognos reports in to any supported Tivoli Common Reporting
| instance. Tivoli Common Reporting can be a stand-alone instance or a component
| that is installed in the Administration Center. To complete the task of importing
| Cognos reports, complete the following steps:
| 1. In a text editor, open the report file that you want to import and copy the XML
| code to the clipboard. For more information about exporting a report, see
| Exporting a Cognos report on page 830
| 2. Log on to the Tivoli Integrated Portal.
| 3. Expand Reporting in the navigation tree, and select Common Reporting to
| open the reporting workspace.
| 4. Click Launch > Report Studio. Report Studio is opened in a new web browser.
| 5. In the Select a package (Navigate) window, select the IBM Tivoli Storage
| Manager Cognos Reports package. If you open Report Studio from inside a
| package, the window is not displayed.
| 6. In Report Studio, click Tools > Open Report from Clipboard.
| Attention: An error is displayed if the XML code is not copied to the
| clipboard. You might be prompted to paste the XML code into a window before
| the report is opened and displayed.
| 7. Click File > Save. You can create a directory for the report to be saved in or
| you can choose an existing directory.
| 8. Save the report with a name that describes the report. This name is used as the
| display name for your report in Tivoli Common Reporting.
| 9. Exit Report Studio.
| When you refresh the Tivoli Integrated Portal web browser window, the Cognos
| report you imported is available.
| You can import the packaged Tivoli Monitoring for Tivoli Storage Manager Cognos
| reports, or you can import your own custom report. The packaged reports are the
| reports that come with the Tivoli Monitoring for Tivoli Storage Manager software.
| To use a stand-alone Tivoli Common Reporting instance to view historical reports,
| you must install the DB2 client and configure it to set up a connection to the
| WAREHOUS database.
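If you prefer the DB2 command line processor to the graphical configuration tools, cataloging the remote WAREHOUS database from the DB2 client takes a form like the following sketch; the node alias, host name, and port are placeholders for your environment, and itmuser is the warehouse user ID named later in this section:
db2 catalog tcpip node tsmnode remote warehouse.example.com server 50000
db2 catalog database WAREHOUS at node tsmnode
db2 connect to WAREHOUS user itmuser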
| Importing Cognos reports in to a Tivoli Common Reporting instance:
| The Cognos reports and data model are bundled together in the Administration
| Center software package. The data model allows for the connection between the
| Tivoli Common Reporting user interface and the DB2 database. The Cognos
| reports and data model require Tivoli Common Reporting V2.1 and Cognos V8.4.1
| and a connection to DB2. For Cognos to communicate with DB2, the DB2 client
| libraries for Cognos must be installed.
| You must import the Tivoli Storage Manager packaged Cognos reports into a stand-alone
| Tivoli Common Reporting environment. After this is complete, you can import a
| custom Cognos report. For more information, see Importing a Cognos report on
| page 831.
| Complete the following steps to import the Tivoli Storage Manager packaged
| Cognos reports into a stand-alone Tivoli Common Reporting instance:
| 1. Log on to the Tivoli Common Reporting system.
| 2. Obtain the TSM_Cognos.zip file, from the Administration Center installation
| media (DVD or downloaded package), in the COI\PackageSteps\BirtReports\
| FILES directory. This compressed file contains the Cognos reports and data
| model.
| 3. From a command prompt, change directories to the following location:
| /opt/IBM/tivoli/tipv2Components/TCRComponent/bin/
| 4. Import Cognos reports by issuing the following command:
| ./trcmd.sh -import -bulk path/TSM_Cognos.zip -username tipadmin
| -password password
| where path refers to the path to the compressed file from Step 2. Replace
| password with your password for tipadmin.
| If the command was successful, the following message is displayed:
| CTGTRQ092I Import operation successfully performed
| If the command failed, complete the following steps to restart the Tivoli
| Common Reporting server, and then try the trcmd command again:
| a. Open a command prompt window, and change directories to
| /opt/IBM/tivoli/tipv2Components/TCRComponent/bin.
| b. Stop the server by issuing the following command:
| ./stopTCRserver
| c. Start the server by issuing the following command:
| ./startTCRserver
| In order for the Cognos reports to run successfully, Tivoli Common Reporting must
| be configured to connect to the WAREHOUS database. Complete the following
| steps:
| 1. Install and configure the DB2 client by completing the steps in one of the
| following topics, based on your operating system:
| Installing and configuring the DB2 client on AIX and Linux
| 2. Configure the data source from within Cognos by following the steps in
| Creating a data source by using Cognos Administration on page 835.
| Tip: Use the Search the network option to find the WAREHOUS database on
| another system.
| 11. In the Add Database Confirmation window, click Test Connection and enter
| the user name and password for the database. The user name is itmuser.
| 12. For Cognos to communicate with DB2, the DB2 libraries must be in the path.
| Open the /opt/IBM/tivoli/tipv2Components/TCRComponent/bin/
| startTCRserver.sh file in a text editor and add the following call:
| # Add call to db2profile to set DB2 library path for Cognos
| . /home/db2inst1/sqllib/db2profile
| 13. Recycle the Tivoli Common Reporting server by completing the following
| steps:
| a. Open a command prompt window, and go to /opt/IBM/tivoli/
| tipv2Components/TCRComponent/bin.
| b. Stop the server by issuing the following command:
| ./stopTCRserver
| c. Start the server by issuing the following command:
| ./startTCRserver
| Related tasks:
| Importing Cognos reports in to a Tivoli Common Reporting instance on page 832
| Installing and configuring the DB2 client on Windows
| Creating a data source by using Cognos Administration on page 835
| Tip: Use the Search the network option to find the WAREHOUS database on
| another system.
| 9. In the Add Database Confirmation window, click Test Connection and enter
| the user name and password for the database. The user name is ITMUser.
| 10. Recycle the Tivoli Common Reporting server by completing the following
| steps:
| a. Open a command window, and go to install_dir\tipv2Components\
| TCRComponent\bin, where install_dir is the path where the Tivoli Common
| Reporting instance is located. The default path is C:\IBM\tivoli.
| b. Stop the server by issuing the following command:
| stopTCRserver server1
| c. Start the server by issuing the following command:
| startTCRserver server1
| Related tasks:
| Importing Cognos reports in to a Tivoli Common Reporting instance on page 832
| Installing and configuring the DB2 client on AIX and Linux on page 833
| Creating a data source by using Cognos Administration on page 835
|
| You must create and configure a new data source to allow Tivoli Common
| Reporting to access the WAREHOUS database when you run Cognos reports.
| The data source defines the connection between Cognos and the WAREHOUS
| database. After you import Cognos reports in to a stand-alone Tivoli Common
| Reporting environment, you must create a data source. Create and configure a data
| source by completing the following steps:
| 1. Open a web browser and log on to Tivoli Common Reporting.
| 2. Select Reporting > Common Reporting.
| 3. Click Launch, and select Administration.
| 4. Click the Configuration tab.
| 5. Create a data source by clicking the New Data Source icon, in the upper right
| of the Configuration tabbed window.
| 6. For the name of the data source, enter TDW and click Next.
| 7. For the type, select DB2. Leave the isolation level set to Use the default
| object gateway, and click Next.
| 8. For the DB2 database name, enter WAREHOUS.
| 9. Click Signon and select the Password check box.
| 10. In the Create a signon that the Everyone group can use section, enter the user
| ID itmuser.
| 11. Enter the itmuser password and confirm the password in the fields.
| 12. Click Test the connection, and then click Test to test the connection. The
| Status column in the table displays Succeeded.
| 13. Click Close to close the test connection and click Close again.
| 14. Click Finish to complete the new data source wizard.
| Related tasks:
| Importing Cognos reports in to a Tivoli Common Reporting instance on page 832
| Installing and configuring the DB2 client on AIX and Linux on page 833
| Installing and configuring the DB2 client on Windows on page 834
These reports are generated by the Tivoli Monitoring for Tivoli Storage Manager
agent, and are available in HTML, PDF, Microsoft Excel, XML, and CSV (delimited
text) formats.
You can customize the data that gets displayed in your reports by specifying the
values that you want in the On-Demand report parameters window.
This list specifies the client reports that you can run. The report descriptions are
described in Table 77 on page 837.
v Client activity details
v Client activity history
v Client backup currency
v Client backup missed files
v Client storage summary details
v Client storage pool media details
v Client storage summary
v Client top activity
v Node replication details
v Node replication growth
v Node replication summary
v Schedule status
Table 76. On-demand BIRT report parameters
Parameter Description
Activity type This parameter is used to select the following different client activities:
v Backup (incremental only)
v Archive
v Restore
v Retrieve
Report period This parameter is used to select one of the following date ranges to display.
v All
v Today
v Yesterday
v The last 24 hours
v The last 7 days
v The last 30 days
v The last 90 days
v The last 365 days
v The current week
v The last month
v The last 3 months
v Year to date
Start date and end date This parameter is used to override the report period by choosing a
start date and an end date.
Server name This parameter is used to select which server to report on.
Client node name This parameter is used to supply a client from the server or a wildcard
(% or A%) to report on.
Summarization type This parameter is used to select how to group or summarize the data by
either daily (default), hourly, weekly, monthly, quarterly, or yearly.
Number of clients to display This parameter specifies the number of top clients that you
want to see in the report.
Client backup currency This report displays only scheduled backups and does not display
manual backups. If a node runs manual backups daily, this report shows
that the node has never run a backup.
Client backup missed files This report lists the details and reasons that a file was not backed
up for a specific client. The report can be run for a specific date range, server, or client.
This data is displayed in a tabular format.
Client storage summary details This report provides a short summary of the client activity
details. The report can be limited to specific servers or clients by specifying the
applicable parameters.
Client storage pool media details This report provides average activity usage per client node.
It includes information about storage pools: copy, primary, disk, and tape. The data is
displayed in tabular columns, with totals at the bottom.
Client storage summary This report summarizes the growth or reduction in client storage over
a specified time period, including average usage, and possible maximums.
Client top activity This report provides a summary of the largest and longest running
activities. You can specify particular dates and server names by specifying the applicable
parameters. For example, you can display the number of users who run the most backups
(incremental only), archives, restores, or retrieves on the Tivoli Storage Manager server.
It does not include information about image or NDMP backups.
Node replication details This report provides node replication details, for a specified server
or node, for the specific date range.
Node replication growth This report displays two line charts; one for total files replicated,
and another for total MB replicated.
Node replication summary This report provides details of the node replication for the specific
date range for the specified servers and nodes.
Schedule status This report provides summary and detailed data on the status of the
schedules within a specific date range. The report provides a pie chart
with totals, and four sub-tables to break out the summary information in
the pie chart. The information includes jobs that ran, jobs that failed,
and jobs that ended with warnings.
Related tasks:
Viewing historical data and running reports on page 819
Related reference:
BIRT Server reports
These reports are generated by the Tivoli Monitoring for Tivoli Storage Manager
agent and are available in HTML, PDF, PostScript, and Microsoft Excel format.
This list specifies the server reports that you can view and run. The reports
are described in Table 79 on page 839.
v Activity log
v Server activity details
v Server database details
v Server resource usage
v Server throughput
v Server throughput (pre version 6.3 agents)
v Tape volume capacity analysis
Depending on the type of report you want to run, and the parameters available for
that report, you can choose the parameters in the On-Demand Report Parameters
window to customize how the data is displayed in the reports. Table 78 describes
these parameters.
Table 78. Reporting parameters

Activity type
    This parameter is used to select the following server activity:
    v Database backup
Report period
    This parameter is used to select one of the following date ranges to display:
    v All
    v Today
    v Yesterday
    v The last 24 hours
    v The last 7 days
    v The last 30 days
    v The last 90 days
    v The last 365 days
    v The current week
    v The last month
    v The last 3 months
    v Year to date
Start date and end date
    This parameter is used to overwrite the report period by choosing a start date
    and an end date.
Server name
    This parameter is used to select which server to report on.
Related tasks:
Viewing historical data and running reports on page 819
Related reference:
BIRT Client reports on page 835
Modifying the IBM Tivoli Monitoring environment file to customize
agent data collection
You can modify the IBM Tivoli Monitoring environment file to customize the data
that the agent collects from the Tivoli Storage Manager server.
When you create a Tivoli Storage Manager monitoring agent instance in the Tivoli
Enterprise Monitoring server application, a new environment file is created. You
can modify this file to change the behavior of the monitoring agent.
Many variables can be configured, but take care not to degrade the performance
of the Tivoli Storage Manager server by setting variables incorrectly.
You can use any text editor to edit the environment file. The query-control
variables that follow accept these values:
v Valid values are 0 and 1.
v A value of 0 disables the query.
v A value of 1 enables the query.
v An invalid value disables the query.
KSK_STGDEV_ON, Default Value = 1
Queries the Tivoli Storage Manager server for the storage device data.
KSK_SUMM_ON, Default Value = 1
Queries the Tivoli Storage Manager server for the activity summary data.
KSK_TAPEUSG_ON, Default Value = 1
Queries the Tivoli Storage Manager server for the tape usage data.
KSK_TAPEVOL_ON, Default Value = 1
Queries the Tivoli Storage Manager server for the tape volume data.
KSK_TRACE, Default Value = 0
Specify a value of 1 to allow the agent to create a log file indicating the
attempts to query both the Tivoli Storage Manager server and the DERBY
pre-fetch data cache.
Trace files are stored in the installation_directory/itm/logs directory.
Trace files are named with the instance_name, port_number, and date_time values
concatenated with no separators, followed by the .txt extension.
For example: pompeii2150020110620101220000.txt, where instance name =
pompeii2, port number = 1500, and date = June 20, 2011 at 10:12 a.m.
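For illustration only, assuming the environment file uses simple NAME=value
entries (check the file that was created for your agent instance before editing
it), disabling the tape volume query and enabling tracing might look like this:
KSK_TAPEVOL_ON=0
KSK_TRACE=1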
There are other variables included in this environment file that can affect the
performance of the server. See the IBM Tivoli Storage Manager Performance Tuning
Guide for details of these environment variables.
After Tivoli Monitoring for Tivoli Storage Manager is installed and the agent
instance is created and configured, the agent begins collecting data. The data
collected is not written directly to the database, but is first stored as temporary
files on the host system where the agent is running. Over time, the data gets
moved to the DB2 database named WAREHOUS, where it is permanently stored
and used to create reports by the Tivoli Integrated Portal Common Reporting
function.
If you modified your configuration, or customized any reports, you might need to
back up and restore those modified configurations.
If a system failure occurs that affects your data and configuration modifications,
you must first reinstall and configure Tivoli Monitoring for Tivoli Storage Manager,
then restore the backed-up data and configurations.
These are the tasks that you must perform to back up your system, ensure that
your backups are successful, and then restore your system if necessary.
v Backing up the system includes these tasks:
  - Installing the Tivoli Storage Manager client
  - Backing up the IBM Tivoli Monitoring server
  - Configuring the system to back up the DB2 WAREHOUS database, and performing
    backups
  - Validating the success of the backups
The following scenario outlines the tasks that must be completed to back up your
system, and verify that your backups are successful.
1. Install the Tivoli Storage Manager client (both 32-bit and 64-bit runtimes):
v Installing Tivoli Storage Manager clients
2. Back up the IBM Tivoli Monitoring and Tivoli Enterprise Monitoring server
data by using Tivoli Storage Manager client:
v Backing up IBM Tivoli Monitoring, Tivoli Enterprise Portal server, and
agent configuration settings on page 849
3. Configure the system to back up the DB2 WAREHOUS database, and then
perform backups:
v Backing up the DB2 WAREHOUS database on AIX and Linux systems on
page 844
4. Validate the success of your backups:
v Verifying and deleting backups of Tivoli Monitoring for Tivoli Storage
Manager on page 847
5. Export any customized Tivoli Enterprise Portal workspaces and queries to the
file system and back them up using the Tivoli Storage Manager client:
v Exporting and importing Tivoli Enterprise Portal workspaces and queries
on page 850
6. Back up any customized configuration files for the storage agent by using the
Tivoli Storage Manager client:
v Backing up IBM Tivoli Monitoring, Tivoli Enterprise Portal server, and
agent configuration settings on page 849
7. Export any customized Cognos reports to the file system and back them up by
using the Tivoli Storage Manager client:
v Exporting customized Cognos reports on page 851
Related tasks:
Backing up the DB2 WAREHOUS database on AIX and Linux systems
The following steps describe how you can back up the historical data that is
gathered by Tivoli Monitoring for Tivoli Storage Manager and stored in the DB2
WAREHOUS database. They also describe how to use the Tivoli Storage Manager
server as the backup repository. You can also back up the database to other media
such as a hard disk drive. Learn more from the IBM DB2 Database Information
Center at: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/
index.jsp?topic=/com.ibm.db2.luw.admin.ha.doc/doc/c0052073.html.
1. To back up your database to a Tivoli Storage Manager server, install the Tivoli
Storage Manager backup-archive client on the same system where IBM Tivoli
Monitoring is installed. See Installing Tivoli Storage Manager clients in the
Backup-Archive Clients Installation and User's Guide for additional information.
2. From the Tivoli Storage Manager server, create a management class for the
DB2 WAREHOUS backups and log files.
Notes:
a. You can use the Administration Center to create the management class, or
you can use the DEFINE MGMTCLASS command.
b. The management class in these examples is called
WAREHOUS_BACKUPS.
3. In the backup and archive copy groups of the management class you created,
apply these settings:
Note: You can use the Administration Center to apply these settings, or you
can use the DEFINE COPYGROUP or UPDATE COPYGROUP commands.
a. Apply these settings to the backup copy group:
verexists=1
verdeleted=0
retextra=0
retonly=0
b. Apply this setting to the archive copy group:
retver=nolimit
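For example, assuming a hypothetical policy domain named WAREHOUS_DOM with a
policy set named STANDARD (substitute the names used in your environment, and
remember to activate the policy set afterward), commands similar to the
following might apply these settings:
update copygroup warehous_dom standard warehous_backups standard type=backup verexists=1 verdeleted=0 retextra=0 retonly=0
update copygroup warehous_dom standard warehous_backups standard type=archive retver=nolimit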
4. Register a node for the DB2 backup client, and note the node name and
password for later use.
register node node_name password domain=domain_name backdelete=yes
5. Log on with the root user ID to the system where Tivoli Monitoring for Tivoli
Storage Manager is installed, and create the dsm.sys file in the
opt/tivoli/tsm/client/api/bin/ directory. Add the following statements to
the file:
servername myserver
commmethod tcpip
tcpport 1500
tcpserveraddress myaddress.mycompany.com
passwordaccess generate
nodename mynode
tcpclientaddress 11.22.33.44
*This is the include list that binds the mgmtclass to backup and logs files
Where my_server_name is the same as the server name in the dsm.sys file.
7. Log on to DB2 using the DB2 instance ID, which by default is db2inst1.
Remain logged on with the DB2 instance ID to perform most of the remaining
steps, except where notated:
su - db2inst1
Tip: This user ID and password were defined when IBM Tivoli Monitoring
for Tivoli Storage Manager was installed and configured on the system.
8. Set up the environment variables that are needed by the Tivoli Storage
Manager client by editing the instance profile /home/db2inst1/.profile, and
adding these lines:
export DSMI_DIR=/opt/tivoli/tsm/client/api/bin
export DSMI_CONFIG=/home/db2inst1/tsm/dsm.opt
export DSMI_LOG=/home/db2inst1/tsm
Notes:
v The DSMI_DIR variable must point to the API client installation directory.
v The DSMI_CONFIG variable must be set to the location of the dsm.opt client
options file. The dsm.sys file that you created in Step 5 on page 844 resides
in the API installation directory: opt/tivoli/tsm/client/api/bin/dsm.sys
v The DSMI_LOG variable specifies the logging directory. The default directory
is /home/db2inst1/tsm.
9. From the current open shell, source the /home/db2inst1/.profile to add the
DSMI_XXXX variables to its environment:
. /home/db2inst1/.profile
10. With the root user ID, start the Manage Tivoli Monitoring Services console,
which is also commonly referred to as CandleManage, /opt/tivoli/tsm/
reporting/itm/bin/CandleManage, and stop all IBM Tivoli Monitoring agents
and services in this order:
a. Tivoli Storage Manager agents
b. Summarization and Pruning agent
c. Warehouse Proxy agent
d. Tivoli Enterprise Portal server
e. Tivoli Enterprise Monitoring server
11. Determine whether there are any active application connections by issuing this
command:
db2 list applications for db warehouse
12. If there are active connections, stop them by issuing the following command:
db2 force applications all
13. To complete the configuration of the Tivoli Storage Manager client, restart
DB2:
db2stop
db2start
14. With the root User ID, source the db2 instance profile to apply the
DSMI_XXXX environment variables:
. /home/db2inst1/.profile
15. With the root User ID, set the Tivoli Storage Manager password with this
command:
/opt/tivoli/tsm/reporting/db2/adsm/dsmapipw
When prompted, specify the password that you used when you registered the
node in Step 4 on page 844.
16. While logged in using the db2inst1 ID, confirm that the password was
correctly set by issuing this command:
Important: Perform this step using the db2inst1 user ID, because the
/home/db2inst1/tsm/dsierror.log file is owned by the first ID to issue this
command.
db2adutl query
If the command returns a message that states that no db2 objects are found,
you successfully set the password.
17. Optional: You can check the activity log on the Tivoli Storage Manager server
to confirm that the node successfully authenticated when you ran the
db2adutl command.
18. Optional: You can check the /home/db2inst1/tsm directory for the
dsierror.log file to ensure that it is owned by the db2inst1 user ID, and
investigate the log file for any potential errors.
19. Configure DB2 to roll forward:
db2 update db cfg for WAREHOUS using logarchmeth1 tsm
20. Configure the database to use the management class that you created in Step 2
on page 844.
db2 update db cfg for WAREHOUS using tsm_MGMTCLASS WAREHOUS_BACKUPS
21. Set TRACKMOD to ON by using this command:
db2 update db cfg for WAREHOUS using TRACKMOD ON
a. If you see the SQL1363W message displayed in response to these
commands, one or more of the parameters submitted for modification
were not dynamically changed, so issue this command:
db2 force applications all
b. Issue this command to ensure that the settings for LOGARCHMETH1,
TSM_MGMTCLASS, and TRACKMOD have been updated:
db2 get db cfg for warehous
22. Perform a full offline backup of the database:
db2 backup db WAREHOUS use tsm
23. With the root user ID, restart the Tivoli Monitoring for Tivoli Storage Manager
services using the Manage Tivoli Monitoring Services console. Start all the
agents and services in this order:
a. Tivoli Storage Manager agents
b. Summarization and Pruning agent
c. Warehouse Proxy agent
d. Tivoli Enterprise Portal server
e. Tivoli Enterprise Monitoring server
Tip: Specify the keyword delta to ensure that the incremental backups are
not cumulative. This reduces the size of the backups and the amount of time
each backup takes to run. If you want your incremental backups to be
cumulative, do not specify the keyword delta. This increases the size of the
backups, but reduces the number of incremental backups required to perform
a restore. If your backups do not take much time or storage space, you might
choose to only perform full backups, which would only require a single
backup to restore.
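For illustration only (how often you run each form is up to you), the two forms
of online incremental backup are invoked as follows:
v Non-cumulative (delta) incremental backup:
db2 backup db warehous online incremental delta use tsm
v Cumulative incremental backup:
db2 backup db warehous online incremental use tsm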
26. After you completed a full set of incremental backups, perform a full backup:
db2 backup db warehous online use tsm
Retrieving INCREMENTAL TABLESPACE BACKUP information.
No INCREMENTAL TABLESPACE BACKUP images found for WAREHOUS
Tip: The db2adutl utility uses the keyword delta to mean a non-cumulative,
incremental backup.
v If you perform cumulative incremental backups, you can issue this command
to retain the most recent backup:
db2adutl delete full incremental keep 1 db warehous
You can back up the entire contents of the repository directories, and the agent
configuration file, using an application such as the Tivoli Storage Manager client.
The monitoring agent must be stopped for the duration of the backup process.
Failing to stop the agent might result in file-in-use errors and an internally
inconsistent snapshot of the data. The agent can be restarted after the backup is
complete.
Complete these steps to back up the IBM Tivoli Monitoring and Tivoli Enterprise
Portal server configuration settings:
1. Back up the Derby database cache that is stored in a directory named DERBY.
This directory is created by the monitoring agent on the system where the
agent runs. If there are multiple monitoring agents installed on one system,
they all use this directory.
The default directory is:
/opt/tivoli/tsm/reporting/itm/tables/DERBY
Tip: If the monitoring agent is started from a command shell, the DERBY
directory is created in the directory where the agent is started.
2. Back up the collection of binary files that are created by the monitoring agent.
The system where these files reside depends on the collection location that is
specified in the historical settings for the Tivoli Enterprise Portal server. See the
configuration steps for more information about accessing these settings.
v TEMA binary files are kept on the monitoring agent system in the following
directory:
/opt/tivoli/tsm/reporting/itm/aix*/sk/hist/agent_instance_name
Exporting and importing Tivoli Enterprise Portal workspaces and
queries
If you modified the Tivoli Enterprise Portal workspaces, or added workspaces after
installation, you can export them to an .xml file, which can be backed up and used
to perform a restore if necessary.
Complete these steps to export and import the workspaces and queries:
1. Log in to the Tivoli Enterprise Portal client with the sysadmin user ID to
modify the authority that is necessary to export and import the workspaces and
queries.
2. From the main menu click Edit > Administer Users.
3. Select the SYSADMIN user ID, and in the Authorities pane, select Workspace
Administration.
4. Select the Workspace Administration Mode check box, and click OK.
Note: Ensure that the Workspace Administration Mode and Workspace Author
Mode check boxes are selected.
5. Export the workspaces to a file. From a shell or command window, navigate to
the directory containing the tacmd command and issue this command.
cd /opt/tivoli/tsm/reporting/itm/bin
./tacmd exportworkspaces -t sk -x workspaces_output_filename -u sysadmin
-p sysadmin_password -f
Note: The tacmd file is located in the bin directory where you installed IBM
Tivoli Monitoring.
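A hypothetical command for exporting the queries, mirroring the import command
shown later in this procedure, might look like the following; verify the options
against your IBM Tivoli Monitoring documentation:
./tacmd exportqueries -x queries_output_filename -u sysadmin -p sysadmin_password -f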
After exporting the queries to the two .xml output files, you can back them up
using a backup utility such as the Tivoli Storage Manager client.
Import the workspaces and queries with the following commands, after you have
reinstalled and configured your Tivoli Monitoring for Tivoli Storage Manager
system:
./tacmd importworkspaces -x workspaces_output_filename -u sysadmin -p
sysadmin_password -f
./tacmd importqueries -x queries_output_filename -u sysadmin -p
sysadmin_password -f
Related tasks:
Exporting customized Cognos reports on page 851
Related information:
IBM DB2 Data recovery
After you have exported your reports to a file, ensure that they are backed up and
restored. Validate that you can back up and restore the data.
Related tasks:
Restoring Tivoli Monitoring for Tivoli Storage Manager on page 852
Related information:
IBM DB2 Data recovery
Complete these steps to export any customized BIRT reports to a .zip file:
1. Log on to the system where Tivoli Integrated Portal is installed and open a
command prompt.
2. Navigate to the applicable directory:
v If you are exporting from version 6.3 to version 6.3:
/opt/IBM/tivoli/tipv2Components/TCRComponent/bin
v If you are exporting from version 6.2 to version 6.3:
installation_directory/ac/products/tcr/bin
v If you are exporting from version 6.1 to version 6.3:
installation_directory/ac/bin
3. To obtain a list of all reports, issue this command:
trcmd.sh -list reports
Your output should look similar to this:
"/TSMReports/TSM_server_tape_volume_capacity"
"/Custom Reports/TSM_client_storage_details_test"
"/TSMReports/TSM_client_activity_details"
"/TSMReports/TSM_client_backup_currency"
"/TSMReports/TSM_server_database_details"
"/TSMReports/TSM_client_backup_missed_files"
"/TSMReports/TSM_client_schedule_status"
"/Custom Reports/TSM_client_top_activity_test"
"/TSMReports/TSM_client_storage"
"/TSMReports/TSM_server_Throughput"
"/TSMReports/TSM_server_activity_details"
"/TivoliProducts/TCR/Overview"
"/TSMReports/TSM_server_resource_usage"
"/TSMReports/TSM_client_top_activity"
"/TSMReports/TSM_client_activity_history"
4. Issue this command, on one line, to export the .zip file to your home directory.
Specify the names of the reports to be exported, within quotation marks, and a
name for the output file. You can also specify a different directory, if you
prefer:
Tip: Do not specify report names unless they have been added or customized.
Specifying the names of unmodified reports results in overwriting the installed
version 6.3 reports of the same name when the file is imported.
trcmd.sh -export -bulk /home/user1/customized_reports.zip -reports
"/Custom Reports/TSM_client_storage_details_test"
"/Custom Reports/TSM_client_top_activity_test"
After you have exported your reports to a .zip file, ensure that they are backed up,
and perform a restore to validate that you have a successful backup.
This scenario outlines the tasks required to restore your Tivoli Monitoring for
Tivoli Storage Manager system by using your backups.
1. Reinstall and configure Tivoli Monitoring for Tivoli Storage Manager: Installing
Tivoli Monitoring for Tivoli Storage Manager.
2. Restore your DB2 WAREHOUS database from backup: Restoring backups of
Tivoli Monitoring for Tivoli Storage Manager on page 853.
3. Restore your IBM Tivoli Monitoring, Tivoli Enterprise Portal server, and agent
configuration files from backup: Restoring IBM Tivoli Monitoring, Tivoli
Enterprise Portal server, and agent configuration settings on page 855.
4. Import any customized Cognos reports: Importing customized Cognos
reports on page 856.
5. Import any customized BIRT reports: Importing customized BIRT reports on
page 856.
Related tasks:
Restoring backups of Tivoli Monitoring for Tivoli Storage Manager on page 853
This procedure assumes that the system where Tivoli Monitoring for Tivoli Storage
Manager was installed has been lost. Before you can perform a restore from
backups, you must reinstall and configure Tivoli Monitoring for Tivoli Storage
Manager and the Tivoli Storage Manager client.
To learn more about reinstalling see Installing Tivoli Monitoring for Tivoli Storage
Manager, and Installing the Tivoli Storage Manager backup-archive clients, in the
Installation Guide.
1. To restore the DB2 WAREHOUS database, you must first stop all Tivoli
Monitoring for Tivoli Storage Manager agents and services. From the Manage
Tivoli Monitoring Services console, which is also referred to as CandleManage,
stop these agents and services in this order:
a. Tivoli Storage Manager agents
b. Summarization and Pruning agent
c. Warehouse Proxy agent
d. Tivoli Enterprise Portal server
e. Tivoli Enterprise Monitoring server
2. From the DB2 command prompt, connect to the WAREHOUS database:
su - db2inst1
Note: The remaining steps must be performed using the DB2 instance user ID,
which by default is db2inst1. This user ID and password were defined when
you installed Tivoli Monitoring for Tivoli Storage Manager.
3. Determine if there are any existing application connections by issuing this
command:
db2 list applications for db warehous
4. Stop the active connections by issuing this command:
db2 force applications all
5. Obtain a list of all available full and incremental backups by using the db2adutl
utility. To do this, issue the following command:
db2adutl query full db warehous
The output lists the available full, delta, and incremental backups:
Query for database WAREHOUS
6. To perform a restore, you must issue a restore command for each backup
involved in the restore. DB2 requires configuration information that is
contained in your most recent backup, therefore you must restore it first before
proceeding to restore the entire series.
For example, if you perform daily backups with #7 being the most recent
backup and #1 being the oldest backup, restore backup #7 first, followed by
backup #1, #2, #3, #4, #5, #6, and then #7 again.
Table 80. Backup scenario: restore order for backups
Backup # Day Type of backup Restore order
7 Sunday, December 31 Incremental 1st
1 Monday, December 25 Full 2nd
2 Tuesday, December 26 Incremental 3rd
3 Wednesday, December 27 Incremental 4th
4 Thursday, December 28 Incremental 5th
5 Friday, December 29 Incremental 6th
6 Saturday, December 30 Incremental 7th
7 Sunday, December 31 Incremental 8th
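As a sketch only, with hypothetical timestamps, a manual restore of the chain in
Table 80 issues one restore command per image, each with the incremental keyword,
in the order shown:
db2 restore db warehous incremental use tsm taken at 20111231100000
db2 restore db warehous incremental use tsm taken at 20111225100000
db2 restore db warehous incremental use tsm taken at 20111226100000
(repeat for backups 3 through 6, oldest to newest)
db2 restore db warehous incremental use tsm taken at 20111231100000
Alternatively, DB2 can resolve the chain for you if you specify incremental
automatic on a single restore command for the most recent image.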
7. If the most recent backup completed was a full backup, you can restore only
that backup without having to restore the whole series of incremental backups.
For example:
db2 restore database warehous use tsm taken at 20101229110234
8. Because the backups were configured for rollforward recovery, you must
complete the restore process with the rollforward command:
db2 rollforward database warehous to end of logs and complete
After completing this restore procedure, perform a full, offline backup before
starting the IBM Tivoli Monitoring services and agents.
Related tasks:
Backing up the DB2 WAREHOUS database on AIX and Linux systems on page
844
Related information:
IBM DB2 Data recovery
Complete these steps to restore the IBM Tivoli Monitoring and Tivoli Enterprise
Portal repository directories and the agent configuration files:
1. Restore the Derby database that you backed up to the directory that was
created by the monitoring agent on the system where the agent runs. If there
are multiple monitoring agents installed on one system, they all use this
directory. The default directory is:
/opt/tivoli/tsm/reporting/itm/tables/DERBY
Tip: If the monitoring agent is started from a command shell, the DERBY
directory is created in the current directory from where it is started.
2. Restore the collection of binary files that were created by the monitoring agent
to their directories. The system where these files reside depends on the
collection location that is specified in the historical settings for the Tivoli
Enterprise Portal server.
v TEMA binary files are restored to the monitoring agent system in the
following directory:
/opt/tivoli/tsm/reporting/itm/aix*/sk/hist/agent_instance_name
Complete these steps to import a Cognos .zip package file that has been restored
from backup:
1. Log in to the Tivoli Integrated Portal.
2. Expand the Reporting item in the navigation tree and select Common
Reporting to open the reporting workspace.
3. Copy the restored Cognos .zip package file into the appropriate directory:
/opt/IBM/tivoli/tipv2Components/TCRComponent/cognos/deployment
4. Click Launch > Administration. This switches to a tabbed workspace.
5. Select the Configuration tab, and then select Content Administration in the
box on the left.
6. Click the New Import icon on the Administration toolbar.
7. Start the New Import wizard.
8. Click Refresh in the upper-right corner until you see the final status of the
import.
Related tasks:
Exporting customized Cognos reports on page 851
Related information:
IBM DB2 Data recovery
Before you can import the previously exported BIRT reports, you must remove the
reportdata.xml file from the .zip file. You can either use a tool that allows you
to remove the file without extracting the files, or you can extract the files,
remove the reportdata.xml file, and then compress the files in the directory into
a .zip file again.
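A hypothetical import command, mirroring the export command used earlier, might
resemble the following; confirm the exact options for your version of Tivoli
Common Reporting:
trcmd.sh -import -bulk /home/user1/customized_reports.zip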
After you have imported the customized BIRT reports, log on to Tivoli Integrated
Portal, and validate that your customized reports were successfully imported.
Chapter 30. Monitoring client backup and restore operations
You can monitor client performance by using the Tivoli Storage Manager client
performance-monitor function. By using the client performance monitor, you can
view and analyze performance data for client backup and restore operations.
The client performance monitor uses the Tivoli Storage Manager API to collect
performance data about backup and restore operations.
The client performance monitor is automatically installed with the Tivoli Storage
Manager Administration Center. You can access the client performance monitor
and detailed information about how to view and analyze performance information
from the Reporting portlet of the Administration Center.
To change the client performance monitor parameters after installation, specify the
following parameters in the assist.cfg file:
validTime
The number of hours that the client performance monitor keeps state
information about unfinished operations. The default is 24 hours.
validOperationSaveTime
The number of days that operation data is kept in the client performance
monitor history. The default is 14 days.
port The communication port where the client performance monitor listens for
performance data. The default is 5129.
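Assuming the assist.cfg file uses simple name=value entries, the default settings
might look similar to this sketch:
validTime=24
validOperationSaveTime=14
port=5129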
If you change the configuration file after the client performance monitor is
installed, restart the client performance monitor server for the changes to become
effective.
Use the following script to start, stop, or view the status of the client performance
monitor:
v TIP_HOME/profiles/TIPProfile/installedApps/TIPCell/isc.ear/TsmAC.war/
TSMPerfMon/tsm-perfmon -start|-stop|-process
You can log the events to any combination of the following receivers:
Tivoli Storage Manager server console and activity log
See Logging events to the IBM Tivoli Storage Manager server console and
activity log on page 863.
File and user exits
See Logging events to a file exit and a user exit on page 864.
Tivoli event console
See Logging events to the Tivoli Enterprise Console on page 865.
Event server receiver (Enterprise Event Logging)
Routes the events to an event server. See Enterprise event logging:
logging events to another server on page 875.
Simple Network Management Protocol (SNMP)
See Logging events to an SNMP manager on page 869.
In addition, you can filter the types of events to be enabled for logging. For
example, you might enable only severe messages to the event server receiver and
one or more specific messages, by number, to another receiver. Figure 93 shows a
possible configuration in which both server and client messages are filtered by the
event rules and logged to a set of specified receivers.
[Figure 93: Server and client messages are filtered by the event rules and routed
to receivers such as the activity log, server console, file, user exit, Tivoli
Event Console, and event server.]
When you enable or disable events, you can specify the following:
v A message number or an event severity (ALL, INFO, WARNING, ERROR, or
SEVERE).
v Events for one or more client nodes (NODENAME) or for one or more servers
(SERVERNAME).
To enable or disable events, issue the ENABLE EVENTS and DISABLE EVENTS
commands. For example,
v To enable event logging to a user exit for all error and severe server messages,
enter:
enable events userexit error,severe
v To enable event logging to a user exit for severe client messages for all client
nodes, enter:
enable events userexit severe nodename=*
v To disable event logging to a user exit for error server messages, enter:
disable events userexit error
If you specify a receiver that is not supported on any platform, or if you specify an
invalid event or name, Tivoli Storage Manager issues an error message. However,
any valid receivers, events, or names that you specified are still enabled. Certain
events, such as messages that are issued during server startup and shutdown,
automatically go to the console. They do not go to other receivers, even if they are
enabled.
Note: Server messages in the SEVERE category and message ANR9999D can provide
valuable diagnostic information if there is a serious problem. For this reason, you
should not disable these messages. Use the SET CONTEXTMESSAGING ON command to
get additional information that could help determine the cause of ANR9999D
messages. IBM Tivoli Storage Manager polls the server components for
information that includes process name, thread name, session ID, transaction data,
locks that are held, and database tables that are in use.
At server startup, event logging begins automatically to the server console and
activity log and for any receivers that are started based on entries in the server
options file. A receiver for which event logging has begun is an active receiver.
To begin logging events to receivers for which event logging is not started
automatically, issue the BEGIN EVENTLOGGING command. You can also use this
command after you have disabled event logging to one or more receivers. To end
event logging for an active receiver issue the END EVENTLOGGING command.
For example,
v To begin logging events to the event server, enter:
begin eventlogging eventserver
v To end logging events to the event server, enter:
end eventlogging eventserver
Logging events to the IBM Tivoli Storage Manager server console and
activity log
Logging events to the server console and activity log begins automatically at server
startup.
Enabling client events to the activity log will increase the database utilization. You
can set a retention period or size limit for the log records by using the SET
ACTLOGRETENTION command (see Setting a retention period for the activity
log on page 805 and Setting a size limit for the activity log on page 805). At
server installation, activity log management is retention-based, and this value is set
to one day. If you increase the retention period or the size limit, utilization is
further increased. For more information about the activity log, see Using the
Tivoli Storage Manager activity log on page 803.
You can disable server and client events to the server console and client events to
the activity log. However, you cannot disable server events to the activity log.
Also, certain messages, such as those issued during server startup and shutdown
and responses to administrative commands, will still be displayed at the console
even if disabled.
To enable all error and severe client events to the console and activity log, you can
issue the ENABLE EVENTS command. See the Administrator's Reference for more
information.
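For example, a command of the following form enables error and severe client
events for both the console and activity log receivers (client events require a
node specification):
enable events console,actlog error,severe nodename=*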
Logging events to a file exit and a user exit
A file exit is a file that receives all the information related to its enabled events.
You can log events to a file exit and a user exit.
Be aware that this file can rapidly grow in size depending on the events enabled
for it. There are two versions of the file exit: binary and text. The binary file exit
stores each logged event as a record, while the text file exit stores each logged
event as a fixed-sized, readable line. For more information about the text file exit,
see Readable text file exit (FILETEXTEXIT) format on page 880.
See Beginning and ending event logging on page 863 for more information.
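As an illustration only (the file names are placeholders; see the Administrator's
Reference for the complete option syntax), the file exits are typically activated
with server options similar to these:
fileexit yes /var/tsm/events.bin append
filetextexit yes /var/tsm/events.txt append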
Application clients, Data Protection for IBM ESS for DB2, and Data Protection for
IBM ESS for Oracle must have enhanced Tivoli Enterprise Console support enabled
in order to route the events to the Tivoli Enterprise Console. Because of the
number of messages, you should not enable all messages from a node to be logged
to the Tivoli Enterprise Console.
Enabling either of these options not only changes the event class format, but also
generates a unique event class for individual Tivoli Storage Manager messages for
the client, the server, application clients, Data Protection for IBM ESS for DB2, Data
Protection for IBM ESS for Oracle, and Data Protection for IBM ESS for R/3.
Option Name           Function
UNIQUETDPTECEVENTS    Changes the event class format and generates a unique
                      event class for all client, server, and Data Protection
                      messages
where #### represents the message number. For exact details of the event class
format, look at the appropriate baroc file.
Application clients can issue unique events in the following ranges. All events
follow the IBM 3.4 naming convention, which uses a three-character prefix
followed by four digits.
Based upon the setting of the option or options on the Tivoli Storage Manager
server, the Tivoli Enterprise Console administrator must create a rule base using
one of the following baroc files:
Each successive baroc file accepts the events of the previous baroc file. For
example, itsmuniq.baroc accepts all events in ibmtsm.baroc, and itsmdpex.baroc
accepts all events contained in itsmuniq.baroc.
To determine whether this option is enabled, issue the QUERY OPTION command.
To set up Tivoli as a receiver for event logging, complete the following procedure:
1. Define the Tivoli Storage Manager event classes to the Tivoli Enterprise Console
with the baroc file for your operating system:
ibmtsm.baroc
This file is distributed with the server.
Note: Please refer to Tivoli Enterprise Console documentation for instructions
on removing an existing baroc file, if needed, and installing a new baroc file.
Before the events are displayed on a Tivoli Enterprise Console, you must
import the baroc file into an existing rule base or create a new rule base and
activate it. To do this, complete the following steps:
a. From the Tivoli desktop, click on the Rule Base icon to display the pop-up
menu.
b. Select Import, then specify the location of the baroc file.
c. Select the Compile pop-up menu.
d. Select the Load pop-up menu and Load, but activate only when server
restarts from the resulting dialog.
e. Shut down the event server and restart it.
To create a new rule base, complete the following steps:
a. Click on the Event Server icon from the Tivoli desktop. The Event Server
Rules Bases window will open.
b. Select Rule Base from the Create menu.
c. Optionally, copy the contents of an existing rule base into the new rule base
by selecting the Copy pop-up menu from the rule base to be copied.
d. Click on the RuleBase icon to display the pop-up menu.
e. Select Import and specify the location of the baroc file.
f. Select the Compile pop-up menu.
g. Select the Load pop-up menu and Load, but activate only when server
restarts from the resulting dialog.
h. Shut down the event server and restart it.
2. To define an event source and an event group:
a. From the Tivoli desktop, select Source from the EventServer pop-up menu.
Define a new source whose name is Tivoli Storage Manager from the
resulting dialog.
b. From the Tivoli desktop, select Event Groups from the EventServer pop-up
menu. From the resulting dialog, define a new event group for Tivoli
Storage Manager and a filter that includes event classes
IBMTSMSERVER_EVENT and IBMTSMCLIENT_EVENT.
c. Select the Assign Event Group pop-up menu item from the Event Console
icon and assign the new event group to the event console.
d. Double-click on the Event Console icon to start the configured event
console.
3. Enable events for logging to the Tivoli receiver. See Enabling and disabling
events on page 862 for more information.
4. In the server options file, specify the location of the host on which the Tivoli
server is running. For example, to specify a Tivoli server at the IP address
9.114.22.345:1555, enter the following:
techost 9.114.22.345
tecport 1555
5. Begin event logging for the Tivoli receiver. You do this in one of two ways:
v To begin event logging automatically at server start up, specify the following
server option:
tecbegineventlogging yes
Or
v Enter the following command:
See Beginning and ending event logging on page 863 for more information.
Tivoli Storage Manager also implements an SNMP subagent that can be configured
to report exception conditions and provide support for a management information
base (MIB). The management information base (MIB), which is shipped with Tivoli
Storage Manager, defines the variables that will run server scripts and return the
server scripts' results. You must register SNMPADMIN, the administrative client
under which the server runs these scripts. Although a password is not required for the
subagent to communicate with the server and run scripts, a password should be
defined for SNMPADMIN to prevent access to the server from unauthorized users.
An SNMP password (community name) is required, however, to access the SNMP
agent, which forwards the request to the subagent.
Note: Because the SNMP environment has weak security, you should consider not
granting SNMPADMIN any administrative authority. This restricts SNMPADMIN
to issuing only Tivoli Storage Manager queries.
SNMP SET requests are accepted for the name and input variables associated with
the script names stored in the MIB by the SNMP subagent. This allows a script to
be processed by running a GET request for the ibmAdsm1ReturnValue and
ibmAdsm2ReturnValue variables. A GETNEXT request will not cause the script to
run. Instead, the results of the previous script processed will be retrieved. When an
entire table row is retrieved, the GETNEXT request is used. When an individual
variable is retrieved, the GET request is used.
1. Choose the name and parameters for a Tivoli Storage Manager script.
2. Use the application to communicate with the SNMP agent. This agent changes
the Tivoli Storage Manager MIB variable for one of the two script names that
the Tivoli Storage Manager subagent maintains. The SNMP agent also sets the
parameter variables for one of the two scripts.
3. Use the application to retrieve the variable ibmAdsmReturnValue1.x or
ibmAdsmReturnValue2.x, where x is the index of the server that is registered
with the subagent.
To set the variables associated with the script (for example, ibmAdsmServerScript1/2
or ibmAdsmM1Parm1/2/3), the nodes on which the subagent and the agent are run
must have read-write authority to the MIB variables. This is done through the
SNMP configuration process on the system that the SNMP agent runs on.
Important: In AIX, the default SNMP version is SNMP, V3. The snmpv3_ssw
command can be used to switch to SNMP, V1. All of the instructions in this
chapter were developed for SNMP, V1. Tivoli Storage Manager continues to use the
SNMP, V1 protocols that are still supported in the SNMP, V3 environment. See the
IBM AIX Networks and Communication Management Guide, section "Migrating from
SNMPv1 to SNMPv3" for information about how to convert an SNMP, V1 configuration
to an SNMP, V3 configuration.
In AIX, the file name is /etc/snmpdv3.conf for SNMP, version 3. If you are using
SNMP, version 1, the file name is /etc/snmpd.conf.
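For illustration only (the addresses and community name are examples), read-write
community statements and an smux statement in /etc/snmpd.conf might look like
this:
community public 127.0.0.1 255.255.255.255 readWrite
community public 9.115.20.21 255.255.255.255 readWrite
community public 9.115.46.22 255.255.255.255 readWrite
community public 9.115.88.23 255.255.255.255 readWrite
smux 1.3.6.1.4.1.2.3.1.2.2.1.1.2 public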
The statements grant read-write authority to the MIB for the local node through
the loopback mechanism (127.0.0.1), and to nodes with the three 9.115.xx.xx
addresses. The smux statement allows the dpid2 daemon to communicate with
snmpd.
Here is an example of this command used to set and retrieve MIB variables:
snmpinfo -v -ms -c public -h tpcnov73 ibmAdsmServerScript1.1=QuerySessions
This command issues the set operation (-ms), passing in the community name public,
sending the command to host tpcnov73, and setting up variable
ibmAdsmServerScript1 to have the value QuerySessions. QuerySessions is the name of
a server script that has been defined on a server that will register with the Tivoli
Storage Manager subagent. In this case, the first server that registers with the
subagent is the .1 suffix in ibmAdsmServerScript1.1. The following commands set the
parameters for use with this script:
snmpinfo -v -ms -c public -h tpcnov73 ibmAdsmM1Parm1.1=xyz
snmpinfo -v -ms -c public -h tpcnov73 ibmAdsmM1Parm2.1=uvw
snmpinfo -v -ms -c public -h tpcnov73 ibmAdsmM1Parm3.1=xxx
You can set zero to three parameters. Only the script name is needed. To make the
QuerySessions script run, retrieve the ibmAdsmM1ReturnValue variable (in this case,
ibmAdsmM1ReturnValue.1). For example:
snmpinfo -v -mg -c public -h tpcnov73 ibmAdsmM1ReturnValue.1
Note: Not all MIB browsers properly handle embedded carriage return/newline
characters.
In this case, ibmAdsmM1ReturnCode.1 will contain the return code associated with
the running of the script. If ibmAdsmM2ReturnValue is retrieved, the results of
running the script named in ibmAdsmServerScript2 are returned as a single numeric
return code. Notice the -mg instead of -ms to signify the GET operation in the
command to retrieve ibmAdsmM1ReturnValue.1. If the entire row is retrieved, the
command is not run. Instead, the results from the last time the script was run are
retrieved. This would be the case if the following command were issued:
snmpinfo -v -md -c public -h tpcnov73 ibmAdsm
An SNMP agent is needed for communication between an SNMP manager and its
managed systems. The SNMP agent is realized through the snmpd daemon. The
Distributed Protocol Interface (DPI) Version 2 is an extension of this SNMP agent.
SNMP managers can use the MIB that is shipped with Tivoli Storage Manager to
manage the server. Therefore, an SNMP agent supporting DPI Version 2 must be
used to communicate with the Tivoli Storage Manager subagent. This SNMP agent
is not included with Tivoli Storage Manager. A supported DPI agent ships with
AIX. The Tivoli Storage Manager subagent is included with Tivoli Storage Manager
and, before server startup, must be started as a separate process communicating
with the DPI-enabled SNMP agent.
The SNMP manager system can reside on the same system as the Tivoli Storage
Manager server, but typically would be on another system connected through
SNMP. The SNMP management tool can be any application, such as NetView or
Tivoli Enterprise Console, which can manage information through SNMP MIB
monitoring and traps. The Tivoli Storage Manager server system runs the processes
needed to send Tivoli Storage Manager event information to an SNMP
management system. The processes are:
v SNMP agent (snmpd)
v Tivoli Storage Manager SNMP subagent (dsmsnmp)
v Tivoli Storage Manager server (dsmserv)
[Figure: SNMP manager systems and Tivoli Storage Manager servers on various
platforms (AIX, Windows, Linux, Solaris, HP-UX) communicate through the SNMP
protocol and the SNMP DPI.]
Figure 95 shows how the communication for SNMP works in a Tivoli Storage
Manager system:
v The SNMP manager and agent communicate with each other through the SNMP
protocol. The SNMP manager passes all requests for variables to the agent.
v The agent then passes the request to the subagent and sends the answer back to
the manager. The agent responds to the manager's requests and informs the
manager about events by sending traps.
v The agent communicates with both the manager and subagent. It sends queries
to the subagent and receives traps that inform the SNMP manager about events
taking place on the application monitored through the subagent. The SNMP
agent and subagent communicate through the Distributed Protocol Interface
(DPI). Communication takes place over a stream connection, which typically is a
TCP connection but could be another stream-connected transport mechanism.
v The subagent answers MIB queries of the agent and informs the agent about
events by sending traps. The subagent can also create and delete objects or
subtrees in the agent's MIB. This allows the subagent to define to the agent all
the information needed to monitor the managed application.
[Figure 95: The SNMP manager and the SNMP agent exchange get/set requests,
responses, and traps over the SNMP protocol; the SNMP agent and the SNMP subagent
exchange queries, replies, registrations, and traps over the SNMP DPI.]
[Figure: On an AIX system, the Tivoli Storage Manager server (dsmserv) and the
Tivoli Storage Manager SNMP subagent (dsmsnmp) communicate with the snmpd agent
through the SNMP DPI. The SNMP manager, which can run on another system such as
Windows, communicates with the agent through the SNMP protocol.]
1. Specify the SNMP server options in the server options file (dsmserv.opt). For
example:
commmethod snmp
snmpsubagent hostname jimbo communityname public timeout 600
snmpsubagentport 1521
snmpheartbeatinterval 5
snmpmessagecategory severity
For details about server options, see the server options section in
Administrator's Reference.
2. Install, configure, and start the SNMP agent as described in the documentation
for that agent. The SNMP agent must support the DPI Version 2.0 standard.
Tivoli Storage Manager supports the SNMP agent that is built into the AIX
operating system.
Configure the AIX SNMP agent by customizing the SNMP file; the SNMP, V1
file is named: /etc/snmpd.conf, the SNMP, V3 file is named: /etc/snmpdv3.conf. A
default configuration for SNMP, V1 might look like this:
logging file=/var/snmp/snmpd.log enabled
logging size=0 level=0
community public
community private 127.0.0.1 255.255.255.255 readWrite
community system 127.0.0.1 255.255.255.255 readWrite 1.17.2
view 1.17.2 system enterprises view
trap public <snmp_manager_ip_adr> 1.2.3 fe
snmpd maxpacket=16000 smuxtimeout=60
smux 1.3.6.1.4.1.2.3.1.2.2.1.1.2 public
Attention: The trap statement also defines the system to which the AIX SNMP
agent forwards traps that it receives.
Before starting the agent, ensure that the dpid2 and snmpd subsystems have
been started.
The sending server receives the enabled events and routes them to a designated
event server. This is done by a receiver that IBM Tivoli Storage Manager provides.
At the event server, an administrator can enable one or more receivers for the
events being routed from other servers. Figure 98 shows the relationship of a
sending Tivoli Storage Manager server and a Tivoli Storage Manager event server.
[Figure 98: Client and server messages on the sending server pass through its
event rules and are routed as EVENTS to the event server, where the event server's
event rules route them to receivers such as the server console, file, user exit,
and Tivoli Event Console.]
The following scenario is a simple example of how enterprise event logging can
work.
Then the administrator enables the events by issuing the ENABLE EVENTS
command for each sending server. For example, for SERVER_A the
administrator would enter:
enable events file severe,error servername=server_a
Note: By default, logging of events from another server is enabled to the event
server activity log. However, unlike events originating from a local server,
events originating from another server can be disabled for the activity log at an
event server.
One or more servers can send events to an event server. An administrator at the
event server enables the logging of specific events from specific servers. In the
previous example, SERVER_A routes severe, error, and warning messages to
SERVER_B. SERVER_B, however, logs only the severe and error messages. If a
third server sends events to SERVER_B, logging is enabled only if an ENABLE
EVENTS command includes the third server. Furthermore, SERVER_B
determines the receiver to which the events are logged.
Because the lists of enabled and disabled events could be very long, Tivoli Storage
Manager displays the shorter of the two lists.
For example, assume that 1000 events for client node HSTANFORD were enabled
for logging to the user exit and that later two events were disabled. To query the
enabled events for HSTANFORD, you can enter:
query enabled userexit nodename=hstanford
The output would specify the number of enabled events and the message names of
disabled events:
998 events are enabled for node HSTANFORD for the USEREXIT receiver.
The following events are DISABLED for the node HSTANFORD for the USEREXIT
receiver:
ANE4000, ANE49999
The QUERY EVENTRULES command displays the history of events that are
enabled or disabled by a specific receiver for the server or for a client node.
query eventrules userexit nodename=hstanford
The samples for the C, H, and make files are shipped with the server code in the
/usr/lpp/adsmserv/bin directory.
Attention:
1. Use caution in modifying these exits. A user exit abend will bring down the
server.
2. The file specified in the file exit option will continue to grow unless you prune
it.
You can also use Tivoli Storage Manager commands to control event logging. For
details, see Chapter 31, Logging IBM Tivoli Storage Manager events to receivers,
on page 861 and Administrator's Reference.
/*****************************************************************
* Name: userExitSample.h
* Description: Declarations for a user exit
*****************************************************************/
#ifndef _H_USEREXITSAMPLE
#define _H_USEREXITSAMPLE
#include <stdio.h>
#include <sys/types.h>
typedef struct
{
uchar year; /* Years since BASE_YEAR (0-255) */
uchar mon; /* Month (1 - 12) */
uchar day; /* Day (1 - 31) */
uchar hour; /* Hour (0 - 23) */
uchar min; /* Minutes (0 - 59) */
uchar sec; /* Seconds (0 - 59) */
} DateTime;
/******************************************
* Some field size definitions (in bytes) *
******************************************/
#define MAX_SERVERNAME_LENGTH 64
#define MAX_NODE_LENGTH 64
#define MAX_COMMNAME_LENGTH 16
#define MAX_OWNER_LENGTH 64
#define MAX_HL_ADDRESS 64
#define MAX_LL_ADDRESS 32
#define MAX_SCHED_LENGTH 30
#define MAX_DOMAIN_LENGTH 30
#define MAX_MSGTEXT_LENGTH 1600
/**********************************************
* Event Types (in elEventRecvData.eventType) *
**********************************************/
/***************************************************
* Application Types (in elEventRecvData.applType) *
***************************************************/
/*****************************************************
* Event Severity Codes (in elEventRecvData.sevCode) *
*****************************************************/
/**************************************************************
* Data Structure of Event that is passed to the User-Exit. *
* This data structure is the same for a file generated using *
* the FILEEXIT option on the server. *
**************************************************************/
/************************************
* Size of the Event data structure *
************************************/
/*************************************
* User Exit EventNumber for Exiting *
*************************************/
/**************************************
*** Do not modify above this line. ***
**************************************/
#endif
/***********************************************************************
* Name: userExitSample.c
* Description: Example user-exit program invoked by the server
* Environment: AIX 4.1.4+ on RS/6000
***********************************************************************/
#include <stdio.h>
#include "userExitSample.h"
/**************************************
*** Do not modify below this line. ***
**************************************/
/************
*** Main ***
************/
} /* End of main() */
/******************************************************************
* Procedure: adsmV3UserExit
* If the user-exit is specified on the server, a valid and
* appropriate event causes an elEventRecvData structure (see
* userExitSample.h) to be passed to adsmV3UserExit that returns a void.
* INPUT : A (void *) to the elEventRecvData structure
* RETURNS: Nothing
******************************************************************/
void adsmV3UserExit( void *anEvent )
{
/* Typecast the event data passed */
elEventRecvData *eventData = (elEventRecvData *)anEvent;
/**************************************
*** Do not modify above this line. ***
**************************************/
/* Be aware that certain function calls are process-wide and can cause
* synchronization of all threads running under the TSM Server process!
* Among these is the system() function call. Use of this call can
* cause the server process to hang and otherwise affect performance.
* Also avoid any functions that are not thread-safe. Consult your
* systems programming reference material for more information.
*/
The following table presents the format of the output. Fields are separated by
blank spaces.
Table 81. Readable text file exit (FILETEXTEXIT) format
Column Description
0001-0006 Event number (with leading zeros)
0008-0010 Severity code number
0012-0013 Application type number
0015-0023 Session ID number
0025-0027 Event structure version number
0029-0031 Event type number
0033-0046 Date/Time (YYYYMMDDHHmmSS)
0048-0111 Server name (right padded with spaces)
0113-0176 Node name
0178-0193 Communications method name
0195-0258 Owner name
0260-0323 High-level internet address (n.n.n.n)
0325-0356 Port number from high-level internet address
0358-0387 Schedule name
0389-0418 Domain name
Part 6. Protecting the server
Disasters, by their very nature, cannot be predicted in their intensity, timing,
or long-term effects. The ability to recover from a disaster, if one occurs, is
essential. To protect your system infrastructure and data and to recover from a
disaster, use the tools and procedures that Tivoli Storage Manager provides.
The security of your data is the most important aspect of managing data. You can
control access to the server and client nodes, encrypt data transmission, and
protect administrator and node passwords through authentication processes. The
two methods of authentication are LOCAL and LDAP. The LOCAL password
authentication takes place on the Tivoli Storage Manager server, and those
passwords are not case-sensitive.
LDAP password authentication takes place on the LDAP directory server, and the
passwords are case-sensitive. When using LDAP authentication, the password is
sent to the server by the client. By default, Secure Sockets Layer (SSL) is required
when LDAP authentication is used, to avoid exposing the password. SSL is used
when authenticating the server to the client and secures all communication
between the client and server. You can choose not to use SSL with LDAP
authentication if other security measures are in place to protect the password. One
example of an alternative security measure is a virtual private network (VPN)
connection.
Related concepts:
Managing Tivoli Storage Manager administrator IDs on page 898
Managing passwords and logon procedures on page 904
Securing the server console on page 896
Securing sensitive client data on page 541
Related reference:
Managing access to the server and clients on page 903
Administrative authority and privilege classes on page 896
Securing communications
You can add more protection for your data and passwords by using Secure Sockets
Layer (SSL).
SSL is the standard technology for creating encrypted sessions between servers and
clients. SSL provides a secure channel for servers and clients to communicate over
open communication paths. With SSL, the identity of the server is verified through
the use of digital certificates.
To ensure better system performance, use SSL only for sessions where it is needed.
Consider adding processor resources on the Tivoli Storage Manager
server to manage the increased requirements.
Tip: The SSL implementation described here is different from the Administration
Center SSL, which is implemented in Tivoli Integrated Portal. Both methods use
the same SSL technology, but they have different implementations and purposes.
See Finding documentation about TLS for Tivoli Integrated Portal on page 890.
Setting up TLS
The Tivoli Storage Manager server and client installation procedures include the
silent installation of the Global Security Kit (GSKit). The backup-archive client and
server communicate with Transport Layer Security (TLS) through services provided
by GSKit.
If you use passwords that are authenticated with an LDAP directory server, the
Tivoli Storage Manager server connects securely to the LDAP server with TLS.
LDAP server connections are secured by the TLS protocol. The LDAP directory
server must supply a trusted certificate to the Tivoli Storage Manager server. If the
Tivoli Storage Manager server trusts the certificate, a TLS connection is
established. If not, the connection fails. The root certificate that helps sign the
LDAP Directory server certificate must be added to the Tivoli Storage Manager
server key database file. If the certificate is not added, the certificate cannot be
trusted.
Tip: Any Tivoli Storage Manager documentation that indicates SSL or to select
SSL applies to TLS.
For more information about TLS, see Configuring TLS for LDAP directory
servers on page 892.
The backup-archive client must import a .arm file according to the default label
that is used. The following table shows you which file to import:
Table 82. Determining the .arm file to use according to the default label
Type of certificate              Default label in the key database   Import this file
Server self-signed certificate   TSM Server SelfSigned Key           cert.arm
Server self-signed certificate   TSM Server SelfSigned SHA Key       cert256.arm
The cert256.arm file is generated by the V6.3 server for distribution to the V6.3 or
later backup-archive clients. The TLS protocol requires the cert256.arm file. The
cert.arm file might also be generated by the V6.3 server, but is not designed for
passwords that authenticate with an LDAP server. To show the available
certificates, issue the gsk8capicmd_64 -cert -list -db cert.kdb -stashed
command.
Important: To use TLS, the default label must be TSM Server SelfSigned SHA
key and you must specify the SSLTLS12 YES server option.
To configure Tivoli Storage Manager servers and clients for TLS, complete the
following steps:
1. Specify the TCP/IP port on which the server waits for TLS-enabled client
communications. You can use the SSLTCPADMINPORT or SSLTCPPORT option, or
both, to specify TLS port numbers. (See the example options after these steps.)
If the key database file (cert.kdb) does not exist, it is created. For Tivoli
Storage Manager server V6.3.3 and later, the cert256.arm file and other
TLS-related files are created when the server is first started. If a password exists
for the server database, it is reused for the new key database. After creating the
database, the key database access password is generated and stored.
The label, in this case TSM061, must be unique within the client key database.
Choose a label that identifies the server to which it is associated. Ensure that
the transfer method is secure. This public certificate is made the default
certificate if a self-signed certificate from an earlier release is not found in the
key database.
4. Using a backup-archive client user ID, specify SSL YES in the dsm.opt client
options file. The TLS communications start and the TCPPORT administrative
client option value is updated.
5. If you want to use a different certificate, install the certificate authority (CA)
root certificate on all clients. A set of default root certificates is already
installed if you specified the -populate parameter in the command when you
created the key database file.
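For example, the following entries in the dsmserv.opt server options file
illustrate step 1. The port numbers shown here are examples only; use ports that
are appropriate for your environment:
ssltcpport 1543
ssltcpadminport 1542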
For more information, see the Backup-Archive Clients Installation and User's Guide.
Related reference:
Adding a certificate to the key database on page 888
For IPv4 or IPv6, the COMMMETHOD server option must specify either TCPIP or
V6TCPIP. The server options for TLS communications are SSLTCPPORT and
SSLTCPADMINPORT. The server can listen on separate ports for the following
communications:
v Backup-archive clients that use the regular protocol
v Administrator IDs that use the regular protocol
The backup-archive client user decides which protocol to use and which port to
specify in the dsmserv.opt file for the SSLTCPADMINPORT option. If the
backup-archive client requires TLS authentication but the server is not in TLS
mode, the session fails.
Related concepts:
Managing passwords and logon procedures on page 904
Related tasks:
Configuring Tivoli Directory Server for TLS on the iKeyman GUI on page 892
Configuring Tivoli Directory Server for TLS on the command line on page 894
Related reference:
Configuring Windows Active Directory for TLS/SSL on page 895
You can use your own certificates or purchase certificates from a CA. Either can be
installed and added to the key database. If you include the -stashpw parameter on
a GSKit gsk8capicmd_64 command, the password that you define is saved for later
use.
The key database is created when you start the Tivoli Storage Manager server. If
the certificate is signed by a trusted CA, obtain the certificate, install it in the key
database, and restart the server. Because the certificate is provided by a trusted
authority, the certificate is accepted by Tivoli Storage Manager and communication
between server and client can start.
You can use a Transport Layer Security (TLS) certificate if the client trusts the
certificate authority (CA). Trust is established when you add a signed certificate to
the server key database and use a root certificate for the CA in the client key
database.
The Global Security Kit (GSKit) is included in the Tivoli Storage Manager server
installation. The backup-archive client and server communicate with SSL through
services provided by GSKit.
Complete the following steps to add a certificate to the key database using GSKit:
1. Obtain a signed, server key database certificate from your CA.
2. To receive the signed certificate and make it the default for communicating
with clients, issue the following command:
gsk8capicmd_64 -cert -receive -db cert.kdb
-pw password -stash -file cert_signed.arm -default_cert yes
Tip: For this example, the client key database name is dsmcert.kdb.
6. To verify that the client can successfully connect, issue the dsmc query session
command.
If you do not have a backup copy of the cert.kdb file, perform the following steps:
1. Issue the DELETE KEYRING server command to delete the entry for it that is
located in the Tivoli Storage Manager database.
2. Delete all remaining cert.* files.
3. Shut down the server.
4. Start the server. The server automatically creates a new cert.kdb file and a
corresponding entry in the Tivoli Storage Manager database. If you do not issue
the DELETE KEYRING command, the server attempts, on startup, to create the key
database with the previous password.
5. Redistribute the new cert.arm file to all backup-archive clients that are using
TLS. Reinstall any third-party certificates on the backup-archive client. If you
are using an LDAP directory server to authenticate passwords, add the root
certificate that was used to sign the LDAP server's certificate. If the root
certificate is already a default trusted certificate, you do not have to add it
again.
The documentation for configuring TLS for Tivoli Integrated Portal is available
within the Tivoli Integrated Portal information center.
Log on to the Administration Center and click Help to open the information center.
Search for SSL.
To set up the storage agent to use SSL communication with the Tivoli Storage
Manager server and client, complete the following steps:
1. On the storage agent, issue the DSMSTA SETSTORAGESERVER command to
initialize the storage agent and add communication information to the device
configuration file and the storage agent options file dsmsta.opt:
Hint: The following command is entered on one line, but is displayed here on
multiple lines to make it easier to read.
dsmsta setstorageserver myname=sta
mypa=sta_password
myhla=ip_address
servername=server_name
serverpa=server_password
hla=ip_address
lla=ssl_port
STAKEYDBPW=password
ssl=yes
Requirement:
v When you set the SSL=YES and STAKEYDBPW=password parameters, a key
database file is set up in the storage agent options file, dsmsta.opt. All
passwords are obfuscated in dsmsta.opt.
v To enable SSL communication, ensure that the Tivoli Storage Manager LLA
parameter specifies the port that is set by the server SSLTCPADMINPORT option,
and set the SSL parameter to YES.
2. Import the Tivoli Storage Manager server certificate, cert256.arm, to the key
database file for the storage agent. Ensure that the required SSL certificates are
in the key database file that belongs to each storage agent that uses SSL
communication. To import the SSL certificate, switch to the storage agent
directory and issue the following command:
gsk8capicmd_64 -cert -add -label server_example_name
-db cert.kdb -stashed -file cert256.arm -format ascii
3. Specify the SSLTCPPORT and the SSLTCPADMINPORT options in the dsmsta.opt
options file.
4. Create the key database certificate and default certificates by starting the
storage agent.
Tip: To provide the new password to the storage agent, specify the
STAKEYDBPW=newpassword parameter with the DSMSTA SETSTORAGESERVER
command. Rerun the DSMSTA SETSTORAGESERVER command.
5. On the Tivoli Storage Manager server, issue the following command:
define server sta
hla=ip_address
lla=ssl_port
serverpa=password
ssl=yes
6. Stop the storage agent.
7. Stop the Tivoli Storage Manager server.
8. Import the cert256.arm certificate from the storage agent to the key database
file for the Tivoli Storage Manager server, as shown in the example that
follows. Ensure that the required SSL certificates are in the key database file
that belongs to the server.
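For example, from the server instance directory, a command like the following
imports the storage agent certificate. The label is an example only; choose a
label that identifies the storage agent:
gsk8capicmd_64 -cert -add -label sta_example_name
-db cert.kdb -stashed -file cert256.arm -format ascii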
When the Tivoli Storage Manager server and storage agent initiate communication,
SSL certificate information is displayed to indicate that SSL is in use.
Related reference:
Adding a certificate to the key database on page 888
The directory servers that are available are IBM Tivoli Directory Server V6.2 or 6.3
or Windows Active Directory V2003 or 2008. You can configure Tivoli Directory
Server with the graphical user interface (GUI) or with the command-line interface.
See the following topics for more information about configuring a directory server
for TLS:
v Configuring Tivoli Directory Server for TLS on the iKeyman GUI
v Configuring Tivoli Directory Server for TLS on the command line on page 894
v Configuring Windows Active Directory for TLS/SSL on page 895
Configuring IBM Tivoli Directory Server is one of the preliminary tasks you must
do before you can authenticate passwords with an LDAP directory server. The
Tivoli Directory Server can use a self-signed certificate to secure the
communication between server and backup-archive client, and the LDAP directory
server.
You can use the iKeyman graphical user interface (GUI) to set up Tivoli Directory
Server. If the Tivoli Storage Manager server already has a trusted certificate from
your LDAP server, you do not have to complete the steps that are documented
here. If the LDAP directory server already has a signed certificate, you do not have
to complete these steps.
The X Window System client must be installed on the operating system where
Tivoli Directory Server is installed. Ensure that the X Window System server is
running on the local system. You must also set the DISPLAY environment variable.
You must configure IBM Tivoli Directory Server before you can authenticate
passwords with an LDAP directory server. The Tivoli Directory Server can use a
self-signed certificate to secure the communication between server and
backup-archive client, and the LDAP directory server.
If the Tivoli Storage Manager server already has a trusted certificate from your
LDAP server, you do not have to complete the steps that are documented here. If
the LDAP directory server already has a signed certificate, you do not have to
complete these steps.
To configure Tivoli Directory Server for Transport Layer Security (TLS), complete
the following steps:
1. Using the Tivoli Directory Server instance user name, create the key database
by issuing the following command:
gsk8capicmd_64 -keydb -create -db "directory/filename.kdb"
-pw pa$$=w0rd -stashpw -populate
2. Create a self-signed certificate or get one from a certificate authority (CA). To
create a self-signed certificate, issue the following command:
gsk8capicmd_64 -cert -create -db "directory/filename.kdb" -stashed -label
"LDAP_directory_server" -dn "cn=ldapserver.company.com"
-san_dnsname ldapserver.company.com -size 2048
-sigalg SHA256WithRSA -expire 3650
3. Extract the certificate to a file by issuing the following command:
gsk8capicmd_64 -cert -extract -db "directory/filename.kdb" -stashed -label
"LDAP_directory_server" -target ldapcert.arm -format ascii
4. Copy the certificate file (ldapcert.arm) to the Tivoli Storage Manager server.
5. To add the certificate to the Tivoli Storage Manager server key database, issue
the following command from the Tivoli Storage Manager server. You must issue
the command from the instance user ID from the instance directory.
gsk8capicmd_64 -cert -add -db "cert.kdb" -stashed -label
"LDAP_directory_server" -format ascii -file ldapcert.arm
6. Configure the key database file to work with Tivoli Directory Server. To set the
key database for TLS, issue the following command:
idsldapmodify -D <adminDN> -w <adminPW> -i <filename>
Tip: The Tivoli Storage Manager server authenticates with the LDAP simple
password authentication method.
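The contents of the file that you specify with the -i parameter depend on your
Tivoli Directory Server configuration. The following LDIF fragment is only a
sketch; the attribute names and values are assumptions that you must verify
against your Tivoli Directory Server documentation before you use them:
dn: cn=SSL, cn=Configuration
changetype: modify
replace: ibm-slapdSslKeyDatabase
ibm-slapdSslKeyDatabase: /directory/filename.kdb
-
replace: ibm-slapdSecurity
ibm-slapdSecurity: SSL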
You must configure Windows Active Directory before the Tivoli Storage Manager
server can authenticate passwords.
To set up the Windows Active Directory server, complete the following steps:
1. Turn off automatic root certificate updates to Windows Update if your
Windows Active Directory server does not have access to the internet.
2. Synchronize the system times of the Tivoli Storage Manager server and the
Windows Active Directory system. You can use a Network Time Protocol (NTP)
server. For more information about synchronizing the system times, see your
operating system documentation. You can also see the Microsoft website for
information about synchronizing Active Directory (http://
technet.microsoft.com/en-us/library/cc786897).
3. Set up Transport Layer Security (TLS) for LDAP server connections. Go to the
Microsoft website (http://www.microsoft.com) and search for LDAP and SSL.
a. Obtain a signed certificate. Active Directory requires that a signed certificate
be in the Windows certificate store to enable TLS. You can obtain a signed
certificate from the following sources:
v A third-party certificate authority (CA)
v Install the Certificate Services role on a system that is joined to the Active
Directory domain and configure an enterprise root CA
Tip: To determine whether the file is DER binary or ASCII, open the certificate
in a text editor. If you can read the characters, then the file is ASCII.
Ensure that you have the root certificate and that the subject on the certificate
matches the CA name. The Issued by and Issued to/subject for the root
certificate must be the same. Export the CA certificate by using one of the
following methods:
v Export the CA certificate from the Certificates (Local Computer) Microsoft
Management Console (MMC) snap-in.
v Copy the certificate from C:\Windows\system32\certsrv\CertEnroll\*.crt
into the server key database. The file is in DER binary format.
v Download the CA certificate file from the Certificate Services web interface
http://<certificate server hostname>/certsrv/, if it is enabled through
the Certificate Enrollment Web Services.
6. Copy the certificate to the Tivoli Storage Manager server.
Tip: The Tivoli Storage Manager server authenticates with the LDAP simple
password authentication method.
Related tasks:
Setting up TLS
An administrator with system privilege can revoke or grant new privileges to the
SERVER_CONSOLE user ID. However, an administrator cannot update, lock,
rename, or remove the SERVER_CONSOLE user ID. The SERVER_CONSOLE user
ID does not have a password.
Therefore, you cannot use the user ID from an administrative client unless you set
authentication to off.
Important: Two server options give you additional control over the ability of
administrators to perform tasks.
v QUERYAUTH allows you to select the privilege class that an administrator must
have to issue QUERY and SELECT commands. By default, no privilege class is
required. You can change the requirement to one of the privilege classes,
including system.
v REQSYSAUTHOUTFILE allows you to specify that system authority is required for
commands that cause the server to write to an external file (for example,
BACKUP DB). By default, system authority is required for such commands.
See the Administrator's Reference for details on server options.
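For example, the following dsmserv.opt entries require system privilege for
QUERY and SELECT commands and keep the default requirement for commands that
write to external files. The values shown are illustrative:
queryauth system
reqsysauthoutfile yes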
Table 83 summarizes the privilege classes, and gives examples of how to set
privilege classes.
Table 83. Authority and privilege classes
Privilege class                               Capabilities
System                                        Perform any administrative task with the server.
grant authority rocko classes=system         v System-wide responsibilities
                                              v Manage the enterprise
                                              v Manage IBM Tivoli Storage Manager security
Unrestricted Policy                           Manage the backup and archive services for
grant authority smith classes=policy          nodes assigned to any policy domain.
                                              v Manage nodes
                                              v Manage policy
                                              v Manage schedules
Restricted Policy                             Same capabilities as unrestricted policy, except
grant authority jones domains=engpoldom       authority is limited to specific policy domains.
Unrestricted Storage                          Manage server storage, but not definition or
grant authority coyote classes=storage        deletion of storage pools.
                                              v Manage the database and recovery log
                                              v Manage IBM Tivoli Storage Manager devices
                                              v Manage IBM Tivoli Storage Manager storage
Restricted Storage                            Manage server storage, but limited to specific
grant authority holland stgpools=tape*        storage pools.
                                              v Manage IBM Tivoli Storage Manager devices
                                              v Manage IBM Tivoli Storage Manager storage
Related concepts:
Overview of remote access to web backup-archive clients on page 449
Managing Tivoli Storage Manager administrator IDs
Related reference:
Administrative authority and privilege classes on page 896
To query the system for a detailed report on administrator ID DAVEHIL, issue the
following example QUERY ADMIN command:
query admin davehil format=detailed
Only administrator IDs that authenticate to the LDAP directory server are listed in
the report.
For example, JONES has restricted policy privilege for policy domain
ENGPOLDOM.
1. To extend JONES authority to policy domain MKTPOLDOM and add operator
privilege, issue the following example command:
grant authority jones domains=mktpoldom classes=operator
2. As an additional example, assume that three tape storage pools exist:
TAPEPOOL1, TAPEPOOL2, and TAPEPOOL3. To grant restricted storage
privilege for these storage pools to administrator HOLLAND, you can issue the
following command:
grant authority holland stgpools=tape*
3. HOLLAND is restricted to managing storage pools with names that begin with
TAPE, if the storage pools existed when the authority was granted. HOLLAND
is not authorized to manage any storage pools that are defined after authority
has been granted. To add a new storage pool, TAPEPOOL4, to HOLLAND's
authority, issue the following command:
grant authority holland stgpools=tapepool4
For example, rather than revoking all of the privilege classes for administrator
JONES, you want to revoke only the operator authority and the policy authority to
policy domain MKTPOLDOM.
Issue the following command to revoke only the operator authority and the policy
authority to policy domain MKTPOLDOM:
revoke authority jones classes=operator domains=mktpoldom
For example, administrator HOGAN has system authority. To reduce authority for
HOGAN to the operator privilege class, perform the following steps:
1. Revoke the system privilege class by issuing the following command:
revoke authority hogan classes=system
2. Grant operator privilege class by issuing the following command:
grant authority hogan classes=operator
For example, to revoke both the storage and operator privilege classes from
administrator JONES, issue the following command:
revoke authority jones
Tip: If you authenticate a password with an LDAP directory server, the letters and
characters that comprise the password are case-sensitive.
Renaming an administrator ID
You can rename an administrator ID if it needs to be identified by a new ID. You
can also assign an existing administrator ID to another person by issuing the
RENAME ADMIN command. You cannot rename an administrator ID to one that exists on the
system.
For example, if administrator HOLLAND leaves your organization, you can assign
administrative privilege classes to another user by completing the following steps:
1. Assign HOLLAND's user ID to WAYNESMITH by issuing the RENAME ADMIN
command:
rename admin holland waynesmith
By renaming the administrator's ID, you remove HOLLAND as a registered
administrator from the server. In addition, you register WAYNESMITH as an
administrator with the password, contact information, and administrative
privilege classes previously assigned to HOLLAND.
2. Change the password to prevent the previous administrator from accessing the
server by entering:
update admin waynesmith new_password contact="development"
Important:
1. You cannot remove the last system administrator from the system.
2. You cannot remove the administrator SERVER_CONSOLE.
You can also lock or unlock administrator IDs according to the form of
authentication that they use. When you specify AUTHENTICATION=LOCAL in the
command, all administrator IDs that authenticate with the Tivoli Storage Manager
server are affected. When you specify AUTHENTICATION=LDAP in the command, all
administrator IDs that authenticate with an LDAP directory server are affected.
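For example, assuming that the LOCK ADMIN command accepts a wildcard for the
administrator ID in your environment, the following command locks all
administrator IDs that authenticate with the Tivoli Storage Manager server:
lock admin * authentication=local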
Table 84 describes the typical tasks for managing access to the server and clients.
Table 84. Managing access
Task                                               Details
Allow a new administrator to access the server     1. Registering administrator IDs on page 898
                                                   2. Granting authority to administrators on page 900
Modify authority for registered administrators     Managing Tivoli Storage Manager administrator IDs on page 898
Give a user authority to access a client           Managing client access authority levels on page 451
remotely
Give an administrator authority to create a        Generating client backup sets on the server on page 546
backup set for a client node
Prevent administrators from accessing the          Locking and unlocking administrator IDs from the server on page 902
server
Prevent new sessions with the server, but          Disabling or enabling access to the server on page 474
allow current sessions to complete
Prevent clients from accessing the server          Locking and unlocking client nodes on page 444
Change whether passwords are required to           Disabling the default password authentication on page 915
access IBM Tivoli Storage Manager
Change requirements for passwords                  v Modifying the default password expiration period for
                                                     passwords that are managed by the Tivoli Storage Manager
                                                     server on page 911
                                                   v Setting a limit for invalid password attempts on page 914
                                                   v Setting a minimum length for a password on page 915
Prevent clients from initiating sessions within    Server-initiated sessions on page 433
a firewall                                         Tip: For information on connecting with IBM Tivoli Storage
                                                   Manager across a firewall, refer to the Installation Guide.
To protect data, you can limit backups to just the root user ID when you specify
BACKUPINITiation=root with the REGISTER NODE or UPDATE NODE commands.
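For example, to restrict backups to the root user ID for a node that is named
ACCTNODE (an illustrative node name), you might issue the following command:
update node acctnode backupinitiation=root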
Figure 103. Configuring the server to authenticate passwords with an LDAP directory
server. The figure shows the Tivoli Storage Manager server and its DB2 database, the
backup-archive clients, and the LDAP directory server.
The LDAP directory server interprets letters differently from the Tivoli Storage
Manager server. The LDAP directory server distinguishes the case that is used,
either uppercase or lowercase. For example, the LDAP directory server can
distinguish between secretword and SeCretwOrd. The Tivoli Storage Manager server
interprets all letters for LOCAL passwords as uppercase.
The following terms are used when describing the LDAP directory server
environment:
Distinguished name (DN)
A unique name in an LDAP directory. The DN consists of the following
information. The information must be ordered in this way.
v The relative distinguished name (RDN)
v The organizational unit (ou)
v The organization (o)
v The country (c)
For example:
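cn=jsmith,ou=marketing,o=exampleco,c=us
(The names in this example DN are illustrative only.)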
You must know the user ID that was specified in the SET LDAPUSER command. For
information about the Tivoli Directory access control lists, go to the Tivoli
Directory server information center (http://publib.boulder.ibm.com/infocenter/
tivihelp/v2r1/topic/com.ibm.IBMDS.doc/admin_gd410.htm).
Note: Windows Active Directory users who change passwords when the Enforce
password history policy is enabled can authenticate with the previous password
for one hour. For more information, see the Microsoft site (http://
support.microsoft.com/?id=906305).
Complete the following steps to set up the LDAP directory server so that it can
authenticate passwords:
1. Ensure that you have a directory server installed on the LDAP server. Use one
of the following directory servers:
v IBM Tivoli Directory Server V6.2 or 6.3
v Windows Active Directory version 2003 or 2008
Requirement: If you use Tivoli Directory Server V6.2, you must update Global
Security Kit (GSKit) to V7.0.4.33 or later. For more information, see SSL errors
after upgrading to ITDS 6.3 client (http://www.ibm.com/support/
docview.wss?uid=swg21469388).
2. Create the base distinguished name (Base DN) on the LDAP directory server
for the Tivoli Storage Manager namespace. The Base DN is the part of the
LDAP directory structure from which Tivoli Storage Manager operates,
specified in the LDAPURL option. For example, ou=armonk,cn=tsmdata can be a
Base DN. See your LDAP documentation for how to create a Base DN.
3. Edit the access controls on the LDAP directory server and grant access to the
Base DN to the user ID, which is specified in the SET LDAPUSER command. This
user ID must have full administrative authority over the Base DN and be able to
add, delete, and modify all Base DN entries.
To verify that the LDAP directory server is properly set up, complete the following
steps on the Tivoli Storage Manager server:
1. Test the forward- and reverse-DNS lookup of the LDAP directory server.
2. Test the network connection with the LDAP directory server.
3. Use an LDAP utility to connect to the LDAP server and search without
Secure Sockets Layer (SSL)/Transport Layer Security (TLS).
4. Use an LDAP utility to connect to the LDAP server and search with
SSL/TLS. (See the example after this list.)
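For example, if the OpenLDAP client utilities are available on the Tivoli Storage
Manager server system, commands like the following test the searches without and
with SSL/TLS. The host name, bind DN, and Base DN are examples only:
ldapsearch -H ldap://ldapserver.example.com:389 -D "cn=tsmuser,cn=aixdata" -W
-b "cn=tsmdata" "(objectclass=*)"
ldapsearch -H ldaps://ldapserver.example.com:636 -D "cn=tsmuser,cn=aixdata" -W
-b "cn=tsmdata" "(objectclass=*)"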
Related tasks:
Configuring Tivoli Directory Server for TLS on the iKeyman GUI on page 892
Configuring Tivoli Directory Server for TLS on the command line on page 894
Related reference:
Configuring Windows Active Directory for TLS/SSL on page 895
You establish policies for passwords that will be authenticated by each server.
Restriction: You can issue Tivoli Storage Manager server commands to manage
your password policies. If you set a password policy on both the LDAP server and
Tivoli Storage Manager server, the settings might conflict. The result might be that
you are not able to access a node or log on with an administrator ID. For
information on the maximum invalid attempts policy, see the table in Setting a
limit for invalid password attempts on page 914.
In addition to setting a policy for case sensitivity, you can configure the
LDAP-authenticated password policy to set the following options:
Password history
The password history is the number of times that you must define a new
password before you can reuse a password.
Minimum age
The minimum age is the length of time before you can change the
password.
Maximum age
The maximum age is the length of time before you must change the
password.
A combination of characters
You can determine the number of special characters, numbers, and
alphabetical characters for your passwords. For example, some products
enforce a password policy that requires a minimum mix of these character
types. The LDAP server that you use determines the complexity that you can
have for passwords outside of Tivoli Storage Manager.
Complete the following steps on the Tivoli Storage Manager server to authenticate
passwords with an LDAP directory server:
1. Import the key database file from the LDAP directory server. You can use any
method to copy the file from the LDAP directory server to the Tivoli Storage
Manager server.
2. Open the dsmserv.opt file and specify the LDAP directory server with the
LDAPURL option. Specify the LDAP directory server URL and the base
distinguished name (DN) on the LDAPURL option. For example:
LDAPURL ldap://server.dallas.gov/cn=project_x
The default port is 389. If you want to use a different port number, specify it as
part of the LDAPURL option. For example, to specify a port of 222:
LDAPURL ldap://server.dallas.gov:222/cn=project_x
3. Restart the Tivoli Storage Manager server.
4. Issue the SET LDAPUSER command to define the ID of the user who can
administer Tivoli Storage Manager operations on the LDAP directory server.
This user ID must have full administrative authority over the Base DN and be
able to add, delete, and modify all Base DN entries. For example:
set ldapuser "cn=apastolico,ou=manufacturing,o=dhs,c=us"
See the Administrator's Reference for more information about the SET LDAPUSER
command.
5. Issue the SET LDAPPASSWORD command to define the password for the user ID
that is defined in the LDAPUSER option. For example:
set ldappassword "boX=T^p$"
If the user ID and password are verified to be correct, communication lines are
opened and the node or administrator ID can run Tivoli Storage Manager
applications.
For example:
register admin admin1 c0m=p1e#Pa$$w0rd?s authentication=ldap
register node node1 n0de^Passw0rd%s authentication=ldap
After you issue the commands, the passwords for administrator ID admin1 and
the node ID node1 can be authenticated with an LDAP directory server.
Tip: A node and its password or an administrator ID and its password each
occupy one inetOrgPerson object on the LDAP directory server. For information
about inetOrgPerson objects, see Definition of the inetOrgPerson LDAP Object
Class (http://www.ietf.org/rfc/rfc2798.txt).
To know which authentication method is in use, issue the QUERY NODE
FORMAT=DETAILED or QUERY ADMIN FORMAT=DETAILED command.
2. Optional: To register all new nodes and administrator IDs with a default
authentication method, issue the SET DEFAULTAUTHENTICATION command. Any
REGISTER NODE or REGISTER ADMIN commands that are issued after you issue the
SET DEFAULTAUTHENTICATION command create nodes and administrators with the
default authentication method. You can set the authentication methods to
LDAP or LOCAL.
For information about the SET DEFAULTAUTHENTICATION command, see the
Administrator's Reference.
When you authenticate nodes and administrator IDs with an LDAP directory
server, you ensure more protection for your passwords. Communication lines
between the LDAP directory server and Tivoli Storage Manager are protected with
Transport Layer Security (TLS).
You can change a password authentication method after you configure the LDAP
directory server and the Tivoli Storage Manager server. However, you cannot
update the authentication method for your own user ID unless you have system
authority. If necessary, another administrator must change the authentication
method.
The following example UPDATE NODE command has a password that is made up
of characters that are supported by the Tivoli Storage Manager server:
update node node1 n0de^87^n0de authentication=ldap
Tip: A shared LDAP server might have a password that is on the LDAP
directory server. In that case, the user is not prompted to enter a new
password.
2. Optional: Issue the QUERY NODE FORMAT=DETAILED or the QUERY ADMIN
FORMAT=DETAILED command to view the results. If you must change the
authentication method for several nodes or administrator IDs, you can use a
wildcard character (*). For example,
update node * authentication=ldap
In the preceding example, the authentication method for all nodes is changed
to LDAP pending.
All nodes and administrator IDs require new passwords after you run the
UPDATE command. Before the node and administrative IDs are given a password,
they are in the LDAP pending state. The node and administrator IDs are updated
to use LDAP authentication, but you must first give them a password.
Find the nodes that are authenticated with the LDAP directory server:
query node authentication=ldap
Find the administrator IDs that do not authenticate their passwords with an LDAP
directory server:
query admin authentication=local
You can query individual nodes or administrator IDs to determine whether they
authenticate with an LDAP directory server. To determine the password
authentication method for node tivnode_12 issue the following command:
query node tivnode_12 format=detailed
Issue the SET PASSEXP command to set the password expiration period for selected
administrator IDs or client nodes. You must specify the administrator ID or node
name with the ADMIN or NODE parameter in the SET PASSEXP command. If you set
the expiration period only for selected users, the expiration period can be 0 - 9999
days. A value of 0 means that user's password never expires.
Restriction: The SET PASSEXP command does not affect administrator IDs and
nodes if their passwords are authenticated with an LDAP directory server.
Issue the following command to set the expiration period of client node
node_tsm12 to 120 days:
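set passexp 120 node=node_tsm12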
| The Tivoli Storage Manager server administrator has a new node that must
| authenticate its password with an LDAP directory server. The first action is to
| create the cn=tsmdata entry and Base DN on the LDAP directory server. The
| server administrator can then set up the LDAPURL option that is based on the Base
| DN. Here is an example entry for the LDAPURL option:
| dsmserv.opt
| LDAPURL ldaps://mongo.storage.tucson.ibm.com:389/cn=tsmdata
| After you set the LDAPURL option, restart the server. Complete the following steps
| to configure the server:
| 1. Issue the query option ldapurl command to validate that you entered all of
| the values correctly.
| 2. Issue the set ldapuser uid=tsmserver,ou=Users,cn=aixdata command to
| configure the LDAPUSER.
| 3. Issue the SET LDAPPASSWORD adsm4Data command to define the password.
| 4. For this scenario, the node that must be added is NODE1. Issue the following
| command:
| register node node1 c0mplexPassw0rd authentication=ldap
| A single node (UPDNODE1) that currently authenticates with the Tivoli Storage
| Manager server is now required to authenticate with an LDAP directory server. For
| UPDNODE1, use the AUTHENTICATION parameter in the UPDATE NODE command. For
| example:
| update node updnode1 newC0mplexPW$ authentication=ldap
| If you want to update all your nodes to authenticate with an LDAP directory
| server, you can use a wildcard. Issue the following command to have all the nodes
| authenticate with an LDAP directory server:
| update node * authentication=ldap
| If you have nodes that authenticate with the Tivoli Storage Manager server and
| nodes that authenticate with an LDAP directory server, you can determine where
| nodes are authenticating. Issue the following command to determine which nodes
| authenticate with an LDAP directory server:
| query node authentication=ldap
| Issue the following command to determine which nodes authenticate with the
| Tivoli Storage Manager server:
| query node authentication=local
| You can issue a LOCK NODE command to lock all nodes that authenticate with the
| Tivoli Storage Manager server. These nodes might be rarely used, and you might
| not know by which password authentication method they are supposed to be
| managed. When you lock the nodes, the node owners must consult with you. At
| that point, you can find out whether they want to use the LDAP directory server
| or stay with the Tivoli Storage Manager server. You can issue the LOCK NODE or
| UNLOCK NODE commands with a wildcard to lock or unlock all nodes in that group.
| To lock all nodes that authenticate with the Tivoli Storage Manager server, issue
| the following command:
| lock node * authentication=local
| After you configure everything, you can design it so that every new node and
| administrator authenticates with an LDAP directory server. After you issue the SET
| DEFAULTAUTHENTICATION command, you do not have to designate the authentication
| method for any REGISTER NODE or REGISTER ADMIN commands. Issue the following
| command to set the default authentication method to LDAP:
| set defaultauthentication ldap
| Any REGISTER NODE or REGISTER ADMIN command that is issued after this SET
| DEFAULTAUTHENTICATION command inherits the authentication method (LDAP). If you
| want to register a node that authenticates with the Tivoli Storage Manager server,
| include AUTHENTICATION=LOCAL in the REGISTER NODE command.
On the Tivoli Storage Manager server, issue the SET INVALIDPWLIMIT command to
limit the invalid password attempts for the Tivoli Storage Manager namespace.
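For example, to allow a maximum of four consecutive invalid password attempts,
issue the following command:
set invalidpwlimit 4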
If you initially set a limit of 4 and then lower the limit, some clients might fail
verification during the next logon attempt.
After a client node is locked, only an administrator with storage authority can
unlock the node.
An administrator can also force a client to change their password on the next
logon by specifying the FORCEPWRESET=YES parameter on the UPDATE NODE or UPDATE
ADMIN command. For more information, see the Administrator's Reference.
Related tasks:
Locking and unlocking client nodes on page 444
Locking and unlocking administrator IDs from the server on page 902
This feature affects all node and administrator passwords, whether the password
authenticates with the Tivoli Storage Manager server or the LDAP directory server.
To set the minimum password length to eight characters, issue the following
example command:
set minpwlength 8
The default value at installation is 0. A value of 0 means that the password length
is not checked. You can set the length value from 0 to 64.
You can only disable password authentication for passwords that authenticate with
the Tivoli Storage Manager server (LOCAL).
To allow administrators and client nodes to access the Tivoli Storage Manager
server without entering a password, issue the following command:
set authentication off
Database backups, infrastructure setup files, and copies of client data can be stored
offsite, as shown in Figure 104.
On-site Off-site
Backup
Archive Database
Database and
backups
recovery log
Disk storage pool
Server
Infrastructure
setup files
DRM: The disaster recovery manager (DRM ) can automate some disaster recovery tasks. A
note like this one identifies those tasks.
Related concepts:
Requirements for a PowerHA cluster on page 1102
Related tasks:
Storage pool hierarchies on page 270
Related information:
Configuring clustered environments
DRM: To store database backup media and setup files offsite, you can use disaster recovery
manager.
Related tasks:
Chapter 35, Disaster recovery manager, on page 1029
Automatic backups by the database manager are based on the following values
that are set by Tivoli Storage Manager:
v The active log space that was used since the last backup, which triggers a full
database backup
v The active log utilization ratio, which triggers an incremental database backup
You can back up the database to tape, FILE, or to remote virtual volumes.
Reserve the device class that you want to use for backups so that the server does
not attempt to back up the database if a device is not available. If a database
backup shares a device class with a lower priority operation, such as reclamation,
and all the devices are in use, the lower priority operation is automatically
canceled. The canceled operation frees a device for the database backup.
Restriction: Tivoli Storage Manager does not support database backup (loading
and unloading) to a Centera device.
To specify the device class to be used for database backups, issue the SET
DBRECOVERY command. For example, to specify a device class named DBBACK,
issue the following command:
set dbrecovery dbback
Tips:
v When you issue the SET DBRECOVERY command, you can also specify the number
of concurrent data streams to use for the backup. Use the NUMSTREAMS
parameter. (See the example after these tips.)
v To change the device class, reissue the SET DBRECOVERY command.
v If you issue the BACKUP DB command with the TYPE=FULL parameter, and the
device class is not the one that is specified in the SET DBRECOVERY command, a
warning message is issued. However, the backup operation continues and is not
affected.
v Device class definitions are saved in the device configuration files.
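For example, assuming the DBBACK device class from the earlier example, the
following command sets the device class for automatic backups and requests two
concurrent data streams:
set dbrecovery dbback numstreams=2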
Related concepts:
Configuring concurrent multistreaming on page 920
Related tasks:
Protecting the device configuration file on page 926
By default, the percentage of the virtual address space that is dedicated to all
database manager processes is set to 70 to 80 percent of system random-access
memory.
To change this setting, specify the DBMEMPERCENT server option. Ensure that the
value that you specify provides adequate memory for applications other than
Tivoli Storage Manager that are running on the system.
Configuring concurrent multistreaming
Multiple, concurrent data streams reduce the time required to back up or restore
the database. You can specify the number of data streams that the IBM Tivoli
Storage Manager server uses for backup and restore operations.
For example, if you assign four drives to database backup processing, Tivoli
Storage Manager attempts to write to all four drives concurrently. For restore
operations, the server uses the information that is in the volume history file to
determine the number of data streams that were used during the backup
operation. The server attempts to use the same number of data streams during the
restore operation. For example, if the backup operation used four data streams, the
server attempts the restore operation using four data streams.
The number of drives that the server uses depends on the number of available
drives and the specified number of streams:
Backup operation
v If the number of available drives exceeds or equals the specified number of
  streams, the server uses the number of drives that is equal to the specified
  number of streams.
v If the number of available drives is less than the specified number of streams,
  the server uses all available drives.
Restore operation
v If the number of available drives exceeds or equals the number of streams that
  were used in the backup operation, the server uses the number of drives that is
  equal to that number of streams. A restore process never uses more drives than
  the number of streams that were used to back up the database.
v If the number of available drives is less than the number of streams that were
  used in the backup operation, the server uses all available drives. At least one
  drive is required for restore processing.
Suppose that you specify four data streams for database backup operations. To
indicate the maximum number of volumes that can be simultaneously mounted,
you specify 4 as the value of the MOUNTLIMIT parameter in the device class
definition. If only three drives are available at the time of the backup operation,
the operation runs using three drives. A message is issued that indicates that fewer
drives are being used for the backup operation than the number requested. If all
four drives for the device class are online, but one drive is in use by another
operation, the backup operation has a higher priority and preempts use of the
drive. If you specify four data streams, but the value of the MOUNTLIMIT parameter
is 2, only two streams are used.
Important: Although multiple, concurrent data streams can reduce the time that is
required for a backup operation, the amount of time that you can save depends on
the size of the database. In general, the benefit of using multiple, concurrent data
streams for database backup and restore operations is limited if the database is less
than 100 GB.
Another potential disadvantage is that more volumes are required for multistream
processing than for single-stream processing. For example, if the backup of an 850
GB database requires a single Linear Tape-Open (LTO) volume, switching to four
data streams requires four volumes. Furthermore, those volumes might be partially
filled, especially if you use high-capacity volumes and device compression.
The decision to use multiple, concurrent data streams for database backup and
restore operations depends on the size of the database, the cost of media, and
performance impacts.
When deciding whether to use data streaming, consider the following issues to
determine whether the benefits of concurrent data streaming are sufficient. If the
disadvantages of multiple, concurrent data streaming exceed the benefits, continue
to use single-stream processing.
v What is the size of your database? In general, the amount of time that you save
by using multiple, concurrent data streams decreases as the size of the database
decreases because of the extra time caused by additional tape mounts. If your
database is less than 100 GB, the amount of time that you save might be
relatively small.
In many environments with databases larger than 100 GB, two database-backup
streams can provide superior performance. However, depending on your
environment, additional streams might not provide enough I/O throughput
relative to the size of your database, the devices that you use, and the I/O
capability of your environment. Consider using three or four database-backup
streams only for environments in which the following conditions apply:
  - The Tivoli Storage Manager database is located on very high-performing disk
    subsystems.
  - The database is spread across several different RAID arrays that use multiple
    database directories.
v How many drives are available for the device class to be used for database
backup?
v Will server operations other than database backup operations compete for
drives?
v If drives are preempted by a database backup operation, what will be the effect
on server operations?
v What is the cost of the tape volumes that you use for database backup
operations? For example, suppose that the backup of an 850 GB database
requires a single high-capacity LTO volume. If you specify four streams, the
same backup operation requires four volumes.
You can specify multiple data streams for automatic or manual database-backup
operations. For database restore operations, the server attempts to use the same
number of data streams that you specified for the backup operation.
v For manual database-backup operations, issue the BACKUP DB command and
specify a value for the NUMSTREAMS parameter. The value of the NUMSTREAMS
parameter that you specify with the BACKUP DB command overrides the value for
the NUMSTREAMS parameter that you specify with the SET DBRECOVERY command.
For example, if you have a device class DBBACK, issue the following command
to specify three data streams:
backup db dbback numstreams=3
Tips:
v To change the number of data streams for automatic database backup
operations, reissue the SET DBRECOVERY command and specify a different value
for the NUMSTREAMS parameter. For example, reissue the SET DBRECOVERY
command if you add additional drives to the target library or if drives are not
available because of maintenance or device failure. The new value specified by
the NUMSTREAMS parameter is used for the next backup operation.
v To display the number of data streams that are to be used for a database backup
operation, issue the QUERY DB command.
v During a database backup operation, the number of sessions that is displayed in
the output of the QUERY SESSION command or the SELECT command is equal to or
less than the number of specified data streams. For example, if you specified
four data streams, but only three drives are online, 3 sessions are displayed in
the output. If you issue the QUERY DRIVE command, the number of drives in use
is also 3.
v If you reduce the number of data streams after a database backup operation, this
information will not be available to the server when the database is restored. To
specify fewer data streams for the restore operation, take one or both of the
following actions in the device configuration file:
  - Reduce the number of online and usable drive definitions by removing
    DEFINE DRIVE commands.
  - Update the value of the MOUNTLIMIT parameter of the DEFINE DEVCLASS
    command.
v During database backup operations, stop other Tivoli Storage Manager
database activities. Other database activities compete for database I/O and
affect throughput during database backup operations that use multiple
streams.
Ensure that you can recover the database to its most current state or to a specific
point-in-time by making both full and incremental database backups:
v To restore the database to its most current state, you need the last full backup,
the last incremental backup after that full backup, and the active and archive log
files.
Tivoli Storage Manager can make full and incremental database backups to tape
while the server is running and available to clients. However, when deciding what
backups to do and when to do them, consider the following properties of backups:
v Full backups take longer than incremental backups.
v Full backups have shorter recovery times than incremental backups because you
must load only one set of volumes to restore the entire database.
v Full backups are required for the first backup and after extending the database
size.
v Only full backups prune archive log space in the archive log directory. If the
available active and archive log space gets low, full database backups occur
automatically. To help prevent space problems, schedule regular full backups
frequently.
For a full database backup, specify TYPE=FULL. For an incremental database backup,
specify TYPE=INCREMENTAL. For example, to run a full database backup using a
device class LTOTAPE, three volumes, and three concurrent data streams, issue the
following command:
backup db devclass=ltotape type=full volumenames=vol1,vol2,vol3
numstreams=3
Database backups require devices, media, and time. Consider scheduling backups
at specific times of the day and after major storage operations.
To schedule database backups, use the DEFINE SCHEDULE command. For a full
database backup, specify TYPE=FULL. For an incremental database backup, specify
TYPE=INCREMENTAL. For example, to set up a schedule to run a full backup to device
class FILE every day at 1:00 a.m., enter the following command:
define schedule daily_backup type=administrative
cmd="backup db devclass=file type=full" starttime=01:00
Tip: You can also schedule a database backup schedule as part of a maintenance
script that you create in the Administration Center.
A snapshot database backup is a full database backup that does not interrupt the
full and incremental backup series. Consider using snapshot database backups in
addition to full and incremental backups.
To make a snapshot database backup, issue the BACKUP DB command. For example,
to make a snapshot database backup to the TAPECLASS device class, enter the
following command:
backup db type=dbsnapshot devclass=tapeclass
New volume history entries are created for the snapshot database volumes.
Restriction: To prevent the accidental loss of what might be the only way to recover
the server, you cannot delete the most current snapshot database backup by using the
DELETE VOLHISTORY command.
Related concepts:
Volume history file and volume reuse on page 81
Related tasks:
Protecting the volume history file on page 925
For protection against database and log media failures, place the active log and the
archive log in different file systems. In addition, mirror both logs. Mirroring
simultaneously writes data to two independent disks. For example, suppose that a
sudden power outage causes a partial page write. The active log is corrupted and
is not readable. Without mirroring, recovery operations cannot complete when the
server is restarted. However, if the active log is mirrored and a partial write is
detected, the log mirror can be used to construct valid images of the missing data.
To protect the active log, the archive log, and the archive failover log, take the
following steps:
v To specify the active log mirror, use the MIRRORLOGDIRECTORY parameter on the
DSMSERV FORMAT command. Mirror the active log in a file system that exists on a
different disk drive than the primary active log.
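For example, a new server instance might be formatted with a mirrored active log
by using a command like the following. The directory paths are examples only:
dsmserv format dbdir=/tsmdb001 activelogdirectory=/tsmlog
mirrorlogdirectory=/tsmlogmirror archlogdirectory=/tsmarchlog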
Tips:
v Consider mirroring the active log and the archive log if retention protection is
enabled. If a database restore is needed, you can restore the database to the
current point in time with no data loss.
v You can dynamically start or stop mirroring while Tivoli Storage Manager is
running.
v Despite its benefits, mirroring does not protect against a disaster or a hardware
failure that affects multiple drives or causes the loss of the entire system. In
addition, mirroring doubles the amount of disk space that is required for logs.
Mirroring also results in decreased performance.
Related concepts:
Active log on page 659
Archive log on page 660
Archive failover log on page 661
The following volume history is stored in the Tivoli Storage Manager database and
updated in the volume history files:
v Sequential-access storage-pool volumes that were added, reused through
reclamation or move data operations, or deleted during delete volume or
reclamation operations
v Full and incremental database-backup volumes
v Export volumes for administrator, node, policy, and server data
v Snapshot database-backup volumes
v Backup set volumes
To specify the file path and name for a volume history file, use the VOLUMEHISTORY
server option. To specify more than one path and name, use multiple
VOLUMEHISTORY entries. Tivoli Storage Manager stores duplicate volume histories in
all the files that are specified with VOLUMEHISTORY options. To find the required
volume-history information during a database restore operation, the server tries to
open volume history files in the order in which the VOLUMEHISTORY entries occur in
the server options file. If the server cannot read a file, the server tries to open the
next volume history file.
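For example, the following server options specify two volume history files. The
file paths are examples only:
volumehistory /tsminst1/volhist.out
volumehistory /offsite/volhist.out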
Ensure that volume history is protected by taking one or more of the following
steps:
v Store at least one copy of the volume history file offsite or on a disk separate
from the database.
v Store a printout of the file offsite.
v Store a copy of the file offsite with your database backups and device
configuration file.
v Store a remote copy of the file, for example, on an NFS-mounted file system.
Tip: To manually update the volume history file, you can use the BACKUP
VOLHISTORY command. Ensure that updates are complete by following these
guidelines:
v If you must halt the server, wait a few minutes after issuing the BACKUP
VOLHISTORY command.
v Specify multiple VOLUMEHISTORY options in the server options file.
v Review the volume history files to verify that the files were updated.
DRM: DRM saves a copy of the volume history file in its disaster recovery plan file.
Related tasks:
Deleting information about volume history on page 629
To specify the file path and name for a device configuration file, use the DEVCONFIG
server option. To specify more than one path and name, use multiple DEVCONFIG
entries. Tivoli Storage Manager stores duplicate device configuration information in
all the files that are specified with DEVCONFIG options.
To find the required device-configuration information during a database restore
operation, the server tries to open device configuration files in the order in which
the DEVCONFIG entries occur in the server options file. If the server cannot read a
file, the server tries to open the next device configuration file.
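For example, to keep duplicate device configuration files in two locations, the
server options file might contain entries like the following (the paths shown are
illustrative):
devconfig /tsmserv/config/devcnfg1.out
devconfig /backup/tsm/devcnfg2.out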
To ensure the availability of device configuration information, take one or more of
the steps that are described for the volume history file, such as storing a copy of
the file offsite or on a disk separate from the database.
Tips:
v To manually update the device configuration file, use the BACKUP DEVCONFIG
command. Ensure that updates are complete by following these guidelines:
If you must halt the server, wait a few minutes after issuing the BACKUP
DEVCONFIG command.
Specify multiple DEVCONFIG options in the server options file.
Review the device configuration files to verify that the files were updated.
If you are using automated tape libraries, volume location information is
saved in the device configuration file. The file is updated whenever CHECKIN
LIBVOLUME, CHECKOUT LIBVOLUME, and AUDIT LIBRARY commands are issued,
and the information is saved as comments (/*....*/). This information is used
during restore or load operations to locate a volume in an automated library.
If a disaster occurs, you might have to restore Tivoli Storage Manager with devices
that are not included in the device configuration file.
DRM: DRM automatically saves a copy of the device configuration file in its disaster
recovery plan file.
Related tasks:
Updating the device configuration file on page 951
To ensure the availability of the server options file, take one or more of the following
steps:
v Store at least one copy of the server options file offsite or on a disk separate
from the database.
v Store a printout of the file offsite.
v Store a copy of the file offsite with your database backups and device
configuration file.
v Store a remote copy of the file, for example, on an NFS-mounted file system.
DRM: DRM automatically saves a copy of the server options file in its disaster recovery
plan file.
Protecting information about the database and recovery logs
To restore the database, you need detailed information about the database and
recovery log. The recovery log includes the active log, the active log mirror, the
archive log, and the archive failover log. The recovery log contains records of
changes to the database.
You can determine the following information from the recovery log:
v The directory where the recovery log is located
v The amount of disk space required
If you lose the recovery log, you lose the changes that were made since the last
database backup.
DRM: DRM helps you save database and recovery log information.
The cert.kdb file includes the server's public key, which allows the client to
encrypt data. The digital certificate file cannot be stored in the server database
because the Global Security Kit (GSKit) requires a separate file in a certain format.
The cert256.arm file is generated by the V6.3 server for distribution to the V6.3
clients.
Keep backup copies of the cert.kdb and cert256.arm files in a secure location. If
both of the original files and any copies are lost or corrupted, you can generate a
new certificate file.
Attention: If client data object encryption is in use and the encryption key is not
available, data cannot be restored or retrieved under any circumstance. When
ENABLECLIENTENCRYPTKEY is used for encryption, the encryption key is stored
in the server database. This means that for objects that use this method, the server
database must exist and contain the correct entries for those objects before a
restore operation can succeed. Ensure that you back up the server database
frequently to prevent data loss.
For more information about encryption keys, see IBM Tivoli Storage Manager Using
the Application Programming Interface.
Related tasks:
Troubleshooting the certificate key database on page 890
You can use server-to-server communications to store copies of the recovery plan
on a remote target server, in addition to traditional disk-based files. Storing
recovery plan files on a target server provides the following advantages:
v A central repository for recovery plan files
v Automatic expiration of plan files
v Query capabilities for displaying information about plan files and their contents
v Fast retrieval of a recovery plan file if a disaster occurs
You can also store the recovery plan locally, on CD, or in print.
DRM: DRM can query the server and generate a detailed recovery plan for your
installation.
Related tasks:
Storing the disaster recovery plan locally on page 1041
Storing the disaster recovery plan on a target server on page 1041
Related reference:
The disaster recovery plan file on page 1068
A typical Tivoli Storage Manager configuration includes a primary disk pool and
primary tape pool for data backup. Copy storage pools contain active and inactive
versions of data that is backed up from primary storage pools. Figure 105 on page
930 shows a configuration with an onsite FILE-type active-data pool and an offsite
copy storage pool.
[Figure 105 (page 930): backup, archive, and HSM data in server storage, with an
onsite FILE-type active-data pool that contains active backup data only and an
offsite copy storage pool]
Related concepts:
Active-data pools on page 251
Copy storage pools on page 251
Primary storage pools on page 250
Related tasks:
Storage pool hierarchies on page 270
Tip: Backing up storage pools requires an additional 200 bytes of space in the
database for each file copy. As more files are added to the copy storage pools and
active-data pools, reevaluate your database size requirements.
Each of the commands in the following examples uses four parallel processes
(MAXPROCESS=4) to perform an incremental backup of the primary storage pool
to the copy storage pool or a copy to the active-data pool. Set the MAXPROCESS
parameter in the BACKUP STGPOOL command to the number of mount points or
drives that can be dedicated to this operation.
v To back up data in a primary storage pool to a copy storage pool, use the BACKUP
STGPOOL command. For example, to back up a primary storage pool named
ARCHIVEPOOL to a copy storage pool named DISASTER-RECOVERY, issue the
following command:
backup stgpool archivepool disaster-recovery maxprocess=4
The only files backed up to the DISASTER-RECOVERY pool are files for which a
copy does not exist in the copy storage pool. The data format of the copy
storage pool and the primary storage pool can be NATIVE, NONBLOCK, or the
NDMP formats NETAPPDUMP, CELERRADUMP, or NDMPDUMP. The server
copies data from the primary storage pool only to a copy storage pool that has
the same format.
Tip: To further minimize the potential loss of data, you can mark the backup
volumes in the copy storage pool as OFFSITE and move them to an offsite
location. In this way, the backup volumes are preserved and are not reused or
mounted until they are brought on-site. Ensure that you mark the volumes as
OFFSITE before you back up the database. To avoid having to mark volumes as
offsite or physically move volumes:
Specify a device class of SERVER in your database backup.
Back up a primary storage pool to a copy storage pool that is associated with a
device class of SERVER.
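For example, assuming that the full volumes in the DISASTER-RECOVERY copy
storage pool have already been taken to the vault, you might mark them as offsite
with a command similar to the following (the location text is illustrative):
update volume * access=offsite location="offsite vault"
wherestgpool=disaster-recovery wherestatus=full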
v To copy active data, use the COPY ACTIVEDATA command. For example, to copy
active data from a primary storage pool named BACKUPPOOL to an active-data
pool named CLIENT-RESTORE, issue the following command:
copy activedata backuppool client-restore maxprocess=4
The primary storage pool must have a data format of NATIVE or NONBLOCK.
Copies from primary storage pools with any of the NDMP formats are not
permitted. The only files copied to the CLIENT-RESTORE pool are active backup
files for which a copy does not exist in the active-data pool.
Because backups and active-data copies are made incrementally, you can cancel the
processes. If you reissue the BACKUP STGPOOL or COPY ACTIVEDATA command, the
backup or active-data copy continues from the point at which the process was
canceled.
Restrictions:
v If a backup is to be made to a copy storage pool and the file exists with the
same insertion date, no action is taken. Similarly, if a copy is to be made to an
active-data pool and the file exists with the same insertion date, no action is
taken.
v When a disk storage pool is backed up, cached files (copies of files that remain
on disk after being migrated to the next storage pool) are not backed up.
v Files in a copy storage pool or an active-data pool do not migrate to another
storage pool.
v After a file is backed up to a copy storage pool or a copy is made to an
active-data pool, the file might be deleted from the primary storage pool. When
an incremental backup of the primary storage pool occurs, the file is then
deleted from the copy storage pool. Inactive files in active-data pools are deleted
during the process of reclamation. If an aggregate being copied to an active-data
pool contains some inactive files, the aggregate is reconstructed into a new
aggregate without the inactive files.
Related concepts:
Active-data pools on page 251
Copy storage pools on page 251
Primary storage pools on page 250
Securing sensitive client data on page 541
Related tasks:
Backing up the data in a storage hierarchy on page 275
Chapter 20, Automating server operations, on page 633
Create a schedule for backing up two primary storage pools to the same copy
storage pool.
Assume that you have two primary storage pools: one random access storage pool
(DISKPOOL) and one tape storage pool (TAPEPOOL, with device class
TAPECLASS). Files stored in DISKPOOL are migrated to TAPEPOOL. You want to
back up the files in both primary storage pools to a copy storage pool.
1. Define a copy storage pool named COPYPOOL, using the same device class as
TAPEPOOL and allowing scratch volumes.
Note:
a. Because scratch volumes are allowed in this copy storage pool, you do not
need to define volumes for the pool.
b. All storage volumes in COPYPOOL are located onsite.
2. Perform the initial backup of the primary storage pools by issuing the
following commands:
backup stgpool diskpool copypool maxprocess=2
backup stgpool tapepool copypool maxprocess=2
3. Define schedules to automatically run the commands for backing up the
primary storage pools. The commands to schedule are those that you issued in
step 2.
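For example, a sketch of administrative schedules for these backups might look
like the following (the schedule names, start times, and period are illustrative):
define schedule backup_diskpool type=administrative
cmd="backup stgpool diskpool copypool maxprocess=2" active=yes starttime=20:00 period=1
define schedule backup_tapepool type=administrative
cmd="backup stgpool tapepool copypool maxprocess=2" active=yes starttime=21:00 period=1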
Tips:
v To minimize tape mounts, you can take one or both of the following steps:
Back up the disk storage pool first, then the tape storage pool.
If you schedule storage pool backups and migrations and have enough disk
storage, back up or copy as many files as possible from the disk storage pool
to copy storage pools and active-data pools. After the backup and copy
operations are complete, migrate the files from the disk storage pools to
primary tape storage pools.
v If you have active-data pools, you can schedule the COPY ACTIVEDATA command
to copy the active data that is in primary storage pools to the active-data pools.
Performing a storage pool backup for data stored in a Centera storage pool is not
supported. To ensure the safety of the data, therefore, consider using the
replication feature of the Centera storage device.
With this feature, you can copy data to a replication Centera storage device at a
different location. If the data in the primary Centera storage pool becomes
unavailable, you can access the replication Centera storage device by specifying its
IP address using the HLADDRESS parameter on the UPDATE DEVCLASS command for
the device class pointed to by the Centera storage pool. After the primary Centera
storage device is re-established, you can issue the UPDATE DEVCLASS command again
and change the value of the HLADDRESS parameter to point back to the primary
Centera storage device. You must restart the server each time you update the
HLADDRESS parameter on the UPDATE DEVCLASS command.
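For example, assuming a device class named CENTERACLASS and illustrative IP
addresses, you might switch to the replication device, and later back to the
primary device, with commands similar to the following (restart the server after
each update):
update devclass centeraclass hladdress=10.0.1.50
update devclass centeraclass hladdress=10.0.1.10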
Related concepts:
Files on sequential volumes (CENTERA) on page 49
You can also enable the simultaneous-write function so that active client backup
data is written to active-data pools at the same time it is written to the primary
storage pool. The active-data pools must be specified in the definition of the
primary storage pool, and the clients whose active data is to be saved must be
members of a policy domain that specifies the active-data pool as the destination
for active backup data.
Delaying reuse of volumes for recovery purposes
When you define or update a sequential access storage pool, you can use the
REUSEDELAY parameter. This parameter specifies the number of days that must
elapse before a volume can be reused or returned to scratch status after all files are
expired, deleted, or moved from the volume.
When you delay reuse of such volumes and they no longer contain any files, they
enter the pending state. Volumes remain in the pending state for the time that is
specified with the REUSEDELAY parameter for the storage pool to which the volume
belongs.
Delaying reuse of volumes can be helpful under certain conditions for disaster
recovery. When files are expired, deleted, or moved from a volume, they are not
erased from the volumes: The database references to these files are removed. Thus
the file data might still exist on sequential volumes if the volumes are not
immediately reused.
A disaster might force you to restore the database using a database backup that is
not the most recent backup. In this case, some files might not be recoverable
because the server cannot find them on current volumes. However, the files might
exist on volumes that are in pending state. You might be able to use the volumes
in pending state to recover data by doing the following steps:
1. Restore the database to a point-in-time before file expiration.
2. Use a primary, copy-storage, or active-data pool volume that is not rewritten
and that contains the expired file at the time of database backup.
If you back up your primary storage pools, set the REUSEDELAY parameter for the
primary storage pools to 0 to efficiently reuse primary scratch volumes. For your
copy storage pools and active-data pools, delay the reuse of volumes for as long as
you keep your oldest database backup.
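For example, if you keep your oldest database backup for 30 days, you might set
the delay for a copy storage pool as follows (the pool name and value are
illustrative):
update stgpool copypool reusedelay=30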
Related tasks:
Scenario: Protecting the database and storage pools on page 944
Related reference:
Running expiration processing to delete expired files on page 514
Use this section to help you audit storage pool volumes for data integrity.
To ensure that all files are accessible on volumes in a storage pool, audit any
volumes you suspect might have problems by using the AUDIT VOLUME command.
You have the option of auditing multiple volumes by using a time range as the
criterion, or auditing all volumes in a storage pool.
If a storage pool has data validation enabled, run an audit for the volumes in the
storage pool to have the server validate the data.
Note: If Tivoli Storage Manager detects a damaged file on a Centera volume, then
a command is sent to Centera to delete the file. If Centera is unable to delete the
file because the retention period for the file has not expired, then the volume that
contains the file is not deleted.
To display the results of a volume audit after it completes, use the QUERY ACTLOG
command.
Related tasks:
Requesting information from the activity log on page 804
During the auditing process, the server performs the following actions:
v Sends informational messages about processing to the server console.
v Prevents new files from being written to the volume.
v Generates a cyclic redundancy check, if data validation is enabled for the storage
pool.
You can specify whether you want the server to correct the database if
inconsistencies are detected. Tivoli Storage Manager corrects the database by
deleting database records that refer to files on the volume that cannot be accessed.
The default is to report inconsistencies that are found (files that cannot be
accessed), but to not correct the errors.
If files with read errors are detected, their handling depends on the following
conditions:
v The type of storage pool to which the volume is assigned
v The FIX parameter on the AUDIT VOLUME command
v The location of file copies (whether a copy of the file exists in a copy storage
pool)
Errors in an audit of a primary storage pool volume:
When a volume in a primary storage pool is audited, the setting of the FIX
parameter determines how errors are handled.
The FIX parameter on an AUDIT VOLUME command can have the following effects:
FIX=NO
The server reports, but does not delete, any database records that refer to
files found with logical inconsistencies. If the AUDIT VOLUME command
detects a read error in a file, the file is marked as damaged in the database.
You can do one of the following actions:
v If a backup copy of the file is stored in a copy storage pool, you can
restore the file by using the RESTORE VOLUME or RESTORE STGPOOL
command.
v If the file is a cached copy, you can delete references to the file on this
volume by using the AUDIT VOLUME command again. Specify FIX=YES.
If the AUDIT VOLUME command does not detect a read error in a damaged
file, the file state is reset, and the file can be used. For example, if a dirty
tape head caused some files to be marked damaged, you can clean the
head and then audit the volume to make the files accessible again.
FIX=YES
Any inconsistencies are fixed as they are detected.
If the AUDIT VOLUME command detects a read error in a file:
v If the file is not a cached copy and a backup copy is stored in a copy
storage pool, the file is marked as damaged in the database. The file can
then be restored using the RESTORE VOLUME or RESTORE STGPOOL
command.
v If the file is not a cached copy and a backup copy is not stored in a copy
storage pool, all database records that refer to the file are deleted.
v If the file is a cached copy, the database records that refer to the cached
file are deleted. The primary file is stored on another volume.
If the AUDIT VOLUME command does not detect a read error in a damaged
file, the file state is reset, and the file can be used. For example, if a dirty
tape head caused some files to be marked damaged, you can clean the
head and then audit the volume to make the files accessible again.
Errors in an audit of copy storage pool volumes:
When a volume in a copy storage pool is audited, the setting of the FIX parameter
determines how errors are handled.
The FIX parameter on an AUDIT VOLUME command can have the following effects:
FIX=NO
The server reports the error and marks the file copy as damaged in the
database.
FIX=YES
The server deletes references to the file on the audited volume from the
database.
Errors in an audit of active-data storage pool volumes:
When a volume in an active-data storage pool is audited, the setting of the FIX
parameter determines how errors are handled.
The FIX parameter on an AUDIT VOLUME command can have the following effects:
FIX=NO
The server reports the error and marks the file copy as damaged in the
database.
FIX=YES
The server deletes references to the file on the audited volume from the
database. The physical file is deleted from the active-data pool.
When auditing a volume in an active-data pool, the server skips inactive files in
aggregates that were removed by reclamation. These files are not reported as
skipped or marked as damaged.
Data validation is helpful if you introduce new hardware devices. The validation
assures that the data is not corrupted as it moves through the hardware, and then
is written to the volume in the storage pool. You can use the DEFINE STGPOOL or
UPDATE STGPOOL commands to enable data validation for storage pools.
When you enable data validation for an existing storage pool, the server validates
data that is written from that time forward. The server does not validate existing
data which was written to the storage pool before data validation was enabled.
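For example, to turn data validation on for an existing primary tape storage pool,
and to turn it off again later if the hardware proves stable, you might issue
commands similar to the following (the pool name is illustrative):
update stgpool tapepool crcdata=yes
update stgpool tapepool crcdata=no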
When data validation is enabled for storage pools, the server generates a cyclic
redundancy check (CRC) value and stores it with the data when it is written to the
storage pool. The server validates the data when it audits the volume, by
generating a cyclic redundancy check and comparing this value with the CRC
value stored with the data. If the CRC values do not match, then the server
processes the volume in the same manner as a standard audit volume operation.
This process can depend on the following conditions:
v The type of storage pool to which the volume is assigned
v The FIX parameter of the AUDIT VOLUME command
v The location of file copies (whether a copy of the file exists in a copy storage
pool or an active-data pool)
Check the activity log for details about the audit operation.
The server removes the CRC values before it returns the data to the client node.
Related reference:
Errors in an audit of active-data storage pool volumes
Errors in an audit of copy storage pool volumes on page 936
Errors in an audit of a primary storage pool volume on page 936
Choosing when to enable data validation:
Data validation is available for nodes and storage pools. The forms of validation
are independent of each other.
[Figure 106: data validation points 1 through 5 in the flow of data between the
Tivoli Storage Manager client, the storage agent, the Tivoli Storage Manager
server, and the storage pool]
Table 87 provides information that relates to Figure 106. This information explains
the type of data being transferred and the appropriate command to issue.
Table 87. Setting data validation
Numbers in Figure 106 | Where to set data validation | Type of data transferred | Command | Command parameter setting
1 | Node definition | File data and metadata | See note | See note
2 | Node definition | File data and metadata | REGISTER NODE, UPDATE NODE | VALIDATEPROTOCOL=ALL or VALIDATEPROTOCOL=DATAONLY
3 | Server definition (storage agent only) | Metadata | DEFINE SERVER, UPDATE SERVER | VALIDATEPROTOCOL=ALL
Note: The storage agent reads the VALIDATEPROTOCOL setting for the client from the
Tivoli Storage Manager server.
Figure 107 is similar to the previous figure; however, note that the top section
encompassing 1, 2, and 3 is shaded. All three of these data validations are
related to the VALIDATEPROTOCOL parameter. What is significant about this validation
is that it is active only during the client session. After validation, the client and
server discard the CRC values generated in the current session. This is in contrast
to storage pool validation, 4 and 5, which is always active when the storage
pool CRCDATA setting is YES.
The validation of data transfer between the storage pool and the storage agent 4
is managed by the storage pool CRCDATA setting defined by the Tivoli Storage
Manager server. Even though the flow of data is between the storage agent and the
storage pool, data validation is determined by the storage pool definition.
Therefore, if you always want your storage pool data validated, set your primary
storage pool CRCDATA setting to YES.
Figure 107. Protocol data validation versus storage pool data validation
If the network is unstable, you might decide to enable only data validation for
nodes. Tivoli Storage Manager generates a cyclic redundancy check when the data
is sent over the network to the server. Certain nodes might have more critical data
than others and might require the assurance of data validation. When you identify
the nodes that require data validation, you can choose to have only the user's data
validated or all the data validated. Tivoli Storage Manager validates both the file
data and the file metadata when you choose to validate all data.
If the network is fairly stable but your site is perhaps using new hardware devices,
you might decide to enable only data validation for storage pools. When the server
sends data to the storage pool, the server generates cyclic redundancy checking,
and stores the CRC value with the data. The server validates the CRC value when
the server audits the volume. Later, you might decide that data validation for
storage pools is no longer required after the devices prove to be stable.
Related tasks:
Using virtual volumes to store data on another server on page 737
Auditing storage pool volumes on page 934
Related reference:
Validating a node's data during a client session on page 538
Consider the impact on performance when you decide whether data validation is
necessary for storage pools. This method of validation is independent of validating
data during a client session with the server. When you choose to validate storage
pool data, there is no performance impact on the client.
If you enable CRC for storage pools on devices that later prove to be stable, you
can increase performance by updating the storage pool definition to disable data
validation.
Use the AUDIT VOLUME command to specify an audit for data written to volumes
within a range of days, or to run an audit for a storage pool.
You can manage when the validation of data in storage pools occurs by scheduling
the audit volume operation. You can choose a method suitable to your
environment, for example:
v Select volumes at random to audit. A random selection does not require
significant resources or cause much contention for server resources but can
provide assurance that the data is valid.
v Schedule a daily audit of all volumes written in the last day. This method
validates data written to a storage pool on a daily basis.
To display the results of a volume audit after it completes, you can issue the QUERY
ACTLOG command.
To specify that only summary messages for /dev/vol1 are sent to the activity log
and server console, issue the following command:
audit volume /dev/vol1 quiet=yes
The audit volume process is run in the background and the server returns the
following message:
ANR2313I Audit Volume NOFIX process started for volume /dev/vol1
(process id 4).
To view the status of the audit volume process, issue the following command:
query process 4
The server then begins the audit process with the first volume on which the first
file is stored. For example, Figure 108 shows five volumes defined to ENGBACK2.
In this example, File A spans VOL1 and VOL2, and File D spans VOL2, VOL3,
VOL4, and VOL5.
[Figure 108: files A through E stored on volumes VOL1 through VOL5 in storage pool
ENGBACK2; File A spans VOL1 and VOL2, and File D spans VOL2, VOL3, VOL4, and VOL5]
If you request that the server audit volume VOL3, the server first accesses volume
VOL2, because File D begins at VOL2. When volume VOL2 is accessed, the server
only audits File D. It does not audit the other files on this volume.
Because File D spans multiple volumes, the server accesses volumes VOL2, VOL3,
VOL4, and VOL5 to ensure that there are no inconsistencies between the database
and the storage pool volumes.
For volumes that require manual mount and demount operations, the audit
process can require significant manual intervention.
The SKIPPARTIAL=YES option, which causes the server to skip files that span
multiple volumes, is useful when the volume you want to audit contains part of a file,
the rest of which is stored on a different, damaged volume. For example, to audit
only volume VOL5 in the example in Figure 108 on page 941 and have the server
fix any inconsistencies found between the database and the storage volume, enter:
audit volume vol5 fix=yes skippartial=yes
When you use the parameters FROMDATE, TODATE, or both, the server limits the audit
to only the sequential media volumes that meet the date criteria, and automatically
includes all online disk volumes. When you include the STGPOOL parameter you
limit the number of volumes that might include disk volumes.
Issue the AUDIT VOLUME command with the FROMDATE and TODATE parameters.
For example, to audit the volumes in storage pool BKPPOOL1 that were written
from March 20, 2002 to March 22, 2002, issue the following command:
audit volume stgpool=bkppool1 fromdate=03/20/2002 todate=03/22/2002
The server audits all volumes that were written to starting at 12:00:01 a.m. on
March 20 and ending at 11:59:59 p.m. on March 22, 2002.
For example, you can audit the volumes in storage pool BKPPOOL1 by issuing the
following command:
audit volume stgpool=bkppool1
For example, if your critical users store data in storage pool STGPOOL3 and you
want all volumes in the storage pool audited every two days at 9:00 p.m., issue the
following command:
define schedule crcstg1 type=administrative
cmd="audit volume stgpool=stgpool3" active=yes starttime=21:00 period=2
A data error, which results in a file being unreadable, can be caused by such things
as a tape deteriorating or being overwritten or by a drive needing cleaning. If a
data error is detected when a client tries to restore, retrieve, or recall a file or
during a volume audit, the file is marked as damaged. If the same file is stored in
other copy storage pools or active-data pools, the status of those file copies is not
changed.
If files are marked as damaged, you cannot perform the following operations on them:
v Restore, retrieve, or recall the files
v Move the files by migration, reclamation, or the MOVE DATA command
v Back up during a BACKUP STGPOOL operation if the primary file is damaged
v Restore during a RESTORE STGPOOL or RESTORE VOLUME operation if the backup
copy in a copy storage pool or active-data pool volume is damaged
v Migrate or reclaim during migration and reclamation
To maintain the data integrity of user files, you can perform the following steps:
1. Detect damaged files before the users do. The AUDIT VOLUME command marks a
file as damaged if a read error is detected for the file. If an undamaged copy is
in an on-site copy storage pool or an active-data pool volume, it is used to
provide client access to the file.
2. Reset the damaged status of files if the error that caused the change to
damaged status was temporary. You can use the AUDIT VOLUME command to
correct situations when files are marked damaged due to a temporary hardware
problem, such as a dirty tape head. The server resets the damaged status of
files if the volume in which the files are stored is audited and no read errors
are detected.
3. Correct files that are marked as damaged. If a primary file copy is marked as
damaged and a usable copy exists in a copy storage pool or an active-data pool
volume, the primary file can be corrected using the RESTORE VOLUME or RESTORE
STGPOOL command.
4. Regularly run commands to identify files that are marked as damaged:
v The RESTORE STGPOOL command displays the name of each volume in the
restored storage pool that contains one or more damaged primary files. Use
this command with the preview option to identify primary volumes with
damaged files without actually performing the restore operation.
v The QUERY CONTENT command with the DAMAGED parameter displays damaged
files on a specific volume.
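For example, assuming a primary storage pool named TAPEPOOL and a volume
named /dev/vol1, you might preview damaged primary files and then list the
damaged files on a specific volume with commands similar to the following:
restore stgpool tapepool preview=yes
query content /dev/vol1 damaged=yes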
Related tasks:
Data validation during audit volume processing on page 937
Restoring damaged files on page 944
Restoring damaged files
If you use copy storage pools, you can restore damaged client files. You can also
check storage pools for damaged files and restore the files.
This section explains how to restore damaged files based on the scenario in
Example: Scheduling a backup with one copy storage pool on page 932.
If a client tries to access a file stored in TAPEPOOL and a read error occurs, the file
in TAPEPOOL is automatically marked as damaged. Future accesses to the file
automatically use the copy in COPYPOOL as long as the copy in TAPEPOOL is
marked as damaged.
To restore any damaged files in TAPEPOOL, you can define a schedule that issues
the following command periodically:
restore stgpool tapepool
You can check for and replace any files that develop data-integrity problems in
TAPEPOOL or in COPYPOOL. For example, every three months, query the
volumes in TAPEPOOL and COPYPOOL by entering the following commands:
query volume stgpool=tapepool
query volume stgpool=copypool
Then issue the following command for each volume in TAPEPOOL and
COPYPOOL:
audit volume <volname> fix=yes
If a read error occurs on a file in TAPEPOOL, that file is marked damaged and an
error message is produced. If a read error occurs on a file in COPYPOOL, that file is
deleted and a message is produced.
This scenario assumes a storage hierarchy that consists of the following storage
pools:
v Default random-access storage pools named BACKUPPOOL, ARCHIVEPOOL,
and SPACEMGPOOL
v A tape storage pool named TAPEPOOL
To provide extra levels of protection for client data, the scenario also specifies an
offsite copy storage pool and an onsite active-data pool.
The standard procedures for the company include the following activities:
c. Back up the database by using the BACKUP DB command. For example, issue
the following command:
backup db type=incremental devclass=tapeclass scratch=yes
| Restriction: Do not run the MOVE DRMEDIA and BACKUP STGPOOL or BACKUP DB
| commands concurrently. Ensure that the storage pool backup processes are
| complete before you issue the MOVE DRMEDIA command.
5. Perform the following operations nightly after the scheduled operations
complete:
a. Back up the volume history and device configuration files. If they change,
back up the server options files and the database and recovery log setup
information.
b. Move the copy storage pool volumes marked offsite, the database backup
volumes, volume history files, device configuration files, server options
files, and the database and recovery log setup information to the offsite
location.
c. Identify offsite volumes that must be returned onsite. For example, issue the
following command:
query volume stgpool=disaster-recovery access=offsite status=empty
For database restore operations, the Tivoli Storage Manager server reads the
information that is in the volume history file to determine the number of data
streams to read. The server attempts to match the number of streams that were
used during the backup operation. For example, if the backup operation used four
streams, the Tivoli Storage Manager server attempts the restore operation using
four streams.
If you reduce the number of data streams after a database backup operation, this
information will not be available to the server when the database is restored. To
specify fewer data streams for the restore operation, take one or both of the
following actions in the device configuration file:
To restore a database to a point in time, you need the latest full backup before the
point in time. You also need the latest incremental backup after that last full
backup. You can also use snapshot database backups to restore a database to a
specific point in time.
Before restoring the database, have available the following infrastructure setup
files:
v Server options file
v Volume history file:
Copy the volume history file pointed to by the server options file. The backup
copy must have a different name. If the restore fails and you must try it again, you
might need the backup copy of the volume history file. After the database is
restored, any volume history information pointed to by the server options is lost.
This information is required to identify the volumes to be audited.
If your old volume history file shows that any of the copy storage pool volumes
that are required to restore your storage pools were reused (STGREUSE) or
deleted (STGDELETE), you might not be able to restore all your files. You can
avoid this problem by including the REUSEDELAY parameter when you define
your copy storage pools.
v Device configuration file:
You might need to modify the device configuration file based on the hardware
available at the recovery site. For example, the recovery site might require
different device class, library, and drive definitions.
v Detailed query output about the database and recovery log
If files were migrated, reclaimed, or moved after a backup, the files might be lost
and the space occupied by those files might be reused. You can minimize this loss
by using the REUSEDELAY parameter when defining or updating sequential-access
storage pools. This parameter delays volumes from being returned to scratch or
being reused.
Similarly, the volume inventories for Tivoli Storage Manager and for any
automated libraries might also be inconsistent. Issue the AUDIT LIBRARY command
to synchronize these inventories.
Related tasks:
Updating the device configuration file on page 951
Restoring to a point-in-time in a shared library environment on page 959
Delaying reuse of volumes for recovery purposes on page 934
You can use full and incremental backups to restore a database to its most current
state. Snapshot database backups are complete database copies of a point in time.
You can restore a database to its most current state if the last backup series that
was created for the database is available. A backup series consists of a full backup,
the latest incremental backup, and all active and archive logs for database changes
since the last backup in the series was run.
Attention: Recovering the database to its most current state is not possible if the
active or archive logs are lost.
To restore a database to its most current state, issue the DSMSERV RESTORE DB
command. For example:
dsmserv restore db
If the original database and recovery log directories are available, use the DSMSERV
RESTORE DB utility to restore the database. However, if the database and recovery
log directories are lost, recreate them first, and then issue the DSMSERV RESTORE DB
utility.
In a Tivoli Storage Manager shared library environment, the server that manages
and controls the shared library is known as the library manager. The library
manager maintains a database of the volumes within the shared library.
3. Gather the outputs from your detailed queries about your database and
recovery log setup information.
4. Determine whether the original database and recovery log directories exist. If
the original database or recovery log directories were lost, recreate them using
the operating system mkdir command.
Note: The directories must have the same name as the original directories.
5. Use the DSMSERV RESTORE DB utility to restore the database to the current time.
6. Start the Tivoli Storage Manager server instance.
7. Issue an AUDIT LIBRARY command from each library client for each shared
library.
8. Create a list from the old volume history information (generated by the QUERY
VOLHISTORY command) that shows all of the volumes that were reused
(STGREUSE), added (STGNEW), and deleted (STGDELETE) since the original
backup. Use this list to perform the rest of this procedure.
9. Audit all disk volumes, all reused volumes, and any deleted volumes by using
the AUDIT VOLUME command with the FIX=YES parameter.
10. Issue the RESTORE STGPOOL command to restore those files detected as
damaged by the audit. Include the FIX=YES parameter on the AUDIT VOLUME
command to delete database entries for files not found in the copy storage
pool or active-data pool.
11. Mark any volumes that cannot be located as destroyed, and recover those
volumes from copy storage pool backups. Recovery from active-data pool
volumes is not suggested unless the loss of inactive data is acceptable. If no
backups are available, delete the volumes from the database by using the
DELETE VOLUME command with the DISCARDDATA=YES parameter.
12. Redefine any storage pool volumes that were added since the database
backup.
In a Tivoli Storage Manager shared library environment, the servers that share a
library and rely on a library manager to coordinate and manage the library usage
are known as library clients. Each library client maintains a database of volume
usage and volume history. If the database of the library client becomes corrupted,
it might be restored by following these steps:
1. Copy the volume history file to a temporary location and rename the file.
After the database is restored, any volume history information that is pointed
to by the server options is lost. You need this information to identify the
volumes to be audited.
2. Put the device configuration file and the server options file in the server
working directory. You can no longer recreate the device configuration file;
you must have a copy of the original.
Note: The directories must have the same name as the original directories.
5. Use the DSMSERV RESTORE DB utility to restore the database to the current time.
6. Create a list from the old volume history information (generated by the QUERY
VOLHISTORY command) that shows all of the volumes that were reused
(STGREUSE), added (STGNEW), and deleted (STGDELETE) since the original
backup. Use this list to perform the rest of this procedure.
7. Audit all disk volumes, all reused volumes, and any deleted volumes by using
the AUDIT VOLUME command with the FIX=YES parameter.
8. Issue the RESTORE STGPOOL command to restore those files detected as
damaged by the audit. Include the FIX=YES parameter on the AUDIT VOLUME
command to delete database entries for files not found in the copy storage
pool.
9. Mark any volumes that cannot be located as destroyed, and recover those
volumes from copy storage pool backups. If no backups are available, delete
the volumes from the database by using the DELETE VOLUME command with the
DISCARDDATA=YES parameter.
10. Issue the AUDIT LIBRARY command for all shared libraries on this library client.
11. Redefine any storage pool volumes that were added since the database
backup.
If this occurs, you must update the device configuration files manually with
information about the new devices. Whenever you define, update, or delete device
information in the database, the device configuration file is automatically updated.
This information includes definitions for device classes, libraries, drives, and
servers.
For virtual volumes, the device configuration file stores the password (in encrypted
form) for connecting to the remote server. If you regressed the server to an earlier
point-in-time, this password might not match what the remote server expects. In
this case, manually set the password in the device configuration file. Then ensure
that the password on the remote server matches the password in the device
configuration file.
Note: Set the password in clear text. After the server is operational again, you can
issue a BACKUP DEVCONFIG command to store the password in encrypted form.
Related tasks:
Recovering with different hardware at the recovery site on page 1060
Automated SCSI library at the original and recovery sites on page 1060
Related reference:
Automated SCSI library at the original site and a manual SCSI library at the
recovery site
The RESTORE STGPOOL command restores specified primary storage pools that have
files with the following problems:
v The primary copy of the file had read errors during a previous operation. Files
with read errors are marked as damaged.
v The primary copy of the file is on a volume that has an access mode of
DESTROYED.
v The primary file is in a storage pool that is UNAVAILABLE, and the operation is
for restore, retrieve, or recall of files to a user, or export of file data.
Restrictions:
v Cached copies of files in a disk storage pool are never restored. References to
any cached files that were identified with read errors, or that are stored on a
destroyed volume, are removed from the database during restore processing.
v Restoring from an active-data pool might cause some or all inactive files to be
deleted from the database if the server determines that an inactive file needs to
be replaced but cannot find it in the active-data pool. Do not consider
active-data pools for recovery of a primary pool unless the loss of inactive data
is acceptable.
v You cannot restore a storage pool defined with a CENTERA device class.
Restore processing copies files from a copy storage pool or an active-data pool
onto new primary storage pool volumes.
After the files are restored, the old references to these files in the primary storage
pool are deleted from the database. Tivoli Storage Manager locates these files on
the volumes to which they were restored, rather than on the volumes on which
they were previously stored. If a destroyed volume becomes empty because all
files were restored to other locations, the destroyed volume is automatically
deleted from the database.
To restore a storage pool, use the RESTORE STGPOOL command. To identify volumes
that contain damaged primary files, use the PREVIEW=YES parameter. During
restore processing, a message is issued for every volume in the restored storage
pool that contains damaged, noncached files. To identify the specific files that are
damaged on these volumes, use the QUERY CONTENT command.
DRM: DRM can help you track your on-site and offsite primary and copy storage pool
volumes. DRM can also query the server and generate a current, detailed disaster recovery
plan for your installation.
Related tasks:
Fixing damaged files on page 943
This process preserves the collocation of client files. However, if the copy storage
pool or active-data pool being used to restore files does not have collocation
enabled, restore processing can be slow.
If you need to use a copy storage pool or an active-data pool that is not collocated
to restore files to a primary storage pool that is collocated, you can improve
performance by completing the following steps:
1. Restore the files first to a random access storage pool (on disk).
2. Allow or force the files to migrate to the target primary storage pool.
For the random access pool, set the target storage pool as the next storage pool.
Adjust the migration threshold to control when migration occurs to the target
storage pool.
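For example, assuming a temporary random-access pool named DISKRESTORE
and a collocated target pool named TAPEPOOL, a sketch of this setup might look
like the following:
update stgpool diskrestore nextstgpool=tapepool highmig=0 lowmig=0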
Related tasks:
Keeping client files together using collocation on page 363
Fixing an incomplete storage pool restoration
If the restoration of storage pool volumes is incomplete, you can get more
information about the remaining files on those volumes.
The restoration might be incomplete for one or more of the following reasons:
v Either files were never backed up, or the backup copies were marked as
damaged.
v A copy storage pool or active-data pool was specified on the RESTORE STGPOOL
command, but files were backed up to a different copy storage pool or
active-data pool. If you suspect this problem, use the RESTORE STGPOOL command
again without specifying a copy storage pool or active-data pool from which to
restore files. You can specify the PREVIEW parameter on the second RESTORE
STGPOOL command, if you do not actually want to restore files.
v Volumes in the copy storage pool or active-data pool needed to perform the
restore operation are offsite or unavailable. Check the activity log for messages
that occurred during restore processing.
v Backup file copies in copy storage pools or active-data pools were moved or
deleted by other processes during restore processing. To prevent this problem,
do not issue the following commands for copy storage pool volumes or
active-data pool volumes while restore processing is in progress:
MOVE DATA
DELETE VOLUME with the DISCARDDATA parameter set to YES
AUDIT VOLUME with the FIX parameter set to YES
MIGRATE STGPOOL
RECLAIM STGPOOL
You can prevent reclamation processing for your copy storage pools and
active-data pools by setting the RECLAIM parameter to 100 with the UPDATE
STGPOOL command.
After files are restored, the server deletes database references to files on the
original primary storage pool volumes. Tivoli Storage Manager now locates these
files on the volumes to which they were restored, rather than on the volume on
which they were previously stored. A primary storage pool volume becomes empty
if all files that were stored on that volume are restored to other volumes. In this
case, the server automatically deletes the empty volume from the database.
To recreate files for one or more volumes that were lost or damaged, use the
RESTORE VOLUME command. The RESTORE VOLUME command changes the access mode
of the volumes being restored to destroyed. When the restoration is complete (when
all files are restored to other locations), the destroyed volumes become empty and
are automatically deleted from the database.
Attention:
v Cached copies of files in a disk storage pool are never restored. References to
any cached files that are on a volume that is being restored are removed from
the database during restore processing.
v You can also recreate active versions of client backup files in storage pool
volumes by using duplicate copies in active-data pools. However, do not
consider active-data pools for recovery of a volume unless the loss of inactive
data is acceptable. If the server determines that an inactive file must be replaced
but cannot find it in the active-data pool, restoring from an active-data pool
might cause some or all inactive files to be deleted from the database.
v You cannot restore volumes in a storage pool defined with a CENTERA device
class.
Note: This precaution prevents the movement of files stored on these volumes
until volume DSM087 is restored.
3. Bring the identified volumes to the on-site location and set their access mode to
READONLY to prevent accidental writes. If these offsite volumes are being
used in an automated library, the volumes must be checked into the library
when they are brought back on-site.
4. Restore the destroyed files. Issue this command:
restore volume dsm087
This command sets the access mode of DSM087 to DESTROYED and attempts
to restore all the files that were stored on volume DSM087. The files are not
restored to volume DSM087, but to another volume in the TAPEPOOL storage
pool. All references to the files on DSM087 are deleted from the database and
the volume itself is deleted from the database.
5. Set the access mode of the volumes used to restore DSM087 to OFFSITE using
the UPDATE VOLUME command.
6. Set the access mode of the restored volumes that are now on-site, to
READWRITE.
7. Return the volumes to the offsite location. If the offsite volumes used for the
restoration were checked into an automated library, these volumes must be
checked out of the automated library when the restoration process is complete.
Fixing an incomplete volume restoration:
If the restoration of a volume is incomplete, you can get more information about
the remaining files on the volumes for which restoration was incomplete.
The restoration might be incomplete for one or more of the following reasons:
v Files were either never backed up or the backup copies are marked as damaged.
v A copy storage pool or active-data pool was specified on the RESTORE VOLUME
command, but files were backed up to a different copy storage pool or a
different active-data pool. If you suspect this problem, use the RESTORE VOLUME
command again without specifying a copy storage pool or active-data pool from
which to restore files. You can specify the PREVIEW parameter on the second
RESTORE VOLUME command, if you do not actually want to restore files.
v Volumes in the copy storage pool or active-data pool needed to perform the
restore operation are offsite or unavailable. Check the activity log for messages
that occurred during restore processing.
v Backup file copies in copy storage pools or active-data pools were moved or
deleted by other processes during restore processing. To prevent this problem,
do not issue the following commands for copy storage pool volumes or
active-data pool volumes while restore processing is in progress:
MOVE DATA
DELETE VOLUME with the DISCARDDATA parameter set to YES
AUDIT VOLUME with the FIX parameter set to YES
MIGRATE STGPOOL
RECLAIM STGPOOL
You can prevent reclamation processing for your copy storage pools and
active-data pools by setting the RECLAIM parameter to 100 with the UPDATE
STGPOOL command.
The destroyed volume access mode designates primary volumes for which files are
to be restored.
If copies of the files exist in more than one copy storage pool or active-data pool,
Tivoli Storage Manager might use volumes from multiple pools to restore the data.
This process can result in
duplicate data being restored. To prevent this duplication, keep one complete set of
copy storage pools and one complete set of active-data pools available to the
server. Alternatively, ensure that only one copy storage pool or one active-data
pool has an access of read/write during the restore operation.
The primary storage pool Main contains volumes Main1, Main2, and Main3.
v Main1 contains files File11, File12, File13
v Main2 contains files File14, File15, File16
v Main3 contains files File17, File18, File19
The copy storage pool DuplicateA contains volumes DupA1, DupA2, and DupA3.
v DupA1 contains copies of File11, File12
v DupA2 contains copies of File13, File14
v DupA3 contains copies of File15, File16, File17, File18 (File19 is missing because
BACKUP STGPOOL was run on the primary pool before the primary pool
contained File 19.)
The copy storage pool DuplicateB contains volumes DupB1 and DupB2.
v DupB1 contains copies of File11, File12
v DupB2 contains copies of File13, File14, File15, File16, File17, File18, File19
If you do not designate copy storage pool DuplicateB as the only copy storage
pool to have read/write access for the restore operation, then Tivoli Storage
Manager can choose the copy storage pool DuplicateA, and use volumes DupA1,
DupA2, and DupA3. Because copy storage pool DuplicateA does not include file
File19, Tivoli Storage Manager would then use volume DupB2 from the copy
storage pool DuplicateB. The program does not track the restoration of individual
files, so File15, File16, File17, and File18 are restored a second time, and duplicate
copies are generated when volume DupB2 is processed.
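For example, to ensure that only copy storage pool DuplicateB is used during the
restore operation in this scenario, you might make DuplicateA unavailable until
the restore completes (a sketch, using the pool names from the example):
update stgpool duplicatea access=unavailable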
Restoring and recovering an LDAP server
If you use an LDAP directory server to authenticate passwords, you might need to
restore its contents at some time.
There are ways to avoid locking your ID, being unable to log on to the server, or
rendering data unavailable.
v Give system privilege class to the console administrator ID.
v Make sure that at least one administrator with system privilege class can access
the server with LOCAL authentication.
v Do not back up the LDAP directory server to the IBM Tivoli Storage Manager
server. An administrator who backs up the Windows Active Directory or the
IBM Tivoli Directory Server to the Tivoli Storage Manager server might render
them unusable. The Tivoli Storage Manager server requires an external directory
for the initial administrator authentication. Backing up the directory server to
the Tivoli Storage Manager server locks the administrator ID and renders the
administrator unable to log on to the LDAP directory server.
You must configure the LDAP settings on a target server before replicating,
exporting, or importing nodes and administrators onto it.
You must run the SET LDAPUSER and SET LDAPPASSWORD commands, and define the
LDAPURL option on the target server. If it is not configured properly, you can
replicate, export, import, or use Enterprise Configuration on the target server. But
all nodes and administrators that are transferred from the source to the target with
the LDAP server are then changed to use LDAP authentication. Nodes and
administrators that changed to LDAP authentication on the target server become
inaccessible.
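For example, a sketch of the target-server LDAP configuration, assuming an
illustrative directory server and user distinguished name, might consist of an
LDAPURL entry in the server options file plus the SET commands:
ldapurl ldap://directory.example.com:389/ou=tsm,dc=example,dc=com
set ldapuser cn=tsmldapadmin,ou=tsm,dc=example,dc=com
set ldappassword examplepassword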
You can configure the target server for LDAP authentication after replicating or
exporting to it, but the data is unavailable until that occurs. After configuring the
LDAP settings at the target server level, the node or administrator entries must be
set up on the LDAP server. Either share the LDAP server between the source and
the target server, or replicate the source LDAP server to the target server. All
applicable nodes and administrators are transferred to the target.
If the transfer is unsuccessful, the LDAP administrator must manually add the
node and administrator passwords onto the LDAP server. Or you can issue the
UPDATE NODE or UPDATE ADMIN commands on the IBM Tivoli Storage Manager server.
After you issue the AUDIT LDAPDIRECTORY FIX=YES command, the following events
occur:
v All nodes and administrators that were removed from the LDAP directory
server are listed for you.
v All nodes and administrators that are missing from the LDAP directory server
are listed for you. You can correct these missing entries by issuing the UPDATE
NODE or UPDATE ADMIN command.
If multiple Tivoli Storage Manager servers share an LDAP directory server, avoid
issuing the AUDIT LDAPDIRECTORY FIX=YES command.
The restore operation removes all library client server transactions that occurred
after the point in time from the volume inventory of the library manager server.
However, the volume inventory of the library client server still contains those
transactions. New transactions can then be written to these volumes, resulting in a
loss of client data. Complete the following steps after the restore:
1. Halt further transactions on the library manager server: Disable all schedules,
migration, and reclamations on the library client and library manager servers.
2. Audit all libraries on all library client servers. The audits re-enter those volume
transactions that were removed by the restore on the library manager server.
Audit the library clients from the oldest to the newest servers. Use the volume
history file from the library client and library manager servers to resolve any
conflicts.
3. Delete the volumes from the library clients that do not own the volumes.
4. Resume transactions by enabling all schedules, migration, and reclamations on
the library client and library manager servers.
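For example, to audit a shared library named SHAREDLIB from a library client
server (the library name is illustrative), you might issue:
audit library sharedlib checklabel=barcode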
If a library client server acquired scratch volumes after the point-in-time to which
the server is restored, these volumes would be set to private in the volume
inventories of the library client and library manager servers. After the restore, the
volume inventory of the library client server can be regressed to a point-in-time
before the volumes were acquired, thus removing them from the inventory. These
volumes would still exist in the volume inventory of the library manager server as
private volumes owned by the client.
The restored volume inventory of the library client server and the volume
inventory of the library manager server would be inconsistent. The volume
inventory of the library client server must be synchronized with the volume
inventory of the library manager server in order to return those volumes to scratch
and enable them to be overwritten. To synchronize the inventories, complete the
following steps:
1. Audit the library on the library client server to synchronize the volume
inventories of the library client and library manager servers.
2. To resolve any remaining volume ownership concerns, review the volume
history and issue the UPDATE VOLUME command as needed.
In this scenario, the processor on which Tivoli Storage Manager is located, the database, and all
on-site storage pool volumes are destroyed by fire. You can use either full and
incremental backups or snapshot database backups to restore a database to a
point-in-time.
Note: Do not change the access mode of these volumes until after you
complete step 7.
3. If a current, undamaged volume history file exists, save it.
4. Restore the volume history and device configuration files, the server options,
and the database and recovery log setup. For example, the recovery site might
require different device class, library, and drive definitions.
5. Restore the database from the latest backup level by issuing the DSMSERV
RESTORE DB utility.
6. Change the access mode of all the existing primary storage pool volumes in
the damaged storage pools to DESTROYED. For example, issue the following
commands:
update volume * access=destroyed wherestgpool=backuppool
update volume * access=destroyed wherestgpool=archivepool
update volume * access=destroyed wherestgpool=spacemgpool
update volume * access=destroyed wherestgpool=tapepool
7. Issue the QUERY VOLUME command to identify any volumes in the
DISASTER-RECOVERY storage pool that were on-site at the time of the
disaster. Any volumes that were on-site were destroyed in the disaster
and cannot be used for restore processing. Delete each of these volumes
from the database by using the DELETE VOLUME command with the
DISCARDDATA option. Any files backed up to these volumes cannot be
restored.
8. Change the access mode of the remaining volumes in the
DISASTER-RECOVERY pool to READWRITE. Issue the following command:
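update volume * access=readwrite wherestgpool=disaster-recovery
This command follows the pattern of the commands in step 6; substitute the
actual name of your copy storage pool if it differs from DISASTER-RECOVERY.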
Clients can now access files. If a client tries to access a file that was stored on
a destroyed volume, the retrieval request goes to the copy storage pool. In this
way, clients can restore their files without waiting for the primary storage pool
to be restored. When you update volumes brought from offsite to change their
access, you greatly speed recovery time.
9. Define new volumes in the primary storage pool so the files on the damaged
volumes can be restored to the new volumes. With the new volumes, clients
can also back up, archive, or migrate files to the server. If you use only scratch
volumes in the storage pool, you are not required to complete this step.
10. Restore files in the primary storage pool from the copies in the
DISASTER-RECOVERY pool. To restore files from DISASTER-RECOVERY
pool, issue the following commands:
restore stgpool backuppool maxprocess=2
restore stgpool tapepool maxprocess=2
restore stgpool archivepool maxprocess=2
restore stgpool spacemgpool maxprocess=2
Chapter 34. Replicating client node data
Node replication is the process of incrementally copying, or replicating, data that
belongs to backup-archive client nodes. Data is replicated from one IBM Tivoli
Storage Manager server to another Tivoli Storage Manager server.
The server from which client node data is replicated is called a source replication
server. The server to which client node data is replicated is called a target replication
server. A server can function as the source of replicated data for some client nodes
and as the target of replicated data for other client nodes.
The purpose of replication is to maintain the same level of files on the source and
the target replication servers. As part of replication processing, client node data
that was deleted from the source replication server is also deleted from the target
replication server. When client node data is replicated, only the data that is not on
the target replication server is copied.
| You can use only Tivoli Storage Manager V6.3 servers for node replication.
| However, you can replicate data for client nodes that are V6.3 or earlier. You can
| also replicate data that was stored on a Tivoli Storage Manager V6.2 or earlier
| server before you upgraded it to V6.3. You cannot replicate nodes from a Tivoli
| Storage Manager V6.3.3 server to a server that is running on an earlier version of
| Tivoli Storage Manager.
Before you configure your system, however, be sure to read about basic replication
concepts in the overview topic. When you are ready to begin implementation, read
Setting up the default replication configuration on page 988.
Related concepts:
Managing passwords and logon procedures
[Figure: Example node replication configuration showing client nodes NODE1
through NODE5 and the servers ATLANTA_SRV, CHICAGO_SRV, PHOENIX_SRV, and
DALLAS_SRV, with node data (NODE1 and NODE2 data, NODE3 data, NODE4 data,
and NODE5 data) replicated between the servers.]
When a client node is registered on a target replication server, the domain for the
node is sent to the target server. If the target server does not have a domain with
the same name, the node on the target server is placed in the standard domain on
the target server and bound to the default management class.
| To maintain the same number of file versions on the source and the target
| replication servers, the source replication server manages file expiration and
| deletion. If a file on the source replication server is marked for deletion, but not
| yet deleted by the expiration processing, the target replication server deletes the
| file during the next replication process. Expiration processing on the target
| replication server is disabled for replicated data. The file on the target replication
| server is deleted by the source replication server after the file is expired and
| deleted on the source.
If a client node is removed from replication on the target replication server, the
policies on the target replication server are enabled. Data on the target replication
server is then managed by the policies that are on the target replication server, and
expiration processing can delete expired files.
Important: Dissimilar policies that are defined on the source and the target
replication servers can cause undesirable side effects. As newer versions of backup files are replicated,
versions that exceed the value of the VEREXISTS parameter for the copy group are
marked for immediate deletion. If the node that owns the files is configured for
replication, expiration does not delete the files. However, because these files are
marked for immediate deletion, they are not available for the client to restore. The
files remain in the storage pool until replication deletes them based on the policy
on the source server.
Tips:
v Policies and storage pool hierarchies on the source and target replication servers
can be different. You can use deduplicated storage pools on the source server, on
the target server, or both. However, to keep the data on source and target
replication servers synchronized, configure the management classes on the
source and target servers to manage data similarly. To coordinate policies,
consider using Tivoli Storage Manager enterprise configuration.
v Ensure that sufficient space is available in the storage pool on the target
replication server.
Replication rules
Replication rules control what data is replicated and the order in which it is
replicated.
The Tivoli Storage Manager server has the following predefined set of replication
rules. You cannot create replication rules.
ALL_DATA
Replicates backup, archive, or space-managed data. The data is replicated
with a normal priority. For example, you can assign the ALL_DATA rule to
backup data and archive data, and assign a different rule to
space-managed data.
ACTIVE_DATA
Replicates only active backup data. The data is replicated with a normal
priority. You can assign this rule only to the backup data type.
ALL_DATA_HIGH_PRIORITY
Replicates backup, archive, or space-managed data. The data is replicated
with a high priority. In a replication process that includes both
normal-priority and high-priority data, high-priority data is replicated first.
ACTIVE_DATA_HIGH_PRIORITY
Replicates active backup data. The data is replicated with a high priority.
You can assign this rule only to the backup data type.
DEFAULT
Replicates data according to the rule that is assigned to the data type at the
next higher level in the replication-rule hierarchy. The replication-rule
hierarchy comprises file space rules, individual client-node rules, and
server rules. Server rules apply collectively to all nodes that are defined to
a source replication server and that are configured for replication.
Rules that are assigned to data types in file spaces take precedence over
rules that are assigned to data types for individual nodes. Rules that are
assigned to data types for individual nodes take precedence over server
rules. For example, if the DEFAULT replication rule is assigned to backup
data in a file space, the server checks the replication rule for backup data
that is assigned to the client node. If the client node rule for backup data is
DEFAULT, the server checks the server rule for backup data.
The DEFAULT rule is valid only for data types at the file space level and
the client node level. It is not valid for data types at the server level.
Tip: When you set up the default replication configuration, you do not have to
assign or change replication rules. Tivoli Storage Manager automatically assigns
the DEFAULT replication rule to all data types in the file spaces and in the client
nodes that you configured. The system-wide replication rules are automatically set
to ALL_DATA. You can change file-space, client-node, and system-wide rules after
you set up the default configuration.
If a file space is added to a client node that is configured for replication, the file
space rules for data types are initially set to DEFAULT. If you do not change the
file space rules, the client node and server rules determine whether data in the file
space is replicated.
To display the attributes of replication rules, issue the QUERY REPLRULE command.
In a client node that is configured for replication, each file space has three
replication rules. One rule applies to backup data in the file space. The other rules
apply to archive data and to space-managed data. The rules for the file space exist
regardless of whether the file space has backup, archive, or space-managed data.
Similarly, each client node that is configured for replication has replication rules for
backup data, archive data, and space-managed data. Client node rules apply to all
the file spaces that belong to a node. Replication rules also exist at the server level
that apply collectively to every client node that is configured for replication on a
source replication server.
During replication processing, file space rules take precedence over rules for
individual nodes. Rules for individual nodes take precedence over server rules.
The replication rule that has precedence is called the controlling replication rule.
[Figure 111: Replication process example. The panels show file spaces /a and /b of
NODE1 and file space /a of NODE2, with all backup data and archive data being
replicated to the target replication server.]
When the REPLICATE NODE command is issued, a single replication process begins.
The source replication server identifies client nodes that are configured for
replication and the rules that apply to the file spaces in nodes that are enabled.
The backup data in file space /a that belongs to NODE2 is also high priority. The
file space rule for backup data, which is ALL_DATA_HIGH_PRIORITY, takes
precedence over the client node rule of DEFAULT and the server rule of
ALL_DATA.
Tips:
v Figure 111 on page 969 shows one possible configuration to achieve the specified
results. In general, multiple configurations can exist that accomplish the same
purpose.
For example, to replicate archive data first, you can assign the
ALL_DATA_HIGH_PRIORITY replication rule to the archive data type in each
file space that belongs to NODE1 and NODE2.
v Figure 111 on page 969 shows one replication process. To replicate certain client
nodes ahead of other client nodes, you can issue multiple REPLICATE NODE
commands in sequence, either manually or in a maintenance script. Each
command can specify a different client node or different file spaces in an
individual client node. For example, suppose NODE1 contains a large amount of
data and you want to conserve bandwidth. To replicate client node data
sequentially, you can specify NODE1 in a single REPLICATE NODE command and
NODE2 in another REPLICATE NODE command.
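For example, the sequential approach that is described in the preceding tip could
be implemented with two commands that are issued one after the other, manually
or in a maintenance script (the node names are from the example):
replicate node node1
replicate node node2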
Related concepts:
Replication rule hierarchy on page 967
Replication rule definitions on page 966
Replication state
Replication state indicates whether replication is enabled or disabled. When you
disable replication, replication does not occur until you enable it.
Figure 112 on page 972 shows the interaction of replication states and replication
rules. In the example, NODE1 has a single file space /a that contains archive data.
Assume that the replication state of NODE1 on the target replication server is
ENABLED and that replication processing for all nodes is enabled.
[Figure 112: Decision flow for replicating the archive data in file space /a. The
replication rule that applies to archive data (ALL_DATA, ALL_DATA_HIGH_PRIORITY,
DEFAULT, or NONE) and the replication states of the file space and the node
(ENABLED or DISABLED) determine whether the archive data in /a is replicated,
is not replicated, or replication processing ends.]
To determine the replication state of a file space, issue the QUERY FILESPACE
command. To determine the replication state of a client node, issue the QUERY NODE
command, and to determine the replication state of a rule, issue the QUERY
REPLRULE command.
Replication mode
Replication mode is part of a client node definition and indicates whether a client
node is set up to send or receive replicated data. The replication mode can also
indicate whether the data that belongs to a client node is to be synchronized the
first time that replication occurs. Data synchronization applies only to client nodes
whose data was exported from the source replication server and imported on the
target replication server.
The following modes are possible for a client node whose data is not being
synchronized:
SEND Indicates that the client node is set up to send data to the target replication
server. The SEND replication mode applies only to the client node
definition on a source replication server.
RECEIVE
Indicates that the client node is set up to receive replicated data from the
source replication server. The RECEIVE replication mode applies only to
the client node definition on a target replication server.
NONE
The client node is not configured for replication. To be configured for
replication, the client node must be enabled or disabled for replication.
If the data that belongs to a client node was previously exported from a source
replication server and imported on a target replication server, the data must be
synchronized. Synchronization is also required after a database restore to preserve
the client node data that is on the target replication server. When the data that
belongs to a client node is synchronized, entries in the databases of the source and
target replication servers are updated.
The following special settings for replication mode are required to synchronize
data.
Restriction: To synchronize data, the date of the imported data on the target
replication server must be the original creation date.
SYNCSEND
Indicates that data that belongs to the client node on the source replication
server is to be synchronized with the client node data on the target
replication server. The SYNCSEND mode applies only to the client node
definition on a source replication server.
When data synchronization is complete, the replication mode for the node
on the source replication server is set to SEND.
SYNCRECEIVE
Indicates that data that belongs to the client node on the target replication
server is synchronized with the client node data on the source replication
server. This SYNCRECEIVE mode applies only to the client node definition
on the target replication server.
The following table shows the results when storage pools on source and target
replication servers are enabled for data deduplication. The destination storage pool
is specified in the backup or archive copy-group definition of the management
class for each file. If the destination storage pool does not have enough space and
data is migrated to the next storage pool, the entire file is sent, whether or not the
next storage pool is set up for deduplication.
Tip: If you have a primary storage pool that is enabled for deduplication on a
source replication server, you can estimate a size for a new deduplicated storage
pool on the target replication server. Issue the QUERY STGPOOL command for the
primary deduplicated storage pool on the source replication server.
The following client node attributes are updated during node replication:
v Aggregation (on or off)
v Automatic file space rename
v Archive delete authority
v Backup delete authority
v Backup initiation (root user or all users)
v Cipher strength
v Compression option
v Contact
v Data-read path
v Data-write path
v Email address
v High-level address
v Low-level address
v Node lock state
v Option set name
v Password
Attention: A conflict occurs if a node password is authenticated on the source
server by one server and on the target server by a different server. Because
authentication can happen on a Lightweight Directory Access Protocol (LDAP)
directory server or the Tivoli Storage Manager server, data can be lost. In the
case of this kind of dual authentication, the password is not updated during
replication.
v Password expiration period
v Operating system
v Role override (client, server, other, or usereported)
v Session initiation (client or server, or server only)
v Transaction group maximum
v URL
v Validate protocol (no, data only, or all)
The following client node attributes are not updated during node replication:
v Domain name (might not exist on target server)
v File-space access rules that are created with the client SET ACCESS command
v Node conversion type
v Client schedules.
Tip: If you want to convert client nodes for store operations to a target
replication server, you can duplicate client schedules that are on the
source replication server.
v Client option sets.
Tip: If you want client option sets on the target replication server, you
must duplicate them.
v Backup sets.
Tip: You can generate backup sets on the target replication server for a
replicated client node.
v Network-attached storage data in nonnative storage pools.
Retention protection
You cannot configure servers for replication on which archive retention
protection is enabled.
Replication and file groups
When you are replicating files from one server to another, it is possible that
some of the files that are being replicated belong to a group of files that
are managed as a single logical entity. If a replication process ends without
replicating all the files in a group, client nodes will be unable to restore,
retrieve, or recall the file group. When replication runs again, the source
replication server attempts to replicate the missing files.
Renaming a node
If a node is configured for replication, it cannot be renamed.
Backing up a single client node to two source replication servers
If you have been backing up, archiving, or migrating a client node to two
different servers, do not set up replication of the node from both source
replication servers to the same target replication server. Replicating from
two source servers might create different versions of the same file on the
target server and cause unpredictable results when restoring, retrieving, or
recalling the file.
Password propagation to the target replication server
When client node data is replicated for the first time, the source server
sends the node definition, including the password, to the target server.
During subsequent replications, if the node password is updated, the
source server attempts to send the updated password to the target server.
Whether these attempts succeed depends on the node authentication
method and on the combination of methods that are used on the source
and target servers. A conflict occurs if a node password is authenticated on
the source server by one server and on the target server by a different
server. Because authentication can happen on an LDAP (Lightweight
Directory Access Protocol) directory server or the Tivoli Storage Manager server, the password is not updated during replication in this case.
The following list summarizes node replication tasks, the commands to use, and
where to find more information:
v Add client nodes for replication. Commands: REGISTER NODE and UPDATE NODE. For
more information, see Adding client nodes for replication processing on page 1003.
v Set up Secure Sockets Layer (SSL) communications between a source and target
replication server. Commands: DEFINE SERVER and UPDATE SERVER. For more
information, see Configuring a server for SSL communications on page 1007.
v Change a target replication server. Command: SET REPLSERVER. For more
information, see Selecting a new target replication server on page 1006.
v Remove a target replication server. Command: SET REPLSERVER. For more
information, see Removing a target replication server on page 1007.
v Control the number of node replication sessions. Command: REPLICATE NODE. For
more information, see Controlling throughput for node replication on page 1014.
v Disable or enable inbound or outbound sessions from a source or target
replication server. Commands: DISABLE SESSIONS and ENABLE SESSIONS. For more
information, see Disabling and enabling outbound or inbound sessions on page 1018.
v Disable or enable outbound replication processing from a source replication
server. Commands: DISABLE REPLICATION and ENABLE REPLICATION. For more
information, see Disabling and enabling outbound node replication processing on
page 1019.
v Remove a replication configuration. Commands: REMOVE REPLNODE and SET
REPLSERVER. For more information, see Removing a node replication configuration
on page 1027.
v Replicate data by individual file space, by priority, and by data type. Commands:
REPLICATE NODE and DEFINE SCHEDULE. For more information, see Replicating data
by command on page 1010.
v Temporarily disable replication for a data type in a file space. Command: UPDATE
FILESPACE. For more information, see Disabling and enabling replication of data
types in a file space on page 1016.
v Temporarily disable replication for an individual client node. Command: UPDATE
NODE. For more information, see Disabling and enabling replication for individual
client nodes on page 1017.
v Temporarily disable replication of data that is assigned a particular replication
rule. Command: UPDATE REPLRULE. For more information, see Disabling and
enabling replication rules on page 1019.
v Temporarily disable inbound and outbound server sessions, including replication
sessions for all client nodes. Commands: DISABLE SESSIONS and ENABLE SESSIONS.
For more information, see Disabling and enabling outbound or inbound sessions on
page 1018.
v Temporarily disable outbound replication processing from a source replication
server. Commands: DISABLE REPLICATION and ENABLE REPLICATION. For more
information, see Disabling and enabling outbound node replication processing on
page 1019.
v Prevent replication of backup, archive, or space-managed data in a file space on a
source replication server, and delete the data from the target replication server.
Command: UPDATE FILESPACE. For more information, see Purging replicated data
in a file space on page 1020.
v Cancel all replication processes. Command: CANCEL REPLICATION. For more
information, see Canceling replication processes on page 1022.
v Specify the number of days to retain replication records in the Tivoli Storage
Manager database. Command: SET REPLRETENTION. For more information, see
Retaining replication records on page 1025.
v Display information about the replication settings for a file space. Command:
QUERY FILESPACE. For more information, see Displaying information about node
replication settings for file spaces on page 1022.
v Display information about the replication settings for a client node. Command:
QUERY NODE. For more information, see Displaying information about node
replication settings for client nodes on page 1023.
v Display information about replication rules. Command: QUERY REPLRULE. For
more information, see Displaying information about node replication rules on page
1023.
v Display records of running and ended replication processes. Command: QUERY
REPLICATION. For more information, see Displaying information about node
replication processes on page 1023.
v Determine whether replication to the target replication server is keeping pace
with the number of files that are eligible for replication on the source replication
server. Command: QUERY REPLNODE. For more information, see Measuring the
effectiveness of a replication configuration on page 1024.
v Measure the effects of data deduplication. Command: QUERY REPLICATION. For
more information, see Measuring the effects of data deduplication on node
replication processing on page 1025.
As you plan, remember that a target replication server must be accessible from a
source replication server by using an IP connection. The connection must provide
sufficient bandwidth to accommodate the volume of data to be replicated. If the
connection is insufficient and becomes a bottleneck for replication, keeping the
data on the two servers synchronized can be a problem. Keep in mind that you
can use client-side data deduplication with node replication to reduce network
bandwidth requirements and storage requirements.
The destination storage pool on a target replication server must have sufficient
space to store replicated data.
To determine whether the database can manage more space requirements, you
must estimate how much more database space that node replication will use.
Requirement: Place the database and the database logs on separate disks that have
high-performance capability. Use disks or mount points that are not shared with
the following:
v Other applications
v System tasks, such as system paging
1. Determine number of files for each node and data type that is in use. Issue the
QUERY OCCUPANCY command for each node and data type that you plan to
replicate. For example, you can display information about the file spaces that
are assigned to the node named PAYROLL by issuing the following command:
query occupancy payroll
2. Determine how much more database space is required by using the value for
the total number of files that are used by all nodes and data types. Use the
following formula to calculate the amount of database space that is required:
Total_number_of_files_from_all_nodes_and_data_types * 300 (the number of
additional bytes needed for each replicated file)
Important: You must increase the available database space when the additional
required space approaches or exceeds the size of your database. Ensure that
you examine both replication servers and their databases and increase the
database size if necessary.
3. Increase the size of the database by the additional database space required and
include an additional 10% of the database size.
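As an illustration of the formula, suppose that the database is currently 50 GB and
that the nodes and data types that you plan to replicate contain a combined total of
10 million files. The additional database space is approximately 10,000,000 x 300
bytes, or about 3 GB. Adding 10% of the database size (5 GB) gives roughly 8 GB of
additional space to plan for on each replication server. (The database size and file
count in this example are illustrative.)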
Tip: Tune the performance of replication to the data type. For example, if you
do not plan to replicate a data type in a file space, exclude the number of files
for that data type.
2. Determine the amount of data that is backed up daily by the client nodes.
Complete the following steps to estimate the amount of data that is replicated
incrementally daily:
a. When client nodes complete a store operation, the client logs completion
messages with the server. The completion messages report statistics, including the amount of data that was transferred.
If the value for required network bandwidth exceeds the capabilities of your
network, you must adjust the values in the formula. Reduce the TD value or
increase the replication time, to reduce the value for Required_Network_Bandwidth. If
you cannot adjust the TD or the RWT time values, adjust or replace your existing
network to reduce the additional workload.
When you determine how long it takes for replication to finish, you can decide
which method you use to complete the initial replication. The method that you use
for the initial replication is based on the data, time, and bandwidth values that you
calculated.
Related tasks:
Selecting a method for the initial replication
| Tip: You can also export client data directly to another server so that it can be
| immediately imported. For example, to export client node information and all
| client files for NODE1 directly to SERVERB, issue the following command:
| export node node1 filedata=all toserver=serverb
When you decide how many nodes to add to a group, consider the amount of data
that is replicated daily by the nodes.
1. Prioritize the subset of nodes that have critical data. Replicate critical data first,
by issuing the REPLICATE NODE command.
2. Continue to replicate the high-priority nodes daily while incrementally adding
the replication of other subsets of nodes that contain important, but not critical,
data.
3. Repeat this process until all subsets of all nodes that must be replicated
complete their initial replication.
Related concepts:
Node replication processing on page 966
Replication rules on page 966
During the next scheduled replication, any new active versions, including all
inactive versions, are replicated. The files that were active but are now inactive are
not replicated again.
Remember: If you do not have time to complete replication, you can cancel it
after it starts by issuing the CANCEL REPLICATION command.
4. Use the summary information to determine whether the values of the
controlled test match the actual replication values. You calculate the values of
the controlled test in Tuning replication processing on page 1015. For
example, to display information about replication process 23, issue the
following command:
query process 23
| If you are unable to complete the replication process in the amount of time that
| you scheduled, increase the number of data sessions that transfer data to the target
| server. Replication performance improves when more deduplicated data is stored
| on the target server. When more extents are stored on the target server, more
| duplicates are found for an extent.
| If you are replicating data from storage pools that are enabled for data
| deduplication, run processes in the following order:
1. To identify duplicates, issue the IDENTIFY DUPLICATES command. Identifying
duplicates breaks files into extents, which reduces the amount of data that is
sent to the target server when replication occurs.
2. To replicate the data, issue the REPLICATE NODE command.
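For example, assuming a deduplicated primary storage pool named DEDUPPOOL
and a client node named NODE1 (both names are illustrative):
identify duplicates deduppool
replicate node node1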
The following figure shows the replication rules that are created in the default
configuration. Backup data includes both active and inactive backup data.
[Figure: Default configuration. A single replication process replicates the data as
normal-priority data to the target replication server.]
After you complete the default configuration, you can change the replication rules
to meet your specific replication requirements.
Server definitions are required for the source replication server to communicate
with the target replication server and for the target replication server to report
status to the source replication server.
Important: You can specify only one target replication server for a source
replication server. However, you can specify one or more source replication servers
for a single target replication server. Source and target replication servers must be
V6.3.
The method that you use to set up servers depends on whether the server
definitions exist and on whether you are using the cross-define function to
automatically define one server to another.
Remember: If you want an SSL connection, the value for the SET
SERVERLLADDRESS command on the target replication server must be an SSL
port. The value of the SET SERVERNAME command must match the server
name in the server definition.
2. On the source replication server, issue the following commands:
| set servername source_server_name
| set serverpassword source_server_password
| set serverhladdress source_server_ip_address
| set serverlladdress source_server_tcp_port
Remember: If you want an SSL connection, the value for the SET
SERVERLLADDRESS command on the source replication server must be an SSL
port. The value of the SET SERVERNAME command must match the server
name in the server definition.
| 3. On the source replication server, connect to the target replication server by
| using the DEFINE SERVER command. If you want an SSL connection, specify
| SSL=YES, for example:
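A command of the following form might be used; it follows the same pattern as
the DEFINE SERVER examples later in this section, with the addition of the
CROSSDEFINE=YES parameter so that the target server automatically defines the
source server:
define server target_server_name hladdress=target_server_ip_address
lladdress=target_server_tcp_port serverpassword=target_server_password
crossdefine=yes ssl=yes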
A server definition is created on the source replication server, and the source
replication server is connected to the target replication server. A definition for
the target replication server is created that points to the source replication
server.
v If server definitions do not exist and you are not using the cross-define function,
complete the following steps:
1. Issue the following commands on both the source and target replication
servers:
set servername server_name
set serverpassword server_password
set serverhladdress ip_address
set serverlladdress tcp_port
Remember: If you want an SSL connection, the value for the SET
SERVERLLADDRESS command on the replication servers must be an SSL port.
The value of the SET SERVERNAME command must match the server name in
the server definition.
2. Issue the DEFINE SERVER command on each server. Do not specify the
CROSSDEFINE parameter. If you want an SSL connection, specify SSL=YES, for
example:
On the source server:
define server target_server_name hladdress=target_server_ip_address
lladdress=target_server_tcp_port serverpassword=target_server_password
ssl=yes
On the target server:
define server source_server_name hladdress=source_server_ip_address
lladdress=source_server_tcp_port serverpassword=source_server_password
ssl=yes
v If definitions exist for both the source and target replication servers, issue the
UPDATE SERVER command on each server. Do not specify the CROSSDEFINE
parameter. You can use the QUERY STATUS command to determine the server
names. If you want an SSL connection, specify SSL=YES, for example:
On the source server:
update server target_server_name hladdress=target_server_ip_address
lladdress=target_server_tcp_port serverpassword=target_server_password
ssl=yes
On the target server:
update server source_server_name hladdress=source_server_ip_address
lladdress=source_server_tcp_port serverpassword=source_server_password
ssl=yes
Before beginning this procedure, issue the PING SERVER command to verify that the
definitions for the source and target replication servers are valid and that the
servers are connected.
To specify a target replication server, issue the SET REPLSERVER command on the
source replication server. For example, to specify a server named PHOENIX_SRV
as the target replication server, issue the following command:
set replserver phoenix_srv
Issuing the SET REPLSERVER command also sets replication rules to ALL_DATA. To
display replication rules, you can issue the QUERY STATUS command.
Related concepts:
Replication server configurations on page 964
Restrictions:
v If a client node definition does not exist on the target replication server, do not
create it. The definition for the client node on the target server is created
automatically when the node's data is replicated the first time.
v If a client node definition exists on both the source and target replication servers,
but the data that belongs to the client node was not exported and imported, you
must rename or remove the client node on the target replication server before
data can be replicated.
v If you previously removed a client node from replication on the source
replication server, but not on the target replication server, you do not have to
rename or remove the node on the target replication server.
To configure a client node for replication, take one of the following actions,
depending on whether the node's data was exported from the source server and
imported on the target server:
v If the node's data was not exported from the source server and imported on the
target server, complete one of the following steps:
If the client node is not already registered on a source replication server, issue
the REGISTER NODE command on the source replication server. Specify
REPLSTATE=ENABLED or REPLSTATE=DISABLED.
For example, to enable a new client node, NODE1, for replication, issue the
following command:
register node node1 password replstate=enabled
If the client node is already registered on a source replication server, issue the
UPDATE NODE command on the source replication server. Specify
REPLSTATE=ENABLED or REPLSTATE=DISABLED.
For example, to enable an existing client node, NODE1, for replication, issue
the following command:
update node node1 replstate=enabled
v If the node's data was exported from the source replication server and imported
to the target replication server, complete the following steps:
1. On the source replication server, issue the UPDATE NODE command:
a. Specify REPLSTATE=ENABLED or REPLSTATE=DISABLED.
b. Specify REPLMODE=SYNCSEND.
2. On the target replication server, issue the UPDATE NODE command and specify
REPLMODE=SYNCRECEIVE.
Data is synchronized during replication. After replication is complete, the
REPLMODE parameter in the client node definition on the source replication server
is set to SEND. The REPLMODE parameter in the client node definition on the
target replication server is set to RECEIVE, and the REPLSTATE parameter is set to
ENABLED.
If you set the replication state of the client node to DISABLED, the replication
mode is set to SEND, but replication does not occur. If you set the replication state
of the client node to ENABLED, the client node definition is created on the target
replication server when replication occurs for the first time. In addition, the
replication mode of the client node on the target replication server is set to
RECEIVE, and the replication state is set to ENABLED.
If you add a file space to a client node that is configured for replication, the
file-space replication rules for data types are automatically set to DEFAULT. To
change file-space replication rules, issue the UPDATE FILESPACE command.
To determine the replication mode and the replication state that a client node is in,
issue the QUERY NODE command.
The default configuration is complete after client nodes are configured for
replication. You are now ready to replicate. If you do not change the default
replication rules, all backup, archive, and space-managed data in all
replication-enabled client nodes is replicated.
Related concepts:
Replication mode on page 973
Replication state on page 970
Rules for file spaces are either normal priority or high priority. In a replication
process that includes both normal-priority and high-priority data, high-priority
data is replicated first. If you issue the REPLICATE NODE command for two or more
clients, all high priority data for all file spaces in the specified nodes is processed
before normal priority data.
Before you select a rule, consider the order in which you want the data to be
replicated. For example, suppose that a file space contains active backup data and
archive data. Replication of the active backup data is a higher priority than the
archive data. To prioritize the active backup data, specify DATATYPE=BACKUP
REPLRULE=ACTIVE_DATA_HIGH_PRIORITY. To prioritize the archive data, issue the
UPDATE FILESPACE command again, and specify DATATYPE=ARCHIVE
REPLRULE=ALL_DATA.
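For example, for a file space /proj that belongs to NODE1 (the node and file space
names are illustrative), the two commands might be:
update filespace node1 /proj datatype=backup replrule=active_data_high_priority
update filespace node1 /proj datatype=archive replrule=all_data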
Attention:
v If you specify ACTIVE_DATA, inactive backup data in the file space is
not replicated, and inactive backup data in the file space on the target
replication server is deleted.
v If you specify ACTIVE_DATA, you cannot specify ARCHIVE or
SPACEMANAGED as values for the parameter DATATYPE in the same
command instance.
ALL_DATA_HIGH_PRIORITY
Replicates backup, archive, or space-managed data. The data is replicated
with a high priority.
ACTIVE_DATA_HIGH_PRIORITY
Replicates active backup data. The data is replicated with a high priority.
To display the replication rules for a file space, issue the QUERY FILESPACE
command. Specify FORMAT=DETAILED.
In the following example, assume that you have two client nodes, NODE1 and
NODE2. The nodes have the following file spaces:
v NODE1: /a, /b, /c
v NODE2: /a, /b, /c, /d, /e
All the file space rules are set to DEFAULT. The backup, archive, and
space-managed rules for NODE1 and NODE2 are also set to DEFAULT.
The data that belongs to the two nodes is replicated in the following order:
1. High Priority: Data in file space /a that belongs to NODE1 and data in file
space /c in NODE2
2. Normal priority: Data in file spaces /b and /c that belongs to NODE1 and data
in file spaces /a, /b, /d, and /e that belongs to NODE2
Important: Data types in new file spaces that are added to a client node after the
node is configured for replication are automatically assigned the DEFAULT
replication rule.
Related concepts:
Replication rules on page 966
Rules for client nodes are either normal priority or high priority. In a replication
process that includes both normal-priority and high-priority data, high-priority
data is replicated first. If you issue the REPLICATE NODE command for two or more
clients, all high priority data for all file spaces in the specified nodes is processed
before normal priority data.
Before you select a rule, consider the order in which you want the data to be
replicated. For example, suppose that a client node contains active backup data
and archive data. Replication of the active backup data is a higher priority than
replication of the archive data. To prioritize the active backup data, specify the
ACTIVE_DATA_HIGH_PRIORITY replication rule for backup data. Specify the
ALL_DATA rule for archive data.
Attention:
v If you specify ACTIVE_DATA, inactive backup data that belongs to the
client node is not replicated.
v If the replication rule for backup data in any file spaces that belong to
the client node is DEFAULT, inactive backup data in those file spaces on
the target replication server is deleted.
ALL_DATA_HIGH_PRIORITY
Replicates backup, archive, or space-managed data. The data is replicated
with a high priority.
Attention:
v If you specify ACTIVE_DATA_HIGH_PRIORITY, inactive backup data
that belongs to the client node is not replicated.
v If the replication rule for backup data in any file spaces that belong to
the client node is DEFAULT, inactive backup data in those file spaces on
the target replication server is deleted.
DEFAULT
Replicates data according to the server rule for the data type.
For example, suppose that you want to replicate the archive data in all
client nodes that are configured for replication. Replication of the archive
data is a high priority. One method to accomplish this task is to set the
file-space and client-node replication rules for archive data to DEFAULT.
Set the server rule for archive data to ALL_DATA_HIGH_PRIORITY.
NONE
Data is not replicated. For example, if you do not want to replicate the
space-managed data in a client node, specify the NONE replication rule for
space-managed data.
To display the replication rules that apply to all file spaces that belong to a node,
issue the QUERY NODE command and specify FORMAT=DETAILED.
Remember: File spaces are not displayed for client nodes that are registered on the
source replication server but that have not performed store operations. Only after
the client stores data to the source replication server are file spaces created.
Replication rules for data types in file spaces are automatically assigned values of
DEFAULT.
To change replication rules for a node, issue one or more of the following
commands on the source replication server:
v To change a replication rule for backup data, issue the UPDATE NODE command
and specify the BKREPLRULEDEFAULT parameter. For example, to specify the
ACTIVE_DATA rule for backup data in NODE1, issue the following command:
update node node1 bkreplruledefault=active_data
v To change a replication rule for archive data, issue the UPDATE NODE command
and specify the ARREPLRULEDEFAULT parameter. For example, to specify the
ALL_DATA_HIGH_PRIORITY rule for archive data in NODE1, issue the
following command:
update node node1 arreplruledefault=all_data_high_priority
v To change a replication rule for space-managed data, issue the UPDATE NODE
command and specify the SPREPLRULEDEFAULT parameter. For example, to specify
the NONE rule for space-managed data in NODE1, issue the following
command:
update node node1 spreplruledefault=none
Related concepts:
Replication rules on page 966
Server rules are either normal priority or high priority. In a replication process that
includes both normal-priority and high-priority data, high-priority data is
replicated first. If you issue the REPLICATE NODE command for two or more clients,
all high priority data for all file spaces in the specified nodes is processed before
normal priority data.
Before you select a rule, consider the order in which you want the data to be
replicated. For example, suppose that your client nodes contain active backup data
and archive data. Replication of the active backup data is a high priority. To
prioritize the active backup data, specify the ACTIVE_DATA_HIGH_PRIORITY
replication rule. Specify the ALL_DATA rule for archive data.
Attention:
v If you specify ACTIVE_DATA, inactive backup data that belongs to
client nodes is not replicated.
v If the replication rules for backup data in any file spaces and any client
nodes is DEFAULT, inactive backup data in those file spaces on the
target replication server is deleted. For example, suppose the rules for
backup data in file space /a in NODE1 and file space /c in NODE2 are
DEFAULT. The rules for backup data in NODE1 and NODE2 are also
DEFAULT. If you specify ACTIVE_DATA as the server rule, inactive data
in file spaces /a and /c is deleted.
ALL_DATA_HIGH_PRIORITY
Replicates backup, archive, or space-managed data. The data is replicated
with a high priority.
ACTIVE_DATA_HIGH_PRIORITY
Replicates only the active backup data in client nodes. The data is
replicated with a high priority.
To change server replication rules, issue one or more of the following commands
on the source replication server:
v To change the server replication rule that applies to backup data, issue the SET
BKREPLRULEDEFAULT command on the source replication server. For example, to
specify the ACTIVE_DATA rule for backup data, issue the following command:
set bkreplruledefault active_data
v To change the server replication rule that applies to archive data, issue the SET
ARREPLRULEDEFAULT command on the source replication server. For example, to
specify the ALL_DATA_HIGH_PRIORITY rule for archive data, issue the
following command:
set arreplruledefault all_data_high_priority
v To change the server replication rule that applies to space-managed data, issue
the SET SPREPLRULEDEFAULT command on the source replication server. For
example, to specify the NONE rule for space-managed data, issue the following
command:
set spreplruledefault none
Related concepts:
Replication rules on page 966
NODE1 has two file spaces, /a and /b. NODE2 has one file space, /a. File space
and client replication rules for backup, archive, and space-managed data are set to
DEFAULT. Server replication rules are set to ALL_DATA. You have the following
goals:
v Replicate only the active backup data in file space /a that belongs to NODE1.
v Do not replicate any space-managed data in any of the file spaces that belong to
NODE1.
v Replicate the archive data in all file spaces that belong to NODE1 and NODE2.
Make the replication of this data a high priority.
v Replicate the active and inactive backup data in file space /a that belongs to
NODE2. Make replication of this data a high priority.
[Figure 113: Replication process for the example. The panels show backup data and
archive data in file spaces /a and /b of NODE1 and file space /a of NODE2 being
replicated to the target replication server.]
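One possible set of commands that accomplishes these goals is shown here; as the
tips that follow point out, other configurations can achieve the same result:
update filespace node1 /a datatype=backup replrule=active_data
update node node1 spreplruledefault=none
update node node1 arreplruledefault=all_data_high_priority
update node node2 arreplruledefault=all_data_high_priority
update filespace node2 /a datatype=backup replrule=all_data_high_priority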
Tips:
v In Figure 113 on page 1000, all the data in all the file spaces of both client nodes
is replicated in one process. However, if the amount of node data is large and
you do not have enough bandwidth to replicate data in a single process, you can
use one of the following methods:
Schedule or manually issue separate REPLICATE NODE commands at different
times for NODE1 and NODE2.
Replicate high-priority and normal-priority data separately at different times
by specifying the PRIORITY parameter on the REPLICATE NODE command.
Replicate different data types at different times by specifying the DATATYPE
parameter on the REPLICATE NODE command.
Combine replication by priority and by data type by specifying both the
PRIORITY and DATATYPE parameters on the REPLICATE NODE command.
v To verify the replication rules that apply to the file spaces in the client nodes,
issue the VALIDATE REPLICATION command. You can also use this command to
verify that the source replication server can communicate with the target
replication server. To preview results, issue the REPLICATE NODE command and
specify PREVIEW=YES.
Related concepts:
Replication rules on page 966
Client node data that was exported and imported must be synchronized between
the source and target replication servers. You set up client nodes to synchronize
their data as part of the process of configuring nodes for replication. Data is
synchronized the first time that replication occurs. To synchronize data, the data
must have been imported to the target replication server with ABSOLUTE as the
value for the DATES parameter on the IMPORT NODE command.
Important: You cannot display information about running replication processes for
client nodes that are being converted from import and export operations to
replication operations. The conversion process might run for a long time, but it
occurs only once for a client node that is being converted.
After you set up a basic replication configuration, you can change file-space,
client-node, and server replication rules. To replicate data, issue the REPLICATE NODE
command in an administrative schedule or on a command line.
Before adding a client node for replication, ask the following questions:
v Was the data that belongs to the client node previously exported from the server
that is to be the source for replicated data?
v If the data was exported, was it imported on the server that is now the target for
replicated data?
v When you imported the data, did you specify DATES=ABSOLUTE on the IMPORT
NODE command?
If you answered "yes" to all of the preceding questions, you must set up to
synchronize the data on the source and target replication servers. The following
procedure explains how to set up synchronization when adding client nodes for
replication. Synchronization occurs during replication.
Restrictions:
v If a client node definition does not exist on the target replication server, do not
create it. The definition for the client node on the target server is created
automatically when the node's data is replicated the first time.
v If a client node definition exists on both the source and target replication servers,
but the data that belongs to the client node was not exported and imported, you
must rename or remove the client node on the target replication server before
data can be replicated.
v If you previously removed a client node from replication on the source
replication server, but not on the target replication server, you do not have to
rename or remove the node on the target replication server.
If you set the replication state of the client node to DISABLED, the replication
mode is set to SEND, but replication does not occur. If you set the replication state
of the client node to ENABLED, the client node definition is created on the target
replication server when replication occurs for the first time. In addition, the
replication mode of the client node on the target replication server is set to
RECEIVE, and the replication state is set to ENABLED.
If you add a file space to a client node that is configured for replication, the
file-space replication rules for data types are automatically set to DEFAULT.
After you add client nodes for replication, ensure that they are included in any
existing administrative schedules for replication. Alternatively, you can create a
schedule for replication that includes the new client nodes.
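For example, an administrative schedule of the following form could run replication
nightly; the schedule name, node names, and start time are illustrative:
define schedule replicate_nodes type=administrative
cmd="replicate node node1,node2" active=yes starttime=22:00
period=1 perunits=days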
Related concepts:
Replication mode on page 973
Replication state on page 970
Removing a client node from replication deletes only information about replication
from the server database. Removing a node from replication does not delete the
data that belongs to the node that was replicated.
To completely remove a client node from replication, issue the REMOVE REPLNODE
command on the source and target replication servers that have the node
configured for replication. For example, to remove NODE1 and NODE2 from
replication, issue the following command:
remove replnode node1,node2
To verify that the node was removed, issue the QUERY NODE command on the source
and the target replication servers. For example, to verify that NODE1 and NODE2
were removed, issue the following command:
query node node1,node2 format=detailed
If the node was removed, the fields Replication State and Replication Mode are
blank. If you do not want to keep the node data that is stored on the target
replication server, you can delete it.
If you remove a client node from replication, rename the node, or delete the node
data, and then remove the node, you can later add the node for replication. All the
data that belongs to the node will be replicated to the target replication server.
For example, suppose that you updated the definition of a client node whose data
you wanted to replicate. The data that belongs to the node was previously
exported from the source replication server and imported to the target replication
server. You specified ENABLED as the setting of the REPLSTATE parameter.
However, you did not specify SYNCSEND as the replication mode on the source
replication server. As a result, the REPLMODE parameter was automatically set to
SEND, and data that belongs to the node was not synchronized or replicated.
To reconfigure the client node for replication, complete the following steps:
1. Issue the REMOVE REPLNODE command for the client node. For example, to
remove a client node, NODE1, from replication, issue the following command:
remove replnode node1
Issuing the REMOVE REPLNODE command resets the replication state and the
replication mode for the client node to NONE.
2. Issue the UPDATE NODE command with the correct parameters and values.
For example, to enable NODE1 for replication and synchronize the data that
belongs to the node, complete the following steps:
a. On the source replication server, issue the following command:
update node node1 replstate=enabled replmode=syncsend
b. On the target replication server, issue the following command:
update node node1 replstate=enabled replmode=syncreceive
After synchronization and replication are complete, the REPLMODE parameter in the
client node definition on the source replication server is set to SEND. The REPLMODE
parameter in the client node definition on the target replication server is set to
RECEIVE.
Related concepts:
Replication mode on page 973
Replication state on page 970
You can add a source replication server to an existing configuration. For example,
suppose that you have a replication configuration comprising a single
source-replication server and a single target-replication server. You can add another
source server that replicates data to the existing target server.
Related concepts:
Replication server configurations on page 964
To change a target replication server, issue the SET REPLSERVER command on the
source replication server. Specify the name of the new target replication server. For
example, to specify NEW_TGTSRV as the new target replication server, issue the
following command:
set replserver new_tgtsrv
The following example describes what occurs when you change or add target
replication servers. Suppose that TGTSRV is the target replication server for
SRCSRV. SRCSRV has one client, NODE1.
1. Files A, B, and C that belong to NODE1 are replicated to TGTSRV.
2. You change the target replication server to NEW_TGTSRV.
3. NODE1 backs up files D, E, and F to SRCSRV.
4. Replication occurs for NODE1. Files A, B, and C, which were replicated to
TGTSRV, are replicated to NEW_TGTSRV. New files D, E, and F are also
replicated to NEW_TGTSRV.
Before you begin this procedure, delete any administrative schedules on the source
replication server that issue the REPLICATE NODE command.
To remove a target replication server, issue the SET REPLSERVER command. Do not
specify the name of a target replication server. For example, to remove a target
server TGTSRV, issue the following command:
set replserver
Remember: If you do not want to keep replicated node data on the target
replication server, you can delete it.
A server that uses SSL can obtain a unique certificate that is signed by a certificate
authority (CA), or the server can use a self-signed certificate. Before starting the
source and target replication servers, install the certificates and add them to the
key database files. Required SSL certificates must be in the key database file that
belongs to each server. SSL support is active if the server options file contains the
SSLTCPPORT or SSLTCPADMINPORT option or if a server is defined with SSL=YES at
startup.
The server and its database are updated with the new password. After updating
the password, shut down the server, add the certificates, and start the server.
To determine whether a server is using SSL, issue the QUERY SERVER command.
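For example, a command like the following displays the detailed server definition,
which indicates whether SSL is specified (PHOENIX_SRV is reused from the next
example):
query server phoenix_srv format=detailed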
To update a server definition for SSL, issue the UPDATE SERVER command. For
example, to update the server definition for server PHOENIX_SRV, issue the
following command:
update server phoenix_srv ssl=yes
Restriction: For event servers, library servers, and target replication servers, the
name of the source replication server must match the value of the SET SERVERNAME
command on the target. Because the source replication server uses the name of the
If you enable SSL communications and are using the following functions, you must
create separate source and target definitions that use TCP/IP for the corresponding
server-to-server communications:
v Enterprise configuration
v Command routing
v Virtual volumes
v LAN-free
Replication is the only server-to-server function that can use SSL.
If you use SSL with node replication, you must create separate server definitions
for enterprise configuration, command routing, virtual volumes, and LAN-free
communications.
Suppose that you want to use a source replication server to replicate data and to
route commands. In the option file of the target replication server, the value of the
TCPPORT option is 1500. The value of the SSLTCPPORT option is 1542.
You can use the server name SSL for node replication:
define server ssl hladdress=1.2.3.4 lladdress=1542 ssl=yes
serverpassword=xxxxx
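If the same target server is also used for functions that require TCP/IP, such as
command routing, a separate non-SSL definition can be created. The following is a
sketch only; the server name TGTSRV_TCP is hypothetical, and the address and
password are placeholders:
define server tgtsrv_tcp hladdress=1.2.3.4 lladdress=1500
serverpassword=xxxxx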
A controlling rule is the rule that the source replication server uses to replicate data
in a file space. For example, suppose the replication rule for backup data in file
space /a is DEFAULT. If the client-node rule for backup data is ALL_DATA, the
controlling rule for the backup data in file space /a is ALL_DATA.
All file spaces are displayed regardless of whether the state of the data types in
the file spaces is enabled or disabled.
v To display the controlling replication rules and verify the connection with the
target replication server, issue the following command:
validate replication node1,node2 verifyconnection=yes
Specifying the LISTFILES parameter signifies that the WAIT parameter is set to
YES and that you cannot issue the WAIT parameter from the server console.
If the data that belongs to a client node is being replicated, any attempt to replicate
the data by issuing another REPLICATE NODE command fails. For example, suppose
the backup data that belongs to a client node is scheduled for replication at 6:00
a.m. Replication of the archive data is scheduled for 8:00 a.m. Replication of the
backup data must complete before replication of the archive data starts.
Example
If you have many client nodes and are replicating a large amount of data, you can
replicate data more efficiently by issuing several REPLICATE NODE commands in
separate schedules. For example, replicate the data that belongs to the most
important client nodes first in a single command. After the data that belongs to
those client nodes is replicated, replicate the data that belongs to the other nodes.
Tip: To ensure that replication for the first group of client nodes finishes before the
replication for the other nodes starts, specify WAIT=YES on the first REPLICATE NODE
command. For example, if you want to replicate the data that belongs to NODE1
and NODE2 before the data that belongs to NODE3 and NODE4, issue the
following commands:
replicate node node1,node2 wait=yes
replicate node node3,node4
Data is replicated for a file space only when the following conditions are true:
v The replication state for the data type in the file space is enabled. For example, if the
replication state for archive data in a file space is enabled, archive data in the
file space is replicated.
v The controlling rule for the data type in the file space is not NONE. For
example, suppose the replication rule for archive data in a file space is
DEFAULT. If the file space rules and client node rules for archive data are both
DEFAULT and the server rule for archive data is NONE, archive data in the file
space is not replicated.
To replicate data by file space, issue the REPLICATE NODE command and specify the
file space name or file space identifier. For example, to replicate data in file space
/a in NODE1, issue the following command:
replicate node node1 /a
Tip: With the REPLICATE NODE command, you can also replicate data by priority
and by data type. To achieve greater control over replication processing, you can
combine replication by file space, data type, and priority.
To obtain information about the node replication process while it is running, issue
the QUERY PROCESS command:
query process
For node replication purposes, each file space contains three logical file spaces:
v One for backup objects
v One for archive objects
v One for space-managed objects
By default, the QUERY PROCESS command reports results for each logical file space.
Other factors also affect the output of the QUERY PROCESS command:
v If a file space has a replication rule that is set to NONE, the file space is not
included in the count of file spaces that are being processed.
v If you specify data types in the REPLICATE NODE command, only those data types
are included in the count of file spaces that are being processed, minus any file
spaces that are specifically excluded.
In this example, NODE1 has four file spaces with three object types, so 12 logical
file spaces are processed for replication. The QUERY PROCESS command output
shows that 11 logical file spaces completed replication.
Related concepts:
Node replication processing on page 966
If you do not specify a type on the REPLICATE NODE command, all data types are
replicated.
Tip: Using the REPLICATE NODE command, you can also replicate data by file space
and by priority. To achieve greater control over replication processing, you can
combine replication by data type, file space, and priority.
Related concepts:
Node replication processing on page 966
Tip: Using the REPLICATE NODE command, you can also replicate data by file space
and by data type. To achieve greater control over replication processing, you can
combine replication by priority, file space, and data type.
Related concepts:
Node replication processing on page 966
The name of the file space is /a. It is common to NODE1 and NODE2.
To replicate the data in the file space, issue the following command:
replicate node node1,node2 /a priority=normal datatype=archive,spacemanaged
Issuing this command replicates archive and space-managed data that is assigned
the replication rule ALL_DATA.
Related concepts:
Node replication processing on page 966
Use the MAXSESSIONS parameter to specify the maximum number of sessions to use.
When you calculate the value for the MAXSESSIONS parameter, consider the available
network bandwidth and the processor capacity of source and target replication
servers.
Consider the number of logical and physical drives that can be dedicated to the
replication process. You must ensure that there are enough drives available for
replication processing because other server processes or client sessions might also
be using drives. The number of mount points and drives available for replication
operations depends on the following factors:
v Tivoli Storage Manager server activity that is not related to replication
v System activity
v The mount limits of the device classes for the sequential-access storage pools
that are involved
v The availability of a physical drive on the source and target replication servers,
if the device type is not FILE
v The available network bandwidth and the processor capacity of source and
target replication servers
Issue the REPLICATE NODE command and specify the MAXSESSIONS parameter to
determine the number of data sessions. For example, to set the maximum number
of replication sessions to 6 for NODE_GROUP1, issue the following command:
replicate node node_group1 maxsessions=6
| Do not use a storage pool that is enabled for data deduplication to test replication.
| By using storage pools that are not enabled for data deduplication to test
| replication processing, you avoid processing extents that can increase the amount
| of preprocessing time of the replication process. By determining the data transfer
| and network capability of your replication operation without extent processing,
| you get a better representation of the capability of your system. Test replication
| processing with storage pools that are enabled for data deduplication if you want
| to determine the effect of data deduplication on replication performance alone.
| You must calculate the bytes-per-hour value for each source server individually.
| You can determine which method is the most suitable for the server, based on its
| bytes-per-hour value.
Complete the following steps to determine how much data you can replicate
during a specified timeframe so that you can tune replication processing for a
server. Repeat these steps to obtain the bytes-per-hour value for each server that you
want to use for replication processing.
| 1. Complete the following steps to select the appropriate data:
| a. Select one or more nodes and file spaces that have approximately 500 GB to
| 1 TB of total data.
| b. Select data that is typical of the data that you replicate on a routine basis.
| c. Select nodes that are configured for replication.
| 2. To display the amount of data in a file space, issue the QUERY OCCUPANCY
| command, as shown in the example after these steps.
3. Select a timeframe during which replication is running normally.
4. If you plan to use Secure Sockets Layer (SSL) as the communication protocol
for replication processing, ensure that SSL is enabled.
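For example, to check the amount of data in a candidate file space for step 2, you
might issue a command like the following (the node name NODE1 and the file
space /data are illustrative):
query occupancy node1 /data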
After you determine the bytes-per-hour value for each server, you can choose a
method to use for the initial replication.
To see how your network manages more workload during replication, complete
the following tasks:
1. Increase the value of the MAXSESSIONS parameter by 10 on the REPLICATE NODE
command and run the test again, as shown in the sketch after these steps.
2. Increase the number of replication sessions by 10 to transfer more data
concurrently during replication. Alternatively, if you determine that 10
replication sessions (the default MAXSESSIONS value) cause your network to
degrade below acceptable levels, decrease the value of the MAXSESSIONS
parameter.
3. Repeat the process, and adjust the value of the MAXSESSIONS parameter to
determine optimal data transfer capability.
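For example, if the baseline test used the default of 10 sessions, the retest in step 1
might look like the following sketch (NODE_GROUP1 is reused from the earlier
example):
replicate node node_group1 maxsessions=20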
To determine the replication state of a data type in a file space, issue the QUERY
FILESPACE command with the FORMAT parameter set to DETAILED.
Restriction: You cannot disable or enable replication for an entire file space. You
can only disable and enable replication of a data type in a file space.
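For example, the following sketch disables replication of archive data in file space
/a for NODE1 and then displays the detailed file space information to verify the
change (the node and file space names are reused from earlier examples):
update filespace node1 /a datatype=archive replstate=disabled
query filespace node1 /a format=detailed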
To determine the replication state of a node, issue the QUERY NODE command.
v To disable replication for a node, issue the UPDATE NODE command and specify
REPLSTATE=DISABLED. For example, to disable replication for NODE1, issue the
following command:
update node node1 replstate=disabled
v To enable replication for a node, issue the UPDATE NODE command and specify
REPLSTATE=ENABLED. For example, to enable replication for NODE1, issue the
following command:
update node node1 replstate=enabled
Remember: If you disable replication for a client node while data that belongs to
the node is being replicated, the replication process is not affected. Replication of
the data continues until all the data that belongs to the client node is replicated.
However, replication for the client node will be skipped the next time that
replication runs.
Related concepts:
Replication state on page 970
Disabling outbound or inbound sessions can be useful if, for example, you have a
planned network outage that will affect communication between source and target
replication servers. Disabling and enabling sessions affects not only node
replication operations but also certain other types of operations.
To display the status and direction of sessions for a particular server, issue the
QUERY STATUS command.
Remember:
v When you disable sessions for a particular server, you disable the following
types of sessions in addition to replication:
Server-to-server event logging
Enterprise management
Server registration
LAN-free sessions between storage agents and the Tivoli Storage Manager
server
Data storage using virtual volumes
v If you disable only outbound sessions on a source replication server, client nodes
that store data on the source server do not have their data replicated. However,
inbound sessions to the target server can occur.
If a server is the target for multiple source replication servers and you disable
outbound sessions on a single source server, the target replication server
continues to receive replicated data from the other source replication servers.
When you disable outbound node replication processing, you prevent new
replication processes from starting on a source replication server. Enabling
outbound node replication processing is required after a database restore.
Restriction: When you restore the Tivoli Storage Manager database, replication is
automatically disabled. Disabling replication prevents the server from deleting
copies of data on the target replication server that are not referenced by the
restored database. After a database restore, you must re-enable replication.
To display the status of replication processing for a particular server, issue the
QUERY STATUS command.
Issue the following commands on the source replication server to disable and
enable replication processing:
v To disable replication, issue the DISABLE REPLICATION command.
v To enable replication, issue the ENABLE REPLICATION command.
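For example, before a planned outage you might issue the following command on
the source replication server:
disable replication
After the outage, enable replication processing again:
enable replication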
Disabling a replication rule can be useful if, for example, you replicate groups of
normal-priority and high-priority client nodes on different schedules. For example,
suppose that the data that belongs to some client nodes is assigned the
ALL_DATA_HIGH_PRIORITY replication rule. The data that belongs to other
client nodes is assigned the ALL_DATA replication rule. The client nodes are
separated into groups, in which some of the nodes in each group have
high-priority data and other nodes in the group have normal-priority data.
You schedule replication for each group to take place at different times. However, a
problem occurs, and replication processes take longer than expected to complete.
As a result, the high-priority data that belongs to client nodes in groups that are
scheduled late in the replication cycle is not being replicated.
To replicate the high-priority data as soon as possible, you can disable the
ALL_DATA rule and rerun replication. When you rerun replication, only the client
node data that is assigned the ALL_DATA_HIGH_PRIORITY rule is replicated.
To disable and enable replication rules, complete one of the following steps:
v To disable a replication rule, issue the UPDATE REPLRULE command and specify
STATE=DISABLED. For example, to disable the replication rule
ACTIVE_DATA_HIGH_PRIORITY, issue the following command:
update replrule active_data_high_priority state=disabled
v To enable a replication rule, issue the UPDATE REPLRULE command and specify
STATE=ENABLED. For example, to enable the replication rule
ACTIVE_DATA_HIGH_PRIORITY, issue the following command:
update replrule active_data_high_priority state=enabled
Related concepts:
Replication state on page 970
To prevent replication of a data type and purge the data from the file space on the
target replication server, issue the UPDATE FILESPACE command and specify
REPLSTATE=PURGEDATA. For example, to prevent replication of backup data in file
space /a on NODE1 and delete the backup data in file space /a on the target
replication server, issue the following command:
update filespace node1 /a datatype=backup replstate=purgedata
Data is purged the next time that replication runs for the file space. After data is
purged, the replication rule for the specified data type is set to DEFAULT.
Replication for the data type is disabled.
Disabling replication prevents the Tivoli Storage Manager server from deleting
copies of data on the target replication server that are not referenced by the
restored database. Before re-enabling replication, determine whether copies of data
that are on the target replication server are needed. If they are, complete the steps
described in the following example. In the example, the name of the source
replication server is PRODSRV. DRSRV is the name of the target replication server.
NODE1 is a client node with replicated data on PRODSRV and DRSRV.
Restriction: You cannot use Secure Sockets Layer (SSL) for database restore
operations.
1. Remove NODE1 from replication on PRODSRV and DRSRV by issuing the
REMOVE REPLNODE command:
remove replnode node1
2. Update the NODE1 definitions on PRODSRV and DRSRV. When replication occurs,
DRSRV sends the data to PRODSRV that was lost because of the database
restore.
a. On DRSRV, issue the UPDATE NODE command and specify the replication
mode SYNCSEND:
update node node1 replstate=enabled replmode=syncsend
b. On PRODSRV, issue the UPDATE NODE command and specify the replication
mode SYNCRECEIVE:
update node node1 replstate=enabled replmode=syncreceive
3. On DRSRV, set the replication rules to match the rules on PRODSRV. For
example, if only archive data was being replicated from PRODSRV to DRSRV,
set the rules on DRSRV to replicate only archive data from DRSRV to
PRODSRV. Backup and space-managed data will not be replicated to
PRODSRV.
To set rules, you can issue the following commands:
v UPDATE FILESPACE
v UPDATE NODE
v SET ARREPLRULEDEFAULT
v SET BKREPLRULEDEFAULT
v SET SPREPLRULE
4. On DRSRV, issue the SET REPLSERVER command to set PRODSRV as the target
replication server:
set replserver prodsrv
5. On DRSRV, issue the REPLICATE NODE command to replicate data belonging to
NODE1:
replicate node node1
The original replication configuration is restored. PRODSRV has all the data that
was lost because of the database restore.
Remember: In step 4 on page 1021, you set PRODSRV as the target replication
server for DRSRV. If, in your original configuration, you were replicating data from
DRSRV to another server, you must reset the target replication server on DRSRV.
For example, if you were replicating data from DRSRV to BKUPDRSRV, issue the
following command on DRSRV:
set replserver bkupdrsrv
Important: You cannot display information about running replication processes for
client nodes that are being converted from import and export operations to
replication operations. The data synchronization process might run for a long time,
but it occurs only once for a client node that is being converted.
The default record-retention period for completed processes is 30 days. To display
the retention period, issue the QUERY STATUS command and check the value in the
Replication Record Retention Period field.
The record for a running process is updated only after a group of files is processed
and committed. A file group consists of 2,000 files or 2 GB of data, whichever is
smaller. For example, if a single file is 450 GB, the record is not updated for a
relatively long time. If you notice that the number of files not yet replicated for a
running process is not decreasing fast enough, network bandwidth or time might
be insufficient to replicate the amount of data. Take one of the following actions:
v Provide more time for replication.
v Decrease the amount of data to replicate.
v Create more parallel data-transmission sessions between the source and target
replication servers by increasing the value of the MAXSESSIONS parameter.
Increase the value of the MAXSESSIONS parameter only if network bandwidth and
processor resources for the source and target replication servers are sufficient.
The server activity log contains messages with the following information:
v The nodes that were enabled or disabled for replication
v The number of files that were eligible to be replicated compared to the number
of those files that were already stored on the target server
v The number of files that were successfully replicated and the number of files
that were missed
v The number of files on the target server that were deleted
To display the number of files stored on source and target replication servers, issue
the QUERY REPLNODE command. You can issue the command on a source or a target
replication server.
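For example, to display the number of files that belong to NODE1 on the source
and target replication servers, you might issue the following command (NODE1 is
reused from earlier examples):
query replnode node1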
The information in the output for QUERY REPLNODE includes files that are stored at
the time the command is issued. If a replication process is running, the information
does not include files that are waiting to be transferred. Information is reported by
data type. For example, you can determine the number of backup files that belong
to a client node that are stored on the source and the target replication servers.
In the output, check the values in the fields that represent bytes replicated and
bytes transferred for each data type:
v Replicated bytes are bytes that were replicated to the target replication server. If
a file was stored in a deduplicated storage pool, the number of bytes in the
stored file might be less than the number of bytes in the original file. The value
in this field represents the number of physical bytes in the original file.
v Transferred bytes represent the number of bytes that were sent to the target
replication server. For files stored in a deduplicated storage pool, the value in
this field includes the number of bytes in the original file before duplicate
extents were removed. If duplicate extents were already on the target replication
server, the number of bytes in the original file is more than the number of bytes
transferred.
Related concepts:
Replication of deduplicated data on page 974
Active log mirror on page 660
Related tasks:
Part 6, Protecting the server, on page 883
To display the retention period for replication records, issue the QUERY STATUS
command on the source replication server.
To set the retention period for replication records, issue the SET REPLRETENTION
command.
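For example, the following sketch sets the retention period to 60 days (the value is
illustrative):
set replretention 60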
Replication records that exceed the retention period are deleted from the database
by Tivoli Storage Manager during automatic inventory-expiration processing. As a
result, the amount of time that replication records are retained can exceed the
specified retention period.
If a replication process runs longer than the retention period, the record of the
process is not deleted until the process ends, the retention period passes, and
expiration runs.
If any schedules were defined on the source replication server, you can redefine
them on the target replication server. Client node data on the target replication
server is now managed by policies on the target replication server. For example,
file expiration and deletion are managed by the target replication server.
Before you begin this procedure, delete any administrative schedules on source
replication servers that issue the REPLICATE NODE command for the client nodes that
are included in the configuration.
To verify that the target replication server was removed, issue the QUERY STATUS
command on the source replication server. If the target replication server was
removed, the field Target Replication Server is blank.
Tip: If you do not want to keep replicated node data on the target replication
server, you can delete it.
To recover from a disaster, you must know the location of your offsite recovery
media. DRM helps you to determine which volumes to move offsite and back
onsite and track the location of the volumes.
| You can use complementary technologies to protect the Tivoli Storage Manager
| server and to provide an alternative to disaster recovery. For example, you can use
| DB2 HADR to replicate the Tivoli Storage Manager database, or you can use
| device-to-device replication.
Before you use DRM, familiarize yourself with Chapter 33, Protecting and
recovering the server infrastructure and client data, on page 917.
| Note: Unless otherwise noted, you need system privilege class to perform DRM
| tasks.
Related reference:
Disaster recovery manager checklist on page 1063
The disaster recovery plan file on page 1068
The following table describes how to set defaults for the disaster recovery plan file.
Table 88. Defaults for the disaster recovery plan file
Primary storage pools to be processed: When the recovery plan file is generated, you
can limit processing to specified pools. The recovery plan file will not include
recovery information and commands for storage pools with a data format of
NETAPPDUMP.
For example, to specify that only the primary storage pools named PRIM1 and PRIM2
are to be processed, enter:
set drmprimstgpool prim1,prim2
Note: To remove all previously specified primary storage pool names and thus select
all primary storage pools for processing, specify a null string ("") in SET
DRMPRIMSTGPOOL.
To override the default: Specify primary storage pool names in the PREPARE
command
Copy storage pools to be processed: When the recovery plan file is generated, you can
limit processing to specified pools.
For example, to specify that only the copy storage pools named COPY1 and COPY2
are to be processed, enter:
set drmcopystgpool copy1,copy2
To remove any specified copy storage pool names, and thus select all copy storage
pools, specify a null string ("") in SET DRMCOPYSTGPOOL. If you specify both
primary storage pools (using the SET DRMPRIMSTGPOOL command) and copy
storage pools (using the SET DRMCOPYSTGPOOL command), the specified copy
storage pools should be those used to back up the specified primary storage pools.
To override the default: Specify copy storage pool names in the PREPARE command
Active-data pools to be processed: When the recovery plan file is generated, you can
limit processing to specified pools.
The default at installation: None
For example, to specify that only the active-data pools named ACTIVEPOOL1 and
ACTIVEPOOL2 are to be processed, enter:
set drmactivedatastgpool activepool1,activepool2
To remove any specified active-data pool names, specify a null string ("") in SET
DRMACTIVEDATASTGPOOL.
Active-data pool volumes in MOUNTABLE state are processed only if you specify the
active-data pools using the SET DRMACTIVEDATASTGPOOL command or the
ACTIVEDATASTGPOOL parameter on the MOVE DRMEDIA, QUERY DRMEDIA,
and PREPARE commands. Processing of active-data pool volumes in MOUNTABLE
state is different than the processing of copy storage pool volumes in MOUNTABLE
state. All MOUNTABLE copy storage pool volumes are processed regardless of whether
you specify copy storage pools with either the SET DRMCOPYSTGPOOL command
or the COPYSTGPOOL parameter.
To override the default: Specify active-data pool names using the MOVE DRMEDIA,
QUERY DRMEDIA, or PREPARE command.
Prefix for recovery instructions: You can specify a prefix for the path names of the
recovery instructions source files.
The default at installation: For a description of how DRM determines the default
prefix, see the INSTRPREFIX parameter of the PREPARE command section in the
Administrator's Reference or enter HELP PREPARE from the administrative client
command line.
The disaster recovery plan file will include, for example, the following file:
/u/recovery/plans/rpp.RECOVERY.INSTRUCTIONS.GENERAL
To override the default: The INSTRPREFIX parameter with the PREPARE command
Prefix for the recovery plan file: You can specify a prefix to the path name of the
recovery plan file. DRM uses this prefix to identify the location of the recovery plan
file and to generate the macros and script file names included in the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE and
RECOVERY.SCRIPT.NORMAL.MODE stanzas.
The default at installation: For a description of how DRM determines the default
prefix, see the PLANPREFIX parameter of the PREPARE command section in the
Administrator's Reference or enter HELP PREPARE from the administrative client
command line.
The disaster recovery plan file name created by PREPARE processing will be in the
following format:
/u/server/recoveryplans/20000603.013030
To override the default: The PLANPREFIX parameter with the PREPARE command
The default at installation: All copy storage pool volumes in the MOUNTABLE state
For example, to specify that DRM should not read the volume labels, enter:
set drmchecklabel no
Expiration period of a database backup series: A database backup series (full plus
incremental and snapshot) is eligible for expiration if all of these conditions are true:
v The volume state is VAULT or the volume is associated with a device type of
SERVER (for virtual volumes).
v It is not the most recent database backup series.
v The last volume of the series exceeds the expiration value, which is the number of
days since the last backup in the series.
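For example, to set the expiration value to 30 days (the value is illustrative), enter:
set drmdbbackupexpiredays 30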
For example, to specify the vault name as IRONVAULT, the contact name as J.
SMITH, and the telephone number as 1-555-000-0000, enter:
set drmvaultname "Ironvault, J. Smith, 1-555-000-0000"
Tip: Enter your site-specific information in the stanzas when you first create the
plan file or after you test it.
Enter your instructions in flat files that have the following names:
v prefix.RECOVERY.INSTRUCTIONS.GENERAL
v prefix.RECOVERY.INSTRUCTIONS.OFFSITE
v prefix.RECOVERY.INSTRUCTIONS.INSTALL
v prefix.RECOVERY.INSTRUCTIONS.DATABASE
v prefix.RECOVERY.INSTRUCTIONS.STGPOOL
Note: The files created for the recovery instructions must be physical sequential
files.
RECOVERY.INSTRUCTIONS.GENERAL
Include information such as administrator names, telephone numbers, and
location of passwords. For example:
Recovery Instructions for Tivoli Storage Manager Server ACMESRV on system ZEUS
Joe Smith (wk 002-000-1111 hm 002-003-0000): primary system programmer
Sally Doe (wk 002-000-1112 hm 002-005-0000): primary recovery administrator
Jane Smith (wk 002-000-1113 hm 002-004-0000): responsible manager
Security Considerations:
Joe Smith has the password for the Admin ID ACMEADM. If Joe is unavailable,
you need to either issue SET AUTHENTICATION OFF or define a new
administrative user ID at the replacement Tivoli Storage Manager server console.
RECOVERY.INSTRUCTIONS.OFFSITE
Include information such as the offsite vault location, courier name, and
telephone numbers. For example:
RECOVERY.INSTRUCTIONS.INSTALL
Include the following installation information:
Restoring the base server system from boot media or, if boot media is
unavailable, server installation and the location of installation volumes.
For example:
Most likely you will not need to reinstall the Tivoli Storage Manager server and
administrative clients because we use
mksysb to backup the rootvg volume group, and the Tivoli Storage Manager server
code and configuration files exist in this group.
However, if you cannot do a mksysb restore of the base server system,
and instead have to start with a fresh AIX build, you may need
to add Tivoli Storage Manager server code to that AIX system.
The install volume for the Tivoli Storage Manager server is INS001. If that is
lost, you will need to contact Copy4You Software, at 1-800-000-0000, and
obtain a new copy. Another possibility is the local IBM Branch office
at 555-7777.
RECOVERY.INSTRUCTIONS.DATABASE
Include information about how to recover the database and about how
much space is required. For example:
You will need to find replacement disk space for the server database. We
have an agreement with Joe Replace that in the event of a disaster, he
will provide us with disk space.
RECOVERY.INSTRUCTIONS.STGPOOL
Include information on primary storage pool recovery instructions. For
example:
Do not worry about the archive storage pools during this disaster recovery.
Focus on migration and backup storage pools.
The most important storage pool is XYZZZZ.
Tip: The plan file that DRM generates is a template that contains information,
including commands for recovering the database, that might not apply to your
replacement systems or to your particular recovery scenario. To modify the plan or
to store additional instructions that you will need during recovery from an actual
disaster, use the RECOVERY.INSTRUCTIONS stanzas. Enter your site-specific
information in the stanzas when you first create the plan file or after you test it.
Use the following procedure to specify information about server and client
machines and to store it in the server database:
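A minimal sketch of the kind of commands this procedure uses (the machine name
DISTRICT5, its attributes, and the client node name ACMENODE are hypothetical):
define machine district5 building=101 floor=27 priority=1
define machnodeassociation district5 acmenode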
You can define media that contain softcopy manuals that you would need during
recovery. For example, to define a CD-ROM containing the AIX 5.1 manuals that
are on volume CD0001, enter:
define recoverymedia aix51manuals type=other volumes=cd0001
description="AIX 5.1 Bookshelf"
For details about the recovery plan file, see The disaster recovery plan file on
page 1068.
Before creating a disaster recovery plan, back up your storage pools and then back
up the database. See Backing up primary storage pools on page 930 and Backing
up the server database on page 918 for details about these procedures.
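For example, a sketch of the backup sequence (the pool names BACKUPPOOL and
COPYPOOL and the device class LTOCLASS are hypothetical):
backup stgpool backuppool copypool
backup db devclass=ltoclass type=full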
If you manually send backup media offsite, see Moving copy storage pool and
active-data pool volumes offsite on page 1046. If you use virtual volumes, see
Using virtual volumes to store data on another server on page 737.
When your backups are both offsite and marked offsite, you can create a disaster
recovery plan.
You can use the Tivoli Storage Manager scheduler to periodically run the
PREPARE command (see Chapter 20, Automating server operations, on page
633).
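For example, a minimal sketch of an administrative schedule that runs the PREPARE
command nightly (the schedule name DAILY_PREPARE and the start time are
illustrative):
define schedule daily_prepare type=administrative cmd="prepare"
active=yes starttime=01:00 period=1 perunits=days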
Tips:
v The plan file that DRM generates is a template that contains information,
including commands for recovering the database, that might not apply to your
replacement systems or to your particular recovery scenario. To modify the plan
or to store additional instructions that you will need during recovery from an
actual disaster, use the RECOVERY.INSTRUCTIONS stanzas. Enter your
site-specific information in the stanzas when you first create the plan file or after
you test it.
v DRM creates a plan that assumes that the latest database full plus incremental
series would be used to restore the database. However, you may want to use
DBSNAPSHOT backups for disaster recovery and retain your full plus
incremental backup series on site to recover from possible availability problems.
In this case, you must specify the use of DBSNAPSHOT backups in the
PREPARE command. For example:
prepare source=dbsnapshot
For example, to store the recovery plan file locally in the /u/server/
recoveryplans/ directory, enter:
prepare planprefix=/u/server/recoveryplans/
Recovery plan files that are stored locally are not automatically expired. You
should periodically delete down-level recovery plan files manually. DRM appends
to the file name the date and time (yyyymmdd.hhmmss). For example:
/u/server/recoveryplans/20000925.120532
Set up the source and target servers and define a device class with a device type of
SERVER (see Setting up source and target servers for virtual volumes on page
739 for details). For example, assume a device class named TARGETCLASS is
defined on the source server where you create the recovery plan file. Then to
create the plan file, enter:
prepare devclass=targetclass
The recovery plan file is written as an object on the target server, and a volume
history record is created on the source server. For more about recovery plan files
that are stored on target servers, see Displaying information about recovery plan
files on page 1042.
For an example of the contents of a recovery plan file, see The disaster recovery
plan file on page 1068. You cannot issue the commands shown below from a
server console. An output delay can occur if the plan file is located on tape.
v From the source server: Issue the following command for a recovery plan file
created on September 1, 2000 at 4:39 a.m. with the device class TARGETCLASS:
query rpfcontent marketing.20000901.043900 devclass=targetclass
v From the target server: Issue the following command for a recovery plan file
created on August 31, 2000 at 4:50 a.m. on a source server named MARKETING
whose node name is BRANCH8:
query rpfcontent marketing.20000831.045000 nodename=branch8
To display a list of recovery plan files, use the QUERY RPFILE command. See
Displaying information about recovery plan files on page 1042 for more
information.
All recovery plan files that meet the criteria are eligible for expiration if both of the
following conditions exist:
v The last recovery plan file of the series is over 90 days old.
v The recovery plan file is not associated with the most recent backup series. A
backup series consists of a full database backup and all incremental backups that
apply to that full backup. Another series begins with the next full backup of the
database.
Expiration applies to plan files based on both full plus incremental and snapshot
database backups. Note, however, that expiration does not apply to plan files
stored locally. See Storing the disaster recovery plan locally on page 1041.
When the records are deleted from the source server and the grace period is
reached, the objects are deleted from the target server. The record for the latest
recovery plan file is not deleted.
To delete recovery plan files, issue the DELETE VOLHISTORY command. For
example, to delete records for recovery plan files that were created on or before
08/30/2000 and assuming full plus incremental database backup series, enter the
following command:
delete volhistory type=rpfile todate=08/30/2000
To limit the operation to recovery plan files that were created assuming database
snapshot backups, specify TYPE=RPFSNAPSHOT.
1. Move new backup media offsite and update the database with their locations.
See Moving copy storage pool and active-data pool volumes offsite on page
1046 for details.
2. Return expired or reclaimed backup media onsite and update the database with
their locations. See Moving copy storage pool and active-data pool volumes
on-site on page 1048 for details.
3. Offsite recovery media management does not process virtual volumes. To
display all virtual copy storage pool, active-data pool, and database backup
volumes that have their backup objects on the remote target server, issue the
QUERY DRMEDIA command. For example, enter the following command.
query drmedia * wherestate=remote
The disaster recovery plan includes the location of copy storage pool volumes and
active-data pool volumes. The plan can provide a list of offsite volumes required to
restore a server.
The following diagram shows the typical life cycle of the recovery media:
[Figure: Recovery media life cycle. The diagram shows backup storage pool,
active-data pool, and database backup volumes moving from the storage hierarchy
through the states MOUNTABLE, NOTMOUNTABLE, COURIER, VAULT,
VAULTRETRIEVE, COURIERRETRIEVE, and ONSITERETRIEVE, with private and
scratch volumes returning to reuse after the REUSEDELAY and
DRMDBBACKUPEXPIREDAYS periods elapse.]
DRM assigns the following states to volumes. The location of a volume is known
at each state.
MOUNTABLE
The volume contains valid data, and Tivoli Storage Manager can access it.
NOTMOUNTABLE
The volume contains valid data and is onsite, but Tivoli Storage Manager
cannot access it.
COURIER
The volume contains valid data and is in transit to the vault.
VAULT
The volume contains valid data and is at the vault.
VAULTRETRIEVE
The volume, which is located at the offsite vault, no longer contains valid
data and is to be returned to the site. For more information about
reclamation of offsite copy storage pool volumes and active-data pool
volumes, see Reclamation of off-site volumes on page 379. For
information on expiration of database backup volumes, see step 1 on page
1048.
Complete the following steps to identify the database backup, copy storage pool,
and active-data pool volumes and move them offsite:
1. Identify the copy storage pool, active-data pool, and database backup volumes
to be moved offsite. For example, issue the following command:
query drmedia * wherestate=mountable
DRM displays information similar to the following output:
Volume Name State Last Update Automated
Date/Time LibName
--------------- ---------------- ------------------- -----------------
TPBK05 Mountable 01/01/2000 12:00:31 LIBRARY
TPBK99 Mountable 01/01/2000 12:00:32 LIBRARY
TPBK06 Mountable 01/01/2000 12:01:03 LIBRARY
| Restriction: Do not run the MOVE DRMEDIA and BACKUP STGPOOL commands
| concurrently. Ensure that the storage pool backup processes are complete before
| you issue the MOVE DRMEDIA command.
For all volumes in the MOUNTABLE state, DRM does the following:
v Updates the volume state to NOTMOUNTABLE and the volume location
according to the SET DRMNOTMOUNTABLENAME command. If this command is not
issued, the default location is NOTMOUNTABLE.
v For a copy storage pool volume or active-data pool volume, updates the
access mode to unavailable.
v For a volume in an automated library, checks the volume out of the library.
a. During checkout processing, SCSI libraries request operator intervention. To
bypass these requests and eject the cartridges from the library, first issue the
following command:
move drmedia * wherestate=mountable remove=no
b. Access a list of the volumes by issuing the following command:
query drmedia wherestate=notmountable
From this list identify and remove the cartridges (volumes) from the library.
For all volumes in the NOTMOUNTABLE state, DRM updates the volume state
to COURIER and the volume location according to the SET
DRMCOURIERNAME command. If the SET command is not yet issued, the default
location is COURIER. For more information, see Specifying defaults for offsite
recovery media management on page 1033.
4. When the vault location confirms receipt of the volumes, issue the MOVE
DRMEDIA command in the COURIER state. For example:
move drmedia * wherestate=courier
For all volumes in the COURIER state, DRM updates the volume state to
VAULT and the volume location according to the SET DRMVAULTNAME command.
If the SET command is not yet issued, the default location is VAULT. For more
information, see Specifying defaults for offsite recovery media management
on page 1033.
5. Display a list of volumes that contain valid data at the vault. Issue the
following command:
query drmedia wherestate=vault
6. If you do not want to step through all the states, you can use the TOSTATE
parameter on the MOVE DRMEDIA command to specify the destination state. For
example, to change the volumes from NOTMOUNTABLE state to VAULT state,
issue the following command:
move drmedia * wherestate=notmountable tostate=vault
For all volumes in the NOTMOUNTABLE state, DRM updates the volume state
to VAULT and the volume location according to the SET DRMVAULTNAME
command. If the SET command is not yet issued, the default location is VAULT.
See Preparing for disaster recovery on page 1051 for an example that
demonstrates sending server backup volumes offsite using MOVE DRMEDIA and QUERY
DRMEDIA commands.
To ensure that the database can be returned to an earlier level and database
references to files in the copy storage pool or active-data pool are still valid,
specify the same value for the REUSEDELAY parameter in your copy storage
pool and active-data pool definitions. If copy storage pools or active-data pools
managed by DRM have different REUSEDELAY values, set the
DRMDBBACKUPEXPIREDAYS value to the highest REUSEDELAY value.
A database backup volume is considered eligible for expiration if all of the
following conditions are true:
v The age of the last volume of the series has exceeded the expiration value.
This value is the number of days since the last backup in the series. At
installation, the expiration value is 60 days. To override this value, issue the
SET DRMDBBACKUPEXPIREDAYS command.
v For volumes that are not virtual volumes, all volumes in the series are in the
VAULT state.
v The volume is not part of the most recent database backup series.
Database backup volumes that are virtual volumes are removed during
expiration processing. This processing is started manually by issuing the
EXPIRE INVENTORY command or automatically through the EXPINTERVAL
option setting specified in the server options file.
2. Move a copy storage pool volume or an active-data pool volume on-site for
reuse or disposal. A copy storage pool volume or an active-data pool volume
can be moved on-site if it has been EMPTY for at least the number of days
specified with the REUSEDELAY parameter on the DEFINE STGPOOL
command. A database backup volume can be moved on-site if the database
backup series is EXPIRED according to the rules outlined in step 1. To
determine which volumes to retrieve, issue the following command:
query drmedia * wherestate=vaultretrieve
The server dynamically determines which volumes can be moved back on-site.
When you issue QUERY DRMEDIA WHERESTATE=VAULTRETRIEVE, the
field Last Update Date/Time in the output will contain the date and time that
the state of the volume was moved to VAULT, not VAULTRETRIEVE. Because
the server makes the VAULTRETRIEVE determination dynamically, issue
QUERY DRMEDIA WHERESTATE=VAULTRETRIEVE without the
BEGINDATE, ENDDATE, BEGINTIME or ENDTIME parameters. Doing so will
ensure that you identify all volumes that are in the VAULTRETRIEVE state.
3. After the vault location acknowledges that the volumes have been given to the
courier, issue the MOVE DRMEDIA command.
move drmedia * wherestate=vaultretrieve
The server does the following for all volumes in the VAULTRETRIEVE state:
v Changes the volume state to COURIERRETRIEVE.
The server does the following for all volumes in the COURIERRETRIEVE
state:
v Moves the volumes on-site, where they can be reused or disposed of.
v Deletes the database backup volumes from the volume history table.
v For scratch copy storage pool volumes or active-data pool volumes, deletes
the record in the database. For private copy storage pool volumes or
active-data pool volumes, updates the access to read/write.
If IBM Tivoli Storage Manager is set up to use Secure Sockets Layer (SSL) for
client/server authentication, a digital certificate file, cert.kdb, is created as part of
the process. This file includes the server's public key, which allows the client to
encrypt data. The digital certificate file cannot be stored in the server database
because the Global Security Kit (GSKit) requires a separate file in a certain format.
1. Keep backup copies of the cert.kdb and cert256.arm files.
2. Regenerate the certificate file if both the original files and any copies are
lost or corrupted. For details about this procedure, see Troubleshooting the
certificate key database on page 890.
| Ensure that you set up the DRM and perform the daily operations to protect the
| database, data, and storage pools.
Setup
| 1. License DRM by issuing the REGISTER LICENSE command.
| 2. Ensure that the device configuration and volume history files exist.
| 3. Back up the storage pools by issuing the BACKUP STGPOOL command.
| 4. Copy active data to active-data pools by using the COPY ACTIVEDATA
| command.
| Restriction: Ensure that the BACKUP STGPOOL command and the BACKUP
| DB command are complete before you issue the MOVE DRMEDIA
| command.
6. Send the backup volumes and disaster recovery plan file to the vault.
7. Generate the disaster recovery plan.
Day 2
1. Back up client files.
2. Back up active and inactive data that is in the primary storage pools to
copy storage pools. Copy the active data that is in primary storage
pools to active-data pools.
3. Back up the database (for example, a database snapshot backup).
| Restriction: Ensure that the BACKUP STGPOOL command and the BACKUP
| DB command are complete before you issue the MOVE DRMEDIA
| command.
5. Send the backup volumes and disaster recovery plan file to the vault.
6. Generate the disaster recovery plan.
Day 3
1. Automatic storage pool reclamation processing occurs.
2. Back up client files.
3. Back up the active and inactive data that is in primary storage pools to
copy storage pools. Copy the active data that is in primary storage
pools to active-data pools.
4. Back up the database (for example, a database snapshot backup).
| Tip: You can maintain and schedule custom maintenance scripts by using the
| Administration Center.
1. Record the following information in the RECOVERY.INSTRUCTIONS stanza
source files:
v Software license numbers
v Sources of replacement hardware
v Any recovery steps specific to your installation
2. Store the following information in the database:
| v Server and client node machine information (DEFINE MACHINE, DEFINE
| MACHNODEASSOCIATION, and INSERT MACHINE commands)
| v The location of the boot recovery media (DEFINE RECOVERYMEDIA command)
3. Schedule automatic nightly backups to occur in the following order:
| Restriction: Ensure that the BACKUP STGPOOL command and the BACKUP DB
| command are complete before you issue the MOVE DRMEDIA command.
b. Send the volumes offsite and record that the volumes were given to the
courier:
move drmedia * wherestate=notmountable
| 5. Create a recovery plan:
prepare
6. Give a copy of the recovery plan file to the courier.
7. Create a list of tapes that contain data that is no longer valid and that should
be returned to the site:
query drmedia * wherestate=vaultretrieve
8. Give the courier the database backup tapes, storage pool backup tapes,
active-data pool tapes, the recovery plan file, and the list of volumes to be
returned from the vault.
9. The courier gives you any tapes that were on the previous day's return from
the vault list.
Update the state of these tapes and check them into the library:
move drmedia * wherestate=courierretrieve cmdf=/drm/checkin.libvol
cmd="checkin libvol libauto &vol status=scratch"
| The volume records for the tapes that were in the COURIERRETRIEVE state
| are deleted from the database. The MOVE DRMEDIA command also generates
| the CHECKIN LIBVOL command for each tape that is processed in the file
| /drm/checkin.libvol. For example:
checkin libvol libauto tape01 status=scratch
checkin libvol libauto tape02 status=scratch
...
| Restriction: Ensure that the BACKUP STGPOOL command and the BACKUP DB
| command complete before you issue other commands, for example, the MOVE
| DRMEDIA command.
Related tasks:
Creating a custom maintenance script on page 642
Recovering the Server: Here are guidelines for recovering your server:
1. Obtain the latest disaster recovery plan file.
2. Break out the file to view, update, print, or run as macros or scripts (for
example, batch programs or batch files).
3. Obtain the copy storage pool volumes and active-data pool volumes from the
vault.
4. Locate a suitable replacement machine.
5. If a mksysb or sysback was done that included a Tivoli Storage Manager server
that was already installed, restore an AIX image to your replacement machine.
6. Review the RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE and
RECOVERY.SCRIPT.NORMAL.MODE scripts because they are important for
restoring the server to a point where clients can be recovered (see Disaster
recovery mode stanza on page 1077).
Restriction: When you run the disaster recovery script or the commands that the
script contains, you must determine whether to run them as root or as the DB2
instance user ID.
1. Review the recovery steps described in the
RECOVERY.INSTRUCTIONS.GENERAL stanza of the plan.
2. Request the server backup tapes from the offsite vault.
3. Break out the recovery plan file stanzas into multiple files (see Breaking out a
disaster recovery plan file on page 1068.) These files can be viewed, updated,
printed, or run as Tivoli Storage Manager macros or scripts.
4. Print the RECOVERY.VOLUMES.REQUIRED file. Give the printout to the
courier to retrieve the copy storage pool volumes and active-data pool
volumes.
5. Find a replacement server. The RECOVERY.DEVICES.REQUIRED stanza
specifies the device type that is needed to read the backups. The
SERVER.REQUIREMENTS stanza specifies the disk space required.
6. The recovery media names and their locations are specified in the
RECOVERY.INSTRUCTIONS.INSTALL stanza and the
MACHINE.RECOVERY.MEDIA.REQUIRED stanza. Ensure that the
environment is the same as when the disaster recovery plan file was created.
The environment includes:
v The directory structure of the Tivoli Storage Manager server executable and
disk formatting utility
v The directory structure for Tivoli Storage Manager server configuration files
(disk log, volume history file, device configuration file, and server options
file)
v The directory structure and the files created when the disaster recovery plan
file was split into multiple files
7. Restore the operating system and the Tivoli Storage Manager server software
to the replacement server in one of the following ways:
v Build a new replacement server instead of restoring the environment from a
backup:
a. Install the Tivoli Storage Manager server software.
b. Create the database instance user ID and group as in the original.
c. Create the database directories, the active log directories, and the archive
log directories as in the original.
d. Run the dsmicfgx utility to configure the replacement instance. This step
configures the API for the DSMSERV RESTORE DB utility.
1) Specify the instance user ID and password.
Verify that the DB2 instance and database information are restored. Create
any additional AIX file systems that might have contained directories that
are needed by the Tivoli Storage Manager server. Create the directories and
set the ownership and group to match the instance ownership. Verify the
following file system directory types:
Archive
Active
Failover
Database
You can find these directories by issuing the DB2 GET DB CFG SHOW DETAIL
command and reviewing the output or looking in the recovery plan file.
8. Review the Tivoli Storage Manager macros contained in the recovery plan:
v If, at the time of the disaster, the courier had not picked up the previous
night's database and storage pool incremental backup volumes but they
were not destroyed, remove the entry for the storage pool backup volumes
from the COPYSTGPOOL.VOLUMES.DESTROYED file.
v If, at the time of the disaster, the courier had not picked up the previous
night's database and active-data pool volumes but they were not destroyed,
remove the entry for the active-data pool volumes from the
ACTIVEDATASTGPOOL.VOLUMES.DESTROYED file.
9. If some required storage pool backup volumes could not be retrieved from the
vault, remove the volume entries from the
COPYSTGPOOL.VOLUMES.AVAILABLE file.
If some required active-data pool volumes could not be retrieved from the
vault, remove the volume entries from the
ACTIVEDATASTGPOOL.VOLUMES.AVAILABLE file.
10. If all primary volumes were destroyed, no changes are required to the
PRIMARY.VOLUMES script and Tivoli Storage Manager macro files.
The copy storage pool volumes and active-data pool volumes used in the
recovery already have the correct ORMSTATE.
15. Issue the BACKUP DB command to back up the newly restored database.
16. Issue the following command to check the volumes out of the library:
move drmedia * wherestate=mountable
17. Create a list of the volumes to be given to the courier:
query drmedia * wherestate=notmountable
18. Give the volumes to the courier and issue the following command:
move drmedia * wherestate=notmountable
19. Issue the PREPARE command.
Identify which client machines have the highest priority so that restores can
begin using active-data pool volumes.
2. For each machine, issue the following commands:
a. Determine the location of the boot media. For example:
Note: You may also need to audit the library after the database is restored in order
to update the server inventory of the library volumes.
In this example, database backup volume DBBK01 was placed in element 1 of the
automated library. Then a comment is added to the device configuration file to
identify the location of the volume. Tivoli Storage Manager needs this information
to restore the database. Comments that no longer apply at the recovery site
are removed.
For example, if an automated tape library was used originally and cannot be used
at the recovery site, update the device configuration file. Include the DEFINE
LIBRARY and DEFINE DRIVE commands that are needed to define the manual
drive to be used. In this case, you must manually mount the backup volumes.
Note: If you are using an automated library, you may also need to update the
device configuration file to specify the location of the database backup volume.
After you restore the database, you can modify the device configuration
information in the database. After starting the server, define, update, and delete
your library and drive definitions to match your new configuration.
Note: If you are using an automated library, you may need to use the AUDIT
LIBRARY command to update the server inventory of the library volumes.
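For illustration, such an audit might be run from the administrative command-line
client as follows; the administrator ID, password, and library name are placeholders,
not values from this plan:
dsmadmc -id=admin -pass=admin "audit library LIB1 checklabel=barcode"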
The restored server uses copy storage pool volumes to satisfy requests (for
example, from backup/archive clients) and to restore primary storage pool
volumes that were destroyed. If they are available, the server uses active-data
pools to restore critical client data.
After the database is restored, you can handle copy storage pool volumes and
active-data pool volumes at the recovery site in the following ways:
v Mount each volume as requested by Tivoli Storage Manager. If an automated
library is used at the recovery site, check the volumes into the library.
v Check the volumes into an automated library before Tivoli Storage Manager
requests them.
If you are using an automated library, you may also need to audit the library after
the database is restored in order to update the Tivoli Storage Manager inventory of
the volumes in the library.
Locally:
v What is the recovery plan file pathname prefix?
v How will recovery plan files be made available at the recovery site?
  Print and store offsite
  Copy stored offsite
  Copy sent/NFS to recovery site
On Another Server:
v What server is to be used as the target server?
v What is the name of the target server's device class?
v How long do you want to keep recovery plan files?
Determine where you want to create the user-specified recovery instructions
Issue (a sample macro follows this checklist):
v SET DRMDBBACKUPEXPIREDAYS to define the database backup expiration
v SET DRMPRIMSTGPOOL to specify the DRM-managed primary storage pools
v SET DRMCOPYSTGPOOL to specify the DRM-managed copy storage pools
v SET DRMACTIVEDATASTGPOOL to specify the DRM-managed active-data pools
v SET DRMPLANVPOSTFIX to specify a character to be appended to new storage-pool volume names
v SET DRMPLANPREFIX to specify the RPF prefix
v SET DRMINSTRPREFIX to specify the user instruction file prefix
v SET DRMNOTMOUNTABLENAME to specify the default location for media to be sent offsite
v SET DRMCOURIERNAME to specify the default courier
v SET DRMVAULTNAME to specify the default vault
v SET DRMCMDFILENAME to specify the default file name to contain the commands specified with the CMD parameter on MOVE and QUERY DRMEDIA
v SET DRMCHECKLABEL to specify whether volume labels are verified when checked out by the MOVE DRMEDIA command
v SET DRMRPFEXPIREDAYS to specify a value for the frequency of RPF expiration (when plan files are stored on another server)
Identify:
v Target disaster recovery server location
v Target server software requirements
v Target server hardware requirements (storage devices)
v Tivoli Storage Manager administrator contact
v Courier name and telephone number
v Vault location and contact person
Create:
v Enter the site-specific recovery instructions data into files created in the same path/HLQ as specified by SET DRMINSTRPREFIX
Test disaster recovery manager
Test the installation and customization (a sample command sequence follows this checklist):
v QUERY DRMSTATUS to display the DRM setup
v Back up the active and inactive data that is in primary storage pools to copy storage pools. Copy the active data that is in primary storage pools to active-data pools.
v Back up the Tivoli Storage Manager database
v QUERY DRMEDIA to list the copy storage pool and active-data pool volumes
v MOVE DRMEDIA to move offsite
v PREPARE to create the recovery plan file
Examine the recovery plan file created
Test the recovery plan file break out
v awk script planexpl.awk
v Locally written procedure
Set up the schedules for automated functions
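For example, the SET DRM commands in this checklist can be collected into a Tivoli
Storage Manager macro and run once with the administrative client. This is a
minimal sketch: the administrator ID, password, prefixes, storage pool names, and
location strings are assumptions to replace with your own values.
cat > /tmp/drmsetup.mac <<'EOF'
set drmplanprefix /u/recovery/plans/
set drminstrprefix /u/recovery/instructions/
set drmdbbackupexpiredays 30
set drmcopystgpool COPYPOOL
set drmactivedatastgpool ADP1
set drmnotmountablename "Local media room"
set drmcouriername "Example Courier Service"
set drmvaultname "Example Offsite Vault"
EOF
dsmadmc -id=admin -pass=admin -itemcommit macro /tmp/drmsetup.mac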
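The routine test steps can likewise be scripted with the administrative client. In this
sketch, BACKUPPOOL, COPYPOOL, ADP1, the VTL device class, and the
administrator credentials are assumed names; substitute your own.
dsmadmc -id=admin -pass=admin "backup stgpool BACKUPPOOL COPYPOOL"
dsmadmc -id=admin -pass=admin "copy activedata BACKUPPOOL ADP1"
dsmadmc -id=admin -pass=admin "backup db devclass=VTL type=full"
dsmadmc -id=admin -pass=admin "query drmedia * wherestate=mountable"
dsmadmc -id=admin -pass=admin "move drmedia * wherestate=mountable"
dsmadmc -id=admin -pass=admin "prepare"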
Tip: The plan file that DRM generates is a template that contains information,
including commands for recovering the database, that might not apply to your
replacement systems or to your particular recovery scenario. To modify the plan or
to store additional instructions that you will need during recovery from an actual
disaster, use the RECOVERY.INSTRUCTIONS stanzas. Enter your site-specific
information in the stanzas when you first create the plan file or after you test it.
You can use an awk script or an editor to break out the stanzas into individual
files. A sample procedure, planexpl.awk.smp, is shipped with DRM and is located in
/opt/tivoli/tsm/server/bin or wherever the server resides. You can modify this
procedure for your installation. Store a copy of the procedure offsite for recovery.
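For example, the breakout might be run as follows; the working directory and plan
file name are illustrative, and the sample script is copied so that the original
remains unchanged:
cd /u/recovery/plans
cp /opt/tivoli/tsm/server/bin/planexpl.awk.smp planexpl.awk
awk -f planexpl.awk 20130815.120336 | tee breakout.log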
Command stanzas
Consist of scripts (for example, batch programs or batch files) and Tivoli
Storage Manager macros. You can view, print, and update these stanzas,
and run them during recovery.
Table 91 lists the recovery plan file stanzas, and indicates what type of
administrative action is required during set up or periodic updates, routine
processing, and disaster recovery. The table also indicates whether the stanza
contains a macro, a script, or a configuration file.
PLANFILE.DESCRIPTION
begin PLANFILE.DESCRIPTION
end PLANFILE.DESCRIPTION
PLANFILE.TABLE.OF.CONTENTS
begin PLANFILE.TABLE.OF.CONTENTS
PLANFILE.DESCRIPTION
PLANFILE.TABLE.OF.CONTENTS
end PLANFILE.TABLE.OF.CONTENTS
The replacement server must have enough disk space to install the database and
recovery log.
This stanza also identifies the directory where the server executable file resided
when the server was started. If the server executable file is in a different directory
on the replacement server, edit the plan file to account for this change.
If you use links to the server executable file, you must create the links on the
replacement machine or modify the following plan file stanzas:
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE
Location: E:\tsmdata\DBSpace
Total Space(MB): 285,985
Used Space(MB): 457
Free Space(MB): 285,527
end SERVER.REQUIREMENTS
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
begin RECOVERY.VOLUMES.REQUIRED
Location = dkvault
Device Class = VTL
Volume Name =
003902L4
Location = dkvault
Copy Storage Pool = COPYPOOL
Device Class = VTL
Volume Name =
003900L4
Location = dkvault
Active-data Storage Pool = ADP1
Device Class = VTL
Volume Name =
003901L4
end RECOVERY.VOLUMES.REQUIRED
See Specifying recovery instructions for your site on page 1035 for details. In the
following descriptions, prefix represents the prefix portion of the file name. See
Specifying defaults for the disaster recovery plan file on page 1030 for details.
RECOVERY.INSTRUCTIONS.GENERAL
Identifies site-specific instructions that the administrator has entered in the file
identified by prefix RECOVERY.INSTRUCTIONS.GENERAL. The instructions
should include the recovery strategy, key contact names, an overview of key
applications backed up by this server, and other relevant recovery instructions.
begin RECOVERY.INSTRUCTIONS.GENERAL
This server contains the backup and archive data for FileRight Company
accounts receivable system. It also is used by various end users in the
finance and materials distribution organizations.
The storage administrator in charge of this server is Jane Doe 004-001-0006.
If a disaster is declared, here is the outline of steps that must be completed.
1. Determine the recovery site. Our alternate recovery site vendor is IBM
BRS in Tampa, Fl, USA 213-000-0007.
2. Get the list of required recovery volumes from this recovery plan file
and contact our offsite vault so that they can start pulling the
volumes for transfer to the recovery site.
3. etc...
end RECOVERY.INSTRUCTIONS.GENERAL
RECOVERY.INSTRUCTIONS.OFFSITE
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.OFFSITE. The instructions should include the
name and location of the offsite vault, and how to contact the vault (for example, a
name and phone number).
begin RECOVERY.INSTRUCTIONS.OFFSITE
end RECOVERY.INSTRUCTIONS.OFFSITE
RECOVERY.INSTRUCTIONS.INSTALL
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.INSTALL. The instructions should include how
to rebuild the base server machine and the location of the system image backup
copies.
end RECOVERY.INSTRUCTIONS.INSTALL
RECOVERY.INSTRUCTIONS.DATABASE
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.DATABASE. The instructions should include
how to prepare for the database recovery. For example, you may enter instructions
on how to initialize or load the backup volumes for an automated library. No
sample of this stanza is provided.
RECOVERY.INSTRUCTIONS.STGPOOL
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.STGPOOL. The instructions should include the
names of your software applications and the copy storage pool names containing
the backup of these applications. No sample of this stanza is provided.
RECOVERY.VOLUMES.REQUIRED
Provides a list of the database backup, copy storage-pool volumes, and active-data
pool volumes required to recover the server. This list can include both virtual
volumes and nonvirtual volumes. A database backup volume is included if it is
part of the most recent database backup series. A copy storage pool volume or an
active-data pool volume is included if it is not empty and not marked destroyed.
If you are using a nonvirtual volume environment and issuing the MOVE
DRMEDIA command, a blank location field means that the volumes are onsite and
available to the server. This volume list can be used in periodic audits of the
volume inventory of the courier and vault. You can use the list to collect the
required volumes before recovering the server.
For virtual volumes, the location field contains the target server name.
Location = dkvault
Device Class = VTL
Volume Name =
003902L4
Location = dkvault
Copy Storage Pool = COPYPOOL
Device Class = VTL
Volume Name =
003900L4
Location = dkvault
Active-data Storage Pool = ADP1
Device Class = VTL
Volume Name =
003901L4
end RECOVERY.VOLUMES.REQUIRED
RECOVERY.DEVICES.REQUIRED
Provides details about the devices needed to read the backup volumes.
begin RECOVERY.DEVICES.REQUIRED
end RECOVERY.DEVICES.REQUIRED
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE
You can use the script as a guide and run the commands from a command line. Or
you can copy it to a file, modify it and the files it refers to, and run the script.
Tip: The commands in the plan file that is generated by DRM might not work on
your replacement systems. If necessary, use the recovery.instructions stanzas in the
plan file to store information about the particular commands to be used during
recovery from an actual disaster. Enter your site-specific information in the
recovery.instructions stanzas when you first create the plan file or after you test it.
At the completion of these steps, client requests for file restores are satisfied
directly from copy storage pool volumes and active-data pool volumes.
The disaster recovery plan file issues commands by using the administrative client.
Ensure that the path to the administrative client is established before running the
script. For example, set the shell variable PATH or update the scripts with the path
specification for the administrative client.
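For example, on AIX the administrative client (dsmadmc) is typically installed with
the backup-archive client; a minimal sketch, assuming the default installation
directory:
export PATH=$PATH:/usr/tivoli/tsm/client/ba/bin
The script passes the administrator ID and password that you supply as its first
two parameters to each dsmadmc command.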
For more information, see the entry for the recovery plan prefix in Table 88 on
page 1030.
@echo off
rem Purpose: This script contains the steps required to recover the server
rem to the point where client restore requests can be satisfied
rem directly from available copy storage pool volumes.
rem Note: This script assumes that all volumes necessary for the restore have
rem been retrieved from the vault and are available. This script assumes
rem the recovery environment is compatible (essentially the same) as the
rem original. Any deviations require modification to this script and the
rem macros and scripts it runs. Alternatively, you can use this script
rem as a guide, and manually execute each step.
rem Restore the server database to latest version backed up per the
rem volume history file.
"D:\TSM\SERVER\DSMSERV" -k "Server1" restore db todate=09/26/2008 totime=13:28:52 +
source=dbb
rem Active-data pool volumes in this macro were not marked as offsite at the time
rem PREPARE ran. They were likely destroyed in the disaster.
rem Recovery Administrator: Remove from macro any volumes not destroyed.
dsmadmc -id=%1 -pass=%2 -ITEMCOMMIT +
-OUTFILE="D:\TSM\SERVER1\PLANPRE.ACTIVEDATASTGPOOL.VOLUMES.DESTROYED.LOG" +
macro "D:\TSM\SERVER1\PLANPRE.ACTIVEDATASTGPOOL.VOLUMES.DESTROYED.MAC"
rem Tell the server these copy storage pool volumes are available for use.
rem Recovery Administrator: Remove from macro any volumes not obtained from vault.
dsmadmc -id=%1 -pass=%2 -ITEMCOMMIT +
-OUTFILE="D:\TSM\SERVER1\PLANPRE.COPYSTGPOOL.VOLUMES.AVAILABLE.LOG" +
macro "D:\TSM\SERVER1\PLANPRE.COPYSTGPOOL.VOLUMES.AVAILABLE.MAC"
rem Copy storage pool volumes in this macro were not marked as offsite at the time
rem PREPARE ran. They were likely destroyed in the disaster.
rem Recovery Administrator: Remove from macro any volumes not destroyed.
dsmadmc -id=%1 -pass=%2 -ITEMCOMMIT +
-OUTFILE="D:\TSM\SERVER1\PLANPRE.COPYSTGPOOL.VOLUMES.DESTROYED.LOG" +
macro "D:\TSM\SERVER1\PLANPRE.COPYSTGPOOL.VOLUMES.DESTROYED.MAC"
:end
end RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script
Related tasks:
Restoring to a point-in-time in a shared library environment on page 959
Scenario: Protecting the database and storage pools on page 944
Scenario: Recovering a lost or damaged storage pool volume on page 955
Example: Restoring a library manager database on page 949
Example: Restoring a library client database on page 950
Related reference:
Recovery instructions stanzas on page 1074
RECOVERY.SCRIPT.NORMAL.MODE
You can use the script as a guide and run the commands from a command line. Or
you can copy it to a file, modify it and the files it refers to, and run the script. You
may need to modify the script because of differences between the original and the
replacement systems.
The disaster recovery plan issues commands using the administrative client.
At the completion of these steps, client requests for file restores are satisfied from
primary storage pool volumes. Clients should also be able to resume file backup,
archive, and migration functions.
For more information, see the entry for the recovery plan prefix in Table 88 on
page 1030.
@echo off
rem Purpose: This script contains the steps required to recover the server
rem primary storage pools. This mode allows you to return the
rem copy storage pool volumes to the vault and to run the
rem server as normal.
rem Note: This script assumes that all volumes necessary for the restore
rem have been retrieved from the vault and are available. This script
rem assumes the recovery environment is compatible (essentially the
rem same) as the original. Any deviations require modification to this
rem script and the macros and scripts it runs. Alternatively, you
rem can use this script as a guide, and manually execute each step.
rem Restore the primary storage pools from the copy storage pools.
dsmadmc -id=%1 -pass=%2 -ITEMCOMMIT +
-OUTFILE="D:\TSM\SERVER1\PLANPRE.STGPOOLS.RESTORE.LOG" +
macro "D:\TSM\SERVER1\PLANPRE.STGPOOLS.RESTORE.MAC"
:end
end RECOVERY.SCRIPT.NORMAL.MODE script
Related tasks:
Restoring to a point-in-time in a shared library environment on page 959
Scenario: Protecting the database and storage pools on page 944
Scenario: Recovering a lost or damaged storage pool volume on page 955
Example: Restoring a library manager database on page 949
Example: Restoring a library client database on page 950
LICENSE.REGISTRATION
COPYSTGPOOL.VOLUMES.AVAILABLE
Contains a macro to mark copy storage pool volumes that were moved offsite and
then moved back onsite. This stanza does not include copy storage pool virtual
volumes. You can use the information as a guide and issue the administrative
commands, or you can copy it to a file, modify it, and run it. This macro is
invoked by the RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
After a disaster, compare the copy storage pool volumes listed in this stanza with
the volumes that were moved back onsite. You should remove entries from this
stanza for any missing volumes.
begin COPYSTGPOOL.VOLUMES.AVAILABLE macro
/* Purpose: Mark copy storage pool volumes as available for use in recovery. */
/* Recovery Administrator: Remove any volumes that have not been obtained */
/* from the vault or are not available for any reason. */
/* Note: It is possible to use the mass update capability of the server */
/* UPDATE command instead of issuing an update for each volume. However, */
/* the update by volume technique used here allows you to select */
/* a subset of volumes to be processed. */
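For illustration only, each entry in the generated macro is an UPDATE VOLUME
command similar to the following; the exact parameters in your plan file can differ,
and the volume name is taken from the earlier sample listing:
update volume 003900L4 access=readwrite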
COPYSTGPOOL.VOLUMES.DESTROYED
After a disaster, compare the copy storage pool volumes listed in this stanza with
the volumes that were left onsite. If you have any of the volumes and they are
usable, you should remove their entries from this stanza.
begin COPYSTGPOOL.VOLUMES.DESTROYED macro
ACTIVEDATASTGPOOL.VOLUMES.AVAILABLE
Contains a macro to mark active-data pool volumes that were moved offsite and
then moved back onsite. This stanza does not include active-data pool virtual
volumes. You can use the information as a guide and issue the administrative
commands, or you can copy it to a file, modify it, and run it. This macro is
invoked by the RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
After a disaster, compare the active-data pool volumes listed in this stanza with the
volumes that were moved back onsite. You should remove entries from this stanza
for any missing volumes.
begin ACTIVEDATASTGPOOL.VOLUMES.AVAILABLE macro
/* Purpose: Mark active-data storage pool volumes as available for use in recovery. */
/* Recovery Administrator: Remove any volumes that have not been obtained */
/* from the vault or are not available for any reason. */
/* Note: It is possible to use the mass update capability of the server */
/* UPDATE command instead of issuing an update for each volume. However, */
/* the update by volume technique used here allows you to select */
/* a subset of volumes to be processed. */
ACTIVEDATASTGPOOL.VOLUMES.DESTROYED
After a disaster, compare the active-data pool volumes listed in this stanza with the
volumes that were left onsite. If you have any of the volumes and they are usable,
you should remove their entries from this stanza.
begin ACTIVEDATASTGPOOL.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.DESTROYED
During recovery, compare the primary storage pool volumes listed in this stanza
with the volumes that were onsite. If you have any of the volumes and they are
usable, remove their entries from the stanza.
This stanza does not include primary storage pool virtual volumes. These volumes
are considered offsite and have not been destroyed in a disaster.
begin PRIMARY.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.REPLACEMENT
Contains a macro to define primary storage pool volumes to the server. You can
use the macro as a guide and run the administrative commands from a command
line, or you can copy it to a file, modify it, and execute it. This macro is invoked
by the RECOVERY.SCRIPT.NORMAL.MODE script.
Primary storage pool volumes with entries in this stanza have at least one of the
following three characteristics:
v Original volume in a storage pool whose device class was DISK.
The SET DRMPLANVPOSTFIX command adds a character to the end of the names
of the original volumes listed in this stanza. This character does the following:
v Improves the retrievability of volume names that must be renamed in the
stanzas. Before using the volume names, change these names to new names that
are valid for the device class on the replacement system.
v Generates a new name that can be used by the replacement server. Your naming
convention must take into account the appended character.
Note:
1. Replacement primary volume names must be different from any other
original volume name or replacement name.
2. The RESTORE STGPOOL command restores storage pools on a logical basis.
There is no one-to-one relationship between an original volume and its
replacement.
3. There could be entries for the same volume in
PRIMARY.VOLUMES.REPLACEMENT if the volume has a device class of
DISK.
This stanza does not include primary storage pool virtual volumes. These volumes
are considered offsite and have not been destroyed in a disaster.
STGPOOLS.RESTORE
You can use the stanza as a guide and execute the administrative commands from
a command line. You can also copy it to a file, modify it, and execute it. This
macro is invoked by the RECOVERY.SCRIPT.NORMAL.MODE script.
This stanza does not include primary storage pool virtual volumes. These volumes
are considered offsite and have not been destroyed in a disaster.
/* Purpose: Restore the primary storage pools from copy storage pool(s). */
/* Recovery Administrator: Delete entries for any primary storage pools */
/* that you do not want to restore. */
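For illustration, each entry in the generated macro is a RESTORE STGPOOL
command for one primary storage pool; the pool name here is an assumption:
restore stgpool BACKUPPOOL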
Configuration stanzas
These stanzas contain copies of the following information: volume history, device
configuration, and server options.
VOLUME.HISTORY.FILE
Contains a copy of the volume history information when the recovery plan was
created. The DSMSERV RESTORE DB command uses the volume history file to
determine what volumes are needed to restore the database. It is used by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
The following rules determine where to place the volume history file at restore
time:
v If the server option file contains VOLUMEHISTORY options, the server uses the
fully qualified file name associated with the first entry. If the file name does not
begin with a directory specification, the server uses the prefix volhprefix.
v If the server option file does not contain VOLUMEHISTORY options, the server
uses the default name volhprefix followed by drmvolh.txt. The directory where the
server is started from is used as the volhprefix.
If a fully qualified file name was not specified in the server options file for the
VOLUMEHISTORY option, the server adds it to the DSMSERV.OPT.FILE stanza.
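For example, specifying fully qualified names in the server options file keeps the
files out of whatever directory the server happens to be started from; a minimal
sketch, assuming an instance directory of /tsminst1:
VOLUMEHISTORY /tsminst1/volhist.out
DEVCONFIG     /tsminst1/devconfig.out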
DEVICE.CONFIGURATION.FILE
Contains a copy of the server device configuration information when the recovery
plan was created. The DSMSERV RESTORE DB command uses the device
configuration file to read the database backup volumes. It is used by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
At recovery time, you may need to modify this stanza. You must update the device
configuration information if the hardware configuration at the recovery site has
changed. Examples of changes requiring updates to the configuration information
are:
v Different device names
v Use of a manual library instead of an automated library
v For automated libraries, the requirement to manually place the database backup
volumes in the automated library and update the configuration information to
identify the element within the library. This allows the server to locate the
required database backup volumes.
For details, see Updating the device configuration file on page 951.
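For example, if only a stand-alone drive is available at the recovery site, the device
configuration file might be edited to include definitions similar to the following;
the library, drive, server, and device names are assumptions:
define library MANUALLIB libtype=manual
define drive MANUALLIB DRIVE01
define path SERVER1 DRIVE01 srctype=server desttype=drive library=MANUALLIB device=/dev/rmt0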
The following rules determine where the device configuration file is placed at
restore time:
If a fully qualified file name was not specified for the DEVCONFIG option in the
server options file, the server adds it to the stanza DSMSERV.OPT.FILE.
DSMSERV.OPT.FILE
Contains a copy of the server options file. This stanza is used by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
Note: The following figure contains text strings that are too long to display in
hardcopy or softcopy publications. The long text strings have a plus symbol (+) at
the end of the string to indicate that they continue on the next line.
The disaster recovery plan file adds the DISABLESCHEDS option to the server
options file and sets it to YES. This option disables administrative and client
schedules while the server is being recovered. After the server is recovered, you
can enable scheduling by deleting the option or setting it to NO and then
restarting the server.
LICENSE.INFORMATION
begin LICENSE.INFORMATION
end LICENSE.INFORMATION
MACHINE.GENERAL.INFORMATION
Provides information for the server machine (for example, machine location). This
stanza is included in the plan file if the machine information is saved in the
database using the DEFINE MACHINE with ADSMSERVER=YES.
begin MACHINE.GENERAL.INFORMATION
Purpose: General information for machine DSMSRV1.
This is the machine that contains DSM server DSM.
Machine Name: DSMSRV1
Machine Priority: 1
Building: 21
Floor: 2
Room: 2749
Description: DSM Server for Branch 51
Recovery Media Name: DSMSRVIMAGE
end MACHINE.GENERAL.INFORMATION
MACHINE.RECOVERY.INSTRUCTIONS
Provides the recovery instructions for the server machine. This stanza is included
in the plan file if the machine recovery instructions are saved in the database.
begin MACHINE.RECOVERY.INSTRUCTIONS
Purpose: Recovery instructions for machine DSMSRV1.
Primary Contact:
Jane Smith (wk 520-000-0000 hm 520-001-0001)
Secondary Contact:
John Adams (wk 520-000-0001 hm 520-002-0002)
end MACHINE.RECOVERY.INSTRUCTIONS
MACHINE.RECOVERY.CHARACTERISTICS
Provides the hardware and software characteristics for the server machine. This
stanza is included in the plan file if the machine characteristics are saved in the
database.
begin MACHINE.CHARACTERISTICS
Purpose: Hardware and software characteristics of machine DSMSRV1.
devices
aio0 Defined Asynchronous I/O
bbl0 Available 00-0J GXT150 Graphics Adapter
bus0 Available 00-00 Microchannel Bus
DSM1509bk02 Available N/A
DSM1509db01x Available N/A
DSM1509lg01x Available N/A
en0 Defined Standard Ethernet Network Interface
end MACHINE.CHARACTERISTICS
MACHINE.RECOVERY.MEDIA
Provides information about the media (for example, boot media) needed for
rebuilding the machine that contains the server. This stanza is included in the plan
file if the recovery media information is saved in the database and associated with
the machine.
end MACHINE.RECOVERY.MEDIA.REQUIRED
| The framework for evaluating disaster recovery strategies consists of the following
| tiers:
| Figure 115. Tiers of disaster recovery
| Each tier corresponds to different recovery times and potentials for data loss. For
| example, in a tier 1 production site data is typically saved only selectively, and
| volumes that are stored at an offsite facility can be difficult to track. In addition,
| recovery time is unpredictable. After a disaster, hardware and software must be
| restored, and storage volumes must be sent back to the production site.
| Use the following questions as a guide to help you in the planning process:
| Cost How much can you afford for your disaster recovery implementation?
| Performance
| Do you want a low or a high performance disaster recovery solution?
| Recovery Time Objective (RTO) and Recovery Point Objective (RPO)
| What are your system requirements?
| Current disaster recovery strategy
| What disaster recovery strategy is implemented in your environment?
| Data What data do you need? Categorize and prioritize the data that you
| require.
| When you plan a disaster recovery strategy that might be suitable for your site,
| consider using DRM and Tivoli Storage Manager node replication for these
| reasons:
| v DRM is an effective tool for managing offsite vaulting. With DRM, you can
| configure and automatically generate a disaster recovery plan that contains the
| information, scripts, and procedures that are required to automatically restore
| the server and recover client data after a disaster.
| DRM also manages and tracks the media on which client data is stored, whether
| the data is on site, in-transit, or in a vault, so that the data can be more easily
| located if disaster strikes. DRM also generates scripts that assist you in
| documenting information-technology systems and recovery procedures that you
| can use, including procedures to rebuild the server.
| Use DRM alone to meet the disaster recovery objectives in tier 1, or use it
| together with other backup-and-recovery tools and technologies in tiers 2, 3 and
| 4.
| v Tivoli Storage Manager node replication meets the objectives of tier 5. After a
| successful node replication, the target server contains all metadata updates and
| data that is stored on the source server.
| In addition to fast recovery and minimal potential data loss, Tivoli Storage
| Manager node replication offers the following advantages:
| Node replication is easier to manage than device-based replication.
| Device-based replication requires that you keep the database and the data it
| represents synchronized. You manually schedule database backups to match
| the point in time when the device synchronizes.
| Results for Tivoli Storage Manager operations are reported in terms such as
| "node names" and "file names." In contrast, device-based replication results
| are reported in terms of "disks," "sectors," and "blocks."
|
| In the following figure, the Tivoli Storage Manager server and database, tape
| libraries, and tapes are in a single facility. If a disaster occurs, recovery time is
| unpredictable. Tier 0 is not recommended and data might never be recovered.
| As shown in the following figure, storage volumes, such as tape cartridges and
| media volumes, are vaulted at an offsite location. Transportation is typically
| handled by couriers. If a disaster occurs, the volumes are sent back to the
| production site after the hardware and the Tivoli Storage Manager server are restored.
| Consider that an extended recovery time can impact business operations for several
| months or longer.
| A dedicated recovery site can reduce recovery time compared to the single
| production site in tier 1. The potential for data loss is also less. However, tier 2
| architecture increases the cost of disaster recovery because more hardware and
| software must be maintained. The recovery site must also have hardware and
| software that are compatible with the hardware and software at the primary site.
| For example, the recovery site must have compatible tape devices and Tivoli
| Storage Manager server software. Before the production site can be recovered, the
| hardware and software at the recovery site must be set up and running.
| Transporting the storage volumes to the recovery site also affects recovery time.
Electronic vaulting moves critical data offsite faster and more frequently than
traditional courier methods. Recovery time is reduced because critical data is
already stored at the recovery site. The potential for lost or misplaced data is also
reduced. However, because the recovery site runs continuously, a tier 3 strategy is
relatively more expensive than a tier 1 or a tier 2 strategy.
As shown in the following figure, the recovery site is physically separated from the
production site. Often, the recovery site is a second data center that is operated by
the same organization or by a storage service provider. If a disaster occurs at the
primary site, storage media with the non-critical data are transported from the
offsite storage facility to the recovery site.
| If you implement a tier 3 strategy, you can use Tivoli Storage Manager
| server-to-server communications for enterprise configuration of the Tivoli Storage
| Manager servers and command routing.
| As shown in the following figure, critical data is replicated between the two sites
| by using high-bandwidth connections and data replication technology, for example,
| Peer-to-Peer Remote Copy (PPRC). Data is transmitted over long distances by
| using technologies such as extended storage area network (SAN), Dense
| Wavelength Division Multiplexing (DWDM), and IP/WAN channel extenders.
Non-critical backups from both sites are moved to a single offsite storage facility. If
a disaster occurs, the backup volumes are recovered by courier from the offsite
vault and transported to the designated recovery site.
If you implement a tier-4 disaster-recovery strategy, you can use Tivoli Storage
Manager server-to-server communications for enterprise configuration of multiple
Tivoli Storage Manager servers and command routing.
| Recovery time for a tier 4 strategy is faster than the recovery time for a tier 1, tier
| 2, or tier 3 strategy. Recovery time is faster because hardware, software, and data
| are available or can be made available at two sites.
| Copies of critical data are available at both sites, and each server is able to recover
| the server at the alternate site. Only the data transactions that are in-flight are lost
| during a disaster.
| If you implement a tier-5 disaster-recovery strategy, you can also use Tivoli Storage
| Manager server-to-server communications to configure multiple Tivoli Storage
| Manager servers and command routing.
| As shown in the following figure, two sites are fully synchronized by using a
| high-bandwidth connection.
Tier 6 is the most expensive disaster recovery strategy because it requires coupling
or clustering applications, additional hardware to support data sharing, and
high-bandwidth connections over extended distances. However, this strategy also
offers the fastest recovery time and the least amount of data loss. Recovery
typically takes only a few minutes.
Part 7. Appendixes
You can use a clustered environment with the following products:
v IBM PowerHA SystemMirror for AIX
v IBM Tivoli System Automation for Multiplatforms for AIX and Linux
v Microsoft Failover Cluster for Windows
You can use other cluster products with Tivoli Storage Manager; however,
documentation is not available and support is limited. For the latest information
about support for clustered environments, see http://www.ibm.com/support/
docview.wss?uid=swg21609772.
Before you use another cluster product, verify that DB2 supports the required file
systems. For more information about the level of DB2 that you are using, refer to
the DB2 documentation at: http://pic.dhe.ibm.com/infocenter/db2luw/v9r7.
Search on Recommended file systems.
For more information about upgrading the server in a clustered environment, see
the Installation Guide.
This configuration provides the nodes with the ability to share data, which allows
higher server availability and minimized downtime. For example, you can
configure, monitor, and control applications and hardware components that are
deployed on a cluster. You can use an administrative cluster interface and Tivoli
Storage Manager to designate cluster arrangements and define a failover pattern.
The server is part of the cluster which provides an extra level of security by
ensuring that no transactions are missed due to a failed server. The failover pattern
you establish prevents future failures.
Components in a server cluster are known as cluster objects. Cluster objects are
associated with a set of properties that have data values that describe the identity
and behavior of an object in the cluster. Cluster objects can include the following
components:
v Nodes
v Storage
v Services and applications
v Networks
You manage cluster objects by manipulating their properties, typically through a
cluster management application.
A clustering option is also available for the Tivoli Storage Manager for AIX server
by using Tivoli System Automation for Multiplatforms V3.2.1. You can download a
deployment guide and configuration scripts at https://www.ibm.com/software/
brandcatalog/ismlibrary/details?catalog.label=1TW10SM35. Select the Download
link to access the files.
The hardware requirements for configuring a Tivoli Storage Manager server for
failover are:
v A hardware configuration that is suitable for PowerHA. The removable media
storage devices for the Tivoli Storage Manager server must be physically
connected to at least two nodes of the PowerHA cluster on a shared bus
(including a SAN).
v Sufficient shared (twin-tailed) disk space to hold the Tivoli Storage Manager
database, recovery logs, instance directory, and disk storage pools to be used.
See Chapter 21, Managing the database and recovery log, on page 655 to
determine how much space is required for the database and recovery log and to
ensure the availability of the database and recovery log.
v A TCP/IP network.
Tip: If a Tivoli Storage Manager server manages removable media storage devices,
you can configure two Tivoli Storage Manager servers to run on different systems
in a PowerHA cluster. Either system can run both servers if the other system fails.
To configure two Tivoli Storage Manager servers to run on different systems in a
PowerHA cluster, use another file system that is accessible to both servers.
The terms production node and standby node refer to the two PowerHA nodes on
which the Tivoli Storage Manager server runs.
PowerHA manages taking over the TCP/IP address and mounting the shared file
system on the standby node or production node, as appropriate.
When a failover or failback occurs, any transactions that were being processed at
the time are rolled back. To Tivoli Storage Manager clients, failover or failback
represents a communications failure. Therefore, clients must reestablish their
connections, based on their COMMRESTARTDURATION and COMMRESTARTINTERVAL option
settings.
Typically, you can restart the backup-archive client from the last committed
transaction. If a client schedule is running when a failover occurs, the client
operation is likely to fail. If you can restart client operations, you must restart
them from the beginning of the processing. The client and agent operations
complete as they normally would if the server were halted and restarted while
they were connected.
The only difference is that the server is physically restarted on different hardware.
If you do not want automatic failback to occur, you can configure the resource as a
cascading resource group without failback.
Complete the following steps to install and configure the PowerHA cluster:
1. Define the shared file systems and logical volumes, as required. You might
want to put files in separate file systems or on separate physical disks for
integrity or performance reasons. As a best practice, do not put the home
directory of the user instance on a shared disk. Mirror the logical volumes to
provide maximum availability (including the underlying file systems). The file
systems that must be defined include the IBM Tivoli Storage Manager server
instance directory, the database directories, the log directories, all disk storage
pool directories, and FILE device type storage pool directories.
2. Configure PowerHA so that the production node owns the shared volume
groups and the standby node takes over the shared volume groups if the
production node fails.
3. Configure PowerHA so that the file systems also fail over.
4. Set up a Service IP address for the Tivoli Storage Manager server. The Service
IP address must be different from each host IP address. The Service IP is
moved from host to host, not the actual host IP address.
You must configure the removable media storage devices for failover and define
the Tivoli Storage Manager server as an application to PowerHA.
| To configure a server instance on the secondary node with a shared DB2 instance,
| complete the following steps:
| 1. On each node in the cluster, add the following text to the /opt/tivoli/tsm/
| server/bin/rc.dsmserv script:
| DB2NODES_TEMP=/tmp/db2nodes.tmp
| DB2NODES=${homeDir}/sqllib/db2nodes.cfg
| # Current hostname
| HOSTNAME=$(/bin/hostname)
| # Hostname saved in db2nodes.cfg
| DB2_HOST=$(cat $DB2NODES | cut -d " " -f 2)
| # If they are different, update the file
| if [[ "$HOSTNAME" != "$DB2_HOST" ]]
| then
|   echo "Updating hostname in db2nodes.cfg"
|   sed -e s_${DB2_HOST}_${HOSTNAME}_g $DB2NODES > $DB2NODES_TEMP
|   cp $DB2NODES_TEMP $DB2NODES
| fi
| Tip: If the text is not included in the script, you can add it before you run the
| /opt/tivoli/tsm/server/bin/rc.dsmserv script.
| 2. Move all the shared resources to the secondary node.
| 3. Update the following variables in the /opt/tivoli/tsm/server/bin/startserver
| script, by using the following values:
| Table 92. Variables in the /opt/tivoli/tsm/server/bin/startserver script
| INST_USER
|    Set INST_USER to the instance user ID.
|    Example: INST_USER='tsmuser1'
| INST_DIR
|    Set INST_DIR to the location of the Tivoli Storage Manager instance
|    directory. This directory contains dsmserv.dbid and dsmserv.opt.
|    Example: INST_DIR='/home/tsmuser1/tsminst1'
| You can configure the Tivoli Storage Manager server on a secondary node by using
| the dsmicfgx wizard or manually.
| v To create a DB2 instance on a secondary node by using the dsmicfgx wizard,
| complete the following steps:
| 1. Run the dsmicfgx wizard.
| 2. From the Instance Directory panel, select the Check this if you are
| configuring the server instance on a secondary node of a high availability
| cluster check box.
| v To create a DB2 instance on a secondary node manually, complete the following
| steps:
| 1. Move all the shared resources to the secondary node.
| 2. Create a DB2 instance by issuing the following db2icrt command:
| /opt/tivoli/tsm/db2/instance/db2icrt -s ese -u instance_user instance_user
| where instance_user is the same user that owns the DB2 instance on the
| primary node.
| 3. When the DB2 instance is created, log in as the instance user or by issuing
| the su command:
| su - <instance_user>
| 4. As the instance user, issue the following commands:
| db2start
| db2 update dbm cfg using DFTDBPATH shared_db_path
| db2 catalog db TSMDB1
| db2set -i instance_user DB2CODEPAGE=819
| db2stop
Perform the following steps to install the Tivoli Storage Manager server on the
production node:
1. Install Tivoli Storage Manager. Select one of the following components:
v The Tivoli Storage Manager server
v The Tivoli Storage Manager device driver, if needed
v The Tivoli Storage Manager license
The executable files are typically installed on the internal disks of the
production node, not on the shared Tivoli Storage Manager disk space. Tivoli
Storage Manager server executable files are installed in the
/opt/tivoli/tsm/server/bin directory.
For more information about installing the server, see the Installation Guide for
AIX.
2. Configure Tivoli Storage Manager to use the TCP/IP communication method.
To specify the server and client communications, see the Installation Guide.
Perform the following steps to install the Tivoli Storage Manager client on the
production node:
1. Install the Tivoli Storage Manager client executable files in the
/usr/tivoli/tsm/client/ba/bin directory. These files are typically installed on
the internal disks of the production node.
For detailed instructions on installing the Tivoli Storage Manager client, see the
Backup-Archive Clients Installation and User's Guide.
| 2. For the client to find the server, ensure that the client options file, dsm.sys,
| points to the Tivoli Storage Manager server. The server name in dsm.sys is used
| only on the -servername parameter of the dsmadmc command to specify the
| server to be contacted.
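For example, a minimal server stanza in dsm.sys might look like the following; the
server name, port, and address are placeholders:
SErvername        tsmsrv1
   COMMMethod         TCPip
   TCPPort            1500
   TCPServeraddress   tsmsrv1.example.com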
Related tasks:
Verifying the configuration of the Tivoli Storage Manager server for PowerHA
When you use PowerHA, all database, log, storage, and instance directories must
be on shared disks that are configured to fail over by PowerHA. To identify the
directories that are on shared disk, complete the following steps:
1. Log on as the instance user.
2. Run the /opt/tivoli/tsm/server/bin/dsmclustfs script.
3. Examine the file systems that are reported by the script and verify that they are
on shared disks. The following example script displays the type of information
that you must review:
tsmuser@ss2["/home/tsmuser"]
%/opt/tivoli/tsm/server/bin/dsmclustfs
SQL1026N The database manager is already active.
The following mandatory DB2 file systems are displayed in the script:
/tsmdb1 /tsmdb4 /tsmdb3 /tsmdb2 /tsmlogs/active /tsmlogs/archive
Note: If the dsm.sys file is changed on one node, you must copy it to the other
node.
Related tasks:
Installing the Tivoli Storage Manager server on a production node for PowerHA
on page 1106
Verifying the configuration of the Tivoli Storage Manager server for PowerHA
on page 1107
Installing the Tivoli Storage Manager client on a production node for PowerHA
on page 1107
Prerequisite: If you define a library manager server that is not shared with the
Tivoli Storage Manager server, ensure that the RESETDRIVES parameter for the
DEFINE LIBRARY command or the UPDATE LIBRARY command is specified as YES.
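For example, the parameter can be set when the library is defined or changed later
with an update; the library name is an assumption:
update library LIB1 resetdrives=yes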
If your SAN device mapping is accurate, continue to the Completing the cluster
manager and Tivoli Storage Manager configurations section. Otherwise, ensure
that the removable media storage devices are configured with the same names on
the production and standby nodes. You might have to define dummy devices on
one of the nodes to ensure that the removable media storage devices are
configured correctly. To define a dummy device, complete the following steps:
1. Issue the command smit devices, and follow the SMIT panels to define the
device.
2. Choose an unused SCSI address for the device.
3. Instead of pressing Enter on the last panel to define the device, press F6
to obtain the command that SMIT is about to execute.
4. Exit from SMIT and enter the same command on the command line, adding the
-d flag to the command. If you use SMIT to try to define the device, the
attempt fails because there is no device at the unused SCSI address you chose.
Related tasks:
Chapter 5, Attaching devices for the server, on page 83
You can issue IBM PowerHA SystemMirror for AIX or Tivoli System Automation
(TSA) commands to set up the cluster. Continue with configuring the Tivoli
Storage Manager server. For more information about TSA, see
http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/index.jsp?topic=
%2Fcom.ibm.samp.doc_3.2.2%2Fwelcome.html.
| This message means that the I/O device is locked with a SCSI RESERVE
| by a system other than the one on which the tctl command was run. If
| you are using persistent reservation, the Tivoli Storage Manager server
| preempts a drive reservation by default. If the device driver does not use
| persistent reservation, the server performs a target reset.
ANS4329S Server out of data storage space message
| If the ANS4329S Server out of data storage space message is displayed
| on a Tivoli Storage Manager client, the license for the Tivoli Storage
| Manager server might be non-compliant. Issue the QUERY LICENSE
| command to display the compliance information for the license. If the
| compliance state is valid, use the QUERY ACTLOG command on the server
| and review the messages that are displayed to identify the problem.
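For example, from the administrative command-line client (the administrator ID
and password are placeholders):
dsmadmc -id=admin -pass=admin "query license"
dsmadmc -id=admin -pass=admin "query actlog begindate=today-1"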
To use the interface, you must first define an EXTERNAL-type Tivoli Storage
Manager library that represents the media manager. You do not define drives, label
volumes, or check in media. Refer to your media manager's documentation for that
product's setup information and instructions for operational usage.
The details of the request types and the required processing are described in the
sections that follow. The request types are:
v Initialization of the external program
v Begin Batch
v End Batch
v Volume Query
v Volume Eject
v Volume Release
v Volume Mount
v Volume Dismount
The libraryname passed in a request must be returned in the response. The volume
specified in an eject request or a query request must be returned in the response.
The volume specified in a mount request (except for 'SCRTCH') must be returned
in the response. When 'SCRTCH' is specified in a mount request, the actual volume
mounted must be returned.
CreateProcess call
The server creates two anonymous unidirectional pipes and maps them to the
stdin and stdout streams during the CreateProcess call. When a standard handle is
redirected to refer to a file or a pipe, the handle can only be used by the ReadFile
and WriteFile functions.
This precludes normal C functions such as gets or printf. Since the server will
never terminate the external program process, it is imperative that the external
program recognize a read or write failure on the pipes and exit the process. In
addition, the external program should exit the process if it reads an unrecognized
command.
The external program may obtain values for the read and write handles using the
following calls:
readPipe=GetStdHandle(STD_INPUT_HANDLE)
and
writePipe=GetStdHandle(STD_OUTPUT_HANDLE)
For each external library defined to the server, the following must occur during
server initialization:
1. The server loads the external program (CreateProcess) in a newly created
process and creates pipes to the external program.
2. The server sends an initialization request description string, in text form, into
the standard input (stdin) stream of the external program. The server waits for
the response.
3. When the external process completes the request, the process must write an
initialization response string, in text form, into its standard output (stdout)
stream.
4. The server closes the pipes.
5. When the agent detects that the pipes are closed, it performs any necessary
cleanup and calls the stdlib exit routine.
The move commands cause a QUERY to be issued for a volume. If the QUERY
indicates that the volume is in the library, a subsequent EJECT for that volume is
issued. Because the move commands can match any number of volumes, a QUERY
and an EJECT request is issued for each matching volume.
The QUERY MEDIA command results in QUERY requests being sent to the agent.
During certain types of processing, Tivoli Storage Manager might need to know if
a volume is present in a library. The external agent should verify that the volume
is physically present in the library.
1. The server loads the external program in a newly created process and creates
pipes to the external program.
2. The server sends an initialization request description string (in text form) into
the standard input (stdin) stream of the external program. The server waits for
the response.
3. When the external process completes the request, the process must write an
initialization response string (in text form) into its standard output (stdout)
stream.
4. The server sends the BEGIN BATCH request (stdin).
5. The agent sends the BEGIN BATCH response (stdout).
6. The server sends 1 to n volume requests (n >= 1). These can be any number of
QUERY or EJECT requests. For each request, the agent will send the applicable
QUERY response or EJECT response.
7. The server sends the END BATCH request (stdin).
8. The agent sends the END BATCH response (stdout), performs any necessary
cleanup, and calls the stdlib exit routine.
If the code for any response (except for EJECT and QUERY) is not equal to
SUCCESS, Tivoli Storage Manager does not proceed with the subsequent steps.
After the agent sends a non-SUCCESS return code for any response, the agent will
perform any necessary cleanup and call the stdlib exit routine.
However, even if the code for EJECT or QUERY requests is not equal to SUCCESS,
the agent will continue to send these requests.
If the server gets an error while trying to write to the agent, it will close the pipes,
perform any necessary cleanup, and terminate the current request.
where:
resultCode
One of the following:
v SUCCESS
v INTERNAL_ERROR
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the volume name to be queried.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the volume name queried.
resultCode
One of the following:
v SUCCESS
v LIBRARY_ERROR
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v CANCELLED
v TIMED_OUT
v INTERNAL_ERROR
If resultCode is not SUCCESS, the exit must return statusValue set to UNDEFINED.
If resultCode is SUCCESS, STATUS must be one of the following values:
v IN_LIBRARY
v NOT_IN_LIBRARY
IN_LIBRARY means that the volume is currently in the library and available to be
mounted.
Tivoli Storage Manager does not attempt any other type of operation with that
library until an initialization request has succeeded. The server sends an
initialization request first. If the initialization is successful, the request that
prompted it is then sent. If the initialization is not successful, that request fails.
The external media management program can detect whether the initialization request
was sent by itself or together with another request by checking for end-of-file on the
stdin stream. When end-of-file is detected, the external program must end by using the
stdlib exit routine (not the return call).
Likewise, after the external program sends a valid response, the external program must
end by using the exit routine.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
resultcode
One of the following:
v SUCCESS
v NOT_READY
v INTERNAL_ERROR
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the ejected volume.
resultCode
One of the following:
v SUCCESS
v LIBRARY_ERROR
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v CANCELLED
v TIMED_OUT
v INTERNAL_ERROR
The external program must send a response to the release request. No matter what
response is received from the external program, Tivoli Storage Manager returns the
volume to scratch. For this reason, Tivoli Storage Manager and the external
program can have conflicting information on which volumes are scratch. If an error
occurs, the external program should log the failure so that the external library
inventory can be synchronized later with Tivoli Storage Manager. The
synchronization can be a manual operation.
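One simple way to keep the two inventories reconcilable is for the external program to
record every failed release, for example by appending an entry to a log that is later
compared with the Tivoli Storage Manager volume inventory. The fragment below is a
minimal sketch under that assumption; the log path and message format are placeholders
chosen for this example and are not part of the interface.

#include <stdio.h>
#include <time.h>

/* Record a release failure so that the external library inventory can be
 * synchronized with Tivoli Storage Manager later (possibly manually).    */
static void log_release_failure(const char *volname, const char *resultcode)
{
    FILE *log = fopen("/var/log/extlib_release_failures.log", "a");

    if (log != NULL) {
        fprintf(log, "%ld RELEASE %s failed: %s\n",
                (long)time(NULL), volname, resultcode);
        fclose(log);
    }
}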
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume returned to scratch (released).
resultcode
One of the following:
v SUCCESS
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v INTERNAL_ERROR
The volume mounted by the external media management program must be a tape
with a standard IBM label that matches the external volume label. When the
external program completes the mount request, the program must send a response.
If the mount was successful, the external program must remain active. If the
mount failed, the external program must end immediately by using the stdlib exit
routine.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the actual volume name if the request is for an existing volume. If a
scratch mount is requested, the volname is set to SCRTCH.
accessmode
Specifies the access mode required for the volume. Possible values are
READONLY and READWRITE.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume mounted for the request.
specialfile
The fully qualified path name of the device special file for the drive in which
the volume was mounted. If the mount request fails, the value should be set to
/dev/null.
The external program must ensure that the special file is closed before the
response is returned to the server.
resultcode
One of the following:
v SUCCESS
v DRIVE_ERROR
v LIBRARY_ERROR
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
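Because the device special file must not be left open when the mount response is
written, the agent typically opens the file only long enough to confirm that the mounted
volume is reachable through the drive, and then closes it before responding. The fragment
below is an illustrative sketch only; the function name is hypothetical and the response
text is a placeholder for the actual mount response format.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Confirm that the drive's device special file is usable, close it, and
 * then report it in the mount response.  On failure, report /dev/null
 * together with a non-SUCCESS result code.                              */
static void send_mount_response(const char *specialfile)
{
    int fd = open(specialfile, O_RDONLY);

    if (fd >= 0) {
        close(fd);           /* must be closed before the response is sent */
        printf("... MOUNT response: %s SUCCESS ...\n", specialfile);
    } else {
        printf("... MOUNT response: /dev/null DRIVE_ERROR ...\n");
    }
    fflush(stdout);
}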
After the dismount response is sent, the external process ends immediately by
using the stdlib exit routine.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume to be dismounted.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume dismounted.
resultcode
One of the following:
v SUCCESS
v DRIVE_ERROR
v LIBRARY_ERROR
v INTERNAL_ERROR
The sample C source, header, and make files are shipped with the server code in the
/usr/lpp/adsmserv/bin directory.
Attention:
1. Use caution in modifying these exits. An abend in a user exit brings down the
server.
2. The file that is specified in the file exit option continues to grow unless you prune
it.
You can also use Tivoli Storage Manager commands to control event logging. For
details, see Chapter 31, Logging IBM Tivoli Storage Manager events to receivers,
on page 861 and Administrator's Reference.
/*****************************************************************
* Name: userExitSample.h
* Description: Declarations for a user exit
*****************************************************************/
#ifndef _H_USEREXITSAMPLE
#define _H_USEREXITSAMPLE
#include <stdio.h>
#include <sys/types.h>
typedef struct
{
   uchar year;  /* Years since BASE_YEAR (0-255) */
   uchar mon;   /* Month (1 - 12) */
   uchar day;   /* Day (1 - 31) */
   uchar hour;  /* Hour (0 - 23) */
   /* ... the remaining fields and the end of this declaration are
    * omitted from this excerpt; see the shipped userExitSample.h
    * for the complete definition ... */
/******************************************
* Some field size definitions (in bytes) *
******************************************/
#define MAX_SERVERNAME_LENGTH 64
#define MAX_NODE_LENGTH 64
#define MAX_COMMNAME_LENGTH 16
#define MAX_OWNER_LENGTH 64
#define MAX_HL_ADDRESS 64
#define MAX_LL_ADDRESS 32
#define MAX_SCHED_LENGTH 30
#define MAX_DOMAIN_LENGTH 30
#define MAX_MSGTEXT_LENGTH 1600
/**********************************************
* Event Types (in elEventRecvData.eventType) *
**********************************************/
/***************************************************
* Application Types (in elEventRecvData.applType) *
***************************************************/
/*****************************************************
* Event Severity Codes (in elEventRecvData.sevCode) *
*****************************************************/
/**************************************************************
* Data Structure of Event that is passed to the User-Exit. *
* This data structure is the same for a file generated using *
* the FILEEXIT option on the server. *
**************************************************************/
/************************************
* Size of the Event data structure *
************************************/
/*************************************
* User Exit EventNumber for Exiting *
*************************************/
/**************************************
*** Do not modify above this line. ***
**************************************/
#endif
/***********************************************************************
* Name: userExitSample.c
* Description: Example user-exit program invoked by the server
* Environment: AIX 4.1.4+ on RS/6000
***********************************************************************/
#include <stdio.h>
#include "userExitSample.h"
/**************************************
*** Do not modify below this line. ***
**************************************/
/************
*** Main ***
************/
/* ... the body of main() is omitted from this excerpt ... */
} /* End of main() */
/******************************************************************
* Procedure: adsmV3UserExit
* If the user-exit is specified on the server, a valid and
* appropriate event causes an elEventRecvData structure (see
* userExitSample.h) to be passed to this procedure.  The remainder
* of this header comment and the procedure body are omitted from
* this excerpt.
******************************************************************/
/**************************************
*** Do not modify above this line. ***
**************************************/
/* Be aware that certain function calls are process-wide and can cause
* synchronization of all threads running under the TSM Server process!
* Among these is the system() function call. Use of this call can
* cause the server process to hang and otherwise affect performance.
* Also avoid any functions that are not thread-safe. Consult your
* systems programming reference material for more information.
*/
The following table presents the format of the output. Fields are separated by
blank spaces.

Table 94. Readable text file exit (FILETEXTEXIT) format

Column     Description
0001-0006  Event number (with leading zeros)
0008-0010  Severity code number
0012-0013  Application type number
0015-0023  Session ID number
0025-0027  Event structure version number
0029-0031  Event type number
0033-0046  Date/Time (YYYYMMDDHHmmSS)
0048-0111  Server name (right padded with spaces) (1)
0113-0176  Node name (1)
0178-0193  Communications method name (1)
0195-0258  Owner name (1)
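Because each field occupies a fixed column range, a reader of the file can extract
fields by position. The following fragment is a minimal sketch of that idea, using the
column positions from Table 94 converted to zero-based offsets; the function name is
hypothetical and the fragment is not a shipped utility.

#include <stdio.h>
#include <string.h>

/* Extract the event number (columns 0001-0006) and the server name
 * (columns 0048-0111) from one readable text file exit record.      */
static void parse_record(const char *record)
{
    char eventNumber[7] = {0};
    char serverName[65] = {0};

    strncpy(eventNumber, record,      6);   /* columns 0001-0006 */
    strncpy(serverName,  record + 47, 64);  /* columns 0048-0111 */

    printf("event %s from server %s\n", eventNumber, serverName);
}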
Accessibility features
The following list includes the major accessibility features in the Tivoli Storage
Manager family of products:
v Keyboard-only operation
v Interfaces that are commonly used by screen readers
v Keys that are discernible by touch but do not activate just by touching them
v Industry-standard devices for ports and connectors
v The attachment of alternative input and output devices
If you install the IBM Tivoli Storage Manager Operations Center in console mode,
the installation is fully accessible.
The accessibility features of the Operations Center are fully supported only in the
Mozilla Firefox browser that is running on a Windows system.
The Tivoli Storage Manager Information Center, and its related publications, are
accessibility-enabled. For information about the accessibility features of the
information center, see the following topic: http://pic.dhe.ibm.com/infocenter/
tsminfo/v6r3/topic/com.ibm.help.ic.doc/iehs36_accessibility.html.
Keyboard navigation
Vendor software
The Tivoli Storage Manager product family includes certain vendor software that is
not covered under the IBM license agreement. IBM makes no representation about
the accessibility features of these products. Contact the vendor for the accessibility
information about its products.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
Licensees of this program who want to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758
U.S.A.
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
This information is for planning purposes only. The information herein is subject to
change before the products described become available.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work, must
include a copyright notice as follows: (your company name) (year). Portions of
this code are derived from IBM Corp. Sample Programs. Copyright IBM Corp.
_enter the year or years_.
If you are viewing this information in softcopy, the photographs and color
illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web at Copyright and
trademark information at http://www.ibm.com/legal/copytrade.shtml.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
LTO and Ultrium are trademarks of HP, IBM Corp. and Quantum in the U.S. and
other countries.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other product and service names might be trademarks of IBM or other companies.
Glossary
This glossary includes terms and definitions for IBM Tivoli Storage Manager and IBM Tivoli Storage
FlashCopy Manager products.
client
    A software program or computer that requests services from a server.

client acceptor
    An HTTP service that serves the applet for the web client to web browsers. On
    Windows systems, the client acceptor is installed and run as a service. On AIX,
    UNIX, and Linux systems, the client acceptor is run as a daemon, and is also
    called the client acceptor daemon (CAD).

client acceptor daemon (CAD)
    See client acceptor.

client domain
    The set of drives, file systems, or volumes that the user selects to back up or
    archive data, using the backup-archive client.

client node
    A file server or workstation on which the backup-archive client program has been
    installed, and which has been registered to the server.

client node session
    A session in which a client node communicates with a server to perform backup,
    restore, archive, retrieve, migrate, or recall requests. Contrast with
    administrative session.

client option set
    A group of options that are defined on the server and used on client nodes in
    conjunction with client options files.

client options file
    An editable file that identifies the server and communication method, and
    provides the configuration for backup, archive, hierarchical storage management,
    and scheduling.

client-polling scheduling mode
    A method of operation in which the client queries the server for work. Contrast
    with server-prompted scheduling mode.

client schedule
    A database record that describes the planned processing of a client operation
    during a specific time period. The client operation can be a backup, archive,
    restore, or retrieve operation, a client operating system command, or a macro.
    See also administrative command schedule.

client/server
    Pertaining to the model of interaction in distributed data processing in which a
    program on one computer sends a request to a program on another computer and
    awaits a response. The requesting program is called a client; the answering
    program is called a server.

client system-options file
    A file, used on AIX, UNIX, or Linux system clients, containing a set of
    processing options that identify the servers to be contacted for services. This
    file also specifies communication methods and options for backup, archive,
    hierarchical storage management, and scheduling. This file is also called the
    dsm.sys file. See also client user-options file.

client user-options file
    A file that contains the set of processing options that the clients on the system
    use. The set can include options that determine the server that the client
    contacts, and options that affect backup operations, archive operations,
    hierarchical storage management operations, and scheduled operations. This file
    is also called the dsm.opt file. For AIX, UNIX, or Linux systems, see also
    client system-options file.

closed registration
    A registration process in which only an administrator can register workstations
    as client nodes with the server. Contrast with open registration.

collocation
    The process of keeping all data belonging to a single-client file space, a single
    client node, or a group of client nodes on a minimal number of sequential-access
    volumes within a storage pool. Collocation can reduce the number of volumes that
    must be accessed when a large amount of data must be restored.

collocation group
    A user-defined group of client nodes whose data is stored on a minimal number of
    volumes through the process of collocation.

commit point
    A point in time when data is considered consistent.
media. Other instances of the same data are replaced with a pointer to the retained
instance.

data manager server
    A server that collects metadata information for client inventory and manages
    transactions for the storage agent over the local area network. The data manager
    server informs the storage agent with applicable library attributes and the
    target volume identifier.

data mover
    A device that moves data on behalf of the server. A network-attached storage
    (NAS) file server is a data mover.

data storage-management application-programming interface (DSMAPI)
    A set of functions and semantics that can monitor events on files, and manage
    and maintain the data in a file. In an HSM environment, a DSMAPI uses events to
    notify data management applications about operations on files, stores arbitrary
    attribute information with a file, supports managed regions in a file, and uses
    DSMAPI access rights to control access to a file object.

deduplication
    See data deduplication.

default management class
    A management class that is assigned to a policy set. This class is used to
    govern backed up or archived files when a file is not explicitly associated with
    a specific management class through the include-exclude list.

demand migration
    The process that is used to respond to an out-of-space condition on a file
    system for which hierarchical storage management (HSM) is active. Files are
    migrated to server storage until space usage drops to the low threshold that was
    set for the file system. If the high threshold and low threshold are the same,
    one file is migrated.

desktop client
    The group of backup-archive clients that includes clients on Microsoft Windows,
    Apple, and Novell NetWare operating systems.

destination
    A copy group or management class attribute that specifies the primary storage
    pool to which a client file will be backed up, archived, or migrated.

device class
    A named set of characteristics that are applied to a group of storage devices.
    Each device class has a unique name and represents a device type of disk, file,
    optical disk, or tape.

device configuration file
    (1) For a server, a file that contains information about defined device classes,
    and, on some servers, defined libraries and drives. The information is a copy of
    the device configuration information in the database.
    (2) For a storage agent, a file that contains the name and password of the
    storage agent, and information about the server that is managing the
    SAN-attached libraries and drives that the storage agent uses.

device driver
    A program that provides an interface between a specific device and the
    application program that uses the device.

disaster recovery manager (DRM)
    A function that assists in preparing and using a disaster recovery plan file for
    the server.

disaster recovery plan
    A file that is created by the disaster recovery manager (DRM) that contains
    information about how to recover computer systems if a disaster occurs and
    scripts that can be run to perform some recovery tasks. The file includes
    information about the software and hardware that is used by the server, and the
    location of recovery media.

domain
    A grouping of client nodes with one or more policy sets, which manage data or
    storage resources for the client nodes. See policy domain or client domain.

DRM
    See disaster recovery manager.

DSMAPI
    See data storage-management application-programming interface.
external library management support that is required. If
A type of library that is provided by no space management support is
Tivoli Storage Manager that permits required, the operation is passed to the
LAN-free data movement for StorageTek operating system, which performs its
libraries that are managed by Automated normal functions. The file system
Cartridge System Library Software migrator is mounted over a file system
(ACSLS). To activate this function, the when space management is added to the
Tivoli Storage Manager library type must file system.
be EXTERNAL.
file system state
F The storage management mode of a file
system that resides on a workstation on
file access time
which the hierarchical storage
On AIX, UNIX, or Linux systems, the
management (HSM) client is installed. A
time when the file was last accessed.
file system can be in one of these states:
file age native, active, inactive, or global inactive.
For migration prioritization purposes, the
frequency
number of days since a file was last
A copy group attribute that specifies the
accessed.
minimum interval, in days, between
file device type incremental backups.
A device type that specifies the use of
FSID See file space ID.
sequential access files on disk storage as
volumes. FSM See file system migrator.
file server full backup
A dedicated computer and its peripheral The process of backing up the entire
storage devices that are connected to a server database. A full backup begins a
local area network that stores programs new database backup series. See also
and files that are shared by users on the database backup series and incremental
network. backup. Contrast with database snapshot.
file space fuzzy backup
A logical space in server storage that A backup version of a file that might not
contains a group of files that have been accurately reflect what is currently in the
backed up or archived by a client node, file because the file was backed up at the
from a single logical partition, file system, same time as it was being modified.
or virtual mount point. Client nodes can
fuzzy copy
restore, retrieve, or delete their file spaces
A backup version or archive copy of a file
from server storage. In server storage,
that might not accurately reflect the
files belonging to a single file space are
original contents of the file because it was
not necessarily stored together.
backed up or archived the file while the
file space ID (FSID) file was being modified. See also backup
A unique numeric identifier that the version and archive copy.
server assigns to a file space when it is
G
stored in server storage.
General Parallel File System
file state
A high-performance shared-disk file
The space management mode of a file
system that can provide data access from
that resides in a file system to which
nodes in a cluster environment.
space management has been added. A file
can be in one of three states: resident, gigabyte (GB)
premigrated, or migrated. See also resident In decimal notation, 1 073 741 824 when
file, premigrated file, and migrated file. referring to memory capacity; in all other
cases, it is defined as 1 000 000 000.
file system migrator (FSM)
A kernel extension that intercepts all file global inactive state
system operations and provides any space The state of all file systems to which
for storage pools and file sets. See also the local area network. This process is
General Parallel File System. also referred to as LAN-free data transfer.
inode The internal structure that describes the LAN-free data transfer
individual files on AIX, UNIX, or Linux See LAN-free data movement.
systems. An inode contains the node,
leader data
type, owner, and location of a file.
Bytes of data, from the beginning of a
inode number migrated file, that are stored in the file's
A number specifying a particular inode corresponding stub file on the local file
file in the file system. system. The amount of leader data that is
stored in a stub file depends on the stub
IP address
size that is specified.
A unique address for a device or logical
unit on a network that uses the IP library
standard. (1) A repository for demountable recorded
media, such as magnetic disks and
J
magnetic tapes.
job file
(2) A collection of one or more drives, and
A generated file that contains
possibly robotic devices (depending on
configuration information for a migration
the library type), which can be used to
job. The file is XML format and can be
access storage volumes.
created and edited in the hierarchical
storage management (HSM) client for library client
Windows client graphical user interface. A server that uses server-to-server
communication to access a library that is
journal-based backup
managed by another storage management
A method for backing up Windows clients
server. See also library manager.
and AIX clients that exploits the change
notification mechanism in a file to library manager
improve incremental backup performance A server that controls device operations
by reducing the need to fully scan the file when multiple storage management
system. servers share a storage device. See also
library client.
journal daemon
On AIX, UNIX, or Linux systems, a local (1) Pertaining to a device, file, or system
program that tracks change activity for that is accessed directly from a user
files residing in file systems. system, without the use of a
communication line.
journal service
In Microsoft Windows, a program that (2) For HSM products, pertaining to the
tracks change activity for files residing in destination of migrated files that are
file systems. being moved.
K local area network (LAN)
A network that connects several devices
kilobyte (KB)
in a limited area (such as a single
For processor storage, real and virtual
building or campus) and that can be
storage, and channel volume, 210 or 1 024
connected to a larger network.
bytes. For disk storage capacity and
communications volume, 1 000 bytes. local shadow volumes
Data that is stored on shadow volumes
L
localized to a disk storage subsystem.
LAN See local area network.
LOFS See loopback virtual file system.
LAN-free data movement
logical file
The movement of client data between a
A file that is stored in one or more server
client system and a storage device on a
storage pools, either by itself or as part of
storage area network (SAN), bypassing
an aggregate. See also aggregate and
physical file.
(2) For processor storage, real and virtual been modified since the last time the file
storage, and channel volume, 2 to the was backed up. See modified mode and
power of 20 or 1 048 576 bits. For disk absolute mode.
storage capacity and communications
modified mode
volume, 1 000 000 bits.
In storage management, a backup
metadata copy-group mode that specifies that a file
Data that describes the characteristics of is considered for incremental backup only
data; descriptive data. if it has changed since the last backup. A
file is considered a changed file if the
migrate
date, size, owner, or permissions of the
To move data from one storage location to
file have changed. See also absolute mode.
another. In Tivoli Storage Manager
products, migrating can mean moving mount limit
data from a client node to server storage, The maximum number of volumes that
or moving data from one storage pool to can be simultaneously accessed from the
the next storage pool defined in the same device class. The mount limit
server storage hierarchy. In both cases the determines the maximum number of
movement is controlled by policy, such as mount points. See also mount point.
thresholds that are set. See also migration
mount point
threshold.
On the Tivoli Storage Manager server, a
migrated file logical drive through which volumes in a
A file that has been copied from a local sequential access device class are
file system to Tivoli Storage Manager accessed. For removable-media device
storage. For HSM clients on UNIX or types, such as tape, a mount point is a
Linux systems, the file is replaced with a logical drive that is associated with a
stub file on the local file system. On physical drive. For the file device type, a
Windows systems, creation of the stub file mount point is a logical drive that is
is optional. See also stub file and resident associated with an I/O stream. The
file. For HSM clients on UNIX or Linux number of mount points for a device class
systems, contrast with premigrated file. is defined by the value of the mount limit
attribute for that device class. See also
migrate-on-close recall mode
mount limit.
A mode that causes a migrated file to be
recalled back to its originating file system mount retention period
temporarily. Contrast with normal recall The maximum number of minutes that
mode and read-without-recall recall mode. the server retains a mounted
sequential-access media volume that is
migration job
not being used before it dismounts the
A specification of files to migrate, and
sequential-access media volume.
actions to perform on the original files
after migration. See also job file. mount wait period
The maximum number of minutes that
migration threshold
the server waits for a sequential-access
High and low capacities for storage pools
volume mount request to be satisfied
or file systems, expressed as percentages,
before canceling the request.
at which migration is set to start and
stop. MTU See maximum transmission unit.
mirroring N
The process of writing the same data to
Nagle algorithm
multiple locations at the same time.
An algorithm that reduces congestion of
Mirroring data protects against data loss
TCP/IP networks by combining smaller
within the recovery log.
packets and sending them together.
mode A copy group attribute that specifies
named pipe
whether to back up a file that has not
A type of interprocess communication
called dsm.opt. On AIX, UNIX, Linux, relationship between a source and a
and Mac OS X systems, the file is called destination. Using the path, the source
dsm.sys. accesses the destination. Data can flow
from the source to the destination, and
originating file system
back. An example of a source is a data
The file system from which a file was
mover (such as a network-attached
migrated. When a file is recalled using
storage [NAS] file server), and an
normal or migrate-on-close recall mode, it
example of a destination is a tape drive.
is always returned to its originating file
system. pattern-matching character
See wildcard character.
orphaned stub file
A file for which no migrated file can be physical file
found on the Tivoli Storage Manager A file that is stored in one or more
server that the client node is contacting storage pools, consisting of either a single
for space management services. For logical file, or a group of logical files that
example, a stub file can be orphaned are packaged together as an aggregate.
when the client system-options file is See also aggregate and logical file.
modified to contact a server that is
physical occupancy
different than the one to which the file
The amount of space that is used by
was migrated.
physical files in a storage pool. This space
out-of-space protection mode includes the unused space that is created
A mode that controls whether the when logical files are deleted from
program intercepts out-of-space aggregates. See also physical file, logical file,
conditions. See also execution mode. and logical occupancy.
P plug-in
A self-contained software component that
pacing
modifies (adds, or changes) the function
In SNA, a technique by which the
in a particular system. When a plug-in is
receiving system controls the rate of
added to a system, the foundation of the
transmission of the sending system to
original system remains intact.
prevent overrun.
policy domain
packet In data communication, a sequence of
A grouping of policy users with one or
binary digits, including data and control
more policy sets, which manage data or
signals, that is transmitted and switched
storage resources for the users. The users
as a composite whole.
are client nodes that are associated with
page A defined unit of space on a storage the policy domain.
medium or within a database volume.
policy privilege class
partial-file recall mode A privilege class that gives an
A recall mode that causes the hierarchical administrator the authority to manage
storage management (HSM) function to policy objects, register client nodes, and
read just a portion of a migrated file from schedule client operations for client
storage, as requested by the application nodes. Authority can be restricted to
accessing the file. certain policy domains. See also privilege
class.
password generation
A process that creates and stores a new policy set
password in an encrypted password file A group of rules in a policy domain. The
when the old password expires. rules specify how data or storage
Automatic generation of a password resources are automatically managed for
prevents password prompting. Password client nodes in the policy domain. Rules
generation can be set in the options file can be contained in management classes.
(passwordaccess option). See also options See also active policy set and management
file. class.
path An object that defines a one-to-one
is migrated back to Tivoli Storage a local file system that might also be a
Manager storage when it is closed, or is migrated file because a migrated copy can
read from Tivoli Storage Manager storage exist in Tivoli Storage Manager storage.
without storing it on the local file system. On a UNIX or Linux system, a complete
file on a local file system that has not
receiver
been migrated or premigrated, or that has
A server repository that contains a log of
been recalled from Tivoli Storage Manager
server and client messages as events. For
storage and modified. Contrast with stub
example, a receiver can be a file exit, a
file and premigrated file. See migrated file.
user exit, or the Tivoli Storage Manager
server console and activity log. See also restore
event. To copy information from its backup
location to the active storage location for
reclamation
use. For example, to copy information
The process of consolidating the
from server storage to a client
remaining data from many
workstation.
sequential-access volumes onto fewer,
new sequential-access volumes. retention
The amount of time, in days, that inactive
reclamation threshold
backed-up or archived files are kept in the
The percentage of space that a
storage pool before they are deleted.
sequential-access media volume must
Copy group attributes and default
have before the server can reclaim the
retention grace periods for the domain
volume. Space becomes reclaimable when
define retention.
files are expired or are deleted.
retrieve
reconciliation
To copy archived information from the
The process of synchronizing a file system
storage pool to the workstation for use.
with the Tivoli Storage Manager server,
The retrieve operation does not affect the
and then removing old and obsolete
archive version in the storage pool.
objects from the Tivoli Storage Manager
server. roll back
To remove changes that were made to
recovery log
database files since the last commit point.
A log of updates that are about to be
written to the database. The log can be root user
used to recover from system and media A system user who operates without
failures. The recovery log consists of the restrictions. A root user has the special
active log (including the log mirror) and rights and privileges needed to perform
archive logs. administrative tasks.
register S
To define a client node or administrator
SAN See storage area network.
ID that can access the server.
schedule
registry
A database record that describes client
A repository that contains access and
operations or administrative commands to
configuration information for users,
be processed. See administrative command
systems, and software.
schedule and client schedule.
remote
scheduling mode
(1) Pertaining to a system, program, or
The type of scheduling operation for the
device that is accessed through a
server and client node that supports two
communication line.
scheduling modes: client-polling and
(2) For HSM products, pertaining to the server-prompted.
origin of migrated files that are being
scratch volume
moved.
A labeled volume that is either blank or
resident file contains no valid data, that is not defined,
On a Windows system, a complete file on and that is available for use.
serialization. Contrast with dynamic stanza. Depending on the type of file, a
serialization, shared dynamic serialization, stanza is ended by the next occurrence of
and static serialization. a stanza name in the file, or by an explicit
end-of-stanza marker. A stanza can also
snapshot
be ended by the end of the file.
An image backup type that consists of a
point-in-time view of a volume. startup window
A time period during which a schedule
space-managed file
must be initiated.
A file that is migrated from a client node
by the space manager client. The space static serialization
manager client recalls the file to the client A copy-group serialization value that
node on demand. specifies that a file must not be modified
during a backup or archive operation. If
space management
the file is in use during the first attempt,
The process of keeping sufficient free
the storage manager cannot back up or
storage space available on a local file
archive the file. See also serialization.
system for new data by migrating files to
Contrast with dynamic serialization, shared
server storage. Synonymous with
dynamic serialization, and shared static
hierarchical storage management.
serialization.
space manager client
storage agent
A program that runs on a UNIX or Linux
A program that enables the backup and
system to manage free space on the local
restoration of client data directly to and
file system by migrating files to server
from storage attached to a storage area
storage. The program can recall the files
network (SAN).
either automatically or selectively. Also
called hierarchical storage management storage area network (SAN)
(HSM) client. A dedicated storage network that is
tailored to a specific environment,
space monitor daemon
combining servers, systems, storage
A daemon that checks space usage on all
products, networking products, software,
file systems for which space management
and services.
is active, and automatically starts
threshold migration when space usage on storage hierarchy
a file system equals or exceeds its high (1) A logical order of primary storage
threshold. pools, as defined by an administrator. The
order is typically based on the speed and
sparse file
capacity of the devices that the storage
A file that is created with a length greater
pools use. The storage hierarchy is
than the data it contains, leaving empty
defined by identifying the next storage
spaces for the future addition of data.
pool in a storage pool definition. See also
special file storage pool.
On AIX, UNIX, or Linux systems, a file
(2) An arrangement of storage devices
that defines devices for the system, or
with different speeds and capacities. The
temporary files that are created by
levels of the storage hierarchy include:
processes. There are three basic types of
main storage, such as memory and
special files: first-in, first-out (FIFO);
direct-access storage device (DASD)
block; and character.
cache; primary storage (DASD containing
SSL See Secure Sockets Layer. user-accessible data); migration level 1
(DASD containing data in a space-saving
stabilized file space
format); and migration level 2 (tape
A file space that exists on the server but
cartridges containing data in a
not on the client.
space-saving format).
stanza A group of lines in a file that together
storage pool
have a common function or define a part
A named set of storage volumes that are
of the system. Each stanza is identified by
the destination that is used to store client
a name that occurs in the first line of the
timeout supports the interchange, processing, and
A time interval that is allotted for an display of text that is written in the
event to occur or complete before common languages around the world,
operation is interrupted. plus some classical and historical texts.
The Unicode standard has a 16-bit
timestamp control mode
character set defined by ISO 10646.
A mode that determines whether
commands preserve the access time for a Unicode-enabled file space
file or set it to the current time. Unicode file space names provide support
for multilingual workstations without
Tivoli Storage Manager command script
regard for the current locale.
A sequence of Tivoli Storage Manager
administrative commands that are stored Unicode transformation format 8
in the database of the Tivoli Storage Unicode Transformation Format (UTF),
Manager server. The script can run from 8-bit encoding form, which is designed
any interface to the server. The script can for ease of use with existing ASCII-based
include substitution for command systems. The CCSID value for data in
parameters and conditional logic. UTF-8 format is 1208.
tombstone object Universal Naming Convention (UNC) name
A small subset of attributes of a deleted The server name and network name
object. The tombstone object is retained combined. These names together identify
for a specified period, and at the end of the resource on the domain.
the specified period, the tombstone object
Universally Unique Identifier (UUID)
is permanently deleted.
The 128-bit numeric identifier that is used
Transmission Control Protocol/Internet Protocol to ensure that two components do not
(TCP/IP) have the same identifier.
An industry-standard, nonproprietary set
UTF-8 See Unicode transformation format 8.
of communication protocols that provides
reliable end-to-end connections between UUID See Universally Unique Identifier.
applications over interconnected networks
V
of different types.
validate
transparent recall
To check a policy set for conditions that
The process that is used to automatically
can cause problems if that policy set
recall a file to a workstation or file server
becomes the active policy set. For
when the file is accessed. See also recall
example, the validation process checks
mode. Contrast with selective recall.
whether the policy set contains a default
trusted communications agent (TCA) management class.
A program that handles the sign-on
version
password protocol when clients use
A backup copy of a file stored in server
password generation.
storage. The most recent backup copy of a
U file is the active version. Earlier copies of
the same file are inactive versions. The
UCS-2 A 2-byte (16-bit) encoding scheme based
number of versions retained by the server
on ISO/IEC specification 10646-1. UCS-2
is determined by the copy group
defines three levels of implementation:
attributes in the management class.
Level 1-No combining of encoded
elements allowed; Level 2-Combining of virtual file space
encoded elements is allowed only for A representation of a directory on a
Thai, Indic, Hebrew, and Arabic; Level network-attached storage (NAS) file
3-Any combination of encoded elements system as a path to that directory.
are allowed.
virtual volume
UNC See Universal Naming Convention name. An archive file on a target server that
represents a sequential media volume to a
Unicode
source server.
A character encoding standard that
Index
Special characters ACTIVATE POLICYSET command 512
active data 986
$$CONFIG_MANAGER$$ 727 active data, protecting with active-data pools 251
active files, storage-pool search order 253
active log 687, 924
Numerics description 659
3480 tape drive increasing the size 686
cleaner cartridge 179 move to another directory 689
device support 44 out of space 686
device type 191 space requirements 667
mixing drive generations 196 active log mirror 924
3490 tape drive description 660
cleaner cartridge 179 active log size 687
device support 44 ACTIVE policy set
device type 191 creating 502, 512
mixing drive generations 196 replacing 481
3494 automated library device 46, 108 active-data pool
3494 library 112 auditing volumes in 937
configuration with a single drive device 111 backup-set file source 545
3494 library manager 116 client restore operations, optimizing 557
3494SHARED server option 70 collocation on 369
3570 tape drive defining 411
ASSISTVCRRECOVERY server option 70 export file source 751, 759, 760
defining device class 66, 190 import operations 769
device support 44 overview 251, 275
3590 tape drive reclamation of 378
defining device class 66, 190, 191 RECONSTRUCT parameter 559
device support 109 simultaneous-write function 337
3592 drives and media specifying in policy definition 500
as element of storage hierarchy 270 storage pool search-and-selection order 253
cleaning 177 ACTIVELOGDIRECTORY server option 686, 689
data encryption 171, 198, 538 ACTIVELOGSIZE server option 686
defining device class 66 activity log
DEVICETYPE parameter 149 description of 803
enabling for WORM media 154 logging events to 863
mixing drive generations 196 monitoring 803
4mm tape device support 191 querying 804
8mm tape device support 191 setting size limit 805
setting the retention period 805
administration center
A deploying backup-archive packages automatically 434
using 597
absolute mode, description of 505 Administration center
ACCEPT DATE command 615 managing servers 597
access authority, client 450, 451 Administration Center
access mode, volume backing up 603
changing 267 commands not supported 600
description 268 features not supported 600
determining for storage pool 255, 412 protecting 603
access, managing 885, 903 restoring 604
accessibility features 1129 starting and stopping 600
accounting record Administration Center scripts 801
description of 811 administrative client
monitoring 811 description of 3
accounting variable 811 viewing information after IMPORT or EXPORT 773
ACSLS (Automated Cartridge System Library Software) administrative clients
StorageTek library 45 preventing from accessing the server 748
configuring 122 administrative commands
description 47 ACCEPT DATE 624
mixing 3592 drive generations 196 ASSIGN DEFMGMTCLASS 512
Tivoli Storage Manager server options for 70 AUDIT LIBVOLUME 162
ACSLS library 122
alerts (continued) auditing (continued)
sending and receiving by email 809 library's volume inventory 162
ANR8914I message 179 license, automatic by server 607
ANR9999D message 862 multiple volumes in sequential access storage pool 941
application client single volume in sequential access storage pool 942
adding node for 422 volume in disk storage pool 941
description 4 volume, reasons for 934
policy for 525 volumes by date 942
application program interface (API) volumes by storage pool 942
client, registering 425 authority
compression option 425 client access 451
deletion option 425 granting to administrators 896
registering to server 425 privilege classes 896
simultaneous-write function, version support for 339 server options 896
application programming interface (API) authorizing to start server
description of 3 root users 618
ARCHFAILOVERLOGDIRECTORY server option 692 auto deployment 434
archive auto-update 434, 435, 436, 437, 438
allowing while file changes 510 autochangers 91
backup set, uses for 9, 13 AUTOFSRENAME parameter 459
determining storage usage 402 AUTOLABEL parameter for tape volumes 148
directory 563 Automated Cartridge System Library Software (ACSLS)
instant 9, 13 StorageTek library
package 563 configuring 122
policy, defining 499 description 47
policy, introduction 29 mixing 3592 drive generations 196
process description 497 Tivoli Storage Manager server options for 70
storage usage, minimizing 564 automated library device
storage usage, reducing 564, 565 auditing 162
uses for 9, 12 changing volume status 161
archive copy group 30 checking in volumes 149
defining 510, 511 defining 46
deleting 534 informing server of new volumes 149
description of 485 labeling volumes 147
archive data overflow location 255
expiration 516 removing volumes 161
managing 563 scratch and private volumes 54
protection 516 updating 168
archive failover log 924 volume inventory 54
description 661 automatically renaming file spaces 459
move to another directory 689 automating
archive log 924 administrative commands 35
description 660 client operations 33, 568
move to another directory 689 server operations 634
space requirements 667 server startup 616
archiving awk script 1037, 1068
file 483, 497
file management 483
FILE-type volume, archiving many small objects to 200
ASCII restriction for browser script definition 640
B
background mode 620
ASSIGN DEFMGMTCLASS command 512
background processes 624
association, client with schedule
Backing up 849
defining 569
IBM Tivoli Monitoring 855
deleting 576
Tivoli Enterprise Portal 855
association, file with management class 491, 492
Backing up and restoring Tivoli Monitoring for Tivoli Storage
association, object with profile
Manager
administrative command schedule 718
DB2
administrator 715, 729
backing up for Tivoli Monitoring 842
client option set 715
Backing up DB2 WAREHOUS
deleting 720
AIX and Linux systems 844
policy domain 716
Backing up Tivoli Monitoring for Tivoli Storage Manager
script 715
DB2 843
AUDIT LIBVOLUME command 162
backup
AUDIT LICENSE command 607
amount of space used by client 402
AUDIT VOLUME command 934, 941
comparison of types 10, 13
auditing
default policy 479
LDAP directory server 958
defining criteria for client files 499
certificate cleaner cartridge
adding to the key database 888, 889 checking in 178
homegrown certificate authority 889 how often to use 177
changing date and time on server 624 operations with 179
changing hostname 628 restrictions on cleaning 177
changing SSL settings 594 CLEANFREQUENCY parameter 177
characteristics, machine 1037 client
check in access user ID 451
cleaner cartridge 178 administrative 3
library volume 149 API (application program interface) 425
setting a time interval for volume 195 API (application programming interface) 4
VolSafe-enabled volumes 207 application client 4, 525
CHECKIN LIBVOLUME command 149 backup-archive 4
checking the log file generated by processed schedules 578 controlling resource utilization 562
checklist for DRM project plan 1063 how to protect 8
CHECKOUT LIBVOLUME command 161 operations summary 10
CHECKTAPEPOS server option 70 options file 426
class, administrator privilege restore without primary volumes available 957
description 896 Tivoli Storage Manager for Space Management (HSM
granting authority 896 client) 4, 488
reducing 900 using to back up NAS file server 224, 240
revoking all 901 client file
class, device allowing archive while changing 479
3570 190, 191 allowing backup while changing 479, 504
3590 190, 191 archive package 563
3592 191 associating with management class 491, 492
4MM 190, 191 damaged 957
8MM 190, 191 delaying migration of 286
amount of space used 401 deleting 415
CARTRIDGE 191 deleting from a storage pool 414
CENTERA 49 deleting from cache 293
defining 190 deleting when deleting a volume 415
description of 22 duplication when restoring 957
DISK 190 eligible for archive 479, 494
DLT 190, 191 eligible for backup 479, 494
DTF 190, 191 eligible for expiration 481
ECARTRIDGE 191 eligible for space management 497
FILE 190 how IBM Tivoli Storage Manager stores 272
FORMAT parameter 193 on a volume, querying 392
GENERICTAPE 190, 191 server migration of 281
LTO 203 client migration 497, 498
OPTICAL 190 client node
QIC 190, 191 adding 421
REMOVABLEFILE 199 agent 446
requesting information about 210 amount of space used 400
selecting for import and export 757 creating backup sets for 546
sequential 191 file spaces, QUERY OCCUPANCY command 400
SERVER 190, 191, 739 finding tapes used by 395
StorageTek devices 191, 207 immediate processing 585
tape 191, 199 importing 770
Ultrium, LTO 191 locking 444
updating 191, 199 managing registration 422, 431, 605
VOLSAFE 207 options file 426
WORM 190, 191 performing operations for 537, 573, 579
WORM12 190, 191 privilege class for scheduling operations for 568
WORM14 190, 191 proxy node relationships 445
class, policy privilege querying 448
description 896, 898 reducing archive packages for 565
granting 900 registering 425
revoking 900, 901 removing 445
class, storage privilege renaming 444
description 896 scheduling operations for 568
granting 900 setting password authentication 915
reducing 900 setting scheduling mode 581
revoking 900, 901 setting up subfile backups 555
CLEAN DRIVE command 176 target 446
unlocking 444
commands, administrative (continued) commands, administrative (continued)
DEFINE STGPOOL 259, 260, 271, 272 QUERY DRMSTATUS 1030
DEFINE SUBSCRIPTION 725 QUERY ENABLED 876
DEFINE VIRTUALFSMAPPING 246 QUERY EVENT 570
DEFINE VOLUME 53, 54, 266 QUERY FILESPACE 466
DELETE ASSOCIATION 576 QUERY LIBRARY 168
DELETE BACKUPSET 553 QUERY LICENSE 607
DELETE COPYGROUP 534 QUERY MGMTCLASS 532
DELETE DOMAIN 535 QUERY MOUNT 167
DELETE DRIVE 180 QUERY NODE 448
DELETE EVENT 579 QUERY NODEDATA 401
DELETE GRPMEMBER 737 QUERY OCCUPANCY
DELETE LIBRARY 169 backed-up, archived, and space-managed files 402
DELETE MGMTCLASS 535 client file spaces 400
DELETE POLICYSET 535 client nodes 400
DELETE PROFASSOCIATION 720 device classes 401
DELETE PROFILE 721 storage pools 401
DELETE SCHEDULE 574 QUERY OPTION 796
DELETE SCRIPT 649 QUERY POLICYSET 533
DELETE SERVER 708 QUERY PROCESS 406
DELETE SERVERGROUP 736 QUERY REQUEST 165
DELETE STGPOOL 415 QUERY RESTORE 475
DELETE SUBSCRIBER 731 QUERY RPFCONTENT 1041
DELETE SUBSCRIPTION 721, 727 QUERY RPFILE 1041
DELETE VOLHISTORY 629 QUERY SCHEDULE 570
DELETE VOLUME 416, 417 QUERY SCRIPT 648
DISABLE EVENTS 862 QUERY SERVERGROUP 736
DISABLE SESSIONS 474 QUERY STGPOOL 386, 396, 764
DISMOUNT VOLUME 167 QUERY SUBSCRIPTION 726
DSMSERV DISPLAY DBSPACE 682 QUERY SYSTEM 796
DSMSERV DISPLAY LOG 682 QUERY VOLUME 388, 407
ENABLE EVENTS 862 RECONCILE VOLUMES 743
ENABLE SESSIONS 474 REGISTER ADMIN 898
END EVENTLOGGING 863 REGISTER LICENSE 606
EXPIRE INVENTORY 35 REMOVE ADMIN 902
EXPORT ADMIN 745 REMOVE NODE 445
EXPORT NODE 758 RENAME ADMIN 901
EXPORT POLICY 758 RENAME FILESPACE 772
EXPORT SERVER 758 RENAME NODE 444
EXTEND DBSPACE 683, 684 RENAME SCRIPT 649
GENERATE BACKUPSET 546 RENAME SERVERGROUP 736
GRANT AUTHORITY 896 RENAME STGPOOL 411
HALT 621, 622 RESTORE DB 623
HELP 631 RESTORE NODE 239, 240
IMPORT 772, 773 RESTORE STGPOOL 960
IMPORT ADMIN 761 ROLLBACK 654
IMPORT NODE 761, 770 RUN 649
IMPORT POLICY 761 SELECT 798
IMPORT SERVER 761, 770 SET ACCOUNTING 811
LABEL LIBVOLUME 100, 133 SET AUTHENTICATION 915
LOCK ADMIN 902 SET CLIENTACTDURATION 585
LOCK NODE 444 SET CONFIGMANAGER 711, 714
LOCK PROFILE 719, 720 SET CONFIGREFRESH 726
MOVE DATA 404 SET CONTEXTMESSAGING 862
MOVE NODEDATA 408 SET CROSSDEFINE 703, 706
NOTIFY SUBSCRIBERS 719, 720 SET DBREPORTMODE 682
PING SERVER 737 SET DRMCHECKLABEL 1033
PREPARE 1039 SET DRMCOPYSTGPOOL 1030
QUERY ACTLOG 804 SET DRMCOURIERNAME 1033
QUERY BACKUPSETCONTENTS 553 SET DRMDBBACKUPRXPIREDAYS 1033
QUERY CONTENT 392 SET DRMFILEPROCESS 1033
QUERY COPYGROUP 532, 768 SET DRMINSTPREFIX 1030
QUERY DB 682 SET DRMNOTMOUNTABLE 1033
QUERY DBSPACE 682 SET DRMPLANPOSTFIX 1030
QUERY DEVCLASS 210 SET DRMPLANPREFIX 1030
QUERY DOMAIN 533 SET DRMPRIMSTGPOOL 1030
QUERY DRIVE 170 SET DRMRPFEXPIREDAYS 1041
cross definition 701, 702, 706 data deduplication (continued)
current server status workspaces 815 turning off 317
custom Cognos report 827 virtual volumes, server-to-server
customer support data deduplication 316
contact xxi data format
cyclic redundancy check NATIVE 248
during a client session 537 data format for storage pool 218, 220, 248
for storage pool volumes 937 definition 255
for virtual volumes 737 operation restrictions 259
performance considerations for nodes 538 data movement, querying 406
performance considerations for storage pools 940 data mover
defining 187, 235
description 53
D managing 220
NAS file server 53
daily monitoring
data protection with WORM media 153
Tivoli Monitoring for Tivoli Storage Manager 789
data retention protection 516, 600
daily monitoring disk storage pools 784
data retention using Centera
daily monitoring of databases 781
overview 49
daily monitoring of server processes 780
unsupported functions 259
daily monitoring scheduled operations 788
data shredding
daily monitoring sequential access storage pools 785
BACKUP STGPOOL command 543
damaged files 943, 944
COPY ACTIVEDATA command 543
data
DEFINE STGPOOL command 543
active backup versions, storing 251
DELETE FILESPACE, command 543
considering user needs for recovering 67
DELETE VOLUME, command 543
exporting 745
description 541
importing 745
enforcing 543
data compression 424
EXPIRE INVENTORY command 543
data deduplication 324
EXPORT NODE command 543, 746
checklist for configuration 302
EXPORT SERVER command 543, 746
client-side 321
GENERATE BACKUPSET command 543, 545
changing location 323
MOVE DATA command 404, 543
client and server settings 294, 318
setting up 542
multiple nodes 322
UPDATE STGPOOL command 543
overview 295
data source configuration 835
single node 322
data storage
controlling duplicate-identification manually 319
active-data pools 251
data deduplication 304, 305, 306, 307, 308, 309, 310, 330,
client files, process for storing 5
331, 332, 333, 334, 335, 336
concepts overview 15
DEDUPLICATION parameter 318
considering user needs for recovering 67
DEDUPREQUIRESBACKUP server option 315
deleting files from 415
definition 293
evaluating 68
detecting security attacks 311
example 253
duplicate-identification processes 314, 318, 321
managing 20
IDENTIFY DUPLICATES command 319
monitoring 934
limitations 297
planning 68
managing 314
server options affecting 70
moving or copying data 316
tailoring definitions 768
node replication 974, 1025
using another IBM Tivoli Storage Manager server 737
options for 324
using disk devices 71
planning 299, 300
using the storage hierarchy 280
processing 313
data validation
protecting data 315
during a client session 537
reclamation 315
for storage pool volumes 937
requirements 302
for virtual volumes 737
server-side 294, 318
performance considerations for nodes 538
specifying the size of objects to be deduplicated 323
performance considerations for storage pools 940
statistics
database
displaying information about files with links to a
audits 656
volume 326
backup 918, 919, 923, 924
querying a duplicate-identification process 326, 327,
buffer size 656
329
description of 655
querying a storage pool 325
increase the size 36
testing
increasing the size 683, 684
restore operations 312
log files, alternative locations 690
space savings 313
managing 655
Tivoli Storage Manager for Virtual Environments 332
deployment (continued) device type (continued)
verifying 443 4MM 190, 191
descriptions, for archive packages 563, 564 8MM 190, 191
DESTINATION parameter (storage pool) 479, 504 CARTRIDGE 191
destroyed volume access mode 269, 956 CENTERA 49
determining DISK 190
cause of ANR9999D messages 862 DLT 190, 191
the time interval for volume check in 195 DTF 190, 191
device 87 ECARTRIDGE 191
attaching to server 229 FILE 190
multiple types in a library 60 GENERICTAPE 190, 191
name 86 LTO 192, 203
supported by IBM Tivoli Storage Manager 44 multiple in a single library 60
device class OPTICAL 190
3570 190, 191 QIC 190, 191
3590 190, 191 REMOVABLEFILE 190
3592 191 SERVER 190, 191, 739, 741
4MM 190, 191 VOLSAFE 207
8MM 190, 191 WORM 190, 191
amount of space used 401 WORM12 191
CARTRIDGE 191 WORM14 191
CENTERA 49 device utilities 87
defining 190 device, storage
description of 22 automated library device 110
DISK 190 disk 71
DLT 190, 191 manual library device 132
DTF 190, 191 optical device 128, 132
ECARTRIDGE 191 removable media device 128, 199
FILE 190 required IBM Tivoli Storage Manager definitions 66
FORMAT parameter 193 supported devices 44
GENERICTAPE 190, 191 devices 87
LTO 203 configure 140, 141, 142
OPTICAL 190 configuring 96
QIC 190, 191 defining 185
REMOVABLEFILE 199 diagnosing ANR9999D messages 862
requesting information about 210 differential backup
selecting for import and export 757 compared to incremental 13
sequential 191 of image, description 10, 59
SERVER 190, 191, 739 direct-to-tape, policy for 524
StorageTek devices 191, 207 directories
tape 191, 199 deleting from archive packages 565
Ultrium, LTO 191 directory-level backup 245
updating 191, 199 preventing archive of 566
VOLSAFE 207 storage usage for archive packages 563
WORM 190, 191 disability 1129
WORM12 190, 191 DISABLE EVENTS command 862
WORM14 190, 191 DISABLE SESSIONS command 474
device classes disaster recovery
database backups 919 auditing storage pool volumes 944
device configuration file 926, 951 general strategy 737
device driver methods 38, 737
configuring 92 node replication as a method for 1026
for automated library devices 84 providing 737
for IBM 3490, 3570, and 3590 tape drives 88 server
for IBM 3494 or 3495 libraries 90 disaster recovery 1054
for manual tape devices 83 server recovery 1054
IBM Tivoli Storage Manager, installing 83, 84 disaster recovery manager
installing 83, 85 awk script 1068
requirements 83, 85 client recovery information 1029
device drivers 85, 91 creating a disaster recovery plan 1039
installing 88 customizing 1030
device sharing 68 displaying a disaster recovery plan 1041
device special file names 87 expiring a disaster recovery plan 1041
device support 15 features 1029
device type moving volumes back on-site 1048
3570 190, 191 not available in Administration Center 600
3590 191 project plan, checklist 1063
establishing server-to-server communications Exporting BIRT reports
enterprise configuration 700 BIRT reports
enterprise event logging 700 exporting 851
virtual volumes 708 Exporting Cognos reports
estimate network bandwidth 984 Cognos reports
estimate replication 983 exporting 851
estimated capacity for storage pools 386 exporting workspaces 850
estimated capacity for tape volumes 390 EXPQUIET server option 515
event logging 600, 861, 867 EXTEND DBSPACE command 683, 684
event record (for a schedule) EXTERNAL library type 1116
deleting 579, 640 external media management
description of 570, 577 IBM Tivoli Storage Manager setup 130
managing 638 initialization requests 1116
querying 639 interface description 1111
removing from the database 578, 639 overview 130
setting retention period 578, 639 processing during server initialization 1112
event server 875 using with IBM Tivoli Storage Manager
example media-managed storage pools, deleting 131
assigning a default management class 512 volume dismount requests 1122
register three client nodes with CLI 429 volume mount requests 1118
validating and activating a policy set 514 volume release requests 1117
expiration 81
expiration date, setting 636
expiration processing 35
description 934
F
failback, PowerHA 1103
files eligible 481, 514
failover, PowerHA 1103
of subfiles 481, 506, 514, 555
fibre channel devices 90
starting 514
fibre channel SAN-attached devices 93
using disaster recovery manager 515
file data, importing 745
EXPIRE INVENTORY command 35
file deletion option
duration of process 515
setting 426
export
FILE device type
administrator information 754
active-data pools 251
client node information 755
backing up or archiving many small objects 200
data from virtual volumes 775
benefits 48
decided when 747
concurrent access to FILE volumes 49
directly to another server 748
defining device class 190
labeling tapes 750, 757
deleting scratch volumes 629
monitoring 772
free space in directories 402
options to consider 748
setting up storage pool 79
planning for sequential media 757
file exit 861
policy information 756
logging events to 864
PREVIEW parameter 756
file name for a device 86
previewing results 753
file path name 456
querying about a process 772
file retrieval date 293
querying the activity log 774
file server, network-attached storage (NAS)
replacing definitions before 750
backup methods 224
server data 756
registering a NAS node for 234
using scratch media 757
using NDMP operations 58, 215
viewing information about a process 772
file size, determining maximum for storage pool 255
EXPORT ADMIN command 759
file space
export and import data
deleting, effect on reclamation 372
sequential media volumes 756
deleting, overview 467
export Cognos reports 830
description of 454
EXPORT commands 773
merging on import 749, 762
EXPORT NODE command 759
names that do not display correctly 466
EXPORT POLICY command 760
QUERY OCCUPANCY command 400
EXPORT SERVER command 756, 760
querying 454
exporting
renaming 772
administrator data 759
Unicode enabled 465
client node data 759
viewing information about 454
data to tape 758
file space identifier (FSID) 465
description of 745
file spaces
policy data 760
defining 455
server data 760
FILE volumes
subfiles 555
shared 189
exporting and importing TEP workspaces and queries 850
importing (continued) KILL command 622
data storage definitions 766, 768 knowledge bases, searching xix
date of creation 763, 769
description of 745
directing messages to an output file 754, 767
duplicate file spaces 769
L
label
file data 769
automatic labeling in SCSI libraries 148
node replication restriction 976
checking media 152
policy definitions 766
overwriting existing labels 146, 147
server control data 767
sequential storage pools 145, 265
subfiles 555
volume examples 147
subsets of information 771
volumes using a library device 147
importing BIRT reports 856
LABEL LIBVOLUME command
importing customized BIRT reports 856
identifying drives 146
Importing customized Cognos reports 856
insert category 149
importing reports 856
labeling sequential storage pool volumes 146
include-exclude file 32
manually mounted devices 132
creating 32
overwriting existing volume labels 146
description of 29, 490
removable media volumes 146
for policy environment 485, 490
restrictions for VolSafe-enabled drives 207
incomplete copy storage pool, using to restore 957
using a library device 147
incremental backup 494
using a manual library 133
incremental backup, client
using an automated library 100, 114, 127
file eligibility for 494
volume labeling examples 147
frequency, specifying 582
LAN-free data movement 134
full 494
description 14, 57
partial 495
storage pool hierarchy restriction 270
progressive 13
suggested usage 10
incremental replication 987
LDAP-authenticated password xxvi
inheritance model for the simultaneous-write function 346
configuring an LDAP directory server 906
initial replication 986
configuring the server 908
initial start date for schedule 635
policy 907
initial start time for schedule 635
query admin 911
initializing
query node 911
tape volumes 22, 23
register nodes and admin IDs 909
installing IBM Tivoli Storage Manager 422
scenarios 912
instance user ID 620
transport layer security 892
instant archive
update node or admin 910
creating on the server 545
libraries
description of 9, 13
NDMP operations 228
interface, application program
virtual tape library 105
client, registering 425
library
compression option 425
ACSLS (Automated Cartridge System Library
deletion option 425
Software) 47, 122
description of 3
adding volumes 149
registering to server 425
attaching for NAS file server backup 229
simultaneous-write function, version support for 339
auditing volume inventory 162
interfaces to IBM Tivoli Storage Manager 19
automated 160
Internet, searching for problem resolution xix, xx
categories for volumes in IBM 3494 108
introduction to IBM Tivoli Storage Manager 3
configuration example 97, 110, 132
iPad 591
configure for more than one device type 60
defining 169, 185
defining path for 188
J deleting 169
Journal File System 263 detecting changes to, on a SAN 143, 186
external 47
full 161
K IBM 3494 46, 108, 109
managing 168
keepalive, TCP
manual 45, 132, 165
enabling 222
mixing device types 60, 196, 203
overview 221
mode, random or sequential 84
specifying connection idle time 222
overflow location 255
key database
querying 168
adding certificates 888, 889
SCSI 46
password change 888, 889
serial number 186
keyboard 1129
sharing among servers 101, 115
migration, client (continued) mount (continued)
automatic, for HSM client (continued) library 194
threshold 484 limit 194
using management class 498 mode 165
premigration for HSM client 484 operations 165
reconciliation 484 query 167
selective, for HSM client 484 retention period 195
stub file on HSM client 484 wait period 195
migration, server mount point 1016
canceling the server process 397 preemption 626
controlling by file age 286 queue, server option 70
controlling duration 290 relationship to mount limit in a device class 194, 203, 210
controlling start of, server 285 requirements for simultaneous-write operations 358
copy storage pool, role of 292 settings for a client session 423
defining threshold for disk storage pool 285 MOVE DATA command 404
defining threshold for tape storage pool 287 MOVE DRMEDIA command 1048
delaying by file age 286 MOVE NODEDATA 408
description, server process 283 moving a backup set
minimizing access time to migrated files 287 benefits of 550
monitoring thresholds for storage pools 396 to another server 550
multiple concurrent processes moving data 467
random access storage pool 255, 283 from off-site volume in a copy storage pool 405
sequential access storage pool 255, 291 monitoring the movement of 407
problems, diagnosing and fixing 281 procedure 405
providing additional space for server process 398 requesting processing information 406
starting manually 290 to another storage pool 404
starting server process 280, 285 to other volumes in same storage pool 404
threshold for a storage pool multipath I/O 89
random access 283 multiple
sequential access 287, 288 copy storage pools, restoring from 957
mirroring 924 managing IBM Tivoli Storage Manager servers 37
description of 38 managing Tivoli Storage Manager servers 695
MIRRORLOGDIRECTORY server option 692 multiple commands
mixed device types in a library 60, 196, 203 backup and restore 561
mobile client support 554 multiple drive device types 112
mobile connection 591 multiple server instances 620
mobile device 591 multiple servers 732
mobile phone 591 completing tasks 731
mode multiple sessions
client backup 505 on clients for a restore 562
library (random or sequential) 84 multistreaming, concurrent for database backups and
scheduling 580 restores 920, 946
modified mode, description of 505
modifying an existing Cognos report 829
modifying schedules 574
monitoring
N
name of device 86
server-to-server export 754
NAS file server, NDMP operations
monitoring administrator 591
backing up a NAS file server 240
monitoring the Tivoli Storage Manager operations 777
backing up a NAS file server to native pools 241, 242
monitoring workspaces
configuration checklist 222
agent status 815
data format 218
availability 815
data mover, description 53, 187
client missed files 815
defining a data mover 187, 235
client node status 815
defining a storage pool 228
client node storage 815
defining paths to drives
database 815
drives attached only to file server 236
node activity 815
drives attached to file server and Tivoli Storage
schedule 815
Manager server 235
server status 815
obtaining names for devices attached to file server 237
storage device 815
defining paths to libraries 238
storage pool 815
differential image backup, description 59
tape usage 815
full image backup, description 59
tape volume 815
interfaces used with 217
Tivoli Enterprise Portal
managing NAS nodes 219
monitoring workspaces 818
path, description 53, 188
mount
planning 226
count of number of times per volume 391
policy configuration 223, 527
node replication (continued) node, client (continued)
replicating (continued) target 446
data by priority 1013 unlocking 444
data by type 1012 updating 434
scheduling or starting manually 1010 viewing information about 448
throughput, managing 1014 nodes
restoring, retrieving, and recalling data from the moving nodes from schedule 576
target 1026 overview of client and server 421
results, previewing 1009 NOPREEMPT server option 626
retention protection, archive 976 NORETRIEVEDATE server option 293
rules NOTIFY SUBSCRIBERS command 719, 720
attributes 967 number of times mounted, definition 391
definitions 966
disabling and enabling 1019
file spaces 994
hierarchy 967
O
occupancy, querying 399
nodes, individual 996
off-site volume
processing example 968
limiting the number to be reclaimed 381
server 998
off-site volumes
Secure Sockets Layer (SSL) 1007, 1008
moving data in a copy storage pool 405
servers
offsite recovery media
communications, setting up 990
specify defaults 1033
configurations 964
offsite recovery media (for DRM)
source, adding 1006
volumes
target 1006, 1007, 1026
moving back on-site 1048
settings, displaying
sending offsite 1046
file spaces 1022
states 1044
nodes 1023
offsite volume access mode 269
rules 1023
offsite volumes
SSL (Secure Sockets Layer) 1007, 1008
limiting the number to be reclaimed 255
state, replication 970
one-drive library, volume reclamation 255, 376
task tips
open registration
monitoring processes 981
description 422
nodes, adding and removing 978
enabling 429
previewing results 980
process 423
processing, managing 980
setting 422
rules, changing replication 978
operations available to client 10
servers, managing 979
Operations Center
validating a configuration 980
getting started 590
verifying results 981
hub server 592
node replication method 985, 986
opening 589
node replication synchronization 985
reconfiguring 594
node replication tiering 1092
spoke server 592, 593
node, client
web server 595
adding 421
operator privilege class
agent 446
reducing 900
amount of space used 400
revoking 901
creating backup sets for 546
optical device
file spaces, QUERY OCCUPANCY command 400
defining device class 190
finding tapes used by 395
reclamation for media 377
immediate processing 585
option set, client
importing 770
adding client options to 469
locking 444
assigning clients to 470
managing registration 422, 431, 605
copying 470
options file 426
creating 469
performing operations for 537, 573, 579
deleting 470
privilege class for scheduling operations for 568
deleting an option from 470
proxy node relationships 445
for NAS node 224
querying 448
requesting information about 470
reducing archive packages for 565
updating description for 470
registering 425
option, server
removing 445
3494SHARED 70
renaming 444
ACSLS options 70
scheduling operations for 568
ASSISTVCRRECOVERY 70
setting password authentication 915
AUDITSTORAGE (storage audit) 607
setting scheduling mode 581
changing with SETOPT command 629
setting up subfile backups 555
CHECKTAPEPOS 70
pool, storage PowerHA SystemMirror for AIX
3592, special considerations for 196 requirements 1102
active-data pool 251 preemption
amount of space used 401 mount point 626
auditing a volume 934 volume access 627
comparing primary and copy types 413 prefix, for recovery instructions 1030
copy 251 prefix, for recovery plan file 1030
creating a hierarchy 270 prefix, server 733
data format 218, 255, 259 premigration 484
defining 255 PREPARE command 1039
defining a copy storage pool 411 PREVIEW parameter 756, 764
defining for disk, example 259, 271 primary volumes unavailable for restore 957
defining for NDMP operations 228 private category, 349X library 108
defining for tape, example 259, 271 private volumes 53
deleting 415 privilege class, administrator
description of 250 description 896
destination in copy group 504, 510 granting authority 896
determining access mode 255, 412 reducing 900
determining maximum file size 255 revoking all 901
determining whether to use collocation 255, 363, 412 privilege class, policy
disk 25 description 896, 898
duplicate, using to restore 957 granting 900
enabling cache for disk 255, 292 revoking 900, 901
estimating space for archived files on disk 385 privilege class, storage
estimating space for backed up files on disk 384 description 896
estimating space for disk 383 granting 900
estimating space for sequential 385 reducing 900
estimating space in multiple 270 revoking 900, 901
incomplete, using to restore 957 problem determination
increasing sizes 25 describing problem for IBM Software Support xxii
LTO Ultrium, special considerations for 203 determining business impact for IBM Software
managing 249 Support xxii
monitoring 386 migration 281
moving files 404 submitting a problem to IBM Software xxii
moving files between 404 process
multiple, using to restore 957 background 624
next storage pool canceling 625
definition 270 drive clean error checking 180
deleting 415 expiration 934
migration to 281, 396 number for migration 255, 283
overview 51 reclamation 372, 380
policy use 504, 510 processor value unit 608
primary 250 Product ID (PID) 608
querying 386 production node, PowerHA
renaming 411 installing IBM Tivoli Storage Manager client 1107
search-and-selection order for active files 253 installing IBM Tivoli Storage Manager server 1106
simultaneous-write function 337 profile
updating 255 associating configuration information with 715
updating for disk, example 260, 272 changing 715, 718, 720
using cache on disk 255, 292 default 717, 724
validation of data 937 defining 715
viewing information about 386 deleting 720, 721
portable computer 591 description 714
portable media getting information about 722
description of 6, 8, 545 locking 719
restoring from 550 problems with synchronization 730
Power HA 1102 unlocking 719
PowerHA 1103, 1104, 1105 progressive incremental backup 13
configuring 1103, 1104 protecting your data 38, 153
configuring the server 1107 active-data pools 251
failback 1103 data deduplication 315
failover 1103 simultaneous-write function 337
installing 1103 protection options
installing and configuring 1103 client 8
production node 1102 server 14, 38
standby node 1102 proxy node relationships 447
troubleshooting 1109
recovery plan file (continued) reports (continued)
stanzas 1068 historical reports
recovery, disaster client activity 820, 835
auditing storage pool volumes 944 running reports 819
general strategy 737 server trends 838
media 1039 viewing reports 819
methods 737 REQSYSAUTHOUTFILE server option 896
providing 737 requirements for disk subsystems 71
REGISTER ADMIN command 898 resetting
REGISTER LICENSE command 606 administrative password 901
REGISTER NODE command 452 user password expiration 911
registering RESOURCETIMEOUT server option 70
client option sets 424 restartable export 751
workstation 425 restartable restore session, client
registration canceling 475
description of 422 interrupting, active 476
licensing for a client node 605 requesting information about 475
licensing for an administrator 605 restore
managing client node 422, 431 client 560
setting for a client node 422 entire file systems 558
source server 425 files to a point-in-time 560
relationships selecting individual files 550
among clients, storage, and policy 486 RESTORE DB command 623
remote access to clients 449 restore operations 920
removable file system device RESTORE STGPOOL command 960
labeling requirements 129 restore to point-in-time, enabling for clients 530
REMOVABLEFILE device type, defining and RESTOREINTERVAL server option (restore interval for
updating 199 restartable restore sessions) 474, 481, 514
support for 128, 199 restoring
removable media 48 clients, optimizing restore 251, 556
removable media storage devices file 482
defining 1109 storage pools with incomplete volumes 957
REMOVE ADMIN command 902 Restoring backups of Tivoli Monitoring for Tivoli Storage
REMOVE NODE command 445 Manager 853
RENAME ADMIN command 901 Restoring IBM Tivoli Monitoring for Tivoli Storage Manager
RENAME FILESPACE command 772 DB2
RENAME NODE command 444 backing up for Tivoli Monitoring 852
RENAME SCRIPT command 649 restoring image data
RENAME SERVERGROUP command 736 from backup sets 550
RENAME STGPOOL command 411 restriction
renamed file spaces 465 ASCII characters in administrative Web interface 640
renaming drive cleaning 177
administrator ID 901 non-root users performing backups 904
NAS node 219 serial number detection 143
storage pool 411 retain extra versions, description of 479, 506
renaming the host 627 retain only version, description of 479, 506
renaming the server 627 retaining data using Centera
replicate data 983 overview 49
replicate NAS node 248 unsupported functions 259
replication 984, 1016 retention grace period
See also node replication description of archive 500
recovering an LDAP server 958 description of backup 500
replication method 986, 987 for backup sets 548
replication performance 1015, 1016 using archive 500
replication time 984 using backup 500
replication workload 1016 RETEXTRA parameter 479, 506
report studio 830, 831 RETONLY parameter 479, 506
reporting retrieval date for files 293
custom 813 retrieval from archive
modifying for performance 840 archive package 563
modifying queries 840 file 483
reporting and monitoring 813 reuse of sequential volumes
reporting ANR9999D messages 862 delaying 382, 934
reports storage pool volumes 156
client activity 820, 835 volume pending state 391
custom reports 820, 827, 828, 829 roll-forward recovery 659
ROLLBACK command 654
security (continued) server (continued)
managing access 885, 903 utilities 19
password expiration for nodes 911 viewing information about 796
privilege class authority for administrators 896 viewing information about processes 625, 795
Secure Sockets Layer (SSL) for node replication 1007, 1008 wizards 19
server options 896 Server
security, replicating node data 963 starting 617
SELECT command 798 server console
customizing queries 799 logging events to 863
selective backup 482, 496 server console, description of 896
selective recall 484 SERVER device type 251, 737
sending commands to servers 732 server group
separate DB2 instance 1105 copying 736
sequence number 469, 470 defining 735
sequential mode for libraries 84 deleting 736
sequential storage pool member, deleting 737
auditing a single volume in 942 moving a member 737
auditing multiple volumes in 941 querying 736
collocation 369 renaming 736
estimating space 385 updating description 736
migration threshold 287 server instances
reclamation 372 owner 620
SERIAL command 643 server option
serial number 3494SHARED 70
automatic detection by the server 143, 186, 188 ACSLS options 70
for a drive 186 ACTIVELOGDIRECTORY 686, 689
for a library 186, 188 ACTIVELOGSIZE 686
serialization parameter 479, 504, 510 ASSISTVCRRECOVERY 70
server AUDITSTORAGE (storage audit) 607
backing up subfiles on 554 changing with SETOPT command 629
canceling process 625 CHECKTAPEPOS 70
changing the date and time 624 COMMTIMEOUT (communication timeout) 472, 473
configure to use z/OS media server storage 136 DRIVEACQUIRERETRY 70
console, MMC snap-in 19 EXPINTERVAL 514
deleting 708 EXPQUIET 515
description of 3 IDLETIMEOUT (idle timeout) 472, 473, 794
disabling access 474 NOPREEMPT 70, 626
disaster recovery 38 NORETRIEVEDATE (file retrieval date) 293
enabling access 474 overview 20
halting 621 QUERYAUTH 896
importing subfiles from 555 REQSYSAUTHOUTFILE 896
instances RESOURCETIMEOUT 70
multiple on single system 620 RESTOREINTERVAL (restore interval) 474, 481, 514
owner ID 620 SEARCHMPQUEUE 70
maintaining, overview 19 THROUGHPUTDATATHRESHOLD 473
managing multiple 37 THROUGHPUTTIMETHRESHOLD 473
managing operations 605 TXNGROUPMAX (maximum transaction group size) 272
managing processes 624 using server performance options 630
messages 862 server options 629
Microsoft Management Console snap-in 19 ARCHFAILOVERLOGDIRECTORY 692
monitoring 697 MIRRORLOGDIRECTORY 692
multiple instances 620 TECUTF8EVENT 867
network of IBM Tivoli Storage Manager 37 server options file 927
network of Tivoli Storage Manager servers 695 server script
options continuation characters 644
adding or updating 629 copying 648
prefix 733 defining 640
protecting 38 deleting 649
querying about processes 625, 795 EXIT statement 646
querying options 796 GOTO statement 646
querying status 796 IF clause 645
running multiple servers 620 querying 648
setting the server name 627 renaming 649
starting 614, 615 routing commands in 733
root user ID 619 running 649
stopping 621, 622 running commands in parallel 643
updating 708 running commands serially 643
SnapLock storage area network (SAN)
data protection, ensuring 523 client access to devices 57
event-based retention 522 device changes, detecting 143
reclamation 519 LAN-free data movement 57
retention periods 519 NDMP operations 58, 215
WORM FILE volumes, setting up 523 policy for clients using LAN-free data movement 528
SnapMirror to Tape 247 sharing a library among servers 55, 101, 115
snapshot, using in backup 9, 11, 924 storage agent role 57
using in directory-level backups 246 storage devices 95, 190, 191
SNMP storage hierarchy 23
agent 869 copying active backup data 251
communications 869 defining in reverse order 259, 271
configuring 873 establishing 270
enabled as a receiver 861, 869 example 253
heartbeat monitor 861, 869 for LAN-free data movement 270
manager 869 how the server stores files in 272
subagent 869 next storage pool
software support definition 270
describing problem for IBM Software Support xxii deleting 415
determining business impact for IBM Software migration to 281, 396
Support xxii restrictions 270
submitting a problem xxii staging data on disk for tape storage 280
Software Support storage management policies
contact xxi description of 29, 485
Sony WORM media (AIT50 and AIT100) 153 managing 477
source server 739 tailoring 498
space using standard 479
directories associated with FILE-type device classes 402 storage occupancy, querying 399
space requirements 983 storage pool
space-managed file 483 3592, special considerations for 196
special file names 86, 87 active-data pool 251
spoke server 592 amount of space used 401
adding 593 auditing a volume 934
SQL 798 comparing primary and copy types 413
SQL activity summary table 802 copy 251
SQL SELECT * FROM PVUESTIMATE_DETAILS 611 creating a hierarchy 270
ssl 891 data format 218, 255, 259
configuration 891 defining 255
SSL defining a copy storage pool 411
changing settings 594 defining for disk, example 259, 271
SSL (Secure Sockets Layer) defining for NDMP operations 228
Administration Center 890 defining for tape, example 259, 271
certificate deleting 415
adding CA-signed 889 description of 250
adding to key database 888 destination in copy group 504, 510
communication using 885 determining access mode 255, 412
digital certificate file protection 928 determining maximum file size 255
SSLTCPADMINPORT determining whether to use collocation 255, 363, 412
server option 887 disk 25
SSLTCPPORT duplicate, using to restore 957
server option 887 enabling cache for disk 255, 292
stand-alone mode 617 estimating space for archived files on disk 385
stand-alone Tivoli Common Reporting 831, 833, 834, 835 estimating space for backed up files on disk 384
standard label 22, 23 estimating space for disk 383
standard management class, copying 503 estimating space for sequential 385
standard storage management policies, using 479 estimating space in multiple 270
standby node, PowerHA 1108 incomplete, using to restore 957
start time, randomizing for a schedule 583 increasing sizes 25
starting the server 614, 615, 620 LTO Ultrium, special considerations for 203
authorizing root users 618 managing 249
startup window, description of 583 monitoring 386
static serialization, description of 504, 510 moving files 404
status of a volume in an automated library 54 moving files between 404
stopping multiple, using to restore 957
server 622 next storage pool
stopping the server 621 definition 270
storage agent 57 deleting 415
Tivoli Storage Manager for Space Management (continued) Unicode (continued)
reconciliation between client and server 484 file space identifier (FSID) 465, 466
selective migration 484 how clients are affected by migration 463
setting policy for 498, 503 how file spaces are automatically renamed 461
simultaneous-write function, version support for 339 migrating client file spaces 458
space-managed file, definition 483 options for automatically renaming file spaces 459
stub file 484 Unicode versions
Tivoli Storage Manager for z/OS Media 136 planning for 461
Tivoli technical training xix UNIQUETDPTECEVENTS option 865
TLS (Transport Layer Security) UNIQUETECEVENTS option 865
specifying communication ports 887 UNLOCK ADMIN command 902
training, Tivoli technical xix UNLOCK NODE command 444
transactions, database 655, 692 UNLOCK PROFILE command 719, 720
transparent recall 484 unplanned shutdown 621
Transport Layer Security (TLS) 886 unreadable files 943, 944
specifying communication ports 887 UPDATE ADMIN command 901
troubleshooting UPDATE ARCHIVE command 565
errors in database with external media manager 132 UPDATE BACKUPSET command 552
TSMDLST 87 UPDATE CLIENTOPT command 470
tsmdlst utility 87 UPDATE CLOPTSET command 470
TXNBYTELIMIT client option 272 UPDATE COPYGROUP command 504, 510
TXNGROUPMAX server option 272 UPDATE DEVCLASS command 191
type, device UPDATE DOMAIN command 502
3570 190, 191 UPDATE DRIVE command 170
3590 191 UPDATE LIBRARY command 168
4MM 190, 191 UPDATE LIBVOLUME command 53, 161
8MM 190, 191 UPDATE MGMTCLASS command 503
CARTRIDGE 191 UPDATE NODE command 434, 464, 468
CENTERA 49 UPDATE POLICYSET command 502
DISK 190 UPDATE RECOVERYMEDIA command 1039
DLT 190, 191 UPDATE SCHEDULE command 635
DTF 190, 191 UPDATE SCRIPT command 647
ECARTRIDGE 191 UPDATE SERVER command 708, 709
FILE 190 UPDATE VOLUME command 266
GENERICTAPE 190, 191 URL for client node 422
LTO 192, 203 user exit 861
multiple in a single library 60 user exit declarations 877, 1123
OPTICAL 190 user ID, administrative
QIC 190, 191 creating automatically 452
REMOVABLEFILE 190 description of 422
SERVER 190, 191, 739, 741 preventing automatic creation of 452
VOLSAFE 207 user-exit program 879, 1125
WORM 190, 191 utilities, for server 19
WORM12 191
WORM14 191
typographic conventions xxiii V
validate
node data 538
U VALIDATE LANFREE command 135
ulimits 615 VALIDATE POLICYSET command 512
Ultrium, LTO device type validating data
device class, defining and updating 203 during a client session 537
encryption 171, 205, 538 for storage pool volumes 937
WORM 153, 207 for virtual volumes 737
unavailable access mode logical block protection 172
description 269 performance considerations for nodes 538
marked by server 167 performance considerations for storage pools 940
marked with PERMANENT parameter 166 variable, accounting log 811
uncertain, schedule status 578, 639 VARY command 80
Unicode varying volumes on or off line 80
automatically renaming file space 459 VERDELETED parameter 479, 506
client platforms supported 457 VEREXISTS parameter 479, 506
clients and existing backup sets 465 Verifying and deleting Tivoli Monitoring for Tivoli Storage
deciding which clients need enabled file spaces 458 Manager backups
description of 457 DB2
displaying Unicode-enabled file spaces 465 verifying and deleting backups 847
example of migration process 464 versions data deleted, description of 479, 506
zosmedia library 47