TrueCopy Extended Distance User's Guide
MK-97DF8054-01
Copyright © 2008 Hitachi Ltd., Hitachi Data
Systems Corporation, ALL RIGHTS RESERVED
Table of Contents
Preface . . . ix
    Product Version . . . x
    Document Revision Level . . . x
    Changes in This Release . . . x
    Release Notes . . . x
    Document Organization . . . x
    Intended Audience . . . xi
    Referenced Documents . . . xi
    Convention for Storage Capacity Values . . . xii
    Safety and Warnings . . . xii
    Getting Help . . . xiii
        Support Contact Information . . . xiii
        HDS Support Web Site . . . xiii
    Comments . . . xiii
1 Overview . . . 1-1
    How TCE Works . . . 1-2
    Typical Environment . . . 1-2
        Volume Pairs . . . 1-3
        Data Pools . . . 1-4
        Guaranteed Write Order and the Update Cycle . . . 1-4
            Extended Update Cycles . . . 1-5
        Consistency Groups . . . 1-6
        Differential Management LUs (DMLU) . . . 1-7
    TCE Interfaces . . . 1-7
2 Plan and Design—Sizing Data Pools, Bandwidth
    Calculating Data Pool Size . . . 2-4
        Data Pool Key Points . . . 2-7
    Determining Bandwidth . . . 2-7
5 Requirements and Specifications . . . 5-1
    TCE System Requirements . . . 5-2
    TCE System Specifications . . . 5-2
9 Monitoring and Maintenance
    Changing Data Pool Threshold Value . . . 9-5
    Monitoring the Remote Path . . . 9-6
        Changing Remote Path Bandwidth . . . 9-6
    Monitoring Cycle Time . . . 9-6
        Changing Cycle Time . . . 9-7
    Changing Copy Pace . . . 9-7
    Checking RPO — Monitoring P-VOL/S-VOL Time Difference . . . 9-8
    Routine Maintenance . . . 9-8
        Deleting a Volume Pair . . . 9-8
        Deleting Data Pools . . . 9-9
        Deleting a DMLU . . . 9-9
        Deleting the Remote Path . . . 9-9
    TCE Tasks before a Planned Remote Array Shutdown . . . 9-10
    TCE Tasks before Updating Firmware . . . 9-10
10 Troubleshooting . . . 10-1
    Troubleshooting Overview . . . 10-2
    Correcting Data Pool Shortage . . . 10-2
    Correcting Array Problems . . . 10-3
        Delays in Settling of S-VOL Data . . . 10-4
    Correcting Resynchronization Errors . . . 10-4
    Using the Event Log . . . 10-6
    Miscellaneous Troubleshooting . . . 10-7
A Operations Using CLI
    Deleting a Pair . . . A-14
    Changing Pair Information . . . A-15
    Monitoring Pair Status . . . A-15
    Confirming Consistency Group (CTG) Status . . . A-16
    Procedures for Failure Recovery . . . A-17
        Displaying the Event Log . . . A-17
        Reconstructing the Remote Path . . . A-17
    Sample Script . . . A-18
E Wavelength Division Multiplexing (WDM) and Dark Fibre
    WDM and Dark Fibre . . . E-2
Glossary
Index
Preface
Product Version
This document applies to Hitachi AMS 2000 Family firmware versions 0850
and higher.
Release Notes
Make sure to read the Release Notes before enabling and using this product.
The Release Notes are located on the installation CD. They may contain
requirements and/or restrictions that are not fully described in this
document. The Release Notes may also contain updates and/or corrections
to this document.
Document Organization
Thumbnail descriptions of the chapters are provided in the following table.
Click the chapter title in the first column to go to that chapter. The first page
of every chapter or appendix contains a brief list of the contents of that
section of the manual, with links to the pages.
Table iii-1: Chapter/Appendix Titles and Descriptions

Chapter 1, Overview: Provides descriptions of TrueCopy Extended Distance components and how they work together.
Chapter 2, Plan and Design—Sizing Data Pools, Bandwidth: Provides instructions for measuring write-workload and calculating data pool size and bandwidth.
Chapter 3, Plan and Design—Remote Path: Provides supported iSCSI and fibre channel configurations, with information on WDM and dark fibre.
Chapter 4, Plan and Design—Arrays, Volumes, Operating Systems: Discusses the arrays and volumes you can use for TCE.
Chapter 5, Requirements and Specifications: Provides system information.
Chapter 6, Installation and Setup: Provides procedures for installing and setting up the TCE system and creating the initial copy.
Chapter 7, Pair Operations: Provides information and procedures for TCE operations.
Chapter 8, Example Scenarios and Procedures: Provides backup, data moving, and disaster recovery scenarios and procedures.
Chapter 9, Monitoring and Maintenance: Provides monitoring and maintenance information.
Chapter 10, Troubleshooting: Provides troubleshooting information.
Appendix A, Operations Using CLI: Provides detailed Command Line Interface (CLI) instructions for configuring and using TCE.
Appendix B, Operations Using CCI: Provides detailed Command Control Interface (CCI) instructions for configuring and using TCE.
Appendix C, Cascading with SnapShot: Provides supported configurations, operations, and related information for cascading TCE with SnapShot.
Appendix D, Installing TCE when Cache Partition Manager in Use: Provides required information for using TCE when Cache Partition Manager is in use.
Appendix E, Wavelength Division Multiplexing (WDM) and Dark Fibre: Provides a discussion of WDM and dark fibre for channel extenders.
Glossary: Provides definitions for terms and acronyms found in this document.
Index: Provides links and locations to specific information in this document.
Intended Audience
This document is intended for users with the following background:
• A background in data processing and an understanding of RAID storage systems and their basic functions.
• Familiarity with Hitachi Modular Storage systems.
• Familiarity with operating systems such as Windows 2000, Windows Server 2003, or UNIX.
Referenced Documents
The following list of documents may include HDS documents and documents
from other companies. These documents contain information that is related
to the topics in this document and can provide additional information about
them.
• Hitachi AMS 2000 Family Copy-on-Write SnapShot User's Guide (MK-
97DF8124)
• Hitachi Storage Navigator 2 Command Line Interface (CLI) Reference
Guide for Replication (MK-97DF8153)
• Hitachi AMS 2000 Family Command Control Interface (CCI) Reference
Guide (MK-97DF8121)
• Hitachi AMS 2000 Family Command Control Interface (CCI) User’s
Guide (MK-97DF8123)
• Hitachi AMS 2000 Family Cache Partition Manager User’s Guide (MK-
97DF8012)
Getting Help
If you have questions after reading this guide, contact an HDS authorized
service provider or visit the HDS support website: http://support.hds.com
NOTE: To help improve the quality of our service and support, your calls
may be recorded or monitored.
Comments
Your comments and suggestions to improve this document are greatly
appreciated. When contacting HDS, please include the document title,
number, and revision. Please refer to specific section(s) and paragraph(s)
whenever possible.
• E-mail: doc.comments@hds.com
• Mail: Technical Writing, M/S 35-10
Hitachi Data Systems
10277 Scripps Ranch Blvd.
San Diego, CA 92131
Thank you! (All comments become the property of Hitachi Data Systems
Corporation.)
1 Overview
Typical Environment
TCE Interfaces
How TCE Works
With TrueCopy Extended Distance (TCE), you create a copy of your data at
a remote location. After the initial copy is created, only changed data
transfers to the remote location.
During and after the initial copy, the primary volume on the local side
continues to be updated with data from the host application. When the host
writes data to the P-VOL, the local array immediately returns a response to
the host. This completes the I/O processing. The array performs the
subsequent processing independently from I/O processing.
Updates are periodically sent to the secondary volume on the remote side at the end of the "update cycle", a time period established by the user. The cycle time is based on the recovery point objective (RPO), which is the amount of data, measured in time (two hours' worth, four hours' worth), that the business can lose in a disaster before the operation is irreparably damaged. If the RPO is two hours, the business must be able to recover all data up to two hours before the disaster occurred.
For a detailed discussion of the disaster recovery process using TCE, please
refer to Process for Disaster Recovery on page 8-11.
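As an illustration of this relationship, the following Python sketch checks whether a configured cycle time keeps worst-case data loss inside the RPO. The function name, the sample values, and the worst-case rule are assumptions for illustration, not a formula from this guide.

# Minimal sketch relating RPO to TCE cycle time. Assumption (not from this
# guide): worst-case S-VOL lag is roughly the cycle time plus the time needed
# to transfer one cycle's worth of differential data.

def worst_case_lag_s(cycle_time_s: float, cycle_data_mb: float,
                     bandwidth_mb_s: float) -> float:
    """Rough upper bound on how far S-VOL data trails the P-VOL, in seconds."""
    return cycle_time_s + cycle_data_mb / bandwidth_mb_s

rpo_s = 2 * 60 * 60   # two-hour RPO, as in the example above
lag = worst_case_lag_s(cycle_time_s=300, cycle_data_mb=9000, bandwidth_mb_s=12.5)
print(f"worst-case lag {lag:.0f}s, within RPO: {lag <= rpo_s}")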
Typical Environment
A typical configuration consists of the following elements. Many, but not all, require user setup.
• Two AMS arrays—one on the local side connected to a host, and one on
the remote side connected to the local array. Connections are made via
fibre channel or iSCSI.
• A primary volume on the local array that is to be copied to the
secondary volume on the remote side.
• A differential management LU on the local and remote arrays, which holds TCE information when the array is powered down.
• Interface and command software, used to perform TCE operations.
Command software uses a command device (volume) to communicate
with the arrays.
Figure 1-1 shows a typical TCE environment.
Volume Pairs
When the initial TCE copy is completed, the production and backup volumes
are said to be “Paired”. The two paired volumes are referred to as the
primary volume (P-VOL) and secondary volume (S-VOL). Each TCE pair
consists of one P-VOL and one S-VOL. When the pair relationship is
established, data flows from the P-VOL to the S-VOL.
While in the Paired status, new data is written to the P-VOL and then
periodically transferred to the S-VOL, according to the user-defined update
cycle.
When a pair is "split", the data flow between the volumes stops. At this time, all the differential data that has accumulated in the local array since the last update is copied to the S-VOL. This ensures that S-VOL data is identical to P-VOL data and is consistent and usable.
During normal TCE operations, the P-VOL remains available for read/write
from the host. When the pair is split, the S-VOL also is available for read/
write operations from a host.
Data Pools
Data from the host is continually written to the P-VOL as it occurs. The data pool on the local side stores the changed data that accumulates before the next update cycle. The local data pool is used to update the S-VOL.
As explained in the previous section, data is copied from the P-VOL and local data pool to the S-VOL following the update cycle. When the update is complete, S-VOL data is identical to P-VOL data as it existed at the end of the cycle. Because the P-VOL continues to be updated during and after the S-VOL update, live P-VOL data and S-VOL data are not identical.
However, the S-VOL and P-VOL can be made identical when the pair is split.
During this operation, all differential data in the local data pool is
transferred to the S-VOL, as well as all cached data in host memory. This
cached data is flushed to the P-VOL, then transferred to the S-VOL as part
of the split operation, thus ensuring that the two are identical.
Figure 1-2 shows how S-VOL data is maintained at one update cycle back
of P-VOL data.
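The behavior shown in Figure 1-2 can be modeled in a few lines. The sketch below is illustrative only (the dictionaries and function names are assumptions, not product code): host writes land on the P-VOL at once, differentials collect in the local data pool, and each cycle end settles them to the S-VOL, leaving the S-VOL one cycle behind.

# Toy model of the TCE update cycle: the S-VOL trails the P-VOL by one cycle.
pvol = {}
svol = {}
pool = {}   # differential data awaiting the next cycle

def host_write(block, data):
    pvol[block] = data      # P-VOL is updated immediately
    pool[block] = data      # change is tracked in the local data pool

def end_of_cycle():
    svol.update(pool)       # accumulated differentials settle to the S-VOL
    pool.clear()

host_write(0, "a"); host_write(1, "b")
end_of_cycle()              # S-VOL now equals the P-VOL as of cycle end
host_write(1, "b2")         # a new write: the S-VOL is one cycle behind again
print(pvol, svol)           # {0: 'a', 1: 'b2'} vs {0: 'a', 1: 'b'}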
Extended Update Cycles
If inflow to the P-VOL increases, all of the update data may not be sent within the cycle time. This causes the cycle to extend beyond the user-specified cycle time.
When inflow decreases, updates again complete within the cycle time. Cycle time should be determined according to a realistic assessment of write workload, as discussed in Chapter 2, Plan and Design—Sizing Data Pools, Bandwidth.
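Numerically, the extension works out as shown in the sketch below, a simplification under assumed numbers (it ignores data that continues to arrive while the cycle drains): the effective cycle length is the configured cycle time or the drain time of the accumulated differentials, whichever is larger.

def effective_cycle_s(cycle_time_s, inflow_mb_s, bandwidth_mb_s):
    """Approximate cycle duration when inflow may exceed path bandwidth."""
    accumulated_mb = inflow_mb_s * cycle_time_s
    return max(cycle_time_s, accumulated_mb / bandwidth_mb_s)

print(effective_cycle_s(300, 5.0, 12.5))    # 300.0: updates finish in time
print(effective_cycle_s(300, 20.0, 12.5))   # 480.0: cycle extends past 300 s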
Consistency Groups
Application data often spans more than one volume. With TCE, it is possible
to manage operations spanning multiple volumes as a single group. In a
“consistency group” (CTG), all primary logical volumes are treated as a
single entity.
Managing primary volumes as a consistency group allows TCE operations to
be performed on all volumes in the group concurrently. Write order in
secondary volumes is guaranteed across application logical volumes.
Figure 1-3 shows TCE operations with a consistency group.
Differential Management LUs (DMLU)
The DMLU is an exclusive volume used for storing TrueCopy information
when the local or remote array is powered down. The DMLU is hidden from
a host. User setup is required on the local and remote arrays.
TCE Interfaces
TCE can be set up, used, and monitored using any of the following interfaces:
• The GUI (Hitachi Storage Navigator Modular 2 Graphical User Interface), a browser-based interface from which TCE can be set up, operated, and monitored. The GUI provides the simplest method for performing operations, requiring no previous experience. Scripting is not available.
• CLI (Hitachi Storage Navigator Modular 2 Command Line Interface), from which TCE can be set up and all basic pair operations can be performed: create, split, resynchronize, restore, swap, and delete. The GUI also provides these functions. CLI also has scripting capability.
• CCI (Hitachi Command Control Interface), which is used to display volume information and perform all copying and pair-managing operations. CCI provides full scripting capability, which can be used to automate replication operations. CCI requires more experience than the GUI or CLI. CCI is required for failover and failback operations and, on Windows 2000 Server, mount/unmount operations.
HDS recommends that new users with no CLI or CCI experience begin operations with the GUI. Users who are new to replication software but have CLI experience in managing arrays may want to continue using CLI, though the GUI is an option. The same recommendation applies to CCI users.
2 Plan and Design—Sizing Data Pools, Bandwidth
Measuring Write-Workload
Determining Bandwidth
2. At the end of the collection period, convert the data to MB/second and import it into a spreadsheet tool. In Figure 2-1, Write-Workload Spreadsheet, column C shows an example of raw data collected over 10-minute segments. The sketch following this step shows the conversion.
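The conversion itself is simple arithmetic. The sketch below assumes cumulative byte counters sampled every 10 minutes, which is an assumption about the collection tool rather than something this guide specifies.

# Convert cumulative write counters (bytes) into MB/s for the spreadsheet.
INTERVAL_S = 600   # 10-minute segments, matching the example in column C

def to_mb_per_s(cumulative_bytes):
    return [(cur - prev) / INTERVAL_S / 1_000_000
            for prev, cur in zip(cumulative_bytes, cumulative_bytes[1:])]

print(to_mb_per_s([0, 3_000_000_000, 9_600_000_000]))   # [5.0, 11.0]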
Determining Bandwidth
The purpose of this section is to ensure that you have sufficient bandwidth between the local and remote arrays to copy all your write data in the time frame you prescribe. The goal is to size the network so that it is capable of transferring estimated future write workloads.
TCE requires two remote paths, each with a minimum bandwidth of 1.5 Mbps.
To determine the bandwidth
1. Graph the data in column “C” in the Write-Workload Spreadsheet on
page 2-4.
2. Locate the highest peak. Based on your write-workload measurements, this is the greatest amount of data that will need to be transferred to the remote array. Bandwidth must accommodate the maximum possible workload to ensure that the system's capacity is not exceeded. Exceeding capacity causes further problems, such as new write data backing up in the data pool and update cycles becoming extended.
3. Though the highest peak in your workload data should be used for
determining bandwidth, you should also take notice of extremely high
peaks. In some cases a batch job, defragmentation, or other process
could be driving workload to abnormally high levels. It is sometimes
worthwhile to review the processes that are running. After careful
analysis, it may be possible to lower or even eliminate some spikes by
optimizing or streamlining high-workload processes. Changing the
timing of a process may lower workload.
4. Although bandwidth can be increased later, Hitachi recommends that the projected growth rate be factored in over a 1-, 2-, or 3-year period. The sketch following this list summarizes the calculation.
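Steps 1 through 4 reduce to the small calculation sketched below. The sample values and the compound-growth treatment are assumptions for illustration; only the peak-based sizing and the 1.5 Mbps per-path minimum come from this chapter.

MIN_PATH_MBPS = 1.5   # minimum bandwidth per remote path (from this chapter)

def required_bandwidth_mbps(workload_mb_s, growth_rate, years):
    peak_mbps = max(workload_mb_s) * 8            # MB/s -> megabits per second
    grown = peak_mbps * (1 + growth_rate) ** years
    return max(grown, MIN_PATH_MBPS)

samples = [2.1, 4.8, 3.3, 7.9, 5.0]               # MB/s, spreadsheet column C
print(f"{required_bandwidth_mbps(samples, 0.20, 3):.1f} Mbps")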
Table 2-1 shows TCE bandwidth requirements.

Table 2-1: TCE Bandwidth Requirements

Bandwidth
• Bandwidth must be guaranteed.
• Bandwidth must be 1.5 Mb/s or more for each pair; 100 Mb/s is recommended.
• Requirements for bandwidth depend on the average inflow from the host into the array.

Remote Path Sharing
• The remote path must be dedicated to TCE pairs.
• When two or more pairs share the same path, a WOC is recommended for each pair.
Table 3-2 shows the types of WAN cabling and protocols supported by TCE and those not supported.

Table 3-2: WAN Types
Supported: Dedicated line (T1, T2, T3, etc.)
Not supported: ADSL, CATV, FTTH, ISDN
Item: Latency, Distance
Condition: If the round-trip time is 5 ms or more, or the distance between the local site and the remote site is 100 miles (160 km) or more, a WOC is highly recommended.

Item: WAN Sharing
Condition: If two or more pairs share the same WAN, a WOC is recommended for each pair.
Item: LAN Interface
Requirement: Gigabit Ethernet or Fast Ethernet must be supported.

Item: Performance
Requirement: Data transfer capability must be equal to or greater than the bandwidth of the WAN.

Item: Functions
Requirement: Traffic shaping, bandwidth throttling, or rate limiting must be supported (these functions reduce data transfer rates to a value input by the user). Data compression must be supported. TCP acceleration must be supported.
Fibre Channel
The fibre channel remote data path can be set up in the following
configurations:
• Direct connection
• Single fibre channel switch and network connection
• Double FC switch and network connection
• Wavelength Division Multiplexing (WDM) and dark fibre extender
The array supports direct or switch connection only. Hub connections are
not supported.
General Recommendations
The following is recommended for all supported configurations:
• TCE requires one path between the host and local array. However, two
paths are recommended; the second path can be used in the event of a
path failure.
Recommendations
• While this configuration may be used, it is not recommended since
failure in an FC switch or the network would halt copy operations.
• Separate switches should be set up for host I/O to the local array and
for data transfer between arrays. Using one switch for both functions
results in deteriorated performance.
Recommendations
• Separate switches should be set up for host I/O to the local array and
for data transfer between arrays. Using one switch for both functions
results in deteriorated performance.
Recommendations
• Only qualified components are supported.
For more information on WDM, see Appendix E, Wavelength Division
Multiplexing (WDM) and Dark Fibre.
NOTE: If your remote path is a direct connection, make sure that the
array power is off when modifying the transfer rate to prevent remote path
blockage.
Recommendations
The following is recommended for all supported configurations:
• Two paths should be configured from the host to the array. This
provides a backup path in the event of path failure.
Direct Connection
Figure 3-6 illustrates two remote paths directly connecting the local and remote arrays. Direct connections are used when the local and remote arrays are set up at the same site. In this case, the arrays can be linked with Category 5e or 6 copper LAN cable.
Recommendations
• When a large amount of data is to be copied to the remote site, the initial copy between the local and remote systems may be performed at the same location. In this case, Category 5e or 6 copper LAN cable is recommended.
Recommendations
• This configuration is not recommended because a failure in a LAN
switch or WAN would halt operations.
• Separate LAN switches and paths should be used for host-to-array and
array-to-array, for improved performance.
Recommendations
• Separate LAN switches and paths should be used for the host-to-array
and the array-to-array paths for better performance and to provide a
backup.
Recommendations
• If a Gigabit Ethernet port (1000BASE-T) is provided on the WOC, the
LAN switch to the WOC is not required. Connect array ports 0B and 1B
to the WOC directly. If your WOC does not have 1Gbps ports, the LAN
switch is required.
• Using a separate LAN switch, WOC, and WAN for each remote path ensures that data copy automatically continues on the second path in the event of a path failure.
Recommendations
• If a Gigabit Ethernet port (1000BASE-T) is provided on the WOC, the
LAN switch to the WOC is not required. Connect array ports 0B and 1B
to the WOC directly. If your WOC does not have 1Gbps ports, the LAN
switch is required.
• You can reduce the number of switches by using a switch with VLAN
capability. If a VLAN switch is used, port 0B of local array 1 and the
WOC1 should be in one LAN (VLAN1); port 0B of local array 2 and
WOC3 should be in another LAN (VLAN2). Connect the VLAN2 port
directly to Port 0B of the local array 2 and WOC3.
Recommendations
• If a Gigabit Ethernet port (1000BASE-T) is provided on the WOC, the
LAN switch to the WOC is not required. Connect array ports 0B and 1B
to the WOC directly. If your WOC does not have 1Gbps ports, the LAN
switch is required.
• You can reduce the number of switches by using a switch with VLAN
capability. If a VLAN switch is used, port 0B of local array 1 and WOC1
should be in one LAN (VLAN1); port 0B of local array 2 and WOC3
should be in another LAN (VLAN2). Connect the VLAN2 port directly to
Port 0B of the local array 2 and WOC3.
Host Time-out
I/O time-out from the host to the array should be more than 60 seconds. You can calculate the host I/O time-out by multiplying the remote path time-out value by 6. For example, if the remote path time-out value is 27 seconds, set the host I/O time-out to 162 seconds (27 x 6) or more.
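The arithmetic in this paragraph can be restated as a one-line rule. The function below is just that restatement; the factor of 6 and the 60-second floor come from the text, while the function name is an assumption.

def min_host_io_timeout_s(remote_path_timeout_s):
    # Host I/O time-out: at least 6x the remote path time-out,
    # and more than 60 seconds in any case.
    return max(remote_path_timeout_s * 6, 60)

print(min_host_io_timeout_s(27))   # 162, matching the example above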
A dynamic disk cannot be used with a cluster (MSCS, VCS, etc.) or VxVM
and HDLM.
Table 4-2, Table 4-3, Table 4-4, and Table 4-5 show the maximum
supported capacity per capacity ratio, as calculated from the above formula
and values in Table 3-1.
Minimum Requirements

Firmware: AMS firmware version 0850 or higher
Storage Navigator Modular 2: version 4.31 or higher
CCI: version 01-21-03/06 or later
Number of AMS arrays: 2
Supported array models: AMS2100/2300/2500
TCE license keys: one per array
Number of controllers: 2 (dual configuration)
Volume size: S-VOL block count = P-VOL block count
Command devices per array (CCI only): maximum 128. The command device is required only when CCI is used. The command device volume size must be 33 MB or more.
Installation Procedures
Setup Procedures
Installing TCE
Prerequisites
• A key code or key file is required to install or uninstall TCE. If you do
not have the key file or code, you can obtain it from the download page
on the HDS Support Portal, http://support.hds.com.
• The array may require a restart at the end of the installation procedure.
If SnapShot is enabled at the time, no restart is necessary.
• If restart is required, it can be done either when prompted or at a later
time.
• TCE cannot be installed if more than 239 hosts are connected to a port
on the array.
To install TCE
1. In the Navigator 2 GUI, click the check box for the array where you want
to install TCE, then click the Show & Configure Array button.
2. Under Common Array Tasks, click Install License. The Install License
screen displays.
3. Select the Key File or Key Code radio button, then enter the file name
or key code. You may browse for the Key File.
4. Click OK.
5. Click Confirm on the subsequent screen to proceed.
6. On the Reboot Array screen, click the Reboot Array button to reboot,
or click Close to finish the installation without rebooting.
7. When the reboot is complete, click Close.
Uninstalling TCE
Prerequisites
• TCE pairs must be deleted. Volume status must be Simplex.
• Data pools must be deleted, unless SnapShot will continue to be used.
• The remote path must be deleted.
• A key code or key file is required. If you do not have the key file or
code, you can obtain it from the download page on the HDS Support
Portal, http://support.hds.com.
To uninstall TCE
1. In the Navigator 2 GUI, click the check box for the array, then click the
Show & Configure Array button.
2. In the navigation tree, click Settings, then click Licenses.
3. On the Licenses screen, select TC-Extended in the Licenses list and
click the De-install License button.
4. On the De-Install License screen, enter the file or code in the Key File
or Key Code box, and then click OK.
5. On the confirmation screen, click Close.
Setting up DMLUs
The DMLU (differential management-logical unit) must be set up prior to
using TCE. The DMLU is used by the system for storing TCE status
information when the array is powered down.
Prerequisites
• The logical unit used for the DMLU must be set up and formatted.
• The logical unit used for the DMLU must be at least 10 GB
(recommended size).
• DMLUs must be set up on both the local and remote arrays.
• One DMLU is required on each array; two are recommended, the
second used as backup. However, no more than two DMLUs can be
installed per array.
• When setting up more than one DMLU, assign them to different RAID
groups to provide a backup in the event of a drive failure.
• Specifications for DMLUs should also be reviewed. See TCE System
Specifications on page 5-2.
To define the DMLU
1. In the Navigator 2 GUI, select the array where you want to set up the DMLU.
2. In the navigation tree, click Settings, then click DMLU. The DMLU screen displays.
3. Click Add DMLU. The Add DMLU screen displays.
4. Select the LUN(s) that you want to assign as DMLUs, and then click OK. A confirmation message displays.
5. Select the Yes, I have read ... check box, then click Confirm. When a success message displays, click Close.
NOTE: The default Threshold value is 70%. When capacity reaches the Threshold plus 1 percent, both data pool and pair status change to "Threshold over", and the array issues a warning. If capacity reaches 100 percent, the pair fails and all data in the S-VOL is lost. (A small sketch of this rule appears after this procedure.)
5. Resynchronize the pairs after confirming that the remote path is set. See
Resynchronizing a Pair on page 7-6.
NOTE: In Windows Server 2003, LUs are identified by H-LUN. The LUN and H-LUN may be different. See Identifying P-VOL and S-VOL LUs on Windows on page 4-5 to map LUN to H-LUN.
6. In the Secondary Volume box, enter the S-VOL LUN(s) on the remote
array that the primary volume(s) will be copied to. Remote LUNs must
be:
- the same as local LUNs.
- the same size as local LUNs.
7. From the Pool Number of Local Array drop-down list, select the
previously set up data pool.
8. In the Pool Number of Remote Array box, enter the LUN set up for
the data pool on the remote array.
9. For Group Assignment, you assign the new pair to a consistency
group.
- To create a group and assign the new pair to it, click the New or
existing Group Number button and enter a new number for the
group in the box.
- To assign the pair to an existing group, enter its number in the
Group Number box, or enter the group name in the Existing
Group Name box.
- If you do not assign the pair to a consistency group, one is assigned automatically. Leave the New or existing Group Number button selected with no number entered in the box.
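The data pool threshold behavior described in the note earlier in this procedure maps to a simple status check. In the sketch below, the function name and return strings are assumptions; the 70% default, the threshold-plus-1 warning point, and the failure at 100% are taken from the note.

def data_pool_state(used_percent, threshold=70):
    """Classify data pool usage per the threshold rules described above."""
    if used_percent >= 100:
        return "pair failure: S-VOL data lost"
    if used_percent >= threshold + 1:
        return "Threshold over: array issues a warning"
    return "normal"

print(data_pool_state(72))    # Threshold over: array issues a warning
print(data_pool_state(100))   # pair failure: S-VOL data lost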
Splitting a Pair
Data is copied to the S-VOL at every update cycle until the pair is split.
• When the split is executed, all differential data accumulated in the local
array is updated to the S-VOL.
• After the split operation, write updates continue to the P-VOL but not to
the S-VOL.
After the Split Pair operation:
• S-VOL data is consistent with P-VOL data at the time of the split. The S-VOL can receive read/write instructions.
• The TCE pair can be made identical again by re-synchronizing from
primary-to-secondary or secondary-to-primary.
Resynchronizing a Pair
Re-synchronizing a pair updates the S-VOL so that it is again identical with
the P-VOL. Differential data accumulated on the local array since the last
pairing is updated to the S-VOL.
• Pair status during resynchronization is Synchronizing.
• Status changes to Paired when the resync is complete.
• If P-VOL status is Failure and S-VOL status is Takeover or Simplex, the
pair cannot be recovered by resynchronizing. It must be deleted and
created again.
• Best practice is to perform a resynchronization when I/O load is low, to
reduce impact on host activities.
Prerequisites
• The pair must be in Split, Failure, or Pool Full status.
To resync the pair
1. In the Navigator 2 GUI, select the desired array, then click the Show &
Configure Array button.
2. From the Replication tree, select the Remote Replication icon. The
Pairs screen displays.
3. Select the pair you want to resync.
4. Click the Resync Pair button. View further instructions by clicking the
Help button, as needed.
To manage the various operations, they are broken down by host, LU, and
backup schedule, as shown in Table 8-1.
In the procedure example that follows, scripts are executed for host A on
Monday at 11 p.m. The following assumptions are made:
• The system setup is complete.
• The TCE pairs are in Paired status.
• The SnapShot pairs are in Split status.
• Host A uses a Windows operating system.
The variables used in the script are shown in Table 8-2. The procedure and
scripts follow.
set STONAVM_HOME=.
set STONAVM_RSP_PASS=on
set LOCAL=LocalArray
set REMOTE=RemoteArray
set TCE_PAIR_DB1=TCE_LU0001_LU0001
set TCE_PAIR_DB2=TCE_LU0002_LU0002
set SS_PAIR_DB1_MON=SS_LU0001_LU0101
set SS_PAIR_DB2_MON=SS_LU0002_LU0102
set DB1_DIR=D:\
set DB2_DIR=E:\
set LU1_GUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
set LU2_GUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
set TIME=18000
(To be continued)
2. Stop the database application, then unmount the P-VOL, as shown
below. Doing this stabilizes the data in the P-VOL.
:END
In the event of a disaster at the primary site, the cycle update process is
suspended and updating of the S-VOL stops. If the host requests an S-VOL
takeover (CCI horctakeover), the remote array restores the S-VOL using
data in the data pool from the previous cycle.
The AMS version of TCE does not support mirroring consistency of S-VOL
data, even if the local array and remote path are functional. P-VOL and S-
VOL data are therefore not identical when takeover is executed. Any P-VOL
data updates made during the time the takeover command was issued
cannot be salvaged.
Takeover Processing
S-VOL takeover is performed when the horctakeover operation is issued from the secondary side. The TCE pair is split and system operation can be continued with the S-VOL only. To settle the S-VOL data being copied cyclically, the S-VOL is restored using the consistent data determined in the preceding cycle and saved to the data pool, as mentioned above. The S-VOL is then immediately enabled to receive host I/O.
The following table lists TCE pair statuses, with the access allowed to the P-VOL and S-VOL in each status.

Simplex
  Description: TCE pair is not created. Simplex volumes accept Read/Write I/Os. Not displayed in the TCE pair list in the Navigator 2 GUI.
  Access to P-VOL: volume does not exist
  Access to S-VOL: volume does not exist

Synchronizing
  Description: Copying is in progress, initiated by Create Pair or Resynchronize Pair operations. Upon completion, pair status changes to Paired. Data written to the P-VOL during copying is transferred as differential data after the copying operation is completed. Copy progress is shown on the Pairs screen in the Navigator 2 GUI.
  Access to P-VOL: Read/Write
  Access to S-VOL: Read Only

Paired
  Description: Copying of data from P-VOL to S-VOL is completed. While the pair is in Paired status, data consistency in the S-VOL is guaranteed. Data written to the P-VOL is transferred periodically to the S-VOL as differential data.
  Access to P-VOL: Read/Write
  Access to S-VOL: Read Only

Paired:split
  Description: When a pair-split operation is initiated, the differential data accumulated in the local array is updated to the S-VOL before the status changes to Split. Paired:split is a transitional status between Paired and Split.
  Access to P-VOL: Read/Write
  Access to S-VOL: Read Only

Paired:delete
  Description: When a pair-delete operation is initiated, the differential data accumulated in the local array is updated to the S-VOL before the status changes to Simplex. Paired:delete is a transitional status between Paired and Simplex.
  Access to P-VOL: Read/Write
  Access to S-VOL: Read Only

Split
  Description: Updates to the S-VOL are suspended; S-VOL data is consistent and usable by an application for read/write. Data written to the P-VOL and to the S-VOL is managed as differential data in the local and remote arrays.
  Access to P-VOL: Read/Write
  Access to S-VOL: Read/Write (mountable) or Read Only

Pool Full
  Description: If the local data pool capacity exceeds 90% while status is Paired, the following takes place:
  • Pair status on the local array changes to Pool Full.
  • Pair status on the remote array at this time remains Paired.
  • Data updates stop from the P-VOL to the S-VOL.
  • Data written to the P-VOL is managed as differential data.
  If the remote data pool capacity reaches 100% while the pair status is Paired, the following takes place:
  • Pair status on the remote array changes to Pool Full.
  • Pair status on the local array changes to Failure.
  If a pair in a group becomes Pool Full, the status of all pairs in the group becomes Pool Full. To recover, add LUs to the data pool or reduce use of the data pool. Then resynchronize the pair.
  Access to P-VOL: Read/Write
  Access to S-VOL: Read Only

Takeover
  Description: Takeover is a transitional status after Swap Pair is initiated. The data in the remote data pool, which is in a consistent state established at the end of the previous cycle, is restored to the S-VOL. Immediately after the pair becomes Takeover, the pair relationship is swapped and copying from the new P-VOL to the new S-VOL is started.
  Access to S-VOL: Read/Write

Busy
  Description: Busy is a transitional status after Swap Pair is initiated. Takeover occurs after Busy. This status can be seen from the Navigator 2 GUI, though not from CCI.
  Access to S-VOL: Read/Write

Inconsistent
  Description: This status on the remote array occurs when copying from the P-VOL to the S-VOL stops due to a failure in the S-VOL, such as failure of an HDD that makes up the S-VOL, or the data pool for the S-VOL becoming full. To recover, resynchronize the pair, which leads to a full volume copy of the P-VOL to the S-VOL.
  Access to S-VOL: No Read/Write

Failure
  Description: P-VOL pair status changes to Failure if copying from the P-VOL to the S-VOL can no longer continue, for example due to HDD failure or a remote path failure that disconnects the local array and the remote array.
  • Data consistency is guaranteed in the group if the pair status at the local array changes from Paired to Failure.
  • Data consistency is not guaranteed if pair status changes from Synchronizing to Failure.
  • Data written to the P-VOL is managed as differential data.
  To recover, remove the cause, then resynchronize the pair.
  Access to P-VOL: Read/Write
Routine Maintenance
You may want to delete a volume pair, data pool, DMLU, or remote path.
The following sections provide prerequisites and procedures.
Deleting a DMLU
When TCE is enabled on the array and only one DMLU exists, it cannot be
deleted. If two DMLUs exist, one can be deleted.
To delete a DMLU
1. In the Replication tree view, select Setup, and then select DMLU. The Differential Management Logical Units list appears.
2. Select the LUN you want to remove.
3. Click the Remove DMLU button. A success message displays.
4. Click Close.
Troubleshooting Overview
Miscellaneous Troubleshooting
Troubleshooting Overview
TCE stops operating when any of the following occur:
• Pair status changes to Failure
• Pair status changes to Pool Full
• Remote path status changes to Detached
The following steps can help track down the cause of the problem and take
corrective action.
1. Check the Event Log, which may indicate the cause of the failure. See
Using the Event Log on page 10-6.
2. Check pair status.
a. If pair status is Pool Full, please continue with instructions in
Correcting Data Pool Shortage on page 10-2.
b. If pair status is Failure, check the following:
• Check the status of the local and remote arrays. If there is a
Warning, please continue with instructions in Correcting Array
Problems on page 10-3.
• Check pair operation procedures. Resynchronize the pairs. If a problem occurs during resynchronization, please continue with instructions in Correcting Resynchronization Errors on page 10-4.
3. Check remote path status. If status is Detached, please continue with
instructions in Correcting Array Problems on page 10-3.
- To check the local data pool, see Monitoring Data Pool Capacity on
page 9-4.
- To check the remote data pool, review the event log. (Shortages in
the remote data pool prompt the array to resynchronize SnapShot
pairs if they exist—thus reducing pool usage. The GUI may not
immediately show the lowered rate.) Refer to Using the Event Log
on page 10-6.
2. If both local and remote data pools have sufficient space, resynchronize
the pairs.
3. To correct a data pool shortage, proceed as follows:
a. If there is enough disk space on the array, create and assign more
LUs to the data pool. See Expanding Data Pool Capacity on page 9-5.
b. If LUs cannot be added to the data pool, review the importance of
your TCE pairs. Delete the pairs not vital to business operations. See
Deleting a Volume Pair on page 9-8.
c. When corrective steps have been taken, resynchronize the pairs.
4. For SnapShot pair recovery, review the troubleshooting chapter in
Hitachi AMS Copy-On-Write SnapShot User’s Guide (MK-97DF8124).
Delays in Settling of S-VOL Data
When the amount of data that flows into the primary array from the host is
larger than outflow from the secondary array, more time is required to
complete the settling of the S-VOL data, because the amount of data to be
transferred increases.
When the settlement of the S-VOL data is delayed, the amount of the data
loss increases if a failure in the primary array occurs.
Differential data in the primary array increases when:
• The load on the controller is heavy
• An initial or resynchronization copy is made
• SATA drives are used
• The path or controller is switched
Table 10-2: Error Codes for Failure during Resync

0307
  Error: The array ID of the remote array cannot be specified.
  Action: Check the serial number of the remote array.
0308
  Error: The LU assigned to a TCE pair cannot be specified.
  Action: The resynchronization cannot be performed. Create the pair again after deleting it.
0309
  Error: Restoration from the data pool is in progress.
  Action: Retry after waiting for a while.
Table 10-2: Error Codes for Failure during Resync (Continued)

030A
  Error: The target S-VOL of TCE is a P-VOL of SnapShot, and the SnapShot pair is being restored or reading/writing is not allowed.
  Action: When the SnapShot pair is being restored, execute the operation after the restoration is completed. When reading/writing is not allowed, execute it after enabling reading/writing.
030C
  Error: The TCE pair cannot be specified in the CTG.
  Action: The resynchronization cannot be performed. Create the pair again after deleting it.
0310
  Error: The status of the TCE pair is Takeover.
0311
  Error: The status of the TCE pair is Simplex.
031F
  Error: The LU of the S-VOL of the TCE pair is S-VOL Disable.
  Action: Check the LU status in the remote array, release the S-VOL Disable setting, and execute the operation again.
0320
  Error: The target LU in the remote array is undergoing parity correction.
  Action: Retry after waiting for a while.
0321
  Error: The status of the target LU in the remote array is other than normal or regressed.
  Action: Execute the operation again after restoring the target LU status.
0322
  Error: The number of unused bits is insufficient.
  Action: Retry after waiting for a while.
0323
  Error: The LU status of the data pool is other than normal or regressed.
  Action: Execute the operation again after restoring the LU status of the data pool.
0324
  Error: The LU of the data pool is undergoing parity correction.
  Action: Retry after waiting for a while.
0325
  Error: The temporary key has expired.
  Action: The resynchronization cannot be performed because the trial time limit has expired. Purchase the permanent key.
0326
  Error: The disk drives that make up the RAID group to which the target LU in the remote array belongs have been spun down.
  Action: Perform the operation again after spinning up the disk drives that make up the RAID group.
Using the Event Log
Using the event log helps in locating the reasons for a problem. The event
log can be displayed using Navigator 2 GUI or CLI.
To display the Event Log using the GUI
1. Select the Alerts & Events icon. The Alerts & Events screen appears.
2. Click the Event Log tab. The Event Log displays as shown in Figure 10-2.
Miscellaneous Troubleshooting
Table 10-3 contains details on pair and takeover operations that may help
when troubleshooting. Review these restrictions to see if they apply to your
problem.
Table 10-3: Miscellaneous Troubleshooting

Restrictions for pair splitting:
• When a pair split operation is begun, data is first copied from the P-VOL to the S-VOL. This causes a time delay before the status of the pair becomes Split.
• A TCE pair cannot be split while pairsplit -mscas processing is being executed for the CTG.
• When a command to split the pairs in a CTG is issued while pairsplit -mscas processing is being executed for a cascaded SnapShot pair, the split cannot be executed for any of the pairs in the CTG.
• A command to split an individual pair cannot be accepted if the target pair is undergoing deletion processing.
• A command to split an individual pair cannot be accepted if the target pair is already undergoing a split operation.
• A command to split the pairs in a group cannot be executed if even a single pair in the CTG is being split.
• A command to delete the pairs in a group cannot be executed if even a single pair in the CTG is being split.
• The pairsplit -P command is not supported.

Restrictions on execution of the horctakeover (SVOL_Takeover) command:
• When the SVOL_Takeover operation is performed for a pair by the horctakeover command, the S-VOL is first restored from the data pool. This causes a time delay before the status of the pair changes.
• Restoration of up to four LUs can be done in parallel on each controller. When restoration of more than four LUs is required, the first four LUs are selected in the order given in the request; the remaining LUs are selected in ascending order of LU number.
• Because the SVOL_Takeover operation is performed on the secondary side only, differential data of the P-VOL that has not been transferred is not reflected in the S-VOL data, even when the TCE pair is operating normally.
• When the S-VOL of the pair to which the SVOL_Takeover instruction is issued is in the Inconsistent status, which does not allow Read/Write operations, the SVOL_Takeover operation cannot be executed. Whether a Split pair is Inconsistent can be checked using Navigator 2.
• When the command specifies a group as the target, it cannot be executed for any of the pairs in the CTG if even a single pair in the Inconsistent status exists in the CTG.
• When the command specifies a pair as the target, it cannot be executed if the target pair is in the Simplex or Synchronizing status.
Table 10-3: Miscellaneous Troubleshooting (Continued)

Restrictions on execution of the pairsplit -mscas command:
• The pair split instruction cannot be issued from the host on the secondary side to a SnapShot pair cascaded with a TCE pair whose S-VOL is in the Synchronizing or Paired status.
• When even a single pair in the CTG is being split or deleted, the command cannot be executed.
• Pairsplit -mscas processing continues unless the pair becomes Failure or Pool Full.

Restrictions on pair delete operations:
• When a delete pair operation is begun, data is first copied from the P-VOL to the S-VOL. This causes a time delay before the status of the pair changes.
• The deletion processing continues unless the pair becomes Failure or Pool Full.
• A pair cannot be deleted while it is being split.
• When a delete pair command is issued to a group, it will not be executed if any of the pairs in the group is being split.
• A pair cannot be deleted while the pairsplit -mscas command is being executed. This applies to individual pairs and to the CTG.
• When a delete pair command is issued to a group, it will not be executed if any of the pairs in the group is undergoing the pairsplit -mscas operation.
• When the pairsplit -R command, which directs the secondary array to delete a pair, is executed, differential data of the P-VOL that has not been transferred is not reflected in the S-VOL data, in the same way as with the SVOL_Takeover operation.
• The pairsplit -R command cannot be executed during restoration of S-VOL data through the SVOL_Takeover operation.
• The pairsplit -R command cannot be issued to a group when a pair whose S-VOL data is being restored through the SVOL_Takeover operation exists in the CTG.
A Operations Using CLI
Pair Operations
Sample Script
Installing
To install TCE
1. From the command prompt, register the array on which TCE is to be installed, and then connect to the array.
2. Execute the auopt command to install TCE. For example:
Uninstalling
To uninstall TCE, the key code provided for optional features is required.
Prerequisites for uninstalling
• TCE pairs must be released (the status of all LUs must be Simplex).
• The remote path must be released.
• Data pools must be deleted, unless a SnapShot system exists on the
array.
• Make sure a spin-down operation is not in progress.
To uninstall TCE
1. From the command prompt, register the array on which TCE is to be uninstalled, and then connect to the array.
2. Execute the auopt command to uninstall TCE. For example:
Release a DMLU
Observe the following when releasing a DMLU for TCE:
• When only one DMLU is set, it cannot be released.
• When two DMLUs are set, only one can be released.
To release a TCE DMLU
Use the following example:
Path Information
Interface Type : FC
Remote Array ID : 85000027
Bandwidth [0.1 Mbps] : 15
iSCSI CHAP Secret : N/A
Path Information
Interface Type : iSCSI
Remote Array ID : 85000027
Bandwidth [0.1 Mbps] : 100
iSCSI CHAP Secret : Disable
Target Information
Local Array ID : 85000027
Splitting a Pair
A pair split operation on a pair belonging to a group results in all pairs in the
group being split.
To split a pair
1. From the command prompt, register the local array in which you want
to split pairs, and then connect to the array.
2. Execute the aureplicationremote -split command to split the
specified pair. For example:
Swapping a Pair
Please review the Prerequisites in Swapping Pairs on page 7-7.
To swap the pairs, the remote path must be set to the local array from the
remote array.
To swap a pair
1. From the command prompt, register the remote array in which you want
to swap pairs, and then connect to the array.
2. Execute the aureplicationremote -swaps command to swap the
specified pair. For example:
Deleting a Pair
To delete a pair
1. From the command prompt, register the local array in which you want
to delete pairs, and then connect to the array.
2. Execute the aureplicationremote -simplex command to delete the
specified pair. For example:
Controller 0
12/17/2007 18:31:48 00 RBE301 Flash program update end
12/17/2007 18:31:08 00 RBE300 Flash program update start
Controller 1
12/17/2007 18:32:37 10 RBE301 Flash program update end
12/17/2007 18:31:49 10 RBE300 Flash program update start
%
The event log is displayed. To search for specific messages or error detail codes, store the output in a file and use the search function of a text editor, as shown below.
echo off
REM Specify the registered name of the arrays
set UNITNAME=Array1
REM Specify the group name (specify "Ungrouped" if the pair doesn't belong to any group)
set G_NAME=Ungrouped
REM Specify the pair name
set P_NAME=TCE_LU0001_LU0002
REM Specify the directory paths that are the mount points of the P-VOL and S-VOL
set MAINDIR=C:\main
set BACKUPDIR=C:\backup
REM Specify the GUIDs of the P-VOL and S-VOL
set PVOL_GUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
set SVOL_GUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
When Windows 2000 is used, the CCI mount command is required when mounting or unmounting a volume. The GUID, which is displayed by the Windows mountvol command, is needed as an argument when using the mount command. For more information, refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.
Setup
Pair Operations
Setting LU Mapping
For iSCSI, use the autargetmap command instead of the auhgmap
command.
To set up LU Mapping
1. From the command prompt, register the array to which you want to set
the LU Mapping, then connect to the array.
2. Execute the auhgmap command to set the LU Mapping. The following is
an example of setting LU 0 in the array to be recognized as 6 by the host.
The port is connected via target group 0 of port 0A on controller 0.
3. Execute the auhgmap command to verify that the LU Mapping is set. For
example:
C:\>cd horcm\etc
C:\horcm\etc>
C:\HORCM\etc>set HORCMINST=0
C:\HORCM\etc>horcmstart 0 1
starting HORCM inst 0
HORCM inst 0 starts successfully.
starting HORCM inst 1
HORCM inst 1 starts successfully.
C:\HORCM\etc>pairdisplay -g VG01
group PairVOL(L/R) (Port#,TID, LU) ,Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 1 )85000174 1.SMPL ---- ------,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 2 )85000175 2.SMPL ---- ------,----- ---- -
NOTE: A pair created using CCI and defined in the configuration definition file appears unnamed in the Navigator 2 GUI. Consistency groups created using CCI and defined in the configuration definition file are not seen in the Navigator 2 GUI. Also, pairs assigned to groups using CCI appear ungrouped in the Navigator 2 GUI.
C:\HORCM\etc>pairdisplay -g VG01
Group PairVol(L/R) (Port#,TID, LU) ,Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
vg01 oradb1(L) (CL1-A, 1, 1)85000174 1.P-VOL PAIR ASYNC ,85000175 2 -
vg01 oradb1(R) (CL1-B, 2, 2)85000175 2.S-VOL PAIR ASYNC ,----- 1 -
C:\HORCM\etc>pairdisplay -g VG01
Group PairVol(L/R) (Port#,TID, LU) ,Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 1 )85000174 1.SMPL ----- ------,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 2 )85000175 2.SMPL ----- ------,----- ---- -
c:\HORCM\etc>pairdisplay -g VG01
Group PairVol(L/R) (Port#,TID, LU) ,Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 1 )85000174 1.P-VOL PAIR Never ,85000175 2 -
VG01 oradb1(R) (CL1-A , 1, 2 )85000175 2.S-VOL PAIR Never ,----- 1 -
C:\HORCM\etc>pairsplit -g VG01
2. Execute the pairdisplay command to verify the pair status and the
configuration. For example:
c:\horcm\etc>pairdisplay -g VG01
Group PairVol(L/R) (Port#,TID, LU) ,Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 1 )85000174 1.P-VOL PSUS ASYNC ,85000175 2 -
VG01 oradb1(R) (CL1-A , 1, 2 )85000175 2.S-VOL SSUS ASYNC ,----- 1 -
C:\HORCM\etc>pairresync -g VG01 -c 10
C:\HORCM\etc>pairevtwait -g VG01 -s pair -t 300 10
pairevtwait : Wait status done.
3. Execute the pairdisplay command to verify the pair status and the
configuration. For example:
c:\horcm\etc>pairdisplay -g VG01
Group PairVol(L/R) (Port#,TID, LU) ,Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 1 )85000174 1.P-VOL PAIR ASYNC ,85000175 2 -
VG01 oradb1(R) (CL1-A , 1, 2 )85000175 2.S-VOL PAIR ASYNC ,----- 1 -
c:\horcm\etc>pairdisplay –g VG01
Group PairVol(L/R) (Port#,TID, LU) ,Seq#,LDEV#.P/S,Status,Fence, Seq#,P-
LDEV# M
VG01 oradb1(L) (CL1-A , 1, 1 )85000174 1.P-VOL PAIR ASYNC ,85000175
2 -
VG01 oradb1(R) (CL1-A , 1, 2 )85000175 2.S-VOL PAIR ASYNC ,-----
1 -
C:\HORCM\etc>pairsplit -g VG01 -R
C:\HORCM\etc>pairdisplay -g VG01
Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
VG01  oradb1(L)  (CL1-A, 1, 1) 85000174  1.P-VOL PSUE ASYNC ,85000175     2  -
VG01  oradb1(R)  (CL1-A, 1, 2) 85000175  2.S-VOL ----- ----- ,------   ----  -
C:\HORCM\etc>pairsplit -g VG01 -S
C:\HORCM\etc>pairdisplay -g VG01
Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
VG01  oradb1(L)  (CL1-A, 1, 1) 85000174  1.SMPL ----- ------,-----  ----  -
VG01  oradb1(R)  (CL1-A, 1, 2) 85000175  2.SMPL ----- ------,-----  ----  -
2. Verify that the status of the TCE pair is still PAIR by executing the
pairdisplay command. The group in the example is ora.
C:\HORCM\etc>pairdisplay -g ora
Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
ora   oradb1(L)  (CL1-A, 1, 1) 85000174  1.PAIR ----- ------,-----  ----  -
ora   oradb1(R)  (CL1-B, 1, 2) 85000175  2.PAIR ----- ------,-----  ----  -
3. Confirm that the SnapShot pair is split, using either the indirect or the
direct method.
a. For the indirect method, execute the pairsyncwait command to verify
that the P-VOL data has been transferred to the S-VOL, as sketched
below. The status may not be displayed until one cycle after the
command is issued. The Q-Marker count increases by one each time
the pairsplit -mscas command is executed.
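A sketch of the indirect check, assuming the common pairsyncwait form with the -g and -t options (the output columns follow the usual CCI layout and may differ by version):
C:\HORCM\etc>pairsyncwait -g o1 -t 600
UnitID  CTGID  Q-Marker    Status  Q-Num
     0      0  01003408ef  DONE        2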
b. For the direct method, execute the pairdisplay command and check
the Split-Marker. For example:
C:\HORCM\etc>pairdisplay -g o1 -v smk
Group PairVol(L/R)  Serial#  LDEV#  P/S    Status  UTC-TIME  -----Split-Marker-----
o1    URA_000(L)   85000175      2  P-VOL  PSUS    -         -
o1    URA_000(R)   85000175      3  S-VOL  SSUS    123456ef  Split-Marker
The TCE pair is released. For details on the pairsplit command, the -mscas
option, and the pairsyncwait command, refer to the Hitachi Adaptable Modular
Storage Command Control Interface (CCI) Reference Guide.
Command: pairsplit
Options          Status   Response          Next Status        Remarks
-S               PAIR     Depends on        SMPL               S-VOL data consistency guaranteed
(delete pair)             differential data
                 COPY     Immediate         SMPL               No S-VOL data consistency
                 Others   Immediate         SMPL               No S-VOL data consistency
-R               PAIR     Immediate         SMPL (S-VOL only)  No S-VOL data consistency
(delete pair,    COPY     Immediate         SMPL (S-VOL only)  No S-VOL data consistency.
S-VOL only)                                                    Cannot be executed in SSWS(R) status.
                 Others   Immediate         SMPL (S-VOL only)  No S-VOL data consistency.
                                                               Cannot be executed in SSWS(R) status.
-mscas           PAIR     Immediate         No change          Completion time depends on the amount of
(create remote                                                 differential data (see note). Completion
snapshot)                                                      can be checked by the Split-Marker and its
                                                               creation time. The cycle update process
                                                               stops while a remote snapshot is being
                                                               created.
                 Others   ―                ―                  ―
Others           PAIR     Depends on        PSUS               S-VOL data consistency guaranteed
(split pair)              differential data
                 COPY     Immediate         PSUS               S-VOL data consistency guaranteed
                 Others   Immediate         No change          S-VOL data consistency guaranteed
NOTE: Only the -g option is valid; the -d option is not accepted. If a CTG
contains pairs whose status is not PAIR, the command cannot be accepted.
All S-VOLs with PAIR status must have corresponding cascading V-VOLs, and
the MU# of these SnapShot pairs must match the MU# specified in the
pairsplit -mscas command option.
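Assuming the MU# is passed directly to the -mscas option, as the note above suggests, a remote snapshot request might look like this sketch (the MU number value is illustrative):
REM Illustrative: request a remote snapshot for all PAIR-status pairs in the group.
C:\HORCM\etc>pairsplit -g VG01 -mscas 0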
SVOL_Takeover
Attribute  Status    Paircurchk Result  Data Consistency  Next Status
SMPL       -         To be confirmed    No                SMPL
P-VOL      -         To be confirmed    No                -
S-VOL      COPY      Inconsistent       No                COPY
           PAIR      To be analyzed     CTG               SSWS
           PSUS      Suspected          Pair              SSWS
           PSUS(N)   Suspected          No                PSUS(N)
           PFUS      Suspected          CTG               SSWS
           PSUE      Suspected          CTG               SSWS
           SSWS      Suspected          Pair              SSWS
• Responses of paircurchk:
- To be confirmed: The object volume is not an S-VOL; a check is
required.
- Inconsistent: There is no write-order guarantee for the S-VOL
because an initial copy or a resync copy is in progress, or because
of S-VOL failures. SVOL_Takeover cannot be executed.
- To be analyzed: Mirroring consistency cannot be determined from
the pair status of the S-VOL alone. However, because TCE does not
support mirroring consistency, this result always indicates that the
S-VOL has data consistency across a CTG, regardless of the pair
status of the P-VOL.
- Suspected: The S-VOL has no mirroring consistency. If the pair
status is PSUE or PFUS, there is data consistency across a CTG. If
the pair status is PSUS or SSWS, there is data consistency for each
pair in a CTG. In the case of PSUS(N), there is no data consistency.
• Data consistency after SVOL_Takeover and its response:
- CTG: Data consistency across a CTG is guaranteed.
- Pair: Data consistency of each pair is guaranteed.
- No: No data consistency for each pair.
- Good: The takeover response is normal.
- NG: The takeover response is an error. If the pair status of an
S-VOL is PSUS, the pair status changes to SSWS even if the response
is an error.
See Hitachi Adaptable Modular Storage Command Control Interface (CCI)
Reference Guide for more details about horctakeover.
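A minimal sketch of a takeover, issued from the CCI instance on the secondary side; the -g operand is standard, and any timeout options should be chosen for your configuration.
REM Illustrative: run against the instance that manages the S-VOL side.
C:\HORCM\etc>set HORCMINST=1
C:\HORCM\etc>horctakeover -g VG01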
Cascade Configurations
Table C-2: Supported TCE Operations when TCE S-VOL/SS P-VOL Cascaded
This appendix discusses WDM and dark fibre, which are used to
extend fibre channel remote paths.
Glossary
A
array
A set of hard disks mounted in a single enclosure and grouped logically
together to function as one contiguous storage space.
asynchronous
Asynchronous data communications operate between a computer and
various devices. Data transfers occur intermittently rather than in a
steady stream. Asynchronous replication does not depend on
acknowledging the remote write, but it does write to a local log file.
Synchronous replication depends on receiving an acknowledgement
code (ACK) from the remote system and the remote system also keeps
a log file.
B
background copy
A physical copy of all tracks from the source volume to the target
volume.
bps
Bits per second, the standard measure of data transmission speeds.
C
cache
A temporary, high-speed storage mechanism. It is a reserved section of
main memory or an independent high-speed storage device. Two types
of caching are found in computers: memory caching and disk caching.
Memory caches are built into the architecture of microprocessors and
often computers have external cache memory. Disk caching works like
memory caching; however, it uses slower, conventional main memory
that on some devices is called a memory buffer.
capacity
The amount of information (usually expressed in megabytes) that can
be stored on a disk drive. It is the measure of the potential contents of
a device; the volume it can contain or hold. In communications,
capacity refers to the maximum possible data transfer rate of a
communications channel under ideal conditions.
CCI
See command control interface.
CLI
See command line interface.
cluster
A group of disk sectors. The operating system assigns a unique number
to each cluster and then keeps track of files according to which clusters
they use.
cluster capacity
The total amount of disk space in a cluster, excluding the space
required for system overhead and the operating system. Cluster
capacity is the amount of space available for all archive data, including
original file data, metadata, and redundant data.
command devices
Dedicated logical volumes that are used only by management software
such as CCI, to interface with the storage systems. Command devices
are not used by ordinary applications. Command devices can be shared
between several hosts.
concurrency of S-VOL
Occurs when an S-VOL is synchronized by simultaneously updating the
S-VOL with P-VOL data and data cached in the primary host memory.
Discrepancies in S-VOL data may occur if data is cached in the primary
host memory between two write operations. This data, which is not
available on the P-VOL, is not reflected onto the S-VOL. To ensure
concurrency of the S-VOL, cached data is written onto the P-VOL before
subsequent remote copy operations take place.
concurrent copy
A management solution that creates data dumps, or copies, while other
applications are updating that data. This allows end-user processing to
continue. Concurrent copy allows you to update the data in the files
being copied, however, the copy or dump of the data it secures does
not contain any of the intervening updates.
configuration definition file
The configuration definition file describes the system configuration for
making CCI operational in a TrueCopy Extended Distance Software
environment. The configuration definition file is a text file created and/
or edited using any standard text editor, and can be defined from the
PC where the CCI software is installed. The configuration definition file
describes configuration of new TrueCopy Extended Distance pairs on
the primary or remote storage system.
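As a minimal sketch, a configuration definition file (for example, horcm0.conf) typically contains the four sections below; the IP addresses, service names, and device identifiers are placeholders, and the command device path in HORCM_CMD must match your environment.
HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
localhost      horcm0     1000          3000
HORCM_CMD
#dev_name
\\.\PhysicalDrive3
HORCM_DEV
#dev_group    dev_name    port#    TargetID    LU#    MU#
VG01          oradb1      CL1-A    1           1
HORCM_INST
#dev_group    ip_address    service
VG01          remote-host   horcm1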
consistency of S-VOL
A state in which a reliable copy of S-VOL data from a previous update
cycle is available at all times on the remote storage system. A consistent
copy of S-VOL data is internally pre-determined during each update
cycle and maintained in the remote data pool. When remote takeover
operations are performed, this reliable copy is restored to the S-VOL,
eliminating any data discrepancies. Data consistency at the remote site
enables quicker restart of operations upon disaster recovery.
CRC
Cyclical Redundancy Checking, a scheme for checking the correctness
of data that has been transmitted or stored and retrieved. A CRC
consists of a fixed number of bits computed as a function of the data to
be protected, and appended to the data. When the data is read or
received, the function is recomputed, and the result is compared to that
appended to the data.
CTG
See Consistency Group.
cycle time
A user-specified time interval used to execute recurring data updates
for remote copying. Cycle time updates are set for each storage system
and are calculated based on the number of consistency groups (CTGs).
cycle update
Involves periodically transferring differential data updates from the P-
VOL to the S-VOL. TrueCopy Extended Distance Software remote
replication processes are implemented as recurring cycle update
operations executed in specific time periods (cycles).
D
data path
See remote path.
data pool
One or more disk volumes designated to temporarily store un-transferred
differential data (in the local storage system) or snapshots of backup
data (in the remote storage system). The saved snapshots are useful for
accurate data restoration (of the P-VOL) and faster remote takeover
processing (using the S-VOL).
data volume
A volume that stores database information. Other files, such as index
files and data dictionaries, store administrative information (metadata).
differential data
The original data blocks replaced by writes to the primary volume. In
Copy-on-Write, differential data is stored in the data pool to preserve
the copy of the P-VOL as it existed at the time of the snapshot.
disaster recovery
A set of procedures to recover critical application data and processing
after a disaster or other failure. Disaster recovery processes include
failover and failback procedures.
disk array
An enterprise storage system containing multiple disk drives. Also
referred to as “disk array device” or “disk storage system.”
DMLU
See Differential Management-Logical Unit.
dual copy
The process of simultaneously updating a P-VOL and S-VOL while using
a single write operation.
duplex
The transmission of data in either one or two directions. Duplex modes
are full-duplex and half-duplex. Full-duplex is the simultaneous
transmission of data in two directions. For example, a telephone is a
full-duplex device, because both parties can talk at once. In contrast, a
walkie-talkie is a half-duplex device because only one party can
transmit at a time.
E
entire copy
Copies all data in the primary volume to the secondary volume to make
sure that both volumes are identical.
extent
A contiguous area of storage in a computer file system that is reserved
for writing or storing a file.
F
failover
The automatic substitution of a functionally equivalent system
component for a failed one. The term failover is most often applied to
intelligent controllers connected to the same storage devices and host
computers. If one of the controllers fails, failover occurs, and the
survivor takes over its I/O load.
fallback
Refers to the process of restarting business operations at a local site
using the P-VOL. It takes place after the storage systems have been
recovered.
fault tolerance
A system with the ability to continue operating, possibly at a reduced
level, rather than failing completely, when some part of the system
fails.
FC
See fibre channel.
fibre channel
A gigabit-speed network technology primarily used for storage
networking.
firmware
Software embedded into a storage device. It may also be referred to as
Microcode.
full duplex
The concurrent transmission and the reception of data on a single link.
G
Gbps
Gigabit per second.
GUI
Graphical user interface.
I
I/O
Input/output.
initial copy
An initial copy operation involves copying all data in the primary
volume to the secondary volume prior to any update processing. Initial
copy is performed when a volume pair is created.
initiator ports
A port type used as the main control unit port for the fibre channel
remote copy function.
IOPS
I/O per second.
iSCSI
Internet-Small Computer Systems Interface, a TCP/IP protocol for
carrying SCSI commands over IP networks.
iSNS
Internet Storage Name Service, a protocol used for automated discovery,
management, and configuration of iSCSI devices on a TCP/IP network.
L
LAN
Local Area Network, a computer network that spans a relatively small
area, such as a single building or group of buildings.
load
In UNIX computing, the system load is a measure of the amount of
work that a computer system is doing.
logical
Describes a user's view of the way data or systems are organized. The
opposite of logical is physical, which refers to the real organization of a
system. A logical description of a file is that it is a quantity of data
collected together in one place. The file appears this way to users.
Physically, the elements of the file could live in segments across a disk.
logical unit
See logical unit number.
LU
Logical unit.
LUN
See logical unit number.
LUN Manager
This storage feature is operated through Storage Navigator Modular 2
software and manages access paths among host and logical units for
each port in your array.
M
metadata
The contextual information surrounding data. In sophisticated data
systems, the metadata is itself sophisticated, capable of answering many
questions that help in understanding the data.
microcode
The lowest-level instructions directly controlling a microprocessor.
Microcode is generally hardwired and cannot be modified. It is also
referred to as firmware embedded in a storage subsystem.
mount
To mount a device or a system means to make a storage device
available to a host or platform.
mount point
The location in your system where you mount your file systems or
devices. For a volume that is attached to an empty folder on an NTFS
file system volume, the empty folder is a mount point. In some systems
a mount point is simply a directory.
P
pair
Refers to two logical volumes that are associated with each other for
data management purposes (e.g., replication, migration). A pair is
usually composed of a primary or source volume and a secondary or
target volume as defined by the user.
pair splitting
The operation that splits a pair. When a pair is "Paired", all data written
to the primary volume is also copied to the secondary volume. When
the pair is "Split", the primary volume continues being updated, but
data in the secondary volume remains as it was at the time of the split,
until the pair is re-synchronized.
pair status
Internal status assigned to a volume pair before or after pair
operations. Pair status transitions occur when pair operations are
performed or as a result of failures. Pair statuses are used to monitor
copy operations and detect system failures.
paired volume
Two volumes that are paired in a disk array.
parity
The technique of checking whether data has been lost or corrupted
when it's transferred from one place to another, such as between
storage units or between computers. It is an error detection scheme
that uses an extra checking bit, called the parity bit, to allow the
receiver to verify that the data is error free. Parity data in a RAID array
is data stored on member disks that can be used for regenerating any
user data that becomes inaccessible.
parity groups
A RAID group can contain one or more parity groups; each parity group
acts as a partition of that container.
pool volume
Used to store backup versions of files, archive copies of files, and files
migrated from other storage.
P-VOL
See primary volume.
R
RAID
Redundant Array of Independent Disks, a disk array in which part of the
physical storage capacity is used to store redundant information about
user data stored on the remainder of the storage capacity. The
redundant information enables regeneration of user data in the event
that one of the array's member disks or the access path to it fails.
remote path
Also called the data path, the remote path is a link that connects ports
on the local storage system and the remote storage system. Two
remote paths must be set up for each AMS array (one path for each of
the two controllers built into the storage system).
resynchronization
Refers to the data copy operations performed between two volumes in
a pair to bring the volumes back into synchronization. The volumes in a
pair are synchronized when the data on the primary and secondary
volumes is identical.
RPO
See Recovery Point Objective.
RTO
See Recovery Time Objective.
S
SAS
Serial Attached SCSI, an evolution of parallel SCSI into a point-to-point
serial peripheral interface in which controllers are linked directly to disk
drives. SAS delivers improved performance over traditional SCSI
because SAS enables up to 128 devices of different sizes and types to
be connected simultaneously.
SATA
Serial ATA is a computer bus technology primarily designed for the
transfer of data to and from hard disks and optical drives. SATA is the
evolution of the legacy Advanced Technology Attachment (ATA)
interface from a parallel bus to serial connection architecture.
SMPL
Simplex.
snapshot
A term used to denote a copy of the data and data-file organization on
a node in a disk file system. A snapshot is a replica of the data as it
existed at a particular point in time.
SNM2
See Storage Navigator Modular 2.
suspended status
Occurs when the update operation is suspended while maintaining the
pair status. During suspended status, the differential data control for
the updated data is performed in the primary volume.
S-VOL
See secondary volume.
S-VOL determination
Independent of update operations, S-VOL determination replicates the
S-VOL on the remote storage system. This process occurs at the end of
each update cycle and a pre-determined copy of S-VOL data, consistent
with P-VOL data, is maintained on the remote site at all times.
T
target copy
A file, device, or any type of location to which data is moved or copied.
V
virtual volume (V-VOL)
In Copy-on-Write, a secondary volume in which a view of the primary
volume (P-VOL) is maintained as it existed at the time of the last
snapshot. The V-VOL contains no data but is composed of pointers to
data in the P-VOL and the data pool. The V-VOL appears as a full
volume copy to any secondary host.
volume
A disk array object that most closely resembles a physical disk from the
operating environment's viewpoint. The basic unit of storage as seen
from the host.
volume copy
Copies all data from the P-VOL to the S-VOL.
volume pair
Formed by pairing two logical data volumes. It typically consists of one
primary volume (P-VOL) on the local storage system and one
secondary volume (S-VOL) on the remote storage systems.
V-VOL
See virtual volume.
V-VOLTL
Virtual Volume Tape Library.
W
WMS
Workgroup Modular Storage.
write order guarantee
Ensures that data is updated in an S-VOL, in the same order that it is
updated in the P-VOL, particularly when there are multiple write
operations in one update cycle. This feature is critical to maintain data
consistency in the remote S-VOL and is implemented by inserting
sequence numbers in each update record. Update records are then
sorted in the cache within the remote system, to assure write
sequencing.
write workload
The amount of data written to a volume over a specified period of time.
Index
Copy Pace, specifying 7-5
create pair procedure 7-3, 7-4
CTG. See consistency group
cycle time, monitoring, changing in GUI 9-6
cycle time, setting up with CLI A-7
D
dark fibre E-1
Data path, planning 3-5
data path. See remote path
data pools
    deleting 9-9
    description 1-4
    expanding 9-5
    measuring workload for 2-3
    monitoring usage 9-4
    setting up with CLI A-6
    setting up with GUI 6-4
    shortage 10-2
    sizing 2-4
    Threshold field 6-5
data, measuring write-workload 2-3
deleting
    data pool 9-9
    DMLU 9-9
    remote path 9-9
    volume pair 9-8
designing the system 2-1
Differential Management Logical Unit. See DMLU
disaster recovery process 8-11
DMLU
    defining 6-4
    deleting 9-9
    description 1-7
    setting up with CLI A-5
dynamic disk with Windows Server 2000 4-4
dynamic disk with Windows Server 2003 4-6
E
enabling, disabling TCE 6-3
enabling, with CLI A-3
environment variable B-6
error codes, failure during resync 10-4
Event Log, using 10-6
expanding data pool size 9-5
extenders E-1
F
failback procedure 8-12
fibre channel remote path requirements and configurations 3-5
fibre channel, port transfer-rate 3-10
G
Group Name, adding 7-5
GUI, description 1-7
GUI, using to
    assign pairs to a consistency group 7-4
    create a pair 7-3
    delete a data pool 9-9
    delete a DMLU 9-9
    delete a pair 9-8
    delete a remote path 9-9
    install, enable/disable TCE 6-2
    monitor data pool usage 9-4
    monitor pair status 9-2
    resynchronize a pair 7-6
    set up data pool 6-5
    set up DMLU 6-4
    set up remote path 6-6
    split a pair 7-5
    swap a pair 7-7
H
horctakeover 8-11
host group, connecting to HP server 4-4
host recognition of P-VOL, S-VOL 4-3
host time-out recommendation 4-3
I
initial copy 7-2
installing TCE with CLI A-2
installing TCE with GUI 6-2
interfaces for TCE 1-7
iSCSI remote path requirements and configurations 3-11
L
LAN requirements 3-3
logical units, recommendations 4-3
M
maintaining local array, swapping I/O 8-9
MC/Service Guard 4-4
measuring write-workload 2-3
migrating volumes from earlier AMS models 4-2
monitoring
    data pool usage 9-4
    pair status 9-2
    remote path 9-6
moving data procedure 8-10
O
operating systems, restrictions with 4-3
operations 7-2
P
Pair Name field, differences on local, remote array 7-4
pair names and group names, Nav2 differences from CCI B-17
pairs
    assigning to a consistency group 7-4
    creating 7-3
    deleting 9-8
    description 1-3
    displaying status with CLI A-12
    monitoring status with GUI 9-2
    monitoring with CCI B-7
    recommendations 4-3
    recovering from Pool Full 10-2
    resynchronizing 7-6
    splitting 7-5
    status definitions 9-2
    swapping 7-7
planning
    arrays 4-2
    remote path 3-5
    TCE volumes 4-3
Planning the remote path 3-5
port transfer-rate 3-10
prerequisites for pair creation 7-2
R
RAID groups and volume pairs 4-3
recovering after array problems 10-3
recovering from failure during resync 10-4
release a command device, using CCI B-2
remote array, shutdown, TCE tasks 9-10
Remote path
    planning 3-5
remote path
    best practices 3-18
    deleting 9-9
    description 3-5
    monitoring 9-6
    planning 3-5
    preventing blockage 3-18
    requirements 3-5
    setup with CLI A-9
    setup with GUI 6-6
    supported configurations 3-5
Requirements
    bandwidth, for WANs 2-8
    LAN 3-3
requirements 5-2
response time for pairsplit B-15
resynchronization error codes 10-4
resynchronization errors, correcting 10-4
resynchronizing a pair 7-6
rolling averages, and cycle time 2-5
RPO, checking 9-8
RPO, update cycle 2-2
S
scripts for backups (CLI) 8-2, A-18
setting port transfer-rate 3-10
SnapShot
    behaviors vs TCE’s C-7
    cascading with C-2
    supported operations when cascaded C-4
specifications 5-2
split pair procedure 7-5
statuses, pair 9-2
statuses, supported for cascading C-4
supported capacity calculation 4-8
supported remote path configurations 3-5
S-VOL, backing up 8-2
S-VOL, updating 7-6
swapping pairs 7-7
T
takeover 8-11
TCE
    array combinations 4-2
    backing up the S-VOL 8-2
    behaviors vs SnapShot’s C-7
    calculating bandwidth 2-7
    changing bandwidth 9-6
    create pair procedure 7-4
    data pool
        description 1-4
        setup 6-4
        sizing 2-4
    environment 1-3
    how it works 1-2
    installing, enabling, disabling 6-2
    interface 1-7
    monitoring pair status 9-2
    operations 7-2
    operations before firmware updating 9-10
    pair recommendations 4-3
    procedure for moving data 8-10
    remote path configurations 3-5
    requirements 5-2
    setting up the remote path 6-6
    setup 6-4
    setup wizard 7-3
    splitting a pair 7-5
    supported operations when cascaded C-3
    typical environment 1-3
Threshold field, changing 9-5
threshold reached, consequences 6-5
U
uninstalling with CLI A-4
uninstalling with GUI 6-3
update cycle 1-2, 1-4, 2-2
    specifying cycle time 9-6
updating firmware, TCE tasks 9-10
updating the S-VOL 7-6
V
volume pair description 1-3
volume pairs, recommendations 4-3
W
WAN
    bandwidth requirements 2-8
    configurations supported 3-12
    general requirements 3-3
    types supported 3-3
WDM E-1
Windows Server 2000 restrictions 4-4
Windows Server 2003 restrictions 4-4
wizard, TCE setup 7-3
WOCs, configurations supported 3-14
write order 1-4
write-workload 2-3
Hitachi Data Systems
Corporate Headquarters
750 Central Expressway
Santa Clara, California 95050-2627
U.S.A.
Phone: 1 408 970 1000
www.hds.com
info@hds.com
Asia Pacific and Americas
750 Central Expressway
Santa Clara, California 95050-2627
U.S.A.
Phone: 1 408 970 1000
info@hds.com
Europe Headquarters
Sefton Park
Stoke Poges
Buckinghamshire SL2 4HD
United Kingdom
Phone: + 44 (0)1753 618000
info.eu@hds.com
MK-97DF8054-01