Migrating Data From HPE XP To HPE 3PAR
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and
services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR
12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed
to the U.S. Government under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is
not responsible for information outside the Hewlett Packard Enterprise website.
Acknowledgments
Intel®, Itanium®, Optane™, Pentium®, Xeon®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in the U.S. and other countries.
AMD and the AMD EPYC™ and combinations thereof are trademarks of Advanced Micro Devices, Inc.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
Revision history
Planning
HPE Storage Online Import Utility
Planning the migration
Accessing the Migration Host Support matrix on SPOCK
Migration utility requirements
Source storage system requirements
SMI-S Provider requirements
Multipathing requirements
Oracle RAC-based configurations
Destination storage system requirements
Migration requirements
SAN fabric requirements
Considerations when selecting storage objects for migration
Requirements for migrating multiple configurations
Data migration types
Online migration
Minimally Disruptive migration
Offline migration
Preparing
Preparing for the migration
Gather storage system information
Uninstalling the migration utility from a Windows system
Downloading the migration utility software
Installing the migration utility
Launching the migration utility console
Installing the XP SMI-S Provider
Creating an SMI-S Provider user for XP7 source systems
Adding the source storage system to the migration utility
Adding the destination storage system to the migration utility
Uninstalling vendor-specific multipath software
Reconfiguring the host multipath solution for MDM
Configuring multipath software on an HP-UX host
Configuring multipath software on an IBM AIX host
Setting up volumes on an IBM AIX host
Configuring multipath software on a Linux host
Configuring multipath software on a Windows Server host including Hyper-V configurations
Zoning the source storage system to the destination storage system
Migrating
Migration process
Consistency Groups
Data reduction
LUN conflicts
Prioritization
Performing online migration
Discovering a source storage system LUN with LUN ID 254 on a Linux host
Performing Minimally Disruptive migration
Enabling SCSI-3 persistent reservations for Veritas Storage Foundation
Performing Offline migration
Migrating data from an Oracle RAC cluster
Falling back to the source storage system after a failed or aborted migration on Oracle RAC clusters
Stopping host services
Stopping Oracle RAC cluster services
Stopping Oracle RAC database services
Stopping Veritas cluster services
Stopping cluster services on a Hyper-V running on a Windows Server
Bringing the host back online
Bringing Hyper-V clusters online
Bringing the IBM AIX host online
Bringing Linux hosts online
Bringing Windows hosts online
Aborting a migration
Post migration
Cleaning up after successful MDM and Online migrations
Performing postmigration tasks in VMware ESXi environments
Cleaning up after a successful Offline migration
Identifying and deleting source storage system LUN paths
Identifying and deleting source system LUN paths with VMware ESXi
Identifying and deleting source system LUN paths with Linux native device-mapper multipath
Identifying and deleting source storage system LUN paths with HP-UX 11 v3
Troubleshooting
Where are the migration utility log files?
How to increase logging details
The migration utility console does not open in Windows 7
Cannot add an HPE XP source storage system
Cannot admit or import a volume
Cannot log in to the migration utility
Cannot validate a security certificate
Clean up after the createmigration command fails
The createmigration command fails
The createmigration command fails with an error about peer port connectivity
The createmigration command returns error OIUERRDB1006
The adddestination command fails with error OIUERRDST00001
The createmigration command fails because of an error creating the host
The createmigration command returns error OIUERRPREP1023 for XP
The createmigration command returns error OIUERRPREP1027 migrating a 64 TiB LUN
The createmigration command fails with an error in the -srcvolmap parameter
The createmigration command with -hostset parameter returns error OIUERRDST0003
The startmigration command fails after a successful createmigration task
The startmigration command fails
The startmigration task fails without an error message
Trailing spaces in IP address return login error
Reference
The migration process
Examples using the consistency group parameters
Prioritization examples
Examples using data reduction settings
Examples using the autoresolve parameter
Rolling back to the original source storage system
Clearing a SCSI reservation with HPE 3PAR OS 3.2.1 MU3 or later
Websites
Support and other resources
Accessing Hewlett Packard Enterprise Support
Accessing updates
Remote support
Warranty information
Regulatory information
Documentation feedback
HPE Storage Online Import Utility
The HPE Storage Online Import Utility provides scripting commands allowing data migration with little or no disruption to hosts, host
clusters, or volumes being migrated.
1. Review the 3PAR Online Import for XP Storage - Migration Host Support matrix.
Review Guidelines for rolling back to the original source storage system.
9. If possible, plan migrations during off-peak hours. Migrate hosts with the least data first to reduce performance impacts.
TIP:
If you do not have an HPE Passport account, create an account from the SPOCK login page.
2. In the left navigation pane, scroll to Software, and then click Array SW: 3PAR .
3. Select 3PAR Online Import for XP Storage - Migration Host Support matrix under 3PAR Federation Technologies.
Migrating a server or cluster that accesses LUNs from multiple source systems has the following host group requirements:
The host group name on each source storage system must match.
The host group entry on each source storage system must contain the same HBA WWPNs.
The migration utility communicates with all supported XP systems using SMI-S Provider software.
HPE XP7 Storage—SMI-S Provider is integrated in the Service Processor.
Other XP storage systems—SMI-S Provider is embedded in HPE XP Command View Advanced Edition (CVAE).
On the server where CVAE is installed, make sure that the TCP and UDP ports used by CVAE are not in use by other applications. For
more information, see the HPE XP Command View Advanced Edition Installation and Configuration Guide on
www.hpe.com/support/hpesc.
License requirements:
The XP CVAE CLI/SMI-S license
Multipathing requirements
See the 3PAR Online Import for XP Storage - Migration Host Support matrix for supported multipath I/O (MPIO) software.
Review the appropriate host implementation guide for complete configuration instructions for your environment. See
https://www.hpe.com/support/hpesc.
Before migrating a SAN-based Oracle RAC cluster, understand the distribution of Oracle RAC cluster registry (CRS), voting disks,
and data disks across the source storage system.
HPE XP storage systems do not support simultaneous migrations from two or more HPE XP storage systems to one destination
storage system (N:1 setup). Instead, perform multiple migrations serially to transfer data from multiple source storage systems to a
single destination storage system. For more information, see Data migration for an Oracle RAC cluster use case .
You can migrate Oracle RAC disks distributed over the HPE 3PAR storage system and HPE XP storage system.
For Automatic Storage Management (ASM) based Oracle RAC configurations, modify the persistent device or partition names used by
ASMlib to label ASM disks.
With vendor-specific multipath software, rename the HPE XP storage system devices as /dev/sddlm*.
With Linux native device-mapper multipath software, rename the devices as /dev/mapper/mpath*.
To determine whether the current ASMlib uses vendor-specific multipath-based names, issue the following Oracle ASM CLI
command for the HPE XP Storage: # oracleasm querydisk -p /dev/sddlm*
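To run the same check against Linux native device-mapper names, point the query at the dm-multipath nodes instead. A minimal sketch using the device names given above:
# oracleasm querydisk -p /dev/sddlm*
# oracleasm querydisk -p /dev/mapper/mpath*
The devices that carry ASM labels are reported, showing whether vendor-specific or native multipath names are in use.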
See the 3PAR Online Import for XP Storage - Migration Host Support matrix for supported HPE 3PAR models and HPE 3PAR OS
versions.
The destination system has a valid Online Import license installed.
Destination storage system names are not the same as host names.
There must be adequate storage for the data being migrated. The provisioning type used on the destination impacts the capacity
needed for migrating storage objects.
Deduplicated volumes are supported on a destination storage system with HPE 3PAR OS 3.2.1 MU2 or later.
Compressed VVs are supported on a destination storage system with HPE 3PAR OS 3.3.1 or later.
Migration requirements
Storage objects include volumes and hosts. When selecting storage objects for migration:
Each migration definition supports up to 255 storage objects (volumes).
The names of volumes selected for migration on the source must not exist on the destination.
Do not simultaneously specify a host group and LDEV in the same migration task.
You can migrate data from up to four source storage systems to destination systems running HPE 3PAR OS 3.2.1 and later.
Two FC host ports on the source system for building the peer links.
FC port speed: The speeds of the FC ports do not have to be the same.
At least one FC switch between the source and destination is supported. Using two switches adds redundancy and is recommended.
The FC fabric between the source and the destination is NPIV capable. NPIV is enabled on the peer ports on the SAN switches.
When creating migration tasks, the migration utility identifies relationships between initiator groups and presented volumes. As you
select objects for migration, these relationships can cause additional storage objects to be included in the migration. This function is
automatic and cannot be modified.
NOTE:
If the total number of volumes selected for migration exceeds 255, the migration task will fail.
Selecting a host group or set of host groups with logical device (LDEV) presentations: All LDEVs presented to one or more of the
selected host groups are migrated. Presentations that the source LDEVs have to other host groups will include those host groups
and all their presented LDEVs in the migration.
Presented LDEVs
Selecting an LDEV or group of LDEVs with host group presentations: All selected LDEVs, and all other LDEVs presented to the
same host group are migrated. Also included are any presentations that the source host groups have with other LDEVs.
Unpresented LDEVs
Only the selected LDEVs are migrated offline; no additional LDEVs are included.
TIP:
On the HPE 3PAR CLI, you can use the showvlun command to find host and volume relationships.
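For example, to list the VLUNs exported to a particular host (a sketch; the host name is a placeholder):
cli% showvlun -host myhost
The output lists each exported volume with its LUN ID and host, which helps predict the additional objects that a migration selection will pull in.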
Online migration
Online Migration moves data from the source to the destination without causing disruption to I/O on the host. All presentation
relationships between hosts and volumes are maintained during the migration. Disable host T10 DIF.
NOTE:
Ensure that WWNs on the host are the same on the source and destination systems.
Offline migration
Migrate volumes or volume sets offline. Host definitions are not created on the destination storage system. This migration type is
available for all supported host operating systems. Offline migration does not export virtual volumes during the migration. During the
migration process, you must either shut down the hosts or take the migrating volumes offline.
NOTE:
Ensure that the migrating volumes or volume sets are not exported to a host.
For HPE XP7 Storage: Create an SMI-S Provider user for XP7 source systems
8. Set up zoning
a. DNS name or IP address for the HPE XP7 Storage Service Processor (SVP).
b. Credentials for an account with Administrator privilege on the HPE Command View Advanced Edition (CVAE) server.
c. Credentials for an account with Administrator privilege to the Service Processor (SVP).
Uninstall any previous versions of the migration utility before installing a new version.
IMPORTANT:
Removing the migration utility deletes previous migration definitions and settings.
Procedure
1. On the Windows computer where an old version of the migration utility is installed: In the Control Panel, select Programs >
Programs and Features > Uninstall a Program.
2. Locate the HPE Storage Online Import Utility and then click Uninstall.
TIP:
If you do not have an HPE Passport account, create an account from the My HPE Software Center main page.
The client and server are installed by default on the Windows computer.
Prerequisites
Uninstalled prior versions of the migration utility.
Procedure
1. Navigate to the ISO file on the Windows computer.
3. Make sure that TCP ports 2390 and 2388 are available. On the Windows command line, enter:
When the specified ports are not available, the results appear similar to the following:
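One way to check, using the standard Windows netstat and findstr tools (a sketch; the output line is illustrative of a port that is already in use):
C:\> netstat -ano | findstr ":2390 :2388"
TCP    0.0.0.0:2390    0.0.0.0:0    LISTENING    4312
If neither port appears in the output, both ports are free.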
4. Launch the install wizard by double-clicking the executable file. Client and server are installed by default.
To use an existing CA-signed certificate, click Yes. To allow the installer to generate a new self-signed certificate, click No.
5. If one of the default TCP ports is busy during installation, a message displays and the installer prompts you to enter a free port.
6. (Optional) At the end of installation, select the Show the Windows Installer log check box to review log details.
7. If you assigned a new TCP port number, update the HPE Storage Online Import Utility startup file: OIUCli.bat in the default
folder: C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\CLI .
8. The install wizard creates two Windows user groups. Add local or domain users to these groups so that they can sign in to the
migration utility with access as follows:
HPE Storage Migration Admins—Can perform all data migration tasks. Add the Windows Administrator to this group.
HPE Storage Migration Users—Can view data migration information, such as the output of show* commands.
9. To launch the migration utility console: On the Windows desktop, double-click the migration utility icon.
2. Enter the DNS name or IP address of the system where the HPE Storage Online Import Utility server is installed.
Enter localhost if the migration utility client and server are installed on the same computer.
1. Install a supported version of HPE XP Command View Advanced Edition (CVAE) and accept the default settings.
a. HPE Command View Advanced Edition > Login - Command View AE > License
b. Add the XP CVAE CLI/SMI-S or Device Manager license, and then click Save.
c. Verify that the license was added: HPE Command View Advanced Edition > Login - Command View AE > License.
3. If the Device Manager license was added and verified: Log on and add the XP source to the Device Manager database. Stop; the installation is complete.
4. If the XP CVAE CLI/SMI-S license was added and verified: Log in to the XP CVAE.
5. To download the XP CVAE CLI application, click CV XP AE > Tools > Download.
6. In the Device Manager Software Deployment dialog box, select Device Manager Command Line Interface (CLI) Application and then
download the Windows edition.
7. Move the downloaded file to a directory of your choice, and then double-click the filename. Follow the prompts to install the Device
Manager Command Line Interface (CLI) Application.
8. To customize the interaction with the XP CVAE CLI, edit hdvmcli.properties. The options set in
hdvmcli.properties are applied each time you execute the hdvmcli.bat file.
a. Open a Windows command prompt and change the path to the location where the XP CVAE CLI was installed.
Example:
ipaddress
The IP address of the SVP (service processor) of the source storage system.
userid
The user ID for access to the XP Remote Web Console storage system running on the source storage system.
arraypasswd
The password to access the XP Remote Web Console of the source storage system.
NOTE:
Verify that the SVP is in View mode before adding the source storage system. The process that adds the storage
system can take up to 5 minutes.
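These values are supplied when you register the source system. A sketch, assuming the parameter names above map directly to addsource options; check the utility's CLI help for the exact syntax, and treat every value here as a placeholder:
addsource -ipaddress 192.0.2.10 -userid storageadmin -arraypasswd <password>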
Perform the following before adding HPE XP7 Storage source systems to the migration utility.
Prerequisites
Logged into the HPE XP7 Storage Service Processor (SVP) with administrator privileges.
Procedure
Create a new user in the Storage Administrator User group: Administration > User Groups > Storage Administrator (View and Modify)
User Group.
NOTE:
Use the DNS name or IP address of the XP CVAE for -mgmtip.
showsource
Example output:
More information
Launching the migration utility console
Gather storage system information
showdestination
TIP:
Removing the vendor-specific multipath software requires a reboot before taking effect.
For environments that reference devices directly in custom scripts or /etc/fstab, use one of the following:
blkid/UUID
Windows Servers—Removing proprietary host multipath software from a Windows Server might disable the multipath I/O (MPIO)
installation without removing it. Manually check to confirm. If the native Microsoft MPIO is still installed, remove it and then restart the
host.
Procedure
1. If applicable, uninstall the proprietary multipath I/O (MPIO) software from the host according to the vendor documentation.
2. For LUNs on the source host selected for migration, make sure that valid paths exist for two or four controller nodes.
More information
Uninstalling vendor-specific multipath software
For information about supported HBA drivers on the destination, see SPOCK.
For information, see the HPE 3PAR HP-UX Implementation Guide at https://www.hpe.com/info/EIL.
To configure multipath software for an IBM AIX host, see the HPE 3PAR AIX and IBM Virtual I/O Server Implementation Guide at
https://www.hpe.com/info/EIL.
Procedure
1. Get the volume group-to-PVID mappings:
lspv
2. Export the volume groups:
exportvg
3. If the host is a member of a cluster, clear all SCSI reservations on the cluster disks.
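A sketch of steps 1 and 2, assuming a single volume group AIX1VG on hdisk2; exportvg removes the volume group definition from the host while the data and PVID remain on disk:
# lspv
hdisk2 00f825bd5dfc80c6 AIX1VG active
# varyoffvg AIX1VG
# exportvg AIX1VG
Record the PVID shown by lspv; it is used to re-import the volume group after migration.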
2. If required, upgrade host HBA drivers. For information about supported HPE 3PAR drivers, see SPOCK.
3. Register HPE 3PAR LUN types with Device Mapper (DM) by whitelisting the HPE 3PAR-specific information (the vendor is
3PARdata and the product is VV) in /etc/multipath.conf .
For more information about the Linux multipath configuration, see the HPE 3PAR Red Hat and Oracle Linux Implementation Guide
at https://www.hpe.com/info/EIL.
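A minimal sketch of the whitelist stanza, using the vendor and product strings named above; the complete set of recommended settings for your distribution is in the implementation guide:
# /etc/multipath.conf (excerpt)
devices {
    device {
        vendor "3PARdata"
        product "VV"
    }
}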
5. Modify the /etc/fstab for new mount points. If the migrating LUN alias was not created in the multipath.conf file,
base the new mount points on discovered LUNs.
For information about supported HPE 3PAR HBA drivers, see SPOCK.
NOTE:
Do not remove zoning between the source and destination storage systems until the migration operation is complete.
Procedure
cli% showport
NOTE:
The WWN of a host port changes when it is set to become a peer port. Make sure that you are using the new WWN
in the zoning.
b. Set the port connection type to point and set the mode to peer:
Example:
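A sketch using the HPE 3PAR CLI, assuming the chosen port is 1:2:1; the port must be taken offline before its configuration is changed:
cli% controlport offline 1:2:1
cli% controlport config peer -ct point 1:2:1
cli% controlport rst 1:2:1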
3. Set up ports on the source storage system. Select two host ports for communication and data transfer to the destination. Cable
each host port to a SAN switch. These host ports can also be used for host connectivity.
Choose the two host ports on the source storage system from different XP power domains.
4. Zone the source to the destination. On each source system, zone one host port to one peer port on the destination.
NOTE:
You can zone multiple source systems to the same pair of peer ports on the destination. Configure multiple peer
zones with each zone containing only two ports, one from a source system and the peer port configured on the
destination storage system.
5. Make sure that the source system appears as a device that is connected to both peer ports of the destination.
Example:
6. In the migration utility console, verify that the communication between the source and destination is properly configured.
Example:
cli% showconnection
7. For MDM and online migrations only, zone hosts to the destination storage system. Use the HPE 3PAR SSMC to verify that the host
on the source has paths to as many destination controller nodes as are zoned in the SAN.
Migration process
Procedure
Consistency groups
Prioritization
Data reduction
LUN conflicts
2. Perform a migration:
Consistency Groups
An HPE Consistency Group is an optional feature that allows you to migrate an application's dependent volumes consistently.
For consistent imports, limit the number of volumes in a consistency group to 60. Limit the volumes to only include the ones that
must remain consistent.
To avoid long switchover times at the end of imports, limit the total volume size in a consistency group to 120 TB.
Data reduction
You can apply data reduction settings on the destination storage system for migrating volumes. Requirements and limitations for
volume compression during data migration include the following:
Supported with HPE 3PAR destination systems and HPE Storage Online Import Utility 2.4.
The source volume must be less than 16 TiB to be migrated as a compressed volume.
NOTE:
You can compress volumes selected for migration or all volumes that meet compression requirements.
LUN conflicts
Migrating storage objects from multiple source storage systems to multiple destination storage systems can cause conflicts. Conflicts
can occur if the LUN ID from migrating volumes already exists on the destination host. LUN conflicts will cause the migration to fail.
When a conflict exists, the showmigration command shows STATUS output similar to the following example:
preparationfailed(-NA-) (OIUERRPREP1021: Lun number conflict exists for LUN# <x> <y> ..., while presenting to hosts <host1>)
By default, the migration utility resolves LUN conflicts that occur during createmigration . You can override this behavior by
setting -autoresolve false , which allows the createmigration command to fail.
Prioritization
Set migration priority across volumes using the optional priority parameter of the createmigration command. The
priority levels are low , medium (default), and high .
Use the priority parameter as follows:
You can set a priority on volumes or volume sets. The priority setting on a volume takes precedence over the priority setting
on a volume set.
1. In the migration utility console, verify that there is a connection between the source and destination storage systems:
showconnection
Example:
IMPORTANT:
For non-XP7 systems, specify the 5-digit serial number for -sourceuid .
For XP7 systems, specify the 10-digit serial number for -sourceuid .
Example output:
For Oracle RAC—Execute multiple migrations if you want to transfer all Oracle-based disks from multiple source systems to a single
destination storage system.
showmigration or showmigrationdetails
Example:
showmigration
MIGRATIONID TYPE SOURCE_NAME DESTINATION_NAME START_TIME
1394789473527 online <source_name> <destination_name> Thu Sep 25 15:23:50 EDT 2014
When the STATUS column of the showmigration output indicates preparationcomplete(100%) , continue to the
next step.
4. From either the HPE 3PAR SSMC or the HPE 3PAR OS, verify migration paths.
5. Update the path configuration on the host by rescanning all HBAs, and verify the newly discovered paths. After rescanning, the
multipath subsystem on the host recognizes the extra paths to the storage system.
On HP-UX hosts:
ioscan -f
On Linux hosts:
# ls /sys/class/fc_host
host4 host5
# echo "1" > /sys/class/fc_host/host4/issue_lip
# echo "- - -" > /sys/class/scsi_host/host4/scan
# echo "1" > /sys/class/fc_host/host5/issue_lip
# echo "- - -" > /sys/class/scsi_host/host5/scan
# multipath -ll
mpath2 (360060e80045be50000005be500001249) dm-3 HP,OPEN-V
[size=6.8G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 4:0:0:1 sdb 8:16 [active][ready]
NOTE:
If paths were not discovered, stop. Enter the multipath -v2 command to troubleshoot and investigate
potential multipath issues. Correct issues and rescan before proceeding to the next step.
6. On the destination HPE 3PAR OS, verify that there is traffic over all paths that connect the host and the source storage system:
Example:
Example:
END_TIME STATUS(PROGRESS)(MESSAGE)
-NA- success(-NA-) (-NA-)
More information
Preparing for the migration
Discovering a source storage system LUN with LUN ID 254 on a Linux host
Aborting a migration
Discovering a source storage system LUN with LUN ID 254 on a Linux host
To discover the source storage system LUN with LUN ID 254 on a Linux host:
NOTE:
The output of the commands in the following procedure is an example. Following this procedure allows online migration
for LUNs with LUN ID 254 with supported Linux configurations. For a cluster configuration, perform this procedure on
each node.
Procedure
2. To find the name of the host devices and rescan their HBAs, issue the following command after the createmigration
operation is complete:
# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host3/scan
The output shows two of the sg devices that are tagged as VV instead of SES.
4. To verify that the LUN with LUN ID 254 is not yet listed, issue the multipath -ll command on the Linux server.
To discover the LUN with LUN ID 254, delete old devices and then rescan the HBAs again as follows:
5. To delete old devices that are listed as VV, issue the following command:
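For example, assuming the stale VV-tagged paths in your output were sdb and sdc (substitute the device names reported on your host):
# echo "1" > /sys/block/sdb/device/delete
# echo "1" > /sys/block/sdc/device/delete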
6. Rescan the HBAs:
# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host3/scan
7. To verify that the discovered LUN with LUN ID 254 is now listed, issue the multipath -ll command on the Linux server:
Results show that you can now migrate the LUN using the online procedure.
More information
Preparing for the migration
Discovering a source storage system LUN with LUN ID 254 on a Linux host
Aborting a migration
Prerequisites
Completed Preparing for the migration
Procedure
1. In the migration utility console, verify that there is a connection between the source and destination storage systems:
showconnection
Example:
IMPORTANT:
For non-XP7 systems, specify the 5-digit serial number for -sourceuid .
For XP7 systems, specify the 10-digit serial number for -sourceuid .
Example output:
showmigration or showmigrationdetails
Example:
When the STATUS column of the showmigration output indicates preparationcomplete(100%) , continue to the
next step.
4. Shut down the host and leave it offline until after the migration starts.
8. Monitor the migration progress. Initially, the PROGRESS of each volume is 0% and the TASK_ID is unknown . The
TASK_ID and PROGRESS fields update as the task executes.
NOTE:
Do not bring hosts back online while TASK_ID is unknown .
Example:
When all objects have migrated successfully, the PROGRESS column shows Completed .
More information
Enabling SCSI-3 persistent reservations for Veritas Storage Foundation
Bringing the host back online
When an LDEV on the source XP storage system is migrated, a SCSI-3 reservation prevents unwanted management changes. This reservation is
automatically removed after the migration completes successfully.
Procedure
1. In the Veritas Enterprise Administrator, right-click DMP DSMs and then select DSM Configuration.
4. Select SCSI-3 support for SCSI settings, and then click OK.
Prerequisites
Completed Preparing for the migration
Procedure
1. In the migration utility console, verify that there is a connection between the source and destination storage systems:
showconnection
Example:
IMPORTANT:
For non-XP7 systems, specify the five-digit serial number for -sourceuid .
For XP7 systems, specify the 10-digit serial number for -sourceuid .
Example output:
showmigration or showmigrationdetails
Example:
When the STATUS column of the showmigration output indicates preparationcomplete(100%) , continue to the
next step.
NOTE:
Offline migration does not create the host definition on the destination storage system. The storage administrator
performs this step manually as a post-migration task.
Example:
When all objects have migrated successfully, the PROGRESS column shows Completed .
The STATUS column indicates Success when all volumes or LDEVs have migrated successfully.
This use case migrates data from an Oracle RAC cluster configured with the cluster registry (CRS), voting disks, and data disks
distributed across multiple storage systems belonging to the same storage vendor.
Oracle supports increasing database capacity by adding storage systems. If Automatic Storage Management (ASM) is enabled, disks in
an ASM disk group that come from different storage systems must all have the same performance characteristics and size. Data
disks can be distributed across multiple storage systems. The CRS and voting disks are used for cluster configuration and integrity,
and they do not need to be configured on every storage system.
Data migration using OIU for Oracle RAC clusters is documented here for two specific configuration scenarios:
Oracle Database deployments before 11gR2, with the CRS and voting disks residing outside the ASM, and data disks included in the
ASM disk group.
Oracle Database 11gR2 deployments with the CRS, voting disks, and data disks included in the ASM disk group.
You can use OIU to migrate data from one source storage system at a time, and in an Oracle RAC migration, you must include all
volumes in a consistency group. The number of migrations made by OIU for an Oracle RAC configuration distributed across multiple
source storage systems is equal to the number of source storage systems deployed. For example, if the Oracle RAC database is
distributed across three storage systems (with the CRS and voting disks configured on one of them), perform three OIU migrations.
The order of OIU migrations does not matter. In this use case, the following sequences were verified:
Completely migrate the source storage systems with the CRS and voting disks and then migrate the source storage systems with
the data disks.
Completely migrate the source storage systems with the data disks and then migrate the source storage system with the CRS and
voting disks.
IMPORTANT:
Migrations are performed using the online migration procedure described in this guide. During migration, there are
instances when the Oracle RAC is distributed, and the CRS, voting, and data disks coexist on the deployed source and
destination storage systems. Coexistence is supported across the destination storage system and the source storage
system as shown in the support matrix.
The figure shows two source storage systems—one with CRS, voting, and data disks, and the other with data disks only.
The deployment uses Oracle Database 11gR2 RAC with ASM enabled, with the CRS, voting disks, and data disks on both source storage
systems included in the ASM. The implementation is not affected even if the CRS and voting disks are excluded from the ASM (as
required by Oracle Database releases earlier than 11gR2).
There are two source storage systems: source storage system 1 and source storage system 2.
NOTE:
More than two nodes in a cluster or more than two source storage systems are supported for this implementation.
This example shows two source storage systems and two phases (two OIU migrations) for a complete migration:
Migration 1: The CRS, voting disks, and data disks are migrated from source storage system 1 to the destination storage system.
Migration 2: Data disks from source storage system 2 are migrated to the destination storage system.
NOTE:
The order of operation is not important—you can reverse migration 1 and migration 2.
Falling back to the source storage system after a failed or aborted migration on Oracle
RAC clusters
After a migration fails or is aborted, stop all applications performing I/O operations on the LUNs that were being migrated.
Procedure
1. From any one of the cluster nodes, stop all the databases by issuing:
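For example, with Oracle srvctl (a sketch; run as the Oracle software owner, and substitute your database unique name):
$ srvctl stop database -d orcl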
2. (Optional) On the HPE storage system, clear the reservation from all the LUNs that are part of the failed migration:
3. Verify that all the database instances are offline. On all nodes:
# $GRID_HOME/bin/crs_stat -t
4. Zone the source storage system back to the Oracle cluster nodes.
5. Present the volume back to the host from the source storage system by rescanning for new data paths to the LUN.
NOTE:
Migrations that fail in the import phase require a manual cleanup of the source and destination storage systems.
8. Verify that migrating VVs are removed from the source HPE storage system. Manually remove the migrating VVs:
# removevv -f <vvname>
9. (Optional) Unmask volumes from the HPE peer host on the source storage system.
10. Rescan the host and verify that it does not see paths from the HPE storage system.
# $GRID_HOME/bin/crs_stat -t
CRS-0184: Cannot communicate with CRS Daemon
/etc/init.d/oracleasm stop
Procedure
/opt/VRTSvcs/bin/hastop -all
attempting to connect....
VCS ERROR V-16-1-10600 Cannot connect to VCS engine
attempting to connect....not available; will retry
3. The following steps verify that reservations are clear after stopping a cluster:
b. Verify that the reservation keys are clear: vxfenadm -s all -f tmpfile
The reservation keys are clear when each data LUN in the tmpfile indicates No keys .
For an active/passive cluster running Windows Server 2008 or Windows Server 2008 R2: Disable the cluster to release SCSI
reservations:
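One way (a sketch) is to stop the Cluster service on each node from an elevated command prompt; Failover Cluster Manager can also be used:
C:\> net stop clussvc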
NOTE:
Do not bring hosts back online before a TASK_ID is assigned.
Prerequisites
Verify that a TASK_ID was issued by the migration utility.
Procedure
1. Reconfigure the HBA BIOS during host startup, and then select the HPE storage system boot device.
2. For hosts booting over SAN from the source storage system—Reconfigure the HBA BIOS during host startup to select the HPE 3PAR
boot device.
Hyper-V clusters
3. Scan for newly exported LUNs from the destination storage system:
Linux hosts
4. If applicable, restart the cluster, applications, and services. The host can resume normal operations.
1. To rescan the host for disks with the storage system signature:
cfgmgr
2. Verify that the host recognizes the volumes on the storage system:
lsdev
Example:
3. Using the volume group to PVID-mapping information, import the volume groups.
If a volume group is mapped to multiple physical disks, specify one disk from the list. The importvg command automatically
identifies the remaining disks that are mapped to the volume group.
# lspv
hdisk0 00f825bdd5f7e96e rootvg active
hdisk1 00f825bd4a05917b None
hdisk2 00f825bd5dfc80c6 AIX1VG active
hdisk3 00f825bd5dfc82c1 AIX2VG active
Note the PVIDs associated with the volume groups AIX1VG and AIX2VG . After zoning the host to the storage system, rescan the
disks. The resulting output from the lspv command shows:
Import the volume groups by specifying the corresponding disks with the same PVIDs. In the example, the PVID identifies the physical disk
on which the volume group was created:
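A sketch of the import, assuming the PVIDs recorded earlier now appear on hdisk2 and hdisk3:
# importvg -y AIX1VG hdisk2
# importvg -y AIX2VG hdisk3
The importvg command reads the volume group metadata from the named disk and automatically picks up any other disks that belong to the same volume group.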
For applications using XP storage system LDEVs in raw format, reconfigure to point to the corresponding storage system volumes
that have the same PVID.
4. On the host, verify that all disks exported from the destination have the expected number of paths:
lspath
# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "- - -" > /sys/class/scsi_host/host3/scan
1. Open the DiskPart utility:
diskpart.exe
2. Depending on the Windows SAN policy, the disks that were migrated are either offline or online.
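A sketch of bringing a migrated disk online with DiskPart; the disk number is a placeholder, so identify the correct disk from list disk first:
DISKPART> san
DISKPART> list disk
DISKPART> select disk 1
DISKPART> attributes disk clear readonly
DISKPART> online disk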
Aborting a migration
NOTE:
You cannot abort a migration after issuing the startmigration command.
Prerequisites
Verified that all migration tasks from the source storage system completed successfully.
Verified that all volumes migrated successfully to the destination storage system.
All applications started and work correctly from the destination storage system.
Procedure
1. For each completed migration task:
Example:
Example:
Example:
4. Remove zoning between the source storage system and the destination storage system.
5. Reconfigure the peer ports into host ports on the destination system.
7. If needed, schedule a time to resignature the migrated VMware disks before rebooting cluster nodes.
8. (Optional) The WWN of a migrated volume is the one it had on the source system. To change the WWN to the schema used on the
destination system:
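A sketch using the HPE 3PAR CLI, assuming the volume is named vol1; the -wwn auto option regenerates the WWN using the destination system's own schema:
cli% setvv -wwn auto vol1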
10. In Windows environments, return the Path Verify Enabled MPIO to the previous setting.
NOTE:
For storage systems in an HPE 3PAR Peer Persistence relationship, do not disable Path Verify Enabled
MPIO .
11. (Optional) Expand exported volumes to the next 256 MB boundary from the HPE 3PAR OS:
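A sketch using the HPE 3PAR CLI growvv command; the volume name is a placeholder, and the size is given in MB by default:
cli% growvv -f vol1 256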
More information
After rebooting, remove and then add RDM devices to virtual machines. See the VMware KB article 1016210 from the VMware
Knowledge Base.
After rebooting, the ESXi host might not automatically mount VMFS data stores. If the data store is not accessible by the ESXi host,
see the VMware KB article 1011387 .
After all data stores are mounted and resignatured, they mount automatically after rebooting. Extra steps are required for updating
references to the original signature in virtual machine files. For more information, see Managing duplicate VMFS datastores in the
vSphere 5 documentation center.
Prerequisites
Verified that all volumes migrated successfully to the destination storage system.
All applications started and work correctly from the destination storage system.
Procedure
2. The WWN of a migrated volume is the one it had on the source system. To change the WWN to the schema used on the destination
system:
3. Export the LUNs to the host on the destination storage system. From HPE 3PAR SSMC:
a. Create a host entry on the destination storage system (if it does not exist).
b. Set the HPE storage system host persona/host OS to the appropriate value.
c. Export migrated LUNs to the newly created host. For more information, see the HPE 3PAR implementation guides for your
specific host and the HPE 3PAR Host Explorer User Guide at the HPE Information Library.
Identifying and deleting source system LUN paths with VMware ESXi
Identifying and deleting source system LUN paths with Linux native device-mapper multipath
Identifying and deleting source storage system LUN paths with HP-UX 11 v3
Identifying and deleting source system LUN paths with VMware ESXi
After the createmigration task completes successfully, log on to the ESXi host and rescan HBAs.
Procedure
1. From the CLI, issue the following command to rescan HBAs in the host:
# esxcfg-rescan vmhba3
# esxcfg-rescan vmhba2
The output shows LUNs with their source and destination system paths.
2. To list all LUNs and their corresponding paths, issue the following command:
# esxcfg-mpath -b
3. Remove the source storage system from the host zone. The host will show the status of the source path as Target:
Unavailable .
# esxcfg-mpath -b
# esxcfg-rescan vmhba3
# esxcfg-rescan vmhba2
# esxcfg-mpath -b
Identifying and deleting source system LUN paths with Linux native device-mapper
multipath
After the createmigration task completes successfully, rescan the HBAs on a Linux host.
Procedure
Rescanning HBAs
# ls /sys/class/fc_host
host4 host5
# echo "1" > /sys/class/fc_host/host4/issue_lip
# echo "1" > /sys/class/fc_host/host5/issue_lip
# echo "- - -" > /sys/class/scsi_host/host4/scan
# echo "- - -" > /sys/class/scsi_host/host5/scan
# multipath -ll
Rescanning HBAs and listing the updated multipath mapping for SUSE
# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host3/scan
# multipath -ll
2. For each LUN being migrated, identify the LUN and its WWN in the output. For example, migrating a LUN with WWN
360060e80045be50000005be500001424 and four paths, sde , sdg , sdp , and sdl :
a. To identify the paths associated with the source storage system, issue the following command on each associated path:
# cat /sys/block/sde/device/model
OPEN-E
# cat /sys/block/sdg/device/model
OPEN-E
# cat /sys/block/sdp/device/model
VV
# cat /sys/block/sdl/device/model
VV
The output shows that paths sde and sdg belong to the source storage system. Paths sdp and sdl belong to the destination storage system.
b. To delete paths from the operating system, enter the following command:
# echo "1" > /sys/block/sde/device/delete
# echo "1" > /sys/block/sdg/device/delete
Identifying and deleting source storage system LUN paths with HP-UX 11 v3
After the createmigration task completes successfully, remove zoning between the source storage system and the host.
NOTE:
With legacy DSF paths, before continuing, clean up stale paths from the volume group using the pvchange(1M)
and vgreduce(1M) commands.
Procedure
1. Display the current I/O paths on the host:
# ioscan -fnN
2. To remove the paths from the host to the source, enter rmsf -H as follows:
# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4000000000000000
# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4001000000000000
# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4002000000000000
# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4003000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4000000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4001000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4002000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4003000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x0
# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933
3. To verify that all paths from the host to the source shown in the NO_HW state were removed, enter ioscan -fnN as follows:
# ioscan -fnN
NOTE:
Stop the migration utility service before retrieving log files.
Action
1. In the log4j properties file, change:
log4j.rootCategory=INFO, DebugLogAppender
to:
log4j.rootCategory=ALL, DebugLogAppender
2. Run the OIUCli.bat file from the default location or from the directory where you installed the migration utility.
Default location: C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\CLI
An invalid source storage system UID is specified when executing the addsource command.
The HPE XP source storage system is in a failed stage within the XP CVAE.
The XP CVAE has client IP address filtering enabled and the OIU server is not on the trusted IP address list.
The OIU server and the XP CVAE do not have network connectivity.
The XP CVAE and the source storage system do not have network connectivity.
Action
Check each of the preceding conditions to identify which is causing the problem, and then resolve it.
Solution 1
Action
1. For MDM and Offline migration, verify that the destination storage system has no active paths. Check the host information in the
HPE 3PAR SSMC, or use the HPE 3PAR CLI showhost command.
2. To determine where the operation failed, monitor the migration task by checking the task screen or by using the HPE 3PAR CLI:
showtask
Solution 2
Cause
The task fails after volumes are admitted on the destination storage system but before the volume import stage. You can manually
return the system to the pre-admit state. This process is non-disruptive to the hosts, provided that the appropriate zoning and host
multipathing are re-established. The host must have access to the volume through the source system. For single-volume migrations,
removing zoning is not required.
Action
To return the system to its state before an unsuccessful volume admit:
1. On the fabric and host—If zoning was removed between the host and the source system, re-zone, and then confirm that I/O is
directed to the source system.
2. On the fabric and host—Remove zoning between the host and the destination storage system. Verify that all access to the
volumes is through the source system.
3. On the destination storage system—Remove the VLUNs on the destination storage system for the peer volumes exported to the
hosts.
4. On the destination storage system—Remove the peer volumes from the destination storage system.
5. On the destination storage system—When no volumes were exported from the destination storage system to the host, remove the
host from the destination storage system.
6. On the source storage system—Remove the VLUN exports to the host representing the destination storage system from the
source storage system.
7. On the source storage system—Remove the host representing the destination storage system from the source storage system.
Solution 3
Cause
The task fails after volume import tasks have started, and the hosts' access to the volumes on the source system was interrupted.
A failed import returns the system to a point where you can retry the import after resolving the problem. You can revert the
configuration so that I/O access is from the source system, but this is a manual process and requires downtime.
Action
To revert the configuration so that the source system is servicing I/O:
1. On the host—To prevent consistency issues, shut down active applications before shutting down the hosts. Stop access to the
destination storage system from the host. As part of this procedure, the host loses access to the volumes being migrated.
2. On the destination storage system—Cancel all active import tasks for the volumes that were being migrated. To cancel the import
task, from the HPE 3PAR CLI :
canceltask
3. On the destination storage system—Remove the VLUNs for the volumes exported to the host.
5. On the destination storage system—If no other volumes are exported to the host, remove the host.
7. On the source system—Remove the host representing the destination storage system.
8. On the source system—From the HPE 3PAR CLI issue the setvv -clrrsv command to all volumes that were being migrated
on the source system. If the source system is running HPE 3PAR OS 3.1.3 or above, issue the setvv -clralua command to
all volumes that were being migrated.
9. On the fabric and host—If needed, re-zone the host to the source system.
10. On the host—Restart the host and any applications that were shut down at the beginning of this process.
Solution 4
Cause
The import task fails with LD read failure as follows:
2015-08-07 05:32:31.87 PDT {10920} {events:normal }
LD mirroring failed after 740 seconds due to failure in LD read(1).
Request was for 256MB from LD 270:1280MB to be mirrored to LD 501:3328MB
Action
1. Check for issues such as broken peer links, bad disks on the source, or the source running out of space. Fix any issues before
reinitiating the migration.
2. If an LD write error is reported from a destination volume, check for bad disks or insufficient space on the destination. Fix any
issues before reinitiating the migration.
Solution 1
Cause
The user was required to change their password.
Action
1. Clear the User must change password at next logon check box in the user setup.
Solution 2
Action
1. On the Windows server, where the migration utility is installed: Stop the migration utility service.
%HOMEDRIVE%%HOMEPATH%
OR
OIURSLDST0010. Please use the installcertificate command to accept the certificate.
Perform the following steps to add the CA-signed certificate to the HPE 3PAR storage system:
Action
1. Connect to the storage system using PuTTY or access the HPE 3PAR CLI.
The root CA-signed certificate should appear. If the following message displays:
3. Copy and save the certificate with a .pem extension in the security folder (<home directory of current
user>\InFormMC\security ).
NOTE:
To view the home directory of current user, run the echo %HOMEDRIVE%%HOMEPATH% command from the
Windows command prompt.
If %HOMEDRIVE%%HOMEPATH% is blank or not the directory of the user, check and then use one of the
following locations:
C:\InFormMC\security
C:\Windows\SysWOW64\config\systemprofile\InFormMC\security
The intermediate CA-signed certificate should appear. If the following message displays:
5. Copy and save the certificate with a .pem extension in the security folder (mentioned above).
6. To install the root and intermediate CA-signed certificates, run the following command (in the command line) twice, once for the
root CA and once for the intermediate CA:
NOTE:
To run the keytool commands, Java v6.0 or later must be installed and the PATH environment variable should
contain the path to java.exe . If the path is not specified, issue set PATH=%PATH%;C:\Program
Files (x86)\Java\jre\bin to set it dynamically.
Example:
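A sketch using the Java keytool; the alias names, the .pem file names, and the keystore path are placeholders for your environment:
keytool -importcert -trustcacerts -alias rootca -file root.pem -keystore "<path to the migration utility keystore>"
keytool -importcert -trustcacerts -alias intermediateca -file intermediate.pem -keystore "<path to the migration utility keystore>"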
7. Issue the addsource command or the adddestination command again to add the HPE 3PAR storage system.
3. For MDM or online migration: Remove zoning between the host and the destination storage system.
4. Delete any VLUNs that were created on the destination storage system. Use the HPE 3PAR SSMC or HPE 3PAR CLI .
5. Delete any host or host sets that were created on the destination storage system. Use the HPE 3PAR SSMC or HPE 3PAR CLI .
NOTE:
If any of the migrating volumes were presented, unpresent them from the host, and then delete them.
6. Delete any volumes that were created on the destination storage system. Use the HPE 3PAR SSMC or HPE 3PAR CLI .
7. On the source storage system, remove the exports for the volumes to the host groups HCMDxxxxx whose migration failed.
8. Back up the current data folder, and then delete the data folder.
9. Back up the current log folder, and then delete the log folder.
10. Verify:
a. Make sure the target port WWNs appear in the discovered port list.
showconnection
15. After the peer host is created on the source: Make sure that the source appears as a connected device on both peer ports of the
destination.
a. Perform a rescan:
showtarget -rescan
OIUERRAPP0000
OIUERRDB1006
OIUERRPREP1023
OIURSLAPP0000
Cause
The createmigration command can fail if storage objects selected for migration do not meet all requirements, for example:
LUNs were not found in the host group specified for migration.
Action
3. Verify that the storage system information you gathered is correct. For example, IP address, host name, and volume names.
4. Before trying the command again, see Clean up after the createmigration command fails.
The createmigration command fails with an error about peer port connectivity
Symptom
The createmigration task fails and the error message indicates that there is a communication issue between peer ports and
host ports.
Cause
The createmigration command causes the migration utility to verify network connectivity and zoning.
Action
2. Make sure that each host port on the source system is zoned to one physical peer port on the destination system. Virtual peer
ports can be present on the physical peer ports of the destination system.
3. Make sure that the source and destination systems are connected to the fabric.
Cause
The LUNs were not found in the host group specified for migration.
Action
1. Add the LUNs to the host group.
Cause
A security certificate was not installed with the migration utility.
Action
1. Install a certificate:
Example:
(alternate)
Example output:
Certificate details:
Issue by: aperol.eu.tslabs.hpecorp.net
Issue to: aperol.eu.tslabs.hpecorp.net
Valid from: 03/26/2019
Valid to: 03/25/2022
SHA-1 Fingerprint: F1:93:FB:ED:6F:60:89:C4:A6:30:35:02:E6:C3:E3:6D:B9:29:65:DA
Version: v3
Do you accept the certificate? Y/N
Y
(alternate)
Certificate details:
Issue by: systemname.aa.testlab.examplecorp.net
Issue to: systemname.aa.testlab.examplecorp.net
Valid from: 03/26/2019
Valid to: 03/25/2022
SHA-1 Fingerprint: F1:93:BB:ED:6B:60:89:C4:B6:30:35:02:F6:B3:D3:6E:A9:29:65:EB
2. Enter Y to accept.
Solution 1
Cause
The host name exists on the destination storage system with a different WWN.
Action
1. Same host name, different WWN—On initial migration of a host, include all the WWNs in the host group (for some storage
systems). Include WWNs that are not managing any LUNs. Subsequent migration of any WWN for this host will find a match.
2. Different host name, same WWN—Change the host name so that it does not match the existing host name on the destination
storage system.
Solution 2
Cause
For migrations from multiple source systems, specify the host name, host group name, or initiator group name in the -srchost
parameter of the createmigration command.
Action
If you cannot modify the host name or initiator group name on the source storage system: Edit the host name on the destination
storage system before issuing the createmigration command.
2. Remove all presentations of the migrating volumes to the host groups named HCMDxxxxx that represent the HPE 3PAR on the
HPE XP source system.
Cause
The LUN selected for migration exceeds the maximum limit of 64 TiB.
Example:
createmigration -sourceuid <sourceuid> -srcvolmap [{00:10:A4}] -migtype mdm
-persona RHEL_8 -destcpg ssd_r6 -destprov thin
Cause
There is an invalid character, such as a space, in the host set name. The following example shows an invalid space in the host set name:
createmigration -sourceuid <sourceuid> -srchost R65-S02-IG -destcpg FC_r6
-destprov <provisioningtype> -migtype MDM -persona <persona> -vvset R65-S02_VVset
-hostset R65-S02 Hostset
SUCCESS: Migration job submitted successfully. Please check status/details using
showmigration command. Migration id: 1440011328444
Action
1. Remove the createmigration task twice. The first removal deletes the migration from the queue; the second removes it
from the migration utility database.
Example:
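A sketch, assuming the utility's removemigration command takes the migration ID reported earlier; run it twice as described above:
removemigration 1440011328444
removemigration 1440011328444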
1. Increase the size of the LUN to at least 16 GB, and then retry the operation.
2. Remove any volume smaller than 16 GB from the createmigration definition, and then retry the operation.
3. Check the paths between the source and destination systems. Verify that the showconnection command displays two paths
for each source storage system. Retry the operation.
Solution 1
Cause
The host name exceeds 31 characters or contains a space character.
Action
Solution 2
2. Verify that the connection between the source and destination is active.
Cause
There were trailing spaces in the IP address.
Action
Launch the migration utility console and enter the IP address without trailing spaces.
For MDM and Online migration, the migration utility creates the migrating hosts on the destination storage system.
For migrating consistency groups, temporary VV sets are created on the destination storage system. These temporary VV sets are
removed after migration completes.
Verifies the source host group configuration: Make sure that the LDEVs or host groups specified in the createmigration
command are mapped to a host group on the associated source system.
The LUN is not a snapshot or pool volume for snapshots or thin provisioning.
For Online migrations, the LDEVs being migrated are admitted to the destination system and then exported to the host.
For MDM, the LDEVs are admitted to the destination storage system but are not exported to the host.
When data-transfer time is insufficient for the migration of all volumes, a subsequent migration can be started with another subset
at a later stage.
All or a subset of the LDEVs selected are migrated with the startmigration command.
During data migration, LDEV service time is adversely affected when a host issues large amounts of read/write traffic over the
destination storage system.
During data migration, paths must remain zoned between the host and destination storage system. The host operating system and
the migration type determine when these paths are created.
The -allvolumesincg parameter specifies all volumes, including implicit volumes, for consistent migration.
NOTE:
When there are multiple consistency groups in a migration task, and you issue the showmigrationdetails
command, the consistency group name changes to Not Assigned as each group migration completes.
The -srcvolmap parameter selects vol1 , vol2 , and vol3 for migration from the source storage system.
The -allvolumesincg places the listed volumes, and all implicitly added volumes in a single Consistency Group on the
destination storage system.
The -srchost option selects all volumes exported to the host hostname for migration.
Volumes vol1 , vol2 , and vol3 are placed in cg1 . Volumes vol4 , vol5 , and vol6 are placed in cg2 . The volumes
in cg1 and cg2 are migrated independently and consistently.
All volumes exported to hostname on the source storage system are placed in a Consistency Group.
Prioritization examples
Prioritizing volumes
createmigration -migtype online -sourceuid 3PAR1 -srcvolmap
"[{vol1,thin,testcpg},{vol2,thin,testcpg}]" -destcpg testcpg -destprov reduce
-persona "RHEL_5_6" -priorityvolmap {"values":{"low":["vol1","vol2"],"high":["vol3","vol4"]}}
The -priorityvolmap parameter sets priority to low for vol1 and vol2 .
vol1 and vol2 migrate with a lower priority than vol3 and vol4 .
The -priorityvolmap parameter sets priority to high for vol3 and vol4 .
vol3 and vol4 migrate with a higher priority than vol1 and vol2 .
Other volumes that were implicitly added to the migration receive the default medium priority.
The -srcvolmap parameter adds a volume set named volset1 to the migration.
The -priorityvolmap parameter sets priority to low for vol1 and vol2 located in volset1 .
The -priorityvolmap parameter sets priority to high for vol3 and vol4 located in volset1 .
vol3 and vol4 migrate with a higher priority than vol1 and vol2 .
Any other volumes in the volume set migrate at the default priority.
Using the srcvolmap option to migrate vol1 at high priority and vol2 at low priority.
The volumes listed in cgvolmap are migrated consistently with low priority.
-srcvolmap [{00:0B:F2,thin,testcpg,compress}]
-srcvolmap [{00:0B:F2,dedupe,testcpg,compress}]
00:0B:F2,thin,SSD_r1,compress
00:FB:23,thin,FC_r1
00:10:34,thin,SSD_r1,compress
00:0B:F2,dedupe,SSD_r1,compress
00:FB:23,dedupe,FC_r1
00:10:34,dedupe,SSD_r1,compress
Migrating a host
createmigration -srchost "hostname" -sourceuid 12345
-migtype mdm -persona "WINDOWS_2008_R2" -destcpg testcpg
-destprov thin -autoresolve false
Migrating a volume
createmigration -sourceuid 12345 -migtype mdm
-autoresolve false -persona "WINDOWS_2008_R2"
-srcvolmap "[{00:34:1B,thin,testcpg}]" -destcpg testcpg -destprov thin
As a best practice, test a fail-back process before performing migration on live or production data. Typical considerations include:
Responding to a failed migration.
Clear the SCSI reservation only after rolling back to the original source storage system. After clearing the SCSI reservation, reissue the
startmigration command.
Procedure
# showvv
Id Name Prov Type CopyOf BsId Rd -Detailed_State- Adm Snp Usr VSize
0 admin full base --- 0 RW normal 0 0 10240 10240
22 vol0 peer base --- 22 RW exclusive 0 0 16640 16385
23 vol1 peer base --- 23 RW exclusive 0 0 16640 16385
21 vol2 peer base --- 21 RW exclusive 0 0 16640 16385
-------------------------------------------------------------------------------
4 total 0 0 121600 120835
# setvv -clrrsv vol0
Websites
General websites
Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix
https://www.hpe.com/storage/spock
Storage white papers and analyst reports
https://www.hpe.com/storage/whitepapers
For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
https://www.hpe.com/info/assistance
To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website:
https://www.hpe.com/support/hpesc
Information to collect
Technical support registration number (if applicable)
Firmware version
Error messages
Accessing updates
Some software products provide a mechanism for accessing software updates through the product interface. Review your product
documentation to identify the recommended software update method.
https://www.hpe.com/support/hpesc
Hewlett Packard Enterprise Support Center: Software downloads
https://www.hpe.com/support/downloads
My HPE Software Center
https://www.hpe.com/software/hpesoftwarecenter
https://www.hpe.com/support/e-updates
To view and update your entitlements, and to link your contracts and warranties with your profile, go to the Hewlett Packard
Enterprise Support Center More Information on Access to Support Materials page:
https://www.hpe.com/support/AccessToSupportMaterials
IMPORTANT:
Access to some updates might require product entitlement when accessed through the Hewlett Packard Enterprise
Support Center. You must have an HPE Passport set up with relevant entitlements.
Remote support
Remote support is available with supported devices as part of your warranty or contractual support agreement. It provides intelligent
event diagnosis, and automatic, secure submission of hardware event notifications to Hewlett Packard Enterprise, which initiates a fast
and accurate resolution based on the service level of your product. Hewlett Packard Enterprise strongly recommends that you register
your device for remote support.
If your product includes additional remote support details, use search to locate that information.
https://www.hpe.com/services/getconnected
HPE Pointnext Tech Care
https://www.hpe.com/services/techcare
HPE Datacenter Care
Warranty information
To view the warranty information for your product, see the links provided below:
https://www.hpe.com/support/EnterpriseServers-Warranties
HPE Storage Products
https://www.hpe.com/support/Storage-Warranties
HPE Networking Products
https://www.hpe.com/support/Networking-Warranties
Regulatory information
To view the regulatory information for your product, view the Safety and Compliance Information for Server, Storage, Power,
Networking, and Rack Products, available at the Hewlett Packard Enterprise Support Center:
https://www.hpe.com/support/Safety-Compliance-EnterpriseProducts
https://www.hpe.com/info/reach
For Hewlett Packard Enterprise product environmental and safety information and compliance data, including RoHS and REACH, see:
https://www.hpe.com/info/ecodata
For Hewlett Packard Enterprise environmental information, including company programs, product recycling, and energy efficiency, see:
https://www.hpe.com/info/environment
Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation,
use the Feedback button and icons (located at the bottom of an opened document) on the Hewlett Packard Enterprise Support Center
portal (https://www.hpe.com/support/hpesc) to send any errors, suggestions, or comments. All document information is captured by
the process.