
Migrating data from HPE XP to HPE 3PAR

Part Number: 20-OIU25-MDX3P-ED2


Published: May 2021
Edition: 2
Migrating data from HPE XP to HPE 3PAR
Abstract
This guide provides information for using the HPE Storage Online Import Utility to migrate data from HPE XP storage systems to HPE 3PAR
storage systems.

© Copyright 2019, 2021 Hewlett Packard Enterprise Development LP

Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and
services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR
12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed
to the U.S. Government under vendor's standard commercial license.

Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is
not responsible for information outside the Hewlett Packard Enterprise website.

Acknowledgments
Intel®, Itanium®, Optane™, Pentium®, Xeon®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in the U.S. and other countries.

AMD and the AMD EPYC™ and combinations thereof are trademarks of Advanced Micro Devices, Inc.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.

UNIX® is a registered trademark of The Open Group.


All third-party marks are property of their respective owners.

Revision history

Part number         Publication date  Edition  Summary of changes
20-OIU25-MDX3P-ED2  May 2021          2        Updated the migration utility names. Updated information about finding serial numbers.
QL226-99855         December 2019     1        Initial release


Table of contents

Planning
HPE Storage Online Import Utility
Planning the migration
Accessing the Migration Host Support matrix on SPOCK
Migration utility requirements
Source storage system requirements
SMI-S Provider requirements
Multipathing requirements
Oracle RAC-based configurations
Destination storage system requirements
Migration requirements
SAN fabric requirements
Considerations when selecting storage objects for migration
Requirements for migrating multiple configurations
Data migration types
Online migration
Minimally Disruptive migration
Offline migration
Preparing
Preparing for the migration
Gather storage system information
Uninstalling the migration utility from a Windows system
Downloading the migration utility software
Installing the migration utility
Launching the migration utility console
Installing the XP SMI-S Provider
Creating an SMI-S Provider user for XP7 source systems
Adding the source storage system to the migration utility
Adding the destination storage system to the migration utility
Uninstalling vendor-specific multipath software
Reconfiguring the host multipath solution for MDM
Configuring multipath software on an HP-UX host
Configuring multipath software on an IBM AIX host
Setting up volumes on an IBM AIX host
Configuring multipath software on a Linux host
Configuring multipath software on a Windows Server host including Hyper-V configurations
Zoning the source storage system to the destination storage system
Migrating
Migration process
Consistency Groups
Data reduction
LUN conflicts
Prioritization
Performing online migration
Discovering a source storage system LUN with LUN ID 254 on a Linux host
Performing Minimally Disruptive migration
Enabling SCSI-3 persistent reservations for Veritas Storage Foundation
Performing Offline migration
Migrating data from an Oracle RAC cluster
Falling back to the source storage system after a failed or aborted migration on Oracle RAC clusters
Stopping host services
Stopping Oracle RAC cluster services
Stopping Oracle RAC database services
Stopping Veritas cluster services
Stopping cluster services on a Hyper-V running on a Windows Server
Bringing the host back online
Bringing Hyper-V clusters online
Bringing the IBM AIX host online
Bringing Linux hosts online
Bringing Windows hosts online
Aborting a migration
Post migration
Cleaning up after successful MDM and Online migrations
Performing postmigration tasks in VMware ESXi environments
Cleaning up after a successful Offline migration
Identifying and deleting source storage system LUN paths
Identifying and deleting source system LUN paths with VMware ESXi
Identifying and deleting source system LUN paths with Linux native device-mapper multipath
Identifying and deleting source storage system LUN paths with HP-UX 11 v3
Troubleshooting
Where are the migration utility log files?
How to increase logging details
The migration utility console does not open in Windows 7
Cannot add an HPE XP source storage system
Cannot admit or import a volume
Cannot log in to the migration utility
Cannot validate a security certificate
Clean up after the createmigration command fails
The createmigration command fails.
The createmigration command fails with an error about peer port connectivity
The createmigration command returns error OIUERRDB1006
The adddestination command fails with error OIUERRDST00001
The createmigration command fails because of an error creating the host
The createmigration command returns error OIUERRPREP1023 for XP
The createmigration command returns error OIUERRPREP1027 migrating a 64 TiB LUN
The createmigration command fails with an error in the -srcvolmap parameter
The createmigration command with -hostset parameter returns error OIUERRDST0003
The startmigration command fails after a successful createmigration task
The startmigration command fails
The startmigration task fails without an error message
Trailing spaces in IP address return login error
Reference
The migration process
Examples using the consistency group parameters
Prioritization examples
Examples using data reduction settings
Examples using the autoresolve parameter
Rolling back to the original source storage system
Clearing a SCSI reservation with HPE 3PAR OS 3.2.1 MU3 or later
Websites
Support and other resources
Accessing Hewlett Packard Enterprise Support
Accessing updates
Remote support
Warranty information
Regulatory information
Documentation feedback
HPE Storage Online Import Utility
The HPE Storage Online Import Utility provides scripting commands allowing data migration with little or no disruption to hosts, host
clusters, or volumes being migrated.

Planning the migration


Procedure

1. Review the 3PAR Online Import for XP Storage - Migration Host Support matrix .

See Accessing the Migration Host Support matrix on SPOCK .

2. Review Migration utility requirements.

3. Review Source storage system requirements .

a. Review SMI-S Provider requirements.

b. Review Multipathing requirements.

c. Review Oracle RAC-based configurations.

4. Review Destination storage system requirements .

5. Review Migration requirements.

6. Review SAN fabric requirements.

7. Review migration utility rules for selecting storage objects:

a. Considerations when selecting storage objects for migration .

b. Requirements for migrating multiple configurations .

8. Back up hosts and associated data before migrating data.

Review Guidelines for rolling back to the original source storage system .

9. If possible, plan migrations during off-peak hours. Migrate hosts with the least data first to reduce performance impacts.

Accessing the Migration Host Support matrix on SPOCK


Procedure
1. Use your HPE Passport account to log in to SPOCK.

TIP:
If you do not have an HPE Passport account, create an account from the SPOCK login page.

2. In the left navigation pane, scroll to Software, and then click Array SW: 3PAR .

3. Select 3PAR Online Import for XP Storage - Migration Host Support matrix under 3PAR Federation Technologies.

Migration utility requirements



The migration utility software requires a Windows (Intel x86 or x64) or compatible computer for installing a server and client. Both
server and client are installed on the same computer by default. The install wizard provides an option to choose different Windows
hosts for the server and client.

HPE Storage Online Import Utility 2.5 supported Windows versions:

Server: Windows Server 2008, 2012, 2016, 2019

Client: Windows Server 2008, 2012, 2016, 2019; Windows 7, Windows 10

Before installing, uninstall any prior versions of the migration utility.

Source storage system requirements

A supported XP storage system model with supported microcode version.

Non-XP7 systems are registered in HPE Command View Advanced Edition.

Migrating a server or cluster that accesses LUNs from multiple source systems has the following host group requirements:
The host group name on each source storage system must match.

The host group entry on each source storage system must contain the same HBA WWPNs.

SMI-S Provider requirements

The migration utility communicates with all supported XP systems using SMI-S Provider software.
HPE XP7 Storage—SMI-S Provider is integrated in the Service Processor.

Other XP storage systems—SMI-S Provider is embedded in HPE XP Command View Advanced Edition (CVAE).
On the server where CVAE is installed, make sure that the TCP and UDP ports used by CVAE are not in use by other applications. For
more information, see the HPE XP Command View Advanced Edition Installation and Configuration Guide on
www.hpe.com/support/hpesc.
License requirements:
The XP CVAE CLI/SMI-S license

XP CVAE Device Manager license


HPE XP7 Storage systems do not require a license for CVAE.

Multipathing requirements

See the 3PAR Online Import for XP Storage - Migration Host Support matrix for supported multipath I/O (MPIO) software.
Review the appropriate host implementation guide for complete configuration instructions for your environment. See
https://www.hpe.com/support/hpesc.

Remove unsupported multipathing solutions according to vendor documentation.

Install/configure supported multipathing.



For Windows systems: Enable the Path Verify Enabled MPIO setting.

Oracle RAC-based configurations

Before migrating a SAN-based Oracle RAC cluster, understand the distribution of Oracle RAC cluster registry (CRS), voting disks,
and data disks across the source storage system.

HPE XP storage systems do not support simultaneous migrations from two or more HPE XP storage systems to one destination
storage system (N:1 setup). Instead, perform multiple migrations serially to transfer data from multiple source storage systems to a
single destination storage system. For more information, see Migrating data from an Oracle RAC cluster.

You can migrate Oracle RAC disks distributed over the HPE 3PAR storage system and HPE XP storage system.
For Automatic Storage Management (ASM) based Oracle RAC configurations, modify the persistent device or partition names used by
ASMlib to label ASM disks.

With vendor-specific multipath software, reference the HPE XP storage system devices as /dev/sddlm* .

With Linux native device-mapper multipath software, reference the devices as /dev/mapper/mpath* .

To determine whether the current ASMlib uses vendor-specific multipath-based names, issue the following Oracle ASM CLI
command for the HPE XP Storage: # oracleasm querydisk -p /dev/sddlm* .

Destination storage system requirements

See the 3PAR Online Import for XP Storage - Migration Host Support matrix for supported HPE 3PAR models and HPE 3PAR OS
versions.
The destination system has a valid Online Import license installed.

Destination storage system names are not the same as host names.
There must be adequate storage for the data being migrated. The provisioning type used on the destination impacts the capacity
needed for migrating storage objects.

Deduplicated volumes are supported on a destination storage system with HPE 3PAR OS 3.2.1 MU2 or later.

Compressed VVs are supported on a destination storage system with HPE 3PAR OS 3.3.1 or later.

Volumes to be migrated from the source can be compressed, deduplicated or both.

Migration requirements

Storage objects include volumes and hosts. When selecting storage objects for migration:
Each migration definition supports up to 255 storage objects (volumes).

Each volume selected must be at least 256 MB in size.

The names of volumes selected for migration on the source must not exist on the destination.

Do not migrate external volumes from the source storage system.

Do not migrate replicated LDEVs, snapshots, or pool volumes.

Do not migrate LDEVs that are a part of backup jobs.

Do not simultaneously specify a host group and LDEV in the same migration task.



FCoE host side connections are only supported using Offline migration.

You can migrate data from up to four source storage systems to destination systems running HPE 3PAR OS 3.2.1 and later.

SAN fabric requirements

Two FC host ports on the source system for building the peer links.

Point-to-point connections (fabric connections) between the source and destination.

FC port speed: The speeds of the FC ports do not have to be the same.

FC protocol: The migration utility is supported over the FC protocol.

At least one FC switch between the source and destination is supported. Using two switches adds redundancy and is recommended.

The FC fabric between the source and the destination is NPIV capable. NPIV is enabled on the peer ports on the SAN switches.

Considerations when selecting storage objects for migration

When creating migration tasks, the migration utility identifies relationships between initiator groups and presented volumes. As you
select objects for migration, these relationships can cause additional storage objects to be included in the migration. This function is
automatic and cannot be modified.

NOTE:
If the total number of volumes selected for migration exceeds 255, the migration task will fail.

The migration utility applies the following rules:


Hosts and host sets

Selecting a host group or set of host groups with logical device (LDEV) presentations: All LDEVs presented to one or more of the
selected host groups are migrated. Presentations that the source LDEVs have to other host groups will include those host groups
and all their presented LDEVs in the migration.

Presented LDEVs

Selecting an LDEV or group of LDEVs with host group presentations: All selected LDEVs, and all other LDEVs presented to the
same host group are migrated. Also included are any presentations that the source host groups have with other LDEVs.

Unpresented LDEVs

Only the selected LDEVs are migrated offline, no additional LDEVs are included.

TIP:
On the HPE 3PAR CLI , you can use the showvlun command to find host and volume relationships.
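For example, a sketch of listing the volumes exported to one host with showvlun ; the host name and output values are illustrative:

cli% showvlun -host host01
Lun VVName    HostName -Host_WWN/iSCSI_Name- Port  Type
  0 vv_data01 host01   10000000C9AAAA01      1:2:3 host
  1 vv_data02 host01   10000000C9AAAA01      1:2:3 host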

Requirements for migrating multiple configurations



Source storage system support for migrating multiple hosts:

Configuration                                                        XP24000  P9500  XP7
Multiple hosts from one source to one destination system (1:1)       No       No     No
Multiple hosts to mutually exclusive source-to-destination
storage system pairs (N:N)                                           Yes      Yes    Yes
Multiple hosts from multiple source storage systems to one
destination (N:1)                                                    Yes      Yes    Yes
Multiple hosts from one source storage system to multiple
destinations (1:N)                                                   Yes      Yes    Yes

Data migration types


See the 3PAR Online Import for XP Storage - Migration Host Support matrix to determine the migration type for your environment.

Online migration
Online Migration moves data from the source to the destination without causing disruption to I/O on the host. All presentation
relationships between hosts and volumes are maintained during the migration. Disable host T10 DIF.

NOTE:
Ensure that WWNs on the host are the same on the source and destination systems.

Minimally Disruptive migration


Minimally Disruptive Migration (MDM) disrupts host I/O only while you reconfigure the host and multipath solution on the destination
storage system. The host continues to access data during the migration. You can enable host T10 DIF.

NOTE:
Ensure that WWNs on the host are the same on the source and destination systems.

Offline migration
Migrate volumes or volume sets offline. Host definitions are not created on the destination storage system. This migration type is
available for all supported host operating systems. Offline migration does not export virtual volumes during the migration. During the
migration process, you must either shut down the hosts or take the migrating volumes offline.

NOTE:
Ensure that the migrating volumes or volume sets are not exported to a host.

Preparing for the migration


Procedure
1. Gather storage system information.

2. Prepare the migration utility:

a. If required, Uninstall previous versions of the migration utility .

b. Download the migration utility software .

c. Install the migration utility .

3. Configure the SMI-S Provider:

For non-XP7 sources: Install the XP SMI-S Provider

For HPE XP7 Storage: Create an SMI-S Provider user for XP7 source systems

4. Add the source storage system to the migration utility

5. Add the destination storage system to the migration utility

6. If present, Uninstall vendor-specific multipath software

7. If needed, reconfigure the host multipath solution for MDM

8. Set up zoning

Gather storage system information


Procedure

1. For each Non-XP7 source:

a. DNS name or IP address

b. Credentials for an account with Administrator privilege on the HPE Command View Advanced Edition (CVAE) server.

c. Credentials for the HPE XP Command View Advanced Edition application.

d. Names of hosts or LDEVs being migrated.

e. Host operating system.

f. Persona value on the host.

g. Storage system family type: XP

h. The 5-digit serial number.

2. For each HPE XP7 Storage source:

a. DNS name or IP address for the HPE XP7 Storage Service Processor (SVP).

b. Credentials for the SMI-S user account.

c. Credentials for an account with Administrator privilege to the Service Processor (SVP).

d. Names of hosts or LDEVs being migrated.

e. Host operating system.

f. Persona value on the host.

g. Storage system family type: XP7

h. The 10-digit serial number

3. From each destination system:

a. DNS name or IP address of the destination storage system.



b. Credentials for an account with the Super role.

c. Names of the common provisioning groups (CPGs) to which data is migrating.

d. Provisioning details for storing migrating volumes.

e. (Optional) Domain name.

f. Storage system family type: 3PAR .

Uninstalling the migration utility from a Windows system

Uninstall any previous versions of the migration utility before installing a new version.
IMPORTANT:
Removing the migration utility deletes previous migration definitions and settings.

Procedure

1. On the Windows computer where an old version of the migration utility is installed: In the Control Panel, select Programs >
Programs and Features > Uninstall a Program.

2. Locate the HPE Storage Online Import Utility and then click Uninstall.

3. Click Yes to confirm.

Downloading the migration utility software

The download contains the executables for supported Windows environments.


Procedure
1. Use your HPE Passport account to log in to My HPE Software Center .

TIP:
If you do not have an HPE Passport account, create an account from the My HPE Software Center main page.

2. Click Free Software.

For help, click ? and select the Quickstart Guide.

3. Navigate to HPE Storage Online Import Utility and select it.

4. Select the appropriate file and click Download.

5. Download the ISO file.

Installing the migration utility

The client and server are installed by default on the Windows computer.
Prerequisites
Uninstalled prior versions of the migration utility.
Procedure
1. Navigate to the ISO file on the Windows computer.

2. Right-click the ISO file and choose Mount.

3. Make sure that TCP ports 2390 and 2388 are available. On the Windows command line, enter:

netstat -an | findstr "2388 2390"

If the specified ports are already in use (not available), output similar to the following appears:

TCP 0.0.0.0:2390 0.0.0.0:0 LISTENING


TCP 127.0.0.1:2388 0.0.0.0:0 LISTENING
TCP [::]:2390 [::]:0 LISTENING

4. Launch the install wizard by double-clicking the executable file. Client and server are installed by default.

To use an existing CA-signed certificate, click Yes. To allow the installer to generate a new self-signed certificate, choose No.

5. If one of the default TCP ports is busy during installation, a message displays and the installer prompts you to enter a free port.

Enter a free port number and then click Next to continue.

6. (Optional) At the end of installation, select the Show the Windows Installer log check box to review log details.

7. If you assigned a new TCP port number, update the HPE Storage Online Import Utility startup file: OIUCli.bat in the default
folder: C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\CLI .

Example updating for port 9090:

java -jar ..\CLI\oiucli-1.0.0-jar-with-dependencies.jar %* -port 9090

8. The install wizard creates two Windows user groups. Add local or domain users to these groups so that they can sign in to the
migration utility with access as follows:

HPE Storage Migration Admins—Can perform all data migration tasks. Add the Windows Administrator to this group.

HPE Storage Migration Users—Can view data migration information, such as the show * commands.
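For example, a sketch of adding a local user to the admin group from an elevated command prompt; the user name jsmith is illustrative:

net localgroup "HPE Storage Migration Admins" jsmith /add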

9. To launch the migration utility console: On the Windows desktop, double-click the migration utility icon.

Launching the migration utility console


Procedure
1. On the Windows desktop, double-click the HPE Storage Online Import Utility icon.

2. Enter the DNS name or IP address of the system where the HPE Storage Online Import Utility server is installed.

Enter localhost if the migration utility client and server are installed on the same computer.

3. Enter user credentials.

Installing the XP SMI-S Provider


The following procedure applies to non-XP7 storage systems only.
Prerequisites

IP address of the source storage system

Credentials for the storage system Remote Web Console


Procedure

1. Install a supported version of HPE XP Command View Advanced Edition (CVAE) and accept the default settings.



2. Add a license:

a. HPE Command View Advanced Edition > Login - Command View AE > License

b. Add the XP CVAE CLI/SMI-S or Device Manager license, and then click Save.

c. Verify that the license was added: HPE Command View Advanced Edition > Login - Command View AE > License.

3. If the Device Manager license was added and verified: Log on and add the XP source to the Device Manager database. Stop here; the installation is complete.

4. If the XP CVAE CLI/SMI-S license was added and verified: Log in to the XP CVAE.

5. To download the XP CVAE CLI application, click CV XP AE > Tools > Download.

6. In the Device Manager Software Deployment dialog box, select Device Manager Command Line Interface (CLI) Application and then
download the Windows edition.

7. Move the downloaded file to a directory of your choice, and then double-click the filename. Follow the prompts to install the Device
Manager Command Line Interface (CLI) Application.

8. To customize the interaction with the XP CVAE CLI, edit hdvmcli.properties . The options set in hdvmcli.properties are applied each time you run the hdvmcli.bat file.

9. To add the source storage system to the XP CVAE:

a. Open a Windows command prompt and change the path to the location where the XP CVAE CLI was installed.

b. Register the storage system.

Example:

hdvmcli AddStorageArray ipaddress=x.x.x.x family=XP12K/10K/SVS200
userid=administrator arraypasswd=administrator displayfamily=XP12K/10K/SVS200

ipaddress

The IP address of the SVP (service processor) of the source storage system.
userid

The user ID for access to the XP Remote Web Console storage system running on the source storage system.
arraypasswd

The password to access the XP Remote Web Console of the source storage system.

NOTE:
Verify that the SVP is in View mode before adding the source storage system. The process that adds the storage
system can take up to 5 minutes.

Creating an SMI-S Provider user for XP7 source systems

Perform the following before adding source HPE XP7 Storage systems to the migration utility.
Prerequisites
Logged into the HPE XP7 Storage Service Processor (SVP) with administrator privileges.
Procedure
Create a new user in the Storage Administrator User group: Administration > User Groups > Storage Administrator (View and Modify)
User Group.

Adding the source storage system to the migration utility



For each source storage system:


Procedure
1. Launch the data migration utility console and log in.

2. Add the source storage system:

addsource -type <storagesystemfamily> -mgmtip <ipaddress> -user <userid>
-password <password> -uid <serialnumber>

NOTE:
Use DNS name or IP address of the XP CVAE for -mgmtip .

Example adding an XP7 source:

addsource -type XP7 -mgmtip 10.11.12.13 -user smis_user -password SmisUserPassword -uid 0000012345

A success message displays:

SUCCESS: Added source storage system

3. Verify that the source storage system was added:

showsource

Example output:

NAME                  TYPE  UNIQUE_ID   FIRMWARE  MANAGEMENT_SERVER  OPERATIONAL_STATE
VSP G1000.0000012345  XP7   0000012345  80-06-75  10.11.12.13        Good

More information
Launching the migration utility console
Gather storage system information

Adding the destination storage system to the migration utility

For each destination storage system:


Prerequisites
For each destination storage system:

DNS name or IP address

User account with Super credentials


Procedure

1. Add the destination storage system:

adddestination -mgmtip 11.21.19.23 -user <username> -password <password> -type 3PAR

A success message displays:

SUCCESS: Added destination storage system

2. Verify that the destination storage system was added.

showdestination



More information
Gather storage system information
The adddestination command fails with error OIUERRDST00001

Uninstalling vendor-specific multipath software

TIP:
Removing the vendor-specific multipath software requires a reboot before taking effect.

For environments that reference devices directly in custom scripts or /etc/fstab , use one of the following:

Perform fstab mounts using blkid/UUID.

Mount the /dev/mpathX devices by name, for example as /var .
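For example, a minimal sketch of a UUID-based fstab entry; the device name and UUID are illustrative:

# blkid /dev/mapper/mpatha
/dev/mapper/mpatha: UUID="3e6be9de-8139-4c5a-9106-a43f08d823a6" TYPE="ext4"

# /etc/fstab entry that mounts by UUID instead of a direct device path:
UUID=3e6be9de-8139-4c5a-9106-a43f08d823a6  /var  ext4  defaults  0 2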

Windows Servers—Removing proprietary host multipath software from a Windows Server might disable the multipath I/O (MPIO)
installation without removing it. Manually check to confirm. If the native Microsoft MPIO is still installed, remove it and then restart the
host.
Procedure
If applicable, uninstall the proprietary Multipathing I/O (MPIO) software from the host according to vendor documentation.

Reconfiguring the host multipath solution for MDM


Prerequisites
Vendor-specific multipathing software was removed and hosts are shut down.
Procedure
1. Zone the source storage system to the destination storage system .

2. For LUNs on the source host selected for migration, make sure that valid paths exist for two or four controller nodes.

3. Configure a supported multipath solution on the host as follows:

Configure multipath software on an HP-UX host.

Configure multipath software on an IBM AIX host.

Configure multipath software on a Linux host.

Configure multipath software on a Windows Server host including Hyper-V configurations.

More information
Uninstalling vendor-specific multipath software

Configuring multipath software on an HP-UX host


Procedure

1. If needed, upgrade host HBA drivers.

For information about supported HBA drivers on the destination, see SPOCK.

2. Configure the multipath software.

For information, see the HPE 3PAR HP-UX Implementation Guide at https://www.hpe.com/info/EIL.



Configuring multipath software on an IBM AIX host

To configure multipath software for an IBM AIX host, see the HPE 3PAR AIX and IBM Virtual I/O Server Implementation Guide at
https://www.hpe.com/info/EIL.

Setting up volumes on an IBM AIX host

Procedure
1. Get the volume group-to-PVID mappings:

lspv

2. Export the volume groups:

exportvg

3. If the host is a member of a cluster, clear all SCSI reservations on the cluster disks.

4. Delete all disks being migrated:

rmdev -dl <disk>
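For example, a sketch of these steps for a single volume group; the disk name, PVID, and volume group name are illustrative:

# lspv
hdisk2          00f8d4b2c1a3e5d7          datavg          active
# exportvg datavg
# rmdev -dl hdisk2
hdisk2 deleted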

Configuring multipath software on a Linux host


Procedure

1. If not already installed, install device-mapper multipath.

2. If required, upgrade host HBA drivers. For information about supported HPE 3PAR drivers, see SPOCK.

3. Register HPE 3PAR LUN types with Device Mapper (DM) by white listing HPE 3PAR-specific information (the vendor is
3PARdata and the product is VV ) in /etc/multipath.conf .

For more information about the Linux multipath configuration, see the HPE 3PAR Red Hat and Oracle Linux Implementation Guide
at https://www.hpe.com/info/EIL.
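A minimal sketch of such a device stanza in /etc/multipath.conf , assuming the vendor and product strings above; the remaining attribute values are illustrative and distribution-dependent, so confirm them against the implementation guide:

devices {
    device {
        vendor                "3PARdata"
        product               "VV"
        path_grouping_policy  multibus
        path_checker          tur
        no_path_retry         18
    }
}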

4. Restart device-mapper multipath.

5. Modify the /etc/fstab for new mount points. If the migrating LUN alias was not created in the multipath.conf file,
base the new mount points on discovered LUNs.

Configuring multipath software on a Windows Server host including Hyper-V configurations
Procedure
1. If disabled, enable the Windows native multipath MPIO.

2. Register HPE 3PAR LUN types with MPIO:

Set vendor to 3PARdata .



Set product id to VV .
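For example, one way to register these values with the native MPIO is the mpclaim utility from an elevated command prompt; this is a sketch, and the device string pads the vendor to 8 characters and the product to 16 (VV followed by 14 spaces):

mpclaim -n -i -d "3PARdataVV              "

The -n option suppresses the automatic reboot, consistent with the next step.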

3. Do not reboot the host when prompted.

4. If required, upgrade HBA drivers.

For information about supported HPE 3PAR HBA drivers, see SPOCK.

Zoning the source storage system to the destination storage system

NOTE:
Do not remove zoning between the source and destination storage systems until the migration operation is complete.

Procedure

1. On the destination system, get a list of available ports.

cli% showport

2. Set up peer ports on the destination storage system:

NOTE:
The WWN of a host port changes when it is set to become a peer port. Make sure that you are using the new WWN
in the zoning.

a. Select two unused ports on partner nodes.

b. Set the port connection type to point and set the mode to peer :

cli% controlport offline n:s:p


cli% controlport config peer -ct point n:s:p

c. To confirm that the peer ports are in the ready state:

cli% showport -peer

Example:

cli% showport -peer


N:S:P Mode State ----Node_WWN---- ----Port_WWN---- Rate VPI
0:3:4 initiator ready 2FF70202AC07EA90 20340202AC07EA90 16Gbps 0
0:3:4 initiator ready 2FF70202AC07EA90 20340202AC17EA90 16Gbps 1
0:3:4 initiator ready 2FF70202AC07EA90 20340202AC27EA90 16Gbps 2
0:3:4 initiator ready 2FF70202AC07EA90 20340202AC37EA90 16Gbps 3
0:3:4 initiator ready 2FF70202AC07EA90 20340202AC47EA90 16Gbps 4
0:3:4 initiator ready 2FF70202AC07EA90 20340202AC57EA90 16Gbps 5
0:3:4 initiator ready 2FF70202AC07EA90 20340202AC67EA90 16Gbps 6
0:3:4 initiator ready 2FF70202AC07EA90 20340202AC77EA90 16Gbps 7
0:3:4 initiator ready 2FF70202AC07EA90 20340202AC87EA90 16Gbps 8
1:3:4 initiator ready 2FF70202AC07EA90 21340202AC07EA90 16Gbps 0
1:3:4 initiator ready 2FF70202AC07EA90 21340202AC17EA90 16Gbps 1
1:3:4 initiator ready 2FF70202AC07EA90 21340202AC27EA90 16Gbps 2
1:3:4 initiator ready 2FF70202AC07EA90 21340202AC37EA90 16Gbps 3
1:3:4 initiator ready 2FF70202AC07EA90 21340202AC47EA90 16Gbps 4
1:3:4 initiator ready 2FF70202AC07EA90 21340202AC57EA90 16Gbps 5
1:3:4 initiator ready 2FF70202AC07EA90 21340202AC67EA90 16Gbps 6
1:3:4 initiator ready 2FF70202AC07EA90 21340202AC77EA90 16Gbps 7
1:3:4 initiator ready 2FF70202AC07EA90 21340202AC87EA90 16Gbps 8
2:3:4 initiator ready 2FF70202AC07EA90 22340202AC07EA90 16Gbps 0
2:3:4 initiator ready 2FF70202AC07EA90 22340202AC17EA90 16Gbps 1
2:3:4 initiator ready 2FF70202AC07EA90 22340202AC27EA90 16Gbps 2
2:3:4 initiator ready 2FF70202AC07EA90 22340202AC37EA90 16Gbps 3
2:3:4 initiator ready 2FF70202AC07EA90 22340202AC47EA90 16Gbps 4
2:3:4 initiator ready 2FF70202AC07EA90 22340202AC57EA90 16Gbps 5
2:3:4 initiator ready 2FF70202AC07EA90 22340202AC67EA90 16Gbps 6
2:3:4 initiator ready 2FF70202AC07EA90 22340202AC77EA90 16Gbps 7
2:3:4 initiator ready 2FF70202AC07EA90 22340202AC87EA90 16Gbps 8
3:3:4 initiator ready 2FF70202AC07EA90 23340202AC07EA90 16Gbps 0
3:3:4 initiator ready 2FF70202AC07EA90 23340202AC17EA90 16Gbps 1
3:3:4 initiator ready 2FF70202AC07EA90 23340202AC27EA90 16Gbps 2
3:3:4 initiator ready 2FF70202AC07EA90 23340202AC37EA90 16Gbps 3
3:3:4 initiator ready 2FF70202AC07EA90 23340202AC47EA90 16Gbps 4
3:3:4 initiator ready 2FF70202AC07EA90 23340202AC57EA90 16Gbps 5
3:3:4 initiator ready 2FF70202AC07EA90 23340202AC67EA90 16Gbps 6
3:3:4 initiator ready 2FF70202AC07EA90 23340202AC77EA90 16Gbps 7
3:3:4 initiator ready 2FF70202AC07EA90 23340202AC87EA90 16Gbps 8
------------------------------------------------------------------
36
%

3. Set up ports on the source storage system. Select two host ports for communication and data transfer to the destination. Cable
each host port to a SAN switch. These host ports can also be used for host connectivity.

Choose the two host ports on the source storage system from different XP power domains.

4. Zone the source to the destination. On each source system, zone one host port to one peer port on the destination.

NOTE:
You can zone multiple source systems to the same pair of peer ports on the destination. Configure multiple peer
zones with each zone containing only two ports, one from a source system and the peer port configured on the
destination storage system.
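For example, a sketch of two such two-port peer zones on a Brocade fabric; the zone names, configuration name, and WWNs are illustrative, and the destination members must be the post-conversion peer port WWNs:

zonecreate "xp1_to_3par_peer1", "50:00:09:72:c0:12:f1:0c; 20:34:02:02:ac:07:ea:90"
zonecreate "xp1_to_3par_peer2", "50:00:09:72:c0:12:f1:1c; 21:34:02:02:ac:07:ea:90"
cfgadd "fabric_cfg", "xp1_to_3par_peer1; xp1_to_3par_peer2"
cfgenable "fabric_cfg"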

5. Make sure that the source system appears as a device that is connected to both peer ports of the destination.

Example:

cli% showtarget -rescan


cli% showtarget
Port ----Node_WWN---- ----Port_WWN---- ------Description------
1:2:2 50000972C012F110 50000972C012F10C reported_as_scsi_target
0:2:2 50000972C012F120 50000972C012F120 reported_as_scsi_target

Node_WWN is the WWN of the source storage system.

6. In the migration utility console, verify that the communication between the source and destination is properly configured.

Example:

cli% showconnection

SOURCE_NAME SOURCE_UNIQUE_ID DESTINATION_NAME DESTINATION_UNIQUE_ID DESTINATION_PEER_PORT      SOURCE_HOST_PORT
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2121-0202-XY01-9656(1:2:1) 2123-0002-RS01-9666(1:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2121-0202-XY11-9656(1:2:1) 2123-0002-RS01-9666(1:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2121-0202-XY21-9656(1:2:1) 2123-0002-RS01-9666(1:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2121-0202-XY31-9656(1:2:1) 2123-0002-RS01-9666(1:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2121-0202-XY41-9656(1:2:1) 2123-0002-RS01-9666(1:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2121-0202-XY51-9656(1:2:1) 2123-0002-RS01-9666(1:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2121-0202-XY61-9656(1:2:1) 2123-0002-RS01-9666(1:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2121-0202-XY71-9656(1:2:1) 2123-0002-RS01-9666(1:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2121-0202-XY81-9656(1:2:1) 2123-0002-RS01-9666(1:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2001-0202-XY01-9656(0:0:1) 2023-0002-RS01-9666(0:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2001-0202-XY11-9656(0:0:1) 2023-0002-RS01-9666(0:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2001-0202-XY21-9656(0:0:1) 2023-0002-RS01-9666(0:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2001-0202-XY31-9656(0:0:1) 2023-0002-RS01-9666(0:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2001-0202-XY41-9656(0:0:1) 2023-0002-RS01-9666(0:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2001-0202-XY51-9656(0:0:1) 2023-0002-RS01-9666(0:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2001-0202-XY61-9656(0:0:1) 2023-0002-RS01-9666(0:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2001-0202-XY71-9656(0:0:1) 2023-0002-RS01-9666(0:2:3)
0000012345  2FF70002AC011234 HPE_3PAR_8400    2FF70002AC012345      2001-0202-XY81-9656(0:0:1) 2023-0002-RS01-9666(0:2:3)

7. For MDM and online migrations only, zone hosts to the destination storage system. Use the HPE 3PAR SSMC to verify that the host
on the source has paths to as many destination controller nodes as are zoned in the SAN.

Migration process
Procedure

1. Review optional parameters:

Consistency groups

Prioritization

Data reduction

LUN conflicts

2. Perform a migration:

Perform an Online migration.

Perform a Minimally Disruptive migration.

Perform an Offline migration.

Migrate data from an Oracle RAC cluster .

3. Clean up after the migration:

Clean up after successful MDM and Online migrations

Cleaning up after a successful Offline migration

Postmigration tasks in VMware ESXi environments

Consistency Groups
An HPE Consistency Group is an optional feature that allows you to consistently migrate the dependent volumes of applications.



To keep source volumes in a consistent state during migration, I/O writes are mirrored to the source storage system during the entire
migration. When the migration completes, I/O switches to the destination simultaneously for all volumes in a consistency group.
When the migration task completes, consistency groups are automatically deleted from the destination storage system.
Guidelines

For consistent imports, limit the number of volumes in a consistency group to 60. Limit the volumes to only include the ones that
must remain consistent.

To avoid a long switchover time at the end of imports, limit the total volume size in a consistency group to 120 TB.

Data reduction
You can apply data reduction settings on the destination storage system for migrating volumes. Requirements and limitations for
volume compression during data migration include the following:
Supported with HPE 3PAR destination systems and HPE Storage Online Import Utility 2.4.

The physical disks of the CPG must be SSD.

Volumes on the source must be at least 16 GiB for compression.

The source volume must be less than 16 TiB to be migrated as a compressed volume.

You cannot compress a fully provisioned volume.

NOTE:
You can compress volumes selected for migration or all volumes that meet compression requirements.

LUN conflicts
Migrating storage objects from multiple source storage systems to multiple destination storage systems can cause conflicts. Conflicts
can occur if the LUN ID from migrating volumes already exists on the destination host. LUN conflicts will cause the migration to fail.
When a conflict exists, the showmigration command shows STATUS output similar to the following example:
preparationfailed(-NA-)
(OIUERRPREP1021: Lun number conflict exists for LUN# <x> <y> ... , while presenting to hosts
<host1>

By default, the migration utility resolves LUN conflicts that occur during createmigration . You can override this behavior by setting -autoresolve false , which allows the createmigration command to fail instead.
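For example, a sketch of disabling automatic conflict resolution; the other parameter values are illustrative:

createmigration -sourceuid 12345 -srchost "host01" -destcpg FC_r6 -destprov thin -migtype online -persona "RHEL_5_6" -autoresolve false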

Prioritization
Set migration priority across volumes using the optional priority parameter in the createmigration command. The priority parameter accepts low , medium (default), or high .
Use the priority parameter as follows:

You can set a priority on volumes or volume sets. The priority setting on a volume takes precedence over the priority setting
on a volume set.

The priorityvolmap parameter places volumes inside a priority level.
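For example, a sketch that raises the priority of a migration task; the other parameter values are illustrative, and see the Prioritization examples reference for the -priorityvolmap syntax:

createmigration -sourceuid 12345 -srchost "host01" -destcpg FC_r6 -destprov thin -migtype online -persona "RHEL_5_6" -priority high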

Performing online migration


Procedure

1. In the migration utility console, verify that there is a connection between the source and destination storage systems:

showconnection

2. Prepare the migration task:

Example:

createmigration -sourceuid 12345 -srchost "<host_ID>" -destcpg <CPG_ID>
-destprov <provisioning_type> -migtype online -persona "RHEL_5_6"

IMPORTANT:
For non-XP7 systems, specify the five-digit serial number for -sourceuid .

For XP7 systems, specify the 10-digit serial number for -sourceuid .

Example output:

SUCCESS: Migration job submitted successfully. Please check status/details using
showmigration command.

Migration id: 1394789473527

For Oracle RAC—Execute multiple migrations if you want to transfer all Oracle-based disks from multiple source systems to a single
destination storage system.

3. To retrieve the migration id and monitor progress:

showmigration or showmigrationdetails

Example:

showmigration

MIGRATIONID   TYPE   SOURCE_NAME   DESTINATION_NAME   START_TIME
1394789473527 online <source_name> <destination_name> Thu Sep 25 15:23:50 EDT 2014

END_TIME STATUS(PROGRESS)(MESSAGE)
-NA-     preparationcomplete(100%)(-NA-)

When the STATUS column of the showmigration output indicates preparationcomplete(100%) , continue to the
next step.

4. From either the HPE 3PAR SSMC or the HPE 3PAR OS, verify migration paths.
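For example, a sketch from the destination HPE 3PAR CLI; the host name is illustrative:

cli% showhost -d <hostname>

The detailed output lists each host WWN with the destination port it is connected to; confirm that an entry exists for every path zoned in the SAN.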

5. Update path configuration on the host by rescanning all HBAs. Verify the newly discovered paths. The multipath subsystem on the
host recognizes extra paths to the storage system after rescanning.

On HP-UX hosts:

ioscan -f

On Linux hosts:

# ls /sys/class/fc_host
host4 host5
# echo "1" > /sys/class/fc_host/host4/issue_lip
# echo "- - -" > /sys/class/scsi_host/host4/scan
# echo "1" > /sys/class/fc_host/host5/issue_lip
# echo "- - -" > /sys/class/scsi_host/host5/scan

# multipath -ll
mpath2 (360060e80045be50000005be500001249) dm-3 HP,OPEN-V
[size=6.8G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 4:0:0:1 sdb 8:16 [active][ready]



\_ 4:0:4:1 sdf 8:80 [active][ready]
\_ 5:0:1:1 sdh 8:112 [active][ready]
\_ 5:0:2:1 sdj 8:144 [active][ready]
mpath1 (360060e80045be50000005be500001248) dm-2 HP,OPEN-V
[size=6.8G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 4:0:0:0 sda 8:0 [active][ready]
\_ 4:0:4:0 sde 8:64 [active][ready]
\_ 5:0:1:0 sdg 8:96 [active][ready]
\_ 5:0:2:0 sdi 8:128 [active][ready]

NOTE:
If paths were not discovered, stop. Enter the multipath -v2 command to troubleshoot and investigate
potential multipath issues. Correct issues and rescan before proceeding to the next step.

On VMware ESXi 6.x hosts—Rescan the HBAs:

# esxcli storage core adapter rescan -all

List the updated multipath mappings:

# esxcli storage core path list

On Windows hosts—Use the rescan option in the diskpart command.
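For example, from an elevated command prompt:

C:\> diskpart
DISKPART> rescan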

6. On the destination HPE 3PAR OS, verify that there is traffic over all paths that connect the host and the source storage system:

statport -host <hostname>

7. Remove zoning to the host from the source storage system.

Example—multipath -ll output on RHEL 6.6 after the source zoning is removed (only 3PARdata,VV paths remain):

mpathoq (360060e8006cf52000000cf5200002541) dm-429 3PARdata,VV


size=9.2G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:2:409 sdou 129:416 active ready running
`- 2:0:2:409 sdaie 129:800 active ready running
mpathnl (360060e8006cf52000000cf5200002526) dm-403 3PARdata,VV
size=9.2G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:2:382 sdnt 71:496 active ready running
`- 2:0:2:382 sdahd 71:880 active ready running

8. Start the migration.

startmigration -migrationid <migrationid>

Example:

startmigration -migrationid 1394789473527


SUCCESS: Data transfer started successfully.

9. Monitor the migration progress.

showmigrationdetails -migrationid <migrationid>

Example:

showmigration -migrationid 1394789473527

MIGRATION_ID  TYPE   SOURCE_NAME DESTINATION_NAME START_TIME
1394789473527 online sourceXP1   dest3PAR1        Fri Apr 04 16:38:24 EDT 2019

END_TIME STATUS(PROGRESS)(MESSAGE)
-NA-     success(-NA-)(-NA-)



When all objects have migrated successfully, the PROGRESS column shows Completed .

10. Verify that all objects migrated to the destination.

More information
Preparing for the migration
Discovering a source storage system LUN with LUN ID 254 on a Linux host
Aborting a migration

Discovering a source storage system LUN with LUN ID 254 on a Linux host

To discover the source storage system LUN with LUN ID 254 on a Linux host:

NOTE:
The output of the commands in the following procedure is an example. Following this procedure allows online migration
for LUNs with LUN ID 254 with supported Linux configurations. For a cluster configuration, perform this procedure on
each node.

Procedure

1. Issue the following command on the Linux server:

# sg_map -i|grep 3PARdata | grep SES


/dev/sg28 3PARdata SES 3222
/dev/sg29 3PARdata SES 3222
/dev/sg69 3PARdata SES 3222
/dev/sg82 3PARdata SES 3222

2. To find the name of the host devices and rescan their HBAs, issue the following command after the createmigration
operation is complete:

# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host3/scan

3. To list the devices, rerun the command:

# sg_map -i|grep 3PARdata


/dev/sg28 3PARdata VV 3222
/dev/sg29 3PARdata SES 3222
/dev/sg69 3PARdata SES 3222
/dev/sg82 3PARdata VV 3222

The output shows two of the sg devices that are tagged as VV instead of SES .

4. To verify that the LUN with LUN ID 254 is not yet listed, issue the multipath -ll command on the Linux server.

To discover the LUN with LUN ID 254, delete old devices and then rescan the HBAs again as follows:

5. To delete old devices that are listed as VV, issue the following command:

# echo "1" > /sys/class/scsi_generic/sg28/device/delete


# echo "1" > /sys/class/scsi_generic/sg82/device/delete

6. To rescan HBAs on the host, issue the following command:

# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip



# echo "- - -" > /sys/class/scsi_host/host3/scan

7. To verify that the discovered LUN with LUN ID 254 is now listed, issue the multipath -ll command on the Linux server:

size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw


`-+- policy='round-robin 0' prio=1 status=active
|- 4:0:2:254 sdbr 68:80 active ready running
`- 5:0:1:254 sdbs 68:96 active ready running

Results show that you can now migrate the LUN using the online procedure.

More information
Preparing for the migration
Discovering a source storage system LUN with LUN ID 254 on a Linux host
Aborting a migration

Performing Minimally Disruptive migration

Prerequisites
Completed Preparing for the migration
Procedure

1. In the migration utility console, verify that there is a connection between the source and destination storage systems:

showconnection

2. Prepare the migration task:

Example:

createmigration -sourceuid <source_ID> -srchost "<host_ID>"
-destcpg <CPG_ID> -destprov <provisioning_type> -migtype MDM -persona "RHEL_5_6"

IMPORTANT:
For non-XP7 systems, specify the five-digit serial number for -sourceuid .

For XP7 systems, specify the 10-digit serial number for -sourceuid .

Example output:

SUCCESS: Migration job submitted successfully. Please check status/details
using showmigration command.
Migration id: <migration_ID>

3. To retrieve the migration id and monitor progress:

showmigration or showmigrationdetails

Example:

showmigration -migrationid 1395864499741

MIGRATIONID   TYPE SOURCE_NAME   DESTINATION_NAME   START_TIME
1395864499741 MDM  <source_name> <destination_name> Wed Mar 26 13:08:19 PDT 2014

END_TIME STATUS(PROGRESS)(MESSAGE)
-NA-     preparationcomplete(100%)(-NA-)

When the STATUS column of the showmigration output indicates preparationcomplete(100%) , continue to the
next step.

4. Shut down the host and leave it offline until after the migration starts.

5. Remove zoning between the host OS and the source system.



6. Add zoning between the host OS and the destination system.

7. Start the migration:

startmigration -migrationid <migration_ID>

8. Monitor the migration progress. The PROGRESS of each volume is 0% and the TASK_ID is unknown . TASK_ID and
PROGRESS fields update as the task executes.

NOTE:
Do not bring hosts back online while TASK_ID is unknown

showmigrationdetails -migrationid <migrationid>

Example:

showmigrationdetails -migrationid <migration_ID>

SOURCE_VOLUME   DESTINATION_VOLUME   TASK_ID   PROGRESS
<source_volume> <destination_volume> <task_id> 27%

When all objects have migrated successfully, the PROGRESS column shows Completed .

9. Bring the host back online .

10. Rescan the disks to detect the migrated volumes/LDEVs.

More information
Enabling SCSI-3 persistent reservations for Veritas Storage Foundation
Bringing the host back online

Enabling SCSI-3 persistent reservations for Veritas Storage Foundation

When an LDEV on the source XP storage system is migrated, a SCSI-3 reservation prevents unwanted management changes. This reservation is automatically removed after the migration completes successfully.
Procedure

1. In the Veritas Enterprise Administrator, right-click DMP DSMs and then select DSM Configuration.

2. Select V3PARAA from the available DSM list.

3. Set the load balance policy: Round Robin (Active/Active).

4. Select SCSI-3 support for SCSI settings, and then click OK.

Performing Offline migration

Prerequisites
Completed Preparing for the migration
Procedure

1. In the migration utility console, verify that there is a connection between the source and destination storage systems:

showconnection

2. Prepare the migration task:

Example:



createmigration -sourceuid <source_ID> -srchost "<host_ID>"
-destcpg <CPG_ID> -destprov <provisioning_type> -migtype offline -persona "RHEL_5_6"

IMPORTANT:
For non-XP7 systems, specify the five-digit serial number for -sourceuid .

For XP7 systems, specify the 10-digit serial number for -sourceuid .

Example output:

SUCCESS: Migration job submitted successfully. Please check status/details
using showmigration command.
Migration id: <migration_ID>

3. To retrieve the migration id and monitor progress:

showmigration or showmigrationdetails

Example:

showmigration -migrationid 1395864499741

MIGRATIONID   TYPE    SOURCE_NAME   DESTINATION_NAME   START_TIME
1395864499741 offline <source_name> <destination_name> Wed Mar 26 13:08:19 PDT 2014

END_TIME STATUS(PROGRESS)(MESSAGE)
-NA-     preparationcomplete(100%)(-NA-)

When the STATUS column of the showmigration output indicates preparationcomplete(100%) , continue to the
next step.

NOTE:
Offline migration does not create the host definition on the destination storage system. The storage administrator
performs this step manually as a post-migration task.

4. Start the migration:

startmigration -migrationid <migration_ID>

5. Monitor the migration progress.

showmigrationdetails -migrationid <migrationid>

Example:

showmigration -migrationid 1495864389774

MIGRATION_ID  TYPE    SOURCE_NAME DESTINATION_NAME START_TIME
1495864389774 offline sourceXP1   dest3PAR1        Fri Apr 04 16:38:24 EDT 2019

END_TIME STATUS(PROGRESS)(MESSAGE)
-NA-     unpresenting(-1%-)(-NA-)

When all volumes or LDEVs have migrated successfully, the PROGRESS column shows Completed and the STATUS column indicates Success .

Migrating data from an Oracle RAC cluster

This use case migrates data from an Oracle RAC cluster configured with the cluster registry (CRS), voting disks, and data disks
distributed across multiple storage systems belonging to the same storage vendor.
Oracle supports increases to database capacity by adding storage systems. If Automatic Storage Management (ASM) is enabled, disks in an ASM disk group that come from different storage systems must all have the same performance characteristics and size. Data disks can be distributed across multiple storage systems. Because the CRS and voting disks are used for cluster configuration and integrity, they do not need to be configured on every storage system.
Data migration using OIU for Oracle RAC clusters is documented here for two specific configuration scenarios:

Oracle Database deployments before 11gR2, with the CRS and voting disks residing outside the ASM, and data disks included in the
ASM disk group.

Oracle Database 11gR2 deployments with the CRS, voting disks, and data disks included in the ASM disk group.
You can use OIU to migrate data from one source storage system at a time, and in an Oracle RAC migration, you must include all
volumes in a consistency group. The number of migrations made by OIU for an Oracle RAC configuration distributed across multiple
source storage systems is equal to the number of source storage systems deployed. For example, if the Oracle RAC database is
distributed across three storage systems (with the CRS and voting disks configured on one of them), perform three OIU migrations.
The order of OIU migrations does not matter. In this use case, the following sequences were verified:

Completely migrate the source storage systems with the CRS and voting disks and then migrate the source storage systems with
the data disks.

Completely migrate the source storage systems with the data disks and then migrate the source storage system with the CRS and
voting disks.

IMPORTANT:
Migrations are performed using the online migration procedure described in this guide. During migration, there are
instances when the Oracle RAC is distributed, and the CRS, voting, and data disks coexist on the deployed source and
destination storage systems. Coexistence is supported across the destination storage system and the source storage
system as shown in the support matrix.

Figure 1: Oracle RAC—Data migration from multiple storage systems

The figure shows two source storage systems—one with CRS, voting, and data disks, and the other with data disks only. Migrating the disks online from the two source storage systems to a single destination storage system with OIU is accomplished in two serially implemented phases.
Configuration details:

The deployment uses Oracle Database 11gR2 RAC with ASM enabled; the CRS, voting disks, and data disks on both source storage systems are included in the ASM. Implementation is not affected even if the CRS and voting disks are excluded from the ASM (as required by Oracle Database releases earlier than 11gR2).

There are two source storage systems: source storage system 1 and source storage system 2.

NOTE:
More than two nodes in a cluster or more than two source storage systems are supported for this implementation.

This example uses two source storage systems, so two phases (two OIU migrations) are needed for a complete migration:

Migration 1: The CRS, voting disks, and data disks are migrated from source storage system 1 to the destination storage system.

Migration 2: Data disks from source storage system 2 are migrated to the destination storage system.

NOTE:
The order of operation is not important—you can reverse migration 1 and migration 2.

Falling back to the source storage system after a failed or aborted migration on Oracle
RAC clusters

After a migration fails or is aborted, stop all applications performing I/O operations on the LUNs that were being migrated.
Procedure
1. From any one of the cluster nodes, stop all the databases by issuing:

# $ORACLE_HOME/bin/srvctl stop database -d <database name>

2. (Optional) On the HPE storage system, clear the reservation from all the LUNs that are part of the failed migration:

setvv -clrrsv -f <vvname>

3. Verify that all the database instances are offline. On all nodes:

# $GRID_HOME/bin/crs_stat -t

4. Zone the source storage system back to the Oracle cluster nodes.

5. Present the volume back to the host from the source storage system by rescanning for new data paths to the LUN.

6. Delete the sd device paths presented through the HPE storage system.

7. Remove the failed migration:

removemigration -migrationid <migrationid>

NOTE:
Migrations that fail in the import phase require a manual cleanup of the source and destination storage systems.

8. Verify that the migrating VVs are removed from the HPE storage system. If necessary, manually remove the migrating VVs:

# removevv -f <vvname>

9. (Optional) Unmask volumes from the HPE peer host on the source storage system.

10. Rescan the host and verify that it does not see paths from the HPE storage system.

11. Start the database:

# $ORACLE_HOME/bin/srvctl start database -d <database name>

12. Restart the applications.

Stopping host services

Stopping Oracle RAC cluster services


Procedure

1. On the primary node:

$GRID_HOME/bin/crsctl stop cluster -all

2. Confirm that cluster services stopped:

# $GRID_HOME/bin/crs_stat -t
CRS-0184: Cannot communicate with CRS Daemon

3. Stop the ASM:

/etc/init.d/oracleasm stop

Stopping Oracle RAC database services


Procedure

1. To stop the Oracle RAC database services, on the primary node:

# $ORACLE_HOME/bin/srvctl stop database -d <DB_name> -o immediate

2. Confirm that database services stopped:

# $ORACLE_HOME/bin/srvctl status database -d <DB_name>

Stopping Veritas cluster services

Procedure

1. Stop the clusters:

/opt/VRTSvcs/bin/hastop -all

2. Verify the cluster status on all nodes:


/opt/VRTSvcs/bin/hastatus

When a cluster is down, a message displays:

attempting to connect....
VCS ERROR V-16-1-10600 Cannot connect to VCS engine
attempting to connect....not available; will retry

3. The following steps verify that reservations are clear after stopping a cluster:



a. Create a tmpfile in the /root directory with all the LUNs or LDEVs.

b. Verify that the reservation keys are clear: vxfenadm -s all -f tmpfile

c. If the reservation keys are not clear, clear them:

# vxfenadm -a -k ABCD1234 -f tmpfile


# vxfenadm -c -k ABCD5678 -f tmpfile

The reservation keys are clear when each data LUN in the tmpfile indicates No keys .
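A minimal sketch of steps a and b, assuming the data LUNs appear to the host as /dev/sdf through /dev/sdh (device names vary by environment):

# ls /dev/sd[f-h] > /root/tmpfile
# vxfenadm -s all -f /root/tmpfile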

Stopping cluster services on Hyper-V running on Windows Server


Procedure

For an active/passive cluster running Windows Server 2008 or Windows Server 2008 R2: Disable the cluster to release SCSI
reservations:

1. Set the Cluster Shared Volume (CSV) disks to Maintenance Mode.

2. Set the quorum disk to the Offline mode.

3. Select Shutdown Cluster from the Failover Cluster Manager.

For an active/passive cluster running Windows Server 2012 or later:

1. Stop all applications running on the cluster.

2. Set the quorum disk to Offline.

3. Select Shutdown Cluster from the Failover Cluster Manager.
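Alternatively, on Windows Server 2012 or later, the cluster can be shut down from PowerShell. A hedged sketch, assuming the FailoverClusters PowerShell module is installed on the node:

# Stops the Cluster service on all nodes of the local cluster (prompts for confirmation)
Import-Module FailoverClusters
Stop-Cluster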

Bringing the host back online

NOTE:
Do not bring hosts back online before a TASK_ID is assigned.

Prerequisites
Verify that a TASK_ID was issued by the migration utility.
Procedure

1. For hosts booting over SAN from the source storage system—To select the HPE 3PAR boot device, reconfigure the HBA BIOS during host startup.

2. Bring the disks online. See:

Windows Server hosts

Hyper-V clusters

3. Scan for newly exported LUNs from the destination storage system. See:

Linux hosts

IBM AIX hosts

4. If applicable, restart the cluster, applications, and services. The host can resume normal operations.

Bringing Hyper-V clusters online
Procedure
1. Bring the quorum disks online.

2. Bring the CSV disks online.

3. Start virtual machines from the Failover Cluster Manager.

Bringing the IBM AIX host online


Procedure

1. To rescan the host for disks with the storage system signature:

cfgmgr

2. Verify that the host recognizes the volumes on the storage system:

lsdev

Example:

# lsdev -Cc disk


hdisk0 Available Virtual SCSI Disk Drive
hdisk1 Available 00-08-02 3PAR InServ Virtual Volume
hdisk2 Available 00-08-02 3PAR InServ Virtual Volume
hdisk3 Available 00-08-02 3PAR InServ Virtual Volume
hdisk4 Available 00-08-02 3PAR InServ Virtual Volume
hdisk5 Available 00-08-02 3PAR InServ Virtual Volume
hdisk6 Available 00-08-02 3PAR InServ Virtual Volume

3. Using the volume group-to-PVID mapping information, import the volume groups.

If a volume group is mapped to multiple physical disks, specify one disk from the list. The importvg command automatically
identifies the remaining disks that are mapped to the volume group.

# lspv
hdisk0 00f825bdd5f7e96e rootvg active
hdisk1 00f825bd4a05917b None
hdisk2 00f825bd5dfc80c6 AIX1VG active
hdisk3 00f825bd5dfc82c1 AIX2VG active

Note the PVIDs associated with the volume groups AIX1VG and AIX2VG. After zoning the host to the storage system, rescan the disks. The resulting output from the lspv command shows:

hdisk0 00f825bdd5f7e96e rootvg active


hdisk1 00f825bd4a05917b None
hdisk7 00f825bd5dfc80c6 None
hdisk8 00f825bd5dfc82c1 None

Import the volume groups by specifying the corresponding disks with the same PVIDs. In the example, pvid is the physical disk on which the volume group was created:

importvg -y vgname pvid
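For example, using the PVID mapping shown above, hdisk7 carries the PVID of AIX1VG and hdisk8 carries the PVID of AIX2VG:

# importvg -y AIX1VG hdisk7
# importvg -y AIX2VG hdisk8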

For applications using XP storage system LDEVs in raw format, reconfigure to point to the corresponding storage system volumes
that have the same PVID.

4. On the host, verify that all disks exported from the destination have the expected number of paths:

lspath

Example host with multiple LUNs:



# lspath
Enabled hdisk0 scsi0
Enabled hdisk1 scsi0
Enabled hdisk2 scsi0
Enabled hdisk3 fscsi0
Enabled hdisk4 fscsi0
Enabled hdisk3 fscsi1
Enabled hdisk4 fscsi1

Bringing Linux hosts online


Procedure
For each HBA on the host that is connected to the storage system:
echo "1" > /sys/class/fc_host/host<#>/issue_lip
echo "- - -" > /sys/class/scsi_host/host<#>/scan

# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "- - -" > /sys/class/scsi_host/host3/scan

Bringing Windows hosts online


Procedure

1. Check the disk status using one of the following:

diskpart.exe

Windows Disk Manager

2. Depending on the Windows SAN policy, the disks that were migrated are either offline or online:

Offline disks—bring them online using diskpart.exe.

Online disks—verify them using Windows Disk Manager.
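A minimal diskpart.exe sketch for bringing an offline migrated disk online, assuming the migrated LUN appears as disk 2 (confirm the disk number with list disk first):

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> online disk
DISKPART> attributes disk clear readonly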

Aborting a migration

NOTE:
You cannot abort a migration after issuing the startmigration command.

Procedure

1. Abort a migration after issuing the createmigration command by removing it from the queue:

removemigration -migrationid 1394789473527

2. Issue the command again to remove the migration from the migration utility database:

removemigration -migrationid 1394789473527
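To confirm that the migration no longer appears, list the migrations:

showmigration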



Cleaning up after successful MDM and Online migrations
Prerequisites

Verified that all migration tasks from the source storage system completed successfully.

Verified that all volumes migrated successfully to the destination storage system.

Verified that all applications started and work correctly from the destination storage system.

Procedure
1. For each completed migration task:

removemigration -migrationid <migrationid>

Example:

removemigration -migrationid 1122334455667


SUCCESS: The specified migration is successfully removed

2. Remove the source storage system from the migration:

removesource -type <storagesystemfamily> -uid <uid>

Example:

removesource -type XP -uid 48748


SUCCESS: Removed source storage system

3. Remove the destination storage system from the migration:

removedestination -uid <uid>

Example:

removedestination -uid xxxxxxxxxxxx


SUCCESS: Removed destination storage system

4. Remove zoning between the source storage system and the destination storage system.

5. Reconfigure the peer ports into host ports on the destination system.

6. Remove the SMI-S provider by removing HPE XP CVAE.

7. If needed, schedule a time to resignature the migrated VMware disks before rebooting cluster nodes.

8. (Optional) The WWN of a migrated volume is the one it had on the source system. To change the WWN to the schema used on the destination system:

setvv -wwn auto <VV>

NOTE: Do not use this command on exported volumes.

9. Delete the migrated volumes on the source system.

10. In Windows environments, return the Path Verify Enabled MPIO to the previous setting.

NOTE:
For storage systems in an HPE 3PAR Peer Persistence relationship, do not disable Path Verify Enabled
MPIO .

11. (Optional) Expand exported volumes to the next 256 MB boundary from the HPE 3PAR OS:

growvv <name of volume> 1

More information



Performing postmigration tasks in VMware ESXi environments

Performing postmigration tasks in VMware ESXi environments


Procedure

After rebooting, remove and then add RDM devices to virtual machines. See the VMware KB article 1016210 in the VMware Knowledge Base.

After rebooting, the ESXi host might not automatically mount VMFS data stores. If the data store is not accessible by the ESXi host,
see the VMware KB article 1011387 .

After all data stores are mounted and resignatured, they mount automatically after rebooting. Extra steps are required for updating
references to the original signature in virtual machine files. For more information, see Managing duplicate VMFS datastores on the
vSphere 5 documentation center .

Cleaning up after a successful Offline migration


Prerequisites

Verified that all migration tasks from the source storage system completed successfully.

Verified that all volumes migrated successfully to the destination storage system.

Verified that all applications started and work correctly from the destination storage system.

Procedure

1. Remove zoning between the source storage system and host.

2. The WWN of a migrated volume is the one it had on the source system. To change the WWN to the schema used on the destination
system:

setvv -wwn auto <VV>

3. Export the LUNs to the host on the destination storage system. From HPE 3PAR SSMC:

a. Create a host entry on the destination storage system (if it does not exist).

b. Set the HPE storage system host persona/host OS to the appropriate value.

c. Export migrated LUNs to the newly created host. For more information, see the HPE 3PAR implementation guides for your
specific host and the HPE 3PAR Host Explorer User Guide at the HPE Information Library.

4. Delete the migrated volumes from the source storage system.

Identifying and deleting source storage system LUN paths

The examples show LUN paths on an HPE XP storage system.

Identifying and deleting source system LUN paths with VMware ESXi

Identifying and deleting source system LUN paths with Linux native device-mapper multipath

Identifying and deleting source storage system LUN paths with HP-UX 11 v3

Identifying and deleting source system LUN paths with VMware ESXi
After the createmigration task completes successfully, log on to the ESXi host and rescan HBAs.
Procedure

1. From the CLI, issue the following commands to rescan the HBAs on the host. In this example, vmhba2 and vmhba3 are the FC HBAs:

# esxcfg-rescan vmhba3
# esxcfg-rescan vmhba2


2. To list all LUNs and their corresponding paths, issue the following command:

# esxcfg-mpath -b

naa.60060e8005bc1f000000bc1f00000170 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000170)
vmhba3:C0:T3:L2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc
vmhba3:C0:T2:L2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21
vmhba2:C0:T0:L2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 50:06:0e:80:05:bc:1f:09 WWPN: 50:06:0e:80:05:bc:1f:09

naa.60060e8005bc1f000000bc1f00000171 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000171)
vmhba3:C0:T3:L3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc
vmhba3:C0:T2:L3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21
vmhba2:C0:T0:L3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 50:06:0e:80:05:bc:1f:09 WWPN: 50:06:0e:80:05:bc:1f:09

naa.60060e8005bc1f000000bc1f00000150 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000150)
vmhba3:C0:T3:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc
vmhba3:C0:T2:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21
vmhba2:C0:T0:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 50:06:0e:80:05:bc:1f:09 WWPN: 50:06:0e:80:05:bc:1f:09

naa.60060e8005bc1f000000bc1f00000151 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000151)
vmhba3:C0:T3:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc
vmhba3:C0:T2:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21
vmhba2:C0:T0:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 50:06:0e:80:05:bc:1f:09 WWPN: 50:06:0e:80:05:bc:1f:09



The output shows LUNs with their source and destination system paths.

3. Remove the source storage system from the host zone. The host shows the status of the source path as Target: Unavailable.

# esxcfg-mpath -b

naa.60060e8005bc1f000000bc1f00000170 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000170)
vmhba3:C0:T3:L2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc
vmhba3:C0:T2:L2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21
vmhba2:C0:T0:L2 LUN:2 state:dead fc Adapter: Unavailable Target: Unavailable

naa.60060e8005bc1f000000bc1f00000171 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000171)
vmhba3:C0:T3:L3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc
vmhba3:C0:T2:L3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21
vmhba2:C0:T0:L3 LUN:3 state:dead fc Adapter: Unavailable Target: Unavailable

naa.60060e8005bc1f000000bc1f00000150 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000150)
vmhba3:C0:T3:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc
vmhba3:C0:T2:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21
vmhba2:C0:T0:L0 LUN:0 state:dead fc Adapter: Unavailable Target: Unavailable

naa.60060e8005bc1f000000bc1f00000151 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000151)
vmhba3:C0:T3:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc
vmhba3:C0:T2:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21
vmhba2:C0:T0:L1 LUN:1 state:dead fc Adapter: Unavailable Target: Unavailable

4. Rescan the HBAs.

In this example, the FC HBAs are vmhba2 and vmhba3 .

# esxcfg-rescan vmhba3
# esxcfg-rescan vmhba2

5. To verify the LUN paths, issue the following command:

# esxcfg-mpath -b

naa.60060e8005bc1f000000bc1f00000170 : HP Fibre Channel Disk



(naa.60060e8005bc1f000000bc1f00000170)
vmhba3:C0:T3:L2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc

naa.60060e8005bc1f000000bc1f00000171 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000171)
vmhba3:C0:T3:L3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc

naa.60060e8005bc1f000000bc1f00000150 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000150)
vmhba3:C0:T3:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc

naa.60060e8005bc1f000000bc1f00000151 : HP Fibre Channel Disk


(naa.60060e8005bc1f000000bc1f00000151)
vmhba3:C0:T3:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN:
10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc
vmhba2:C0:T3:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN:
10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc

Identifying and deleting source system LUN paths with Linux native device-mapper multipath

After the createmigration task completes successfully, rescan the HBAs on a Linux host.
Procedure

1. Rescan the HBAs, and then issue the multipath -ll command:

# ls /sys/class/fc_host
host4 host5
# echo "1" > /sys/class/fc_host/host4/issue_lip
# echo "1" > /sys/class/fc_host/host5/issue_lip
# echo "- - -" > /sys/class/scsi_host/host4/scan
# echo "- - -" > /sys/class/scsi_host/host5/scan
# multipath -ll

mpathd (360060e80045be50000005be500001424) dm-5 XP,OPEN-V


size=14G features='1 queue_if_no_path' hwhandler='0' wp=rw



`-+- policy='round-robin 0' prio=1 status=active
|- 5:0:1:2 sde 8:64 active ready running
|- 4:0:2:2 sdg 8:96 active ready running
|- 5:0:0:2 sdp 8:240 active ready running
`- 4:0:0:2 sdl 8:176 active ready running
mpathc (360060e80045be50000005be500001412) dm-3 XP,OPEN-V
size=244G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 5:0:1:1 sdd 8:48 active ready running
|- 4:0:2:1 sdf 8:80 active ready running
|- 5:0:0:1 sdo 8:224 active ready running
`- 4:0:0:1 sdk 8:160 active ready running

Rescanning the HBAs and listing the updated multipath mapping for SUSE:

# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "- - -" > /sys/class/scsi_host/host3/scan
# multipath -ll

360060e8005bc1f000000bc1f00000040 dm-11 HP,OPEN-V


size=200G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 2:0:0:0 sda 8:0 active ready running
`- 3:0:1:0 sdt 65:48 active ready running
360060e8006cf49000000cf4900000f26 dm-2 HP,OPEN-V
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 2:0:2:5 sdk 8:160 active ready running
`- 3:0:2:5 sdad 65:208 active ready running

2. For each LUN being migrated, identify the LUN and its WWN in the output. For example, migrating a LUN with WWN
360060e80045be50000005be500001424 and four paths, sde , sdg , sdp , and sdl :

a. To identify the paths associated with the source storage system, issue the following command on each associated path:

# cat /sys/block/sde/device/model
OPEN-E
# cat /sys/block/sdg/device/model
OPEN-E
# cat /sys/block/sdp/device/model
VV
# cat /sys/block/sdl/device/model
VV

The output shows that paths sde and sdg belong to the source storage system. Paths sdp and sdl belong to the destination storage system.

b. To delete the paths from the operating system, enter the following commands:

# echo "1" > /sys/block/sde/device/delete
# echo "1" > /sys/block/sdg/device/delete

3. Repeat the previous steps for all nodes in the cluster.

Identifying and deleting source storage system LUN paths with HP-UX 11 v3

After the createmigration task completes successfully, remove zoning between the source storage system and the host.

NOTE:
With legacy DSF paths, before continuing, clean up stale paths from the volume group using the pvchange(1M)
and vgreduce(1M) commands.
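For example, a hedged sketch of the legacy DSF cleanup, assuming a stale legacy path /dev/dsk/c4t0d1 in volume group vg01 (substitute your own device and volume group names):

# pvchange -a n /dev/dsk/c4t0d1
# vgreduce vg01 /dev/dsk/c4t0d1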

Procedure

1. Using the HP-UX CLI, enter ioscan -fnN as follows:

# ioscan -fnN

slot 2 0/0/0/9/0/0 pci_slot CLAIMED SLOT PCI Slot


fc 2 0/0/0/9/0/0/0 fcd CLAIMED INTERFACE HP 451871-B21 8Gb
Dual Port PCIe Fibre Channel Mezzanine (FC Port 1)
/dev/fcd2
tgtpath 9 0/0/0/9/0/0/0.0x20220002ac001abc estp CLAIMED
TGT_PATH fibre_channel target served by fcd driver, target port id 0x10c00
lunpath 1256 0/0/0/9/0/0/0.0x20220002ac001abc.0x0 eslpt CLAIMED
LUN_PATH LUN path for ctl30
lunpath 1258 0/0/0/9/0/0/0.0x20220002ac001abc.0x4001000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2153
lunpath 1259 0/0/0/9/0/0/0.0x20220002ac001abc.0x4002000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2150
lunpath 1260 0/0/0/9/0/0/0.0x20220002ac001abc.0x4003000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2149
lunpath 1261 0/0/0/9/0/0/0.0x20220002ac001abc.0x4004000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2154
lunpath 1257 0/0/0/9/0/0/0.0x20220002ac001abc.0x40fe000000000000 eslpt CLAIMED
LUN_PATH LUN path for ctl31
tgtpath 4 0/0/0/9/0/0/0.0x50060e8006cf4923
estp NO_HW TGT_PATH fibre_channel target served by fcd driver, target
port id 0x10100
lunpath 1004 0/0/0/9/0/0/0.0x50060e8006cf4923.0x0
eslpt NO_HW LUN_PATH LUN path for ctl13
lunpath 1003 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4000000000000000 eslpt NO_HW
LUN_PATH LUN path for disk2149
lunpath 1005 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4001000000000000 eslpt NO_HW
LUN_PATH LUN path for disk2150
lunpath 1007 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4002000000000000 eslpt NO_HW
LUN_PATH LUN path for disk2153
lunpath 1008 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4003000000000000 eslpt NO_HW
LUN_PATH LUN path for disk2154
fc 3 0/0/0/9/0/0/1 fcd CLAIMED INTERFACE HP 451871-B21 8Gb



Dual Port PCIe Fibre Channel Mezzanine (FC Port 2)
/dev/fcd3
tgtpath 10 0/0/0/9/0/0/1.0x21220002ac001abc estp CLAIMED
TGT_PATH fibre_channel target served by fcd driver, target port id 0x1b0b00
lunpath 1262 0/0/0/9/0/0/1.0x21220002ac001abc.0x0 eslpt CLAIMED
LUN_PATH LUN path for ctl32
lunpath 1050 0/0/0/9/0/0/1.0x21220002ac001abc.0x4001000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2153
lunpath 1051 0/0/0/9/0/0/1.0x21220002ac001abc.0x4002000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2150
lunpath 1052 0/0/0/9/0/0/1.0x21220002ac001abc.0x4003000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2149
lunpath 1105 0/0/0/9/0/0/1.0x21220002ac001abc.0x4004000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2154
lunpath 1263 0/0/0/9/0/0/1.0x21220002ac001abc.0x40fe000000000000 eslpt CLAIMED
LUN_PATH LUN path for ctl31
tgtpath 5 0/0/0/9/0/0/1.0x50060e8006cf4933 estp NO_HW
TGT_PATH fibre_channel target served by fcd driver, target port id 0x1b0100
lunpath 1006 0/0/0/9/0/0/1.0x50060e8006cf4933.0x0 eslpt NO_HW
LUN_PATH LUN path for ctl13
lunpath 1009 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4000000000000000 eslpt NO_HW
LUN_PATH LUN path for disk2149
lunpath 1012 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4001000000000000 eslpt NO_HW
LUN_PATH LUN path for disk2150
lunpath 1010 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4002000000000000 eslpt NO_HW
LUN_PATH LUN path for disk2153
lunpath 1011 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4003000000000000 eslpt NO_HW
LUN_PATH LUN path for disk2154
usb 4 0/0/0/29/0 uhci CLAIMED INTERFACE Intel UHCI Controller
usb 5 0/0/0/29/1 uhci CLAIMED INTERFACE Intel UHCI Controller

2. To remove the paths from the host to the source, enter rmsf -H as follows:

# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4000000000000000
# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4001000000000000
# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4002000000000000
# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x4003000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4000000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4001000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4002000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x4003000000000000
# rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x0
# rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933

3. To verify that all paths from the host to the source shown in the NO_HW state were removed, enter ioscan -fnN as follows:

# ioscan -fnN

slot 2 0/0/0/9/0/0 pci_slot CLAIMED SLOT PCI Slot


fc 2 0/0/0/9/0/0/0 fcd CLAIMED INTERFACE HP 451871-B21 8Gb
Dual Port PCIe Fibre Channel Mezzanine (FC Port 1)
/dev/fcd2
tgtpath 9 0/0/0/9/0/0/0.0x20220002ac001abc estp CLAIMED
TGT_PATH fibre_channel target served by fcd driver, target port id 0x10c00
lunpath 1256 0/0/0/9/0/0/0.0x20220002ac001abc.0x0 eslpt CLAIMED
LUN_PATH LUN path for ctl30
lunpath 1258 0/0/0/9/0/0/0.0x20220002ac001abc.0x4001000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2153
lunpath 1259 0/0/0/9/0/0/0.0x20220002ac001abc.0x4002000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2150
lunpath 1260 0/0/0/9/0/0/0.0x20220002ac001abc.0x4003000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2149



lunpath 1261 0/0/0/9/0/0/0.0x20220002ac001abc.0x4004000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2154
lunpath 1257 0/0/0/9/0/0/0.0x20220002ac001abc.0x40fe000000000000 eslpt CLAIMED
LUN_PATH LUN path for ctl31
fc 3 0/0/0/9/0/0/1 fcd CLAIMED INTERFACE HP 451871-B21 8Gb
Dual Port PCIe Fibre Channel Mezzanine (FC Port 2)
/dev/fcd3
tgtpath 10 0/0/0/9/0/0/1.0x21220002ac001abc estp CLAIMED
TGT_PATH fibre_channel target served by fcd driver, target port id 0x1b0b00
lunpath 1262 0/0/0/9/0/0/1.0x21220002ac001abc.0x0 eslpt CLAIMED
LUN_PATH LUN path for ctl32
lunpath 1050 0/0/0/9/0/0/1.0x21220002ac001abc.0x4001000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2153
lunpath 1051 0/0/0/9/0/0/1.0x21220002ac001abc.0x4002000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2150
lunpath 1052 0/0/0/9/0/0/1.0x21220002ac001abc.0x4003000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2149
lunpath 1105 0/0/0/9/0/0/1.0x21220002ac001abc.0x4004000000000000 eslpt CLAIMED
LUN_PATH LUN path for disk2154
lunpath 1263 0/0/0/9/0/0/1.0x21220002ac001abc.0x40fe000000000000 eslpt CLAIMED
LUN_PATH LUN path for ctl31
usb 4 0/0/0/29/0 uhci CLAIMED INTERFACE Intel UHCI Controller
usb 5 0/0/0/29/1 uhci CLAIMED INTERFACE Intel UHCI Controller
usb 6 0/0/0/29/7 ehci CLAIMED INTERFACE Intel EHCI 64-bit
Controller

Where are the migration utility log files?


Symptom
Locate the migration utility log files to help identify why an error occurred.

NOTE:
Stop the migration utility service before retrieving log files.

Action

1. Default log location:

C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\OIUTools\tomcat\32-bit\apache-tomcat-7.0.59\logs

2. Default log file names:

hpeoiu.log and hpeoiuaudit.log

3. Default configuration data location:

C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\OIUData\data

How to increase logging details


Symptom
Increase logging to the most verbose level.
Action

1. Stop the migration utility service.

2. Edit the file in the following default location:

C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\OIUTools\tomcat\32-bit\apache-tomcat-7.0.59\webapps\oiuweb\WEB-INF\classes\applicationConfig.properties

3. Make the following change:

log4j.rootCategory=INFO, DebugLogAppender

to

log4j.rootCategory=ALL, DebugLogAppender

4. Start the migration utility service.

The migration utility console does not open in Windows 7


Symptom
Double-clicking the migration utility icon or right-clicking the icon to Run As Administrator does not launch the console.
Action

1. Launch a command window.

2. Run the OIUCLI.bat file from the following default location or directory where you installed the migration utility:

Default location:

C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\CLI

3. Log in to the migration utility.

Cannot add an HPE XP source storage system


Symptom
The addsource command is unsuccessful.
Cause

An invalid source storage system UID is specified when executing the addsource command.

The HPE XP source storage system is in a failed state within the XP CVAE.

The XP CVAE is running an unsupported software version.

An invalid user and/or password is provided for the XP CVAE.

An invalid IP address or port (secure/unsecure) is provided for the XP CVAE.

The XP CVAE has client IP address filtering enabled and the OIU server is not on the trusted IP address list.

The XP CVAE is not managing the source storage system.

The OIU server and the XP CVAE do not have network connectivity.

The XP CVAE and the source storage system do not have network connectivity.

The HPE XP source storage system is running an unsupported firmware version.

Action
Check each of the above conditions to identify which may be causing the problem, then resolve it.

Cannot admit or import a volume


Symptom
Admitting or importing a volume does not succeed.

Solution 1
Action

1. For MDM and Offline migration: verify that the destination storage system has no active paths. Check the host information in the HPE 3PAR SSMC or use the HPE 3PAR CLI command showhost.

2. To determine where the operation failed, monitor the migration task by checking the task screen or by using the HPE 3PAR CLI:

showtask
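For example, to view detailed status for a specific task from the HPE 3PAR CLI (assuming you have the task ID from the task screen):

showtask -d <taskID>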

Solution 2
Cause
The task failed after the volumes were admitted on the destination storage system but before the volume import stage. You can manually return the system to the pre-admit state. This process is non-disruptive to the hosts provided that the appropriate zoning and host multipathing are re-established. The host must have access to the volume through the source system. For single-volume migrations, removing zoning is not required.
Action
To return the system to its state before an unsuccessful volume admit:
1. On the fabric and host—If zoning was removed from between the host and source system, re-zone and then confirm that I/O is
directed to the source system.

2. On the fabric and host—Remove zoning between the host and the destination storage system. Verify that all access to the volumes is through the source system.

3. On the destination storage system—Remove the VLUNs on the destination storage system for the peer volumes exported to the
hosts.

4. On the destination storage system—Remove the peer volumes from the destination storage system.

5. On the destination storage system—If no other volumes were exported from the destination storage system to the host, remove the host from the destination storage system.

6. On the source storage system—Remove the VLUN exports to the host representing the destination storage system from the
source storage system.

7. On the source storage system—Remove the host representing the destination storage system from the source storage system.

Solution 3
Cause
The task did not complete after the volume import tasks started, and the hosts' access to the volumes on the source system was interrupted. A failed import returns the system to the point where you can retry the import after resolving the problem. You can revert the configuration so that the I/O access is from the source system, but this is a manual process and requires downtime.
Action
To revert the configuration so that the source system is servicing I/O:

1. On the host—To prevent consistency issues, shut down active applications before shutting down the hosts. Stop access to the
destination storage system from the host. The host will lose access to the volumes being migrated, as part of the procedure.

2. On the destination storage system—Cancel all active import tasks for the volumes that were being migrated. To cancel the import
task, from the HPE 3PAR CLI :

canceltask

3. On the destination storage system—Remove the VLUNs for the volumes exported to the host.

4. On the destination storage system—Remove the peer volumes.

5. On the destination storage system—If no other volumes are exported to the host, remove the host.



6. On the source system—Remove the VLUN exports to the destination storage system.

7. On the source system—Remove the host representing the destination storage system.

8. On the source system—From the HPE 3PAR CLI issue the setvv -clrrsv command to all volumes that were being migrated
on the source system. If the source system is running HPE 3PAR OS 3.1.3 or above, issue the setvv -clralua command to
all volumes that were being migrated.

9. On the fabric and host—If needed, re-zone the host to the source system.

10. On the host—Restart the host and any applications that were shut down at the beginning of this process.

Solution 4
Cause
The import task fails with LD read failure as follows:
2015-08-07 05:32:31.87 PDT {10920} {events:normal }
LD mirroring failed after 740 seconds due to failure in LD read(1).
Request was for 256MB from LD 270:1280MB to be mirrored to LD 501:3328MB

The problem might be caused by a failure reading from the source volume.


Action

1. Check for issues such as broken peer links, bad disks on the source, or the source running out of space. Fix any issues before re-initiating the migration.

2. If an LD write error is reported from a destination volume, check for bad disks or insufficient space on the destination. Fix any
issues before re-initiating the migration.

Cannot log in to the migration utility


Symptom
When you attempt to log in to the migration utility console, an error displays:
CLI Version: x.x.x
Enter IPADDRESS: 10.22.3.123
Enter USERNAME: tester
Enter PASSWORD:
>ERROR: Invalid credentials, Please try with valid credentials
Enter IPADDRESS:

Solution 1
Cause
The user was required to change their password.
Action

1. Clear the User must change password at next logon check box in the user setup.

2. Launch the migration utility console and log in.

Solution 2
Action

1. On the Windows server, where the migration utility is installed: Stop the migration utility service.

2. Remove the oiu.keystore file from:

%HOMEDRIVE%%HOMEPATH%
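For example, from a Windows command prompt, assuming the keystore is in the default location:

del "%HOMEDRIVE%%HOMEPATH%\oiu.keystore"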

3. Start the migration utility service.



4. Launch the migration utility console and log in.

Cannot validate a security certificate


Symptom
When you attempt to issue the adddestination command for an HPE 3PAR storage system running HPE 3PAR OS 3.1.2 MU3 or
later, one of the following errors display:
OIUERRDST0010 Unable to validate certificate for HPE 3PAR Storage System

OR
OIURSLDST0010. Please use the installcertificate command to accept the certificate.

Perform the following steps to add the CA signed certificate to the HPE 3PAR storage system:
Action

1. Connect to the storage system using PuTTY or access the HPE 3PAR CLI .

2. Issue the showcert -service cli -type rootca –pem command.

The root CA signed certificate should appear. If the following message displays:

There are no certificates for the following service(s): cli

Issue the showcert -type rootca -pem command.

3. Copy and save the certificate with a .pem extension in the security folder (<home directory of current user>\InFormMC\security).

NOTE:
To view the home directory of current user, run the echo %HOMEDRIVE%%HOMEPATH% command from the
Windows command prompt.
If %HOMEDRIVE%%HOMEPATH% is blank or not the directory of the user, check and then use one of the
following locations:
C:\InFormMC\security

C:\Windows\SysWOW64\config\systemprofile\InFormMC\security

4. Run the showcert -service cli -type intca -pem command.

The intermediate CA signed certificate should appear. If the following message displays:

There are no certificates for the following service(s): cli

Issue the showcert -type intca -pem command.

5. Copy and save the certificate with a .pem extension in the security folder (mentioned above).

6. To install the root and intermediate CA signed certificates, run the following command (in the command line) twice, once for the
root CA and once for the intermediate CA:

keytool -import -file <path of security folder>\<filename>.pem -keystore HP-3PAR-MC-TrustStore

NOTE:
To run the keytool commands, Java v6.0 or later must be installed and the PATH environment variable should
contain the path to java.exe . If the path is not specified, issue set PATH=%PATH%;C:\Program
Files (x86)\Java\jre\bin to set it dynamically.

Example:



keytool -import -file rootca.pem -alias rootca -keystore HP-3PAR-MC-TrustStore

7. Issue the addsource command or the adddestination command again to add the HPE 3PAR storage system.

Clean up after the createmigration command fails


Symptom
The createmigration command failed.
Cause
After the command fails, perform the recovery actions before starting over.
Action

1. Remove the migration:

removemigration -migrationid <migrationID>

2. Stop the migration utility service.

3. For MDM or online migration: Remove zoning between the host and the destination storage system.

4. Delete any VLUNs that were created on the destination storage system. Use the HPE 3PAR SSMC or HPE 3PAR CLI .

5. Delete any host or host sets that were created on the destination storage system. Use the HPE 3PAR SSMC or HPE 3PAR CLI .

NOTE:
If any of the migrating volumes were presented, unpresent them from the host, and then delete.

6. Delete any volumes that were created on the destination storage system. Use the HPE 3PAR SSMC or HPE 3PAR CLI .

7. On the source storage system, remove the exports for the volumes to the host groups HCMDxxxxx whose migration failed.

8. Back up the current data folder, and then delete the data folder from:

<Install drive/folder>/Hewlett Packard Enterprise/hpe3paroiu/OIUData/data

9. Back up the current log folder, and then delete the log folder from:

<Install drive/folder>/Hewlett Packard Enterprise/hpe3paroiu/OIUTools/tomcat/32-bit/apache-tomcat-7.0.59/logs

10. Verify:

a. The destination CPG has enough space.

b. Disks are in good condition.

c. No snapshot or replication volumes are selected for migration.

11. Check the peer links on the peer ports.

a. Make sure the target port WWNs appear in the discovered port list.

b. Check target port connections from the destination storage system.

12. Start the migration utility service.

13. From the migration utility console:

a. Add the source storage system to the migration utility.

b. Add the destination storage system to the migration utility.

c. Verify the source and destination zoning:

showconnection



14. Issue the createmigration command.

15. After the peer host is created on the source: Make sure that the source appears as a connected device on both peer ports of the
destination.

a. Perform a rescan:

showtarget -rescan

b. List the visible target WWNs:

showtarget -lun all

The createmigration command fails


Symptom
Issuing the createmigration command fails.
The following error codes might appear:

OIUERRAPP0000

OIUERRDB1006

OIUERRPREP1023

OIURSLAPP0000

Cause
The createmigration command can fail if storage objects selected for migration do not meet all requirements, for example:

One or more LUNs in a host group is being replicated.

A LUN selected for migration exceeds the maximum limit of 64 TiB.

LUNs were not found in the host group specified for migration.

Action

1. Review Requirements for selecting storage objects.

2. Review Considerations when selecting storage objects for migration.

3. Verify that the storage system information you gathered is correct. For example, IP address, host name, and volume names.

4. Before trying the command again, see How do I clean up after the createmigration command fails?

The createmigration command fails with an error about peer port connectivity
Symptom
The createmigration task fails and the error message indicates that there is a communication issue between peer ports and
host ports.
Cause
The createmigration command causes the migration utility to verify network connectivity and zoning.
Action

1. Review the zoning section of your migration guide.

2. Make sure that each host port on the source system is zoned to one physical peer port on the destination system. Virtual peer
ports can be present on the physical peer ports of the destination system.

3. Make sure that the source and destination systems are connected to the fabric.



The createmigration command returns error OIUERRDB1006
Symptom
The createmigration command displays the following error message:

"ERROR: OIUERRDB1006 Database constraint violated"

Cause
The LUNs were not found in the host group specified for migration.
Action
1. Add the LUNs to the host group.

2. Stop, and then start the migration utility service.

3. Retry the createmigration operation.

The adddestination command fails with error OIUERRDST0001


Symptom
Attempting to add a destination system results in the following error message:
ERROR: OIUERRDST0001 Unable to connect to the destination storage system. OIURSLDST0001 Ensure
that the valid IP address, credentials and certificate are provided.

Cause
A security certificate was not installed with the migration utility.
Action

1. Install a certificate:

Example:

installcertificate -mgmtip aperol

(alternate)

installcertificate -mgmtip <systemname>

Example output:

Certificate details:
Issue by: aperol.eu.tslabs.hpecorp.net
Issue to: aperol.eu.tslabs.hpecorp.net
Valid from: 03/26/2019
Valid to: 03/25/2022
SHA-1 Fingerprint: F1:93:FB:ED:6F:60:89:C4:A6:30:35:02:E6:C3:E3:6D:B9:29:65:DA
Version: v3
Do you accept the certificate? Y/N
Y

(alternate)

Certificate details:
Issue by: systemname.aa.testlab.examplecorp.net
Issue to: systemname.aa.testlab.examplecorp.net
Valid from: 03/26/2019
Valid to: 03/25/2022
SHA-1 Fingerprint: F1:93:BB:ED:6B:60:89:C4:B6:30:35:02:F6:B3:D3:6E:A9:29:65:EB



Version: v3
Do you accept the certificate? Y/N
Y

2. Enter Y to accept.

The following message displays:

SUCCESS: Installed certificate successfully.

3. Retry the adddestination command.

The createmigration command fails because of an error creating the host


Symptom
The createmigration command fails because the migration utility could not create the host on the destination storage system.

Solution 1
Cause
The host name exists on the destination storage system with a different WWN.
Action

1. Same host name, different WWN—On initial migration of a host, include all the WWNs in the host group (for some storage
systems). Include WWNs that are not managing any LUNs. Subsequent migration of any WWN for this host will find a match.

2. Different host name, same WWN—Change the host name so that it does not match the existing host name on the destination
storage system.

Solution 2
Cause
For migrations from multiple source systems, specify the host name, host group name, or initiator group name in the -srchost
parameter of the createmigration command.
Action
If you cannot modify the host name or initiator group name on the source storage system: Edit the host name on the destination
storage system before issuing the createmigration command.

The createmigration command returns error OIUERRPREP1023 for XP


Symptom
The createmigration command returns error OIUERRPREP1023 when performing a migration from XP.
Action

1. Remove all volumes with Peer provisioning on the destination system.

2. Remove all presentations of the migrating volumes to the host groups named HCMDxxxxx that represent the HPE 3PAR on the
HPE XP source system.

3. Repeat the createmigration command.

The createmigration command returns error OIUERRPREP1027 migrating a 64 TiB LUN
Symptom
Issuing the createmigration command causes the error:
OIUERRPREP1027: Volume <Volumename> is ineligible for migration as size is less than minimum
size limit 256 MB or more than the maximum supported size.

Cause
The LUN selected for migration exceeds the maximum limit of 64 TiB.

The createmigration command fails with an error in the -srcvolmap parameter


Symptom
There is missing information in the -srcvolmap parameter.
Cause
Provisioning or CPG information was not provided in the -srcvolmap parameter. The migration utility detected more volumes
than specified in the -srcvolmap option.
Action
The -destprov and -destcpg parameters are required with the -srcvolmap parameter.
Example:
createmigration -sourceuid <sourceuid> -srchost host1 -migtype mdm
-persona RHEL_8 -destcpg ssd_r6 -destprov thin

SUCCESS: Migration job submitted successfully. Please check status/details using


showmigration command.
Migration id: 1400681640423

Example:
createmigration -sourceuid <sourceuid> -srcvolmap [{00:10:A4}] -migtype mdm
-persona RHEL_8 -destcpg ssd_r6 -destprov thin

SUCCESS: Migration job submitted successfully. Please check status/details using


showmigration command.
Migration id: 1400681640423

The createmigration command with the -hostset parameter returns error OIUERRDST0003
Symptom
Issuing the createmigration command with -hostset parameter returns the following error message:

preparationfailed(-NA-)(:OIUERRDST0003:The array is not in an usable state.;)

Cause
There is an invalid character, such as a space, in the host set name. The following example shows an invalid space in the host set name:
createmigration -sourceuid <sourceuid> -srchost R65-S02-IG -destcpg FC_r6
-destprov <provisioningtype> -migtype MDM -persona <persona> -vvset R65-S02_VVset
-hostset R65-S02 Hostset
SUCCESS: Migration job submitted successfully. Please check status/details using
showmigration command. Migration id: 1440011328444



showmigration
MIGRATIONID TYPE SOURCE_NAME DESTINATION_NAME START_TIME
1440011328444 MDM SYMMETRIX+000198701333 TAY-7200-07-R123 Wed Aug 19 15:08:48 EDT 2015
END_TIME STATUS(PROGRESS) (MESSAGE)
-NA preparationfailed(-NA-)(:OIUERRDST0003:The array is not in an usable state.;)

Action

1. Remove the createmigration task twice. The first time removes the migration from the queue. The second time removes
the migration from the migration utility database.

Example:

removemigration -migrationid 1440011328444


SUCCESS: The specified migration is successfully removed.

2. Reissue the createmigration command with a valid host name.

Example:

createmigration -sourceuid <sourceuid> -srchost R65-S02-IG


-destcpg FC_r6 -destprov <provisioningtype> -migtype MDM -persona <persona>
-vvset R65-S02_VVset -hostset R65-S02_Hostset

The startmigration command fails after a successful createmigration task


Symptom
A createmigration task completes successfully but startmigration fails.
Cause
Migrating LUNs smaller than 16 GB is not supported.
Action

1. Increase the size of the LUN to at least 16 GB, and then retry the operation.

2. Remove any volume smaller than 16 GB from the createmigration definition, and then retry the operation.

3. Check the paths between the source and destination systems. Verify that the showconnection command displays two paths
for each source storage system. Retry the operation.

4. Contact HPE Support if the issue persists.

The startmigration command fails


Symptom
The startmigration task can fail for the following reasons:

Solution 1
Cause
The host name exceeds 31 characters or contains a space character.
Action

1. Make sure that host names do not exceed 31 characters.

2. Make sure that host names do not contain space characters.

Solution 2



Cause
The startmigration status shows Importing and the migration utility stalls, but an error message does not display.
Action

1. Make sure that the destination system is available.

2. Verify that the connection between the source and destination is active.

3. Resubmit the startmigration command.

The startmigration task fails without an error message


Symptom
While the startmigration task status displays Importing, the destination system crashes and the migration utility hangs, but no error message displays.
Action
Resubmit the migration. If the problem persists, contact HPE Support.

Trailing spaces in IP address return login error


Symptom
When logging in to the migration utility, an error message displays:
ERROR: Error connecting to server using IP XX.XX.XX.XX

Cause
There were trailing spaces in the IP address.
Action
Launch the migration utility console and enter the IP address without trailing spaces.

The migration process

Premigration sets up migration tasks. Data is identified but not moved.


On the source system: The migration utility creates two host groups that contain the WWN of one destination peer port initiator. The
host group name format is HCMDxxxxx where x is a hexadecimal number.

A createmigration command cannot simultaneously contain a host group and an LDEV.

For MDM and Online migration, the migration utility creates the migrating hosts on the destination storage system.

For migrating consistency groups, temporary VV sets are created on the destination storage system. These temporary VV sets are
removed after migration completes.

The createmigration command performs the following checks:

Verifies the source host group configuration: Make sure that the LDEVs or host groups specified in the createmigration
command are mapped to a host group on the associated source system.

Verifies the storage system model number.

Verifies LUN migration eligibility:

The host presentation protocol is FC.



LUNs are not replicated.

The LUN is for OPEN systems.

The LUN is not a snapshot or pool volume for snapshots or thin provisioning.

The LUN is not a system device or command device.

The LUN is between 256 MB and 64 TiB (16 TiB for deduplication or compression).

When the createmigration operation completes:

For Online migrations, the LDEVs being migrated are admitted to the destination system and then exported to the host.

For MDM, the LDEVs are admitted to the destination storage system but are not exported to the host.
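To check which volumes were admitted as peer volumes on the destination storage system, a hedged sketch using the HPE 3PAR CLI (the -prov filter is an assumption; on older CLI versions, inspect the Prov column of plain showvv output instead):

showvv -p -prov peer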

Moving the data

When data-transfer time is insufficient for the migration of all volumes, a subsequent migration can be started with another subset
at a later stage.

All or a subset of the LDEVs selected are migrated with the startmigration command.

To select a subset of LDEVs, use the -subsetvolmap option (a sketch follows this list).

During data migration, LDEV service time is adversely affected when a host issues large amounts of read/write traffic over the
destination storage system.

During data migration, paths must remain zoned between the host and destination storage system. The host operating system and
the migration type determine when these paths are created.
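A hedged sketch of migrating only a subset of the prepared volumes, assuming the migration ID returned by createmigration and assuming the -subsetvolmap value takes a bracketed volume list like the -srcvolmap examples in this chapter:

startmigration -migrationid 1400681640423 -subsetvolmap "[{00:0B:F2},{00:FB:23}]"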

Examples using the consistency group parameters


Create consistency groups with the following parameters in the createmigration command:

-cgvolmap identifies volumes in a consistent migration.

-allvolumesincg specifies all volumes, including implicit volumes, for consistent migration.

NOTE:
When there are multiple consistency groups in a migration task, and you issue the showmigrationdetails
command, the consistency group name changes to Not Assigned as each group migration completes.

Migrate a subset of volumes


createmigration -sourceuid 12345 -migtype online
-srcvolmap "[{vol1,thin,testcpg},{vol2,thin,testcpg},{vol3,thin,testcpg}]"
-persona "RHEL_5_6" -destcpg testcpg -destprov full
-cgvolmap {"values":{"cg1": ["vol1","vol2"]}}

The -srcvolmap parameter selects vol1 , vol2 , and vol3 for migration from the source storage system.

The -cgvolmap parameter names the Consistency Group cg1 .

Migrate all volumes


createmigration -sourceuid 12345 -migtype online
-srcvolmap "[{vol1,thin,testcpg},{vol2,thin,testcpg},{vol3,thin,testcpg}]"
-allvolumesincg -destcpg testcpg -destprov thin -persona "RHEL_5_6"

The -srcvolmap parameter selects vol1 , vol2 , and vol3 for migration from the source storage system.

The -allvolumesincg parameter places the listed volumes and all implicitly added volumes in a single Consistency Group on the destination storage system.

Migrate a subset of volumes in a host


createmigration -sourceuid 12345 -migtype online
-srchost "hostname" -destcpg testcpg -destprov thin -persona "RHEL_5_6"
-cgvolmap {"values":{"cg1":["vol1","vol2","vol3"],"cg2":["vol4","vol5","vol6"]}}

The -srchost option selects all volumes exported to the host hostname for migration.

The -cgvolmap parameter defines Consistency Groups cg1 and cg2.

Volumes vol1, vol2, and vol3 are placed in cg1. Volumes vol4, vol5, and vol6 are placed in cg2. The volumes in cg1 and cg2 are migrated independently and consistently.

Migrate all volumes in a host


createmigration -sourceuid 12345 -migtype online
-srchost "hostname" -destcpg cpg1 -destprov thin -persona "RHEL_5_6"
-allvolumesincg

All volumes exported to hostname on the source storage system are placed in a Consistency Group.

Prioritization examples

Prioritizing volumes
createmigration -migtype online -sourceuid 3PAR1 -srcvolmap
"[{vol1,thin,testcpg},{vol2,thin,testcpg}]" -destcpg testcpg -destprov reduce
-persona "RHEL_5_6" -priorityvolmap {"values":{"low":["vol1","vol2"],"high":["vol3","vol4"]}}

The -priorityvolmap parameter sets the priority to low for vol1 and vol2 and to high for vol3 and vol4.

As a result, vol3 and vol4 migrate with a higher priority than vol1 and vol2.

Other volumes that were implicitly added to the migration receive the default medium priority.

Prioritizing volume sets


createmigration -migtype online -sourceuid 3PAR1 -srcvolmap "[{set:volset1,
thin, testcpg}]" -destcpg testcpg -destprov reduce -persona "RHEL_5_6"
-priorityvolmap {"values":{"low":["vol1","vol2"],"high":["vol3","vol4"]}}

The -srcvolmap parameter adds a volume set named volset1 to the migration.

The -priorityvolmap parameter sets the priority to low for vol1 and vol2 and to high for vol3 and vol4, all of which are located in volset1.

vol3 and vol4 migrate with a higher priority than vol1 and vol2.

Any other volumes in the volume set migrate at the default priority.

Prioritizing with the -srcvolmap option


createmigration -migtype online -sourceuid 3PAR1 -srcvolmap
"[{vol1,thin,FC_r1,,high},{vol2,thin,FC_r1,,low}]" -destcpg testcpg -destprov reduce

The -srcvolmap option migrates vol1 at high priority and vol2 at low priority. The priority is specified as the final field of each volume entry.



Prioritizing with consistency groups
To migrate volumes consistently and set priorities, all volumes in a consistency group must have the same priority.
Using the -allvolumesincg option:

createmigration -migtype online -sourceuid 3PAR1 -srcvolmap
"[{vol1,thin,FC_r1},{vol2,thin,FC_r1},{vol3,thin,FC_r1},{vol4,thin,FC_r1}]"
-destcpg FC_r1 -destprov thin -persona "RHEL_5_6" -allvolumesincg
-priorityvolmap {"values":{"high":["vol1","vol2","vol3","vol4"]}}

All explicitly named volumes are migrated with high priority.


Migrate volumes with a different priority for each consistency group:

createmigration -migtype online -sourceuid 3PAR1 -srcvolmap
"[{vol1,thin,testcpg},{vol2,thin,testcpg},{vol3,thin,testcpg},
{vol4,thin,testcpg}]" -destcpg testcpg -destprov thin -persona "RHEL_5_6"
-cgvolmap {"values":{"CG1":["vol1","vol2"]}}
-priorityvolmap {"values":{"low":["vol1","vol2","vol3","vol4"]}}

All volumes in srcvolmap are migrated with the priority specified in priorityvolmap.

The volumes listed in cgvolmap are migrated consistently with low priority.

Examples using data reduction settings

Compressing all volumes in a host-based migration


To compress all volumes in a migration, use the -compressall option with the createmigration command as follows:

createmigration -sourceuid 12345 -srchost "host_a"
-migtype mdm -destcpg FC_r1 -destprov thin -compressall
-persona "personavalue"

Compressing selected volumes in a volume-based migration


To compress selected volumes in a migration, use the compress parameter in the -srcvolmap option of the
createmigration command for each volume entry.
Compress volume 00:0B:F2 with thin provisioning:

-srcvolmap [{00:0B:F2,thin,testcpg,compress}]

Compress volume 00:0B:F2 with dedupe provisioning:

-srcvolmap [{00:0B:F2,dedupe,testcpg,compress}]

Using the volmapfile option


You can compress selected volumes by adding the compress parameter to volume entries in the text file passed with the -volmapfile
option of the createmigration command.
Compress volumes with thin provisioning:
createmigration -sourceuid 2FF70002AC001DB5 -volmapfile "C://Volume/volumeMap.txt"

Where the file volumeMap.txt contains:

00:0B:F2,thin,SSD_r1,compress
00:FB:23,thin,FC_r1
00:10:34,thin,SSD_r1,compress



Compress volumes with dedupe provisioning:
createmigration -sourceuid 2FF70002AC001DB5 -volmapfile "C://Volume/volumeMap.txt"

Where the file volumeMap.txt contains the following lines:

00:0B:F2,dedupe,SSD_r1,compress
00:FB:23,dedupe,FC_r1
00:10:34,dedupe,SSD_r1,compress

Examples using the autoresolve parameter

Migrating a host
createmigration -srchost "hostname" -sourceuid 12345
-migtype mdm -persona "WINDOWS_2008_R2" -destcpg testcpg
-destprov thin -autoresolve false

Migrating a volume
createmigration -sourceuid 12345 -migtype mdm
-autoresolve false -persona "WINDOWS_2008_R2"
-srcvolmap "[{00:34:1B,thin,testcpg}]" -destcpg testcpg -destprov thin

Rolling back to the original source storage system

As a best practice, test a fail-back process before performing migration on live or production data. Typical considerations include:
Responding to a failed migration.

Rolling back to the original storage system.

Requirements for rolling back.


Understanding and documenting the rollback process can help ensure that downtime is either minimal or nonexistent. Steps will vary
depending on the specific hardware and software configurations in your migration.

Figure 2: Rollback process for a failed online migration



Figure 3: Rollback process for a failed MDM



More information
Clearing a SCSI reservation with HPE 3PAR OS 3.2.1 MU3 or later

Clearing a SCSI reservation with HPE 3PAR OS 3.2.1 MU3 or later

Clear the SCSI reservation only after rolling back to the original source storage system. After clearing the SCSI reservation, reissue the
startmigration command.
Procedure

1. Connect to the destination storage system:



a. Enter the IP address or DNS name.

b. Enter user name and password.

2. Find the volumes with reservations:

# showvv

Id Name Prov Type CopyOf BsId Rd -Detailed_State- Adm Snp Usr VSize
0 admin full base --- 0 RW normal 0 0 10240 10240
22 vol0 peer base --- 22 RW exclusive 0 0 16640 16385
23 vol1 peer base --- 23 RW exclusive 0 0 16640 16385
21 vol2 peer base --- 21 RW exclusive 0 0 16640 16385
-------------------------------------------------------------------------------
4 total 0 0 121600 120835

Make a note of the volumes where Prov = peer and Detailed_State = exclusive.
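
If your HPE 3PAR OS CLI version supports pattern filtering on showvv (an assumption to confirm in the HPE 3PAR CLI reference), a filter such as the following may narrow the listing to peer volumes directly:

# showvv -p -prov peer

Volumes in the peer provisioning state whose detailed state is exclusive are the candidates for clearing.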

3. Clear the reservation:

# setvv -clrrsv vol0
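
If several peer volumes hold reservations, setvv generally accepts more than one volume name in a single invocation; treat the following as a sketch and confirm the behavior in the HPE 3PAR CLI reference:

# setvv -clrrsv vol0 vol1 vol2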

4. Delete the volumes on the destination storage system.

Websites

General websites
Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix

https://www.hpe.com/storage/spock
Storage white papers and analyst reports

https://www.hpe.com/storage/whitepapers

For additional websites, see Support and other resources.

Support and other resources

Accessing Hewlett Packard Enterprise Support

For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:

https://www.hpe.com/info/assistance

To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website:

https://www.hpe.com/support/hpesc

Information to collect
Technical support registration number (if applicable)



Product name, model or version, and serial number

Operating system name and version

Firmware version

Error messages

Product-specific reports and logs

Add-on products or components

Third-party products or components

Accessing updates

Some software products provide a mechanism for accessing software updates through the product interface. Review your product
documentation to identify the recommended software update method.

To download product updates:

Hewlett Packard Enterprise Support Center

https://www.hpe.com/support/hpesc
Hewlett Packard Enterprise Support Center: Software downloads

https://www.hpe.com/support/downloads
My HPE Software Center

https://www.hpe.com/software/hpesoftwarecenter

To subscribe to eNewsletters and alerts:

https://www.hpe.com/support/e-updates

To view and update your entitlements, and to link your contracts and warranties with your profile, go to the Hewlett Packard
Enterprise Support Center More Information on Access to Support Materials page:

https://www.hpe.com/support/AccessToSupportMaterials

IMPORTANT:
Access to some updates might require product entitlement when accessed through the Hewlett Packard Enterprise
Support Center. You must have an HPE Passport set up with relevant entitlements.

Remote support

Remote support is available with supported devices as part of your warranty or contractual support agreement. It provides intelligent
event diagnosis, and automatic, secure submission of hardware event notifications to Hewlett Packard Enterprise, which initiates a fast
and accurate resolution based on the service level of your product. Hewlett Packard Enterprise strongly recommends that you register
your device for remote support.
If your product includes additional remote support details, use search to locate that information.

HPE Get Connected

https://www.hpe.com/services/getconnected
HPE Pointnext Tech Care

https://www.hpe.com/services/techcare
HPE Datacenter Care



https://www.hpe.com/services/datacentercare

Warranty information

To view the warranty information for your product, see the links provided below:

HPE ProLiant and IA-32 Servers and Options


https://www.hpe.com/support/ProLiantServers-Warranties
HPE Enterprise and Cloudline Servers

https://www.hpe.com/support/EnterpriseServers-Warranties
HPE Storage Products

https://www.hpe.com/support/Storage-Warranties
HPE Networking Products

https://www.hpe.com/support/Networking-Warranties

Regulatory information

To view the regulatory information for your product, view the Safety and Compliance Information for Server, Storage, Power,
Networking, and Rack Products, available at the Hewlett Packard Enterprise Support Center:

https://www.hpe.com/support/Safety-Compliance-EnterpriseProducts

Additional regulatory information


Hewlett Packard Enterprise is committed to providing our customers with information about the chemical substances in our products as
needed to comply with legal requirements such as REACH (Regulation EC No 1907/2006 of the European Parliament and the Council). A
chemical information report for this product can be found at:

https://www.hpe.com/info/reach

For Hewlett Packard Enterprise product environmental and safety information and compliance data, including RoHS and REACH, see:

https://www.hpe.com/info/ecodata

For Hewlett Packard Enterprise environmental information, including company programs, product recycling, and energy efficiency, see:
https://www.hpe.com/info/environment

Documentation feedback

Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation,
use the Feedback button and icons (located at the bottom of an opened document) on the Hewlett Packard Enterprise Support Center
portal (https://www.hpe.com/support/hpesc) to send any errors, suggestions, or comments. All document information is captured by
the process.
