HA06

This document discusses configuring Power HA System Mirror for IBM i in a virtualized environment. It covers installing and configuring the components of the virtualized HA solution, including VIOS, virtual disk volumes, and DS configuration. It also discusses tasks for networking, updates, and finding WWPNs after VIOS installation.


IBM Power Systems Technical University

October 18-22, 2010, Las Vegas, NV

Configuring Power HA System Mirror for i in a Virtualized Environment

2010 IBM Corporation

4.1.01

IBM Power Systems Technical University Las Vegas, NV

Agenda
Describe the implementation of an IBM i HA solution in a virtualized environment
Install and configure the components of the virtualized HA solution:
  DS volumes and host attachments
  VIOS partition with real devices mapped to virtual devices
  IBM i partition using virtual disk volumes

Explain the concept of virtualization

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

DS Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Agenda
Storage Manager interface overview
Creating LUNs
Attaching LUNs to VIOS
Detecting LUNs in VIOS

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Storage Manager Interface Overview

Double-click a storage subsystem to manage it.
The Task Assistant contains the most common operator tasks.

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Storage Manager Interface Overview

The Logical/Physical view shows controllers, physical drives, RAID arrays, and LUNs.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating Volumes

Click the Task Assistant button.
LUNs can also be created by right-clicking Free Capacity on any RAID array and selecting Create Logical Drive.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating Volumes

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating Volumes

Select the correct RAID array based on sizing


2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating Volumes

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating Volumes

Mapping of the LUN to VIOS will be configured separately


2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating Volumes

Physical formatting for the new volume is still taking place.
However, the volume can be mapped to the VIOS LPAR immediately.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Finding WWPN for Blade IOAs

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Finding WWPN for Blade IOAs

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Finding WWPN for Blade IOAs

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Mapping Volumes to VIOS

Click Mappings drop-down menu and select Define New Host

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Mapping Volumes to VIOS

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Mapping Volumes to VIOS

The DS will scan the Fibre Channel fabric for adapters that are not already part of a host mapping.
Next, add the new adapter to the list of selected HBAs.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Mapping Volumes to VIOS

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Mapping Volumes to VIOS

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Mapping Volumes to VIOS

Next, assign the volumes to the newly defined host.
Right-click the unmapped volumes and select Define Additional Mapping.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Mapping Volumes to VIOS

Select the new VIOS host in the drop-down menu.
Once the mapping is complete, the volumes are now LUNs that are available to VIOS.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS LPAR Configuration, Install and Updates

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Agenda
VIOS enablement
LPAR configuration
VIOS install
Post-install tasks:
  Configure networking
  Update VIOS (if necessary)
  Find WWPNs
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Enablement
VIOS is a separate partition type
  Not enabled by default on any IBM Power servers
  It is enabled on the BladeCenter JS12 and JS22

PowerVM Standard or Enterprise Edition must be ordered
  Provides the VIOS media
  Provides the enablement code that allows the VIOS partition type

Use the HMC to enter the enablement code
  ASMI can also be used
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

Choose Action → Create Ethernet Adapter

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

Note the setting "Access external network"
  Required for a Shared Ethernet Adapter
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

Choose Action → Create SCSI Adapter

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LPAR Configuration

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: SMS Setup

When the partition is powered on, the PFW screen will appear.
Press 1 to enter the SMS menu and set up the install source for VIOS.

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: SMS Setup

Accept the Hypervisor and PFW license agreement

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: DVD Only

Select option 5, then option 1


2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: DVD Only

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: NIM Only

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: NIM Only

The client IP address is this VIOS partition's address
The server IP address is that of the local NIM server
A gateway will be necessary if the client and server are on different subnets
Press M on the keyboard to return to the main menu
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: NIM Only

Use (6) for NIM, (3) for DVD


2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: NIM Only

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: DVD or NIM

Exiting SMS starts the install

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: DVD or NIM

Press 2 and then Enter to confirm this terminal as the console during the install.
The number 2 will not appear on the screen.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: DVD or NIM

Choose option 2 to verify VIOS is going to be installed on the correct disk unit
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: DVD or NIM

hdisk0 is the first physical disk or LUN detected.
VIOS is typically installed on mirrored integrated disks, but it can be installed on SAN storage.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: DVD or NIM

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Install: DVD or NIM

Once the install is complete and VIOS reboots, log in with the user ID padmin.
No password is required for the first login; set a password for padmin.
Next, accept the VIOS license agreement.
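
A minimal sketch of that first session from the VIOS command line (flags shown as commonly documented for the license command; check the VIOS command reference if unsure):

  $ license -view      # display the license agreement text
  $ license -accept    # accept the agreement so the full padmin command set is available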
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Post-install Tasks: Network Configuration


If NIM was used for installation, the network configuration is already complete. Start a browser to the IP address of the VIOS partition and use IVM to create the i5/OS partition(s).
If DVD was used:
  On the VIOS console, use the lsdev command to identify the correct network device name (example: ent4)
  Use the corresponding network interface (en4 in our example) in the mktcpip command to configure networking:
    mktcpip -hostname <VIOS hostname> -inetaddr <VIOS IP address> -interface enX -gateway <gateway IP address> -nsrvaddr <DNS server IP address> -nsrvdomain <domain name> -start
  Use lstcpip to check the network configuration, or rmtcpip -f -all to start over
VIOS command reference:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/iphb1/iphb1_vios_commandslist.htm
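
For example, with purely illustrative values (the hostname, addresses, netmask, and domain below are placeholders, not values from this environment):

  $ mktcpip -hostname vios1 -inetaddr 192.168.10.21 -interface en4 -netmask 255.255.255.0 -gateway 192.168.10.1 -nsrvaddr 192.168.10.2 -nsrvdomain example.com -start
  $ lstcpip    # confirm the interface, route, and name server settings took effect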

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Post-install Tasks: VIOS Updates


The minimum required level of VIOS for i5/OS on blade is 1.5 with Fix Pack 10.1

To check the VIOS level:


Telnet to VIOS
Sign in with padmin
Use the ioslevel command
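
A sketch of the check-and-update flow (the fix pack directory below is an assumed example location, not a required path):

  $ ioslevel                                                # report the installed VIOS level
  $ updateios -dev /home/padmin/fixpack -install -accept    # apply a downloaded fix pack

Re-run ioslevel after the update (and any required restart) to confirm the new level.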

To access the latest available VIOS updates and instructions for installing them, visit
http://www14.software.ibm.com/webapp/set2/sas/f/vios/download/home.html

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Determine World-wide Port Name (WWPN)


The WWPN is the Fibre Channel (FC) adapter's equivalent of a MAC address

Unique 16-digit hexadecimal number burned into FC card

Used in DS configuration to map volumes to hosts

VIOS command line is used


2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Determine World-wide Port Name (WWPN)


Step 1: Find the FC adapter device name

Step 2: Find the WWPN
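
A hedged sketch of those two steps from the VIOS command line (fcs0 is just an example adapter name):

  $ lsdev -type adapter                   # step 1: FC adapters appear as fcs0, fcs1, ...
  $ lsdev -dev fcs0 -vpd | grep Network   # step 2: the "Network Address" field is the 16-digit WWPN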

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Detecting LUNs in VIOS

First, use the cfgdev command to force VIOS to scan for new devices.
Then use the lsdev command to check for LUNs (hdisks).
The LUNs are available to be virtualized to IBM i.
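
A minimal sketch of that sequence:

  $ cfgdev             # force VIOS to rescan for newly mapped devices
  $ lsdev -type disk   # the DS LUNs should now be listed as additional hdiskX devices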
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

VIOS Configuration and IBM i Install

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Agenda

Creating virtualization objects in VIOS
Creating the IBM i LPAR
Installing IBM i

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating Virtualization Objects in VIOS


Required objects:
  A vtscsiX device for each LUN being virtualized to IBM i
  A vtoptX device for each DVD drive

Optional objects:
  A new entX device for a Shared Ethernet Adapter
  The IBM i partition can also use a physical NIC or an LHEA

The VIOS command line is used
  Unless the system is managed by IVM instead of an HMC
  IBM i partitions must use only virtual resources if the system is IVM-managed
  IVM is used on Power blades
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LUN Virtualization
First, identify the correct virtual SCSI server adapter(s):

Next, verify the LUNs from the DS are reporting:
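
A hedged sketch of those two checks (device names will differ on each system):

  $ lsmap -all         # lists each vhostX virtual SCSI server adapter and its client partition ID
  $ lsdev -type disk   # the DS LUNs report as hdiskX devices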

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LUN Virtualization
Use mkvdev to create the vtscsiX device:

Verify the new virtual disk (vtscsiX device):

The new virtual disk is immediately available as a non-configured drive to the IBM i client partition
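
A sketch of the create-and-verify step; hdisk4, vhost0, and the -dev label are illustrative names, not values from this environment:

  $ mkvdev -vdev hdisk4 -vadapter vhost0 -dev ibmi_ls   # creates the vtscsiX virtual target device
  $ lsmap -vadapter vhost0                              # shows the new VTD and its backing hdisk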

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Optical Virtualization
First, identify the physical CD/DVD drive:

Use mkvdev to create the vtoptX device:

The virtual optical device is immediately available to the IBM i client partition as OPTxx
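
A sketch with illustrative device names:

  $ lsdev -type optical                 # the physical drive typically reports as cd0
  $ mkvdev -vdev cd0 -vadapter vhost0   # creates the vtoptX device for the IBM i client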
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LUN Mapping Between VIOS and IBM i


First, identify the virtual devices assigned to a client IBM i partition:

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

LUN Mapping Between VIOS and IBM i


Next, examine the details for a single virtual disk (vtscsiX device):

Note the LUN number (1 in this example).
Lastly, display the disk unit details in IBM i:
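
A hedged sketch of the correlation (vhost0 is an example adapter name): the LUN value that lsmap reports under each vtscsiX device is the number to match against the disk unit details shown on the IBM i side.

  $ lsmap -vadapter vhost0   # note the LUN reported for each VTD and compare it with the IBM i disk unit details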

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating a Shared Ethernet Adapter


Bridges a physical Ethernet adapter and a virtual Ethernet adapter in VIOS (a layer-2 bridge)
Any client partitions on the same VLAN as the bridged virtual Ethernet adapter will be able to access the physical network
The mkvdev command is used again:
  mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
  ent0 is the physical adapter in VIOS
  ent2 is the virtual Ethernet adapter
  -defaultid 1 signifies which VLAN is being bridged
As long as the IBM i client partition has a virtual Ethernet adapter on VLAN 1, it will have access to the physical network through VIOS
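
To confirm the bridge afterwards, a hedged check:

  $ lsmap -all -net   # lists the new SEA (entX) with its physical and virtual member adapters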

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating an IBM i Partition as a Client of VIOS

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

Start partition creation wizard


2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

Mixing physical and virtual I/O in the same client partition is supported
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

Choose Action → Create Ethernet Adapter

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

The VLAN ID should match the ID of the virtual Ethernet adapter in the VIOS partition
The SEA will forward packets from IBM i to the physical network
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

Choose Action → Create SCSI Adapter

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

New POWER6, IBM i 6.1, and HMC 7.3.2 capability: vSCSI client adapters in the IBM i partition
At least 1 vSCSI client adapter is required
The vSCSI client and server adapter configurations must match
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

3 different methods to provide networking to the client partition:
  Physical NIC
  LHEA
  Virtual Ethernet
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation


Select an available logical port

1 LHEA per physical HEA per LPAR is allowed

Additional LHEAs per physical port can be enabled in the managed system's properties
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Creation

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Installing an IBM i Partition on DS4000 through VIOS

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

Open a console in the HMC
The first order of business, once the install starts, is to verify the virtual disks have been detected
Use DST to start the Hardware Service Manager and check the logical resources
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

Start a service tool

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

Start the Hardware Service Manager (HSM)

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

Work with system bus resources

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

Find the virtual IOP (type 290A) and display the resources associated with it
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

In this test example, a single LUN is virtualized by VIOS
The optical unit is a virtual optical device in VIOS
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

Return to the main DST menu and choose to install LIC

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

Select the load source disk unit
When multiple LUNs are virtualized for a client partition, any one of them can be selected as the load source
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

Confirm the load source disk unit

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

IBM i Client Partition Install

Install the LIC and (later) the operating system
From this point on, the IBM i install is no different from that on any other system
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Creating cluster (green screen)


                          Create Cluster (CRTCLU)

Type choices, press Enter.

Cluster . . . . . . . . . . . .   AS540           Name
Node list:
  Node identifier . . . . . . .   ITCHA4C         Name
  IP address  . . . . . . . . .   '9.5.110.42'
                                  '9.5.110.43'
  Node identifier . . . . . . .   ITCHA4A         Name
  IP address  . . . . . . . . .   '9.5.110.37'
                                  '9.5.110.38'
               + for more values
Start indicator . . . . . . . .   *YES            *YES, *NO
Target cluster version  . . . .   *CUR            *CUR, *PRV

If the cluster has more than one node when it is created, "Start indicator" is ignored. Nodes must be started manually after cluster creation. It is recommended to have two separate communication paths (IP interfaces) between systems. Clustering will use both interfaces for heartbeating.
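
The same cluster could also be created without prompting by keying the full command; a minimal sketch built from the values above (keywords as documented for CRTCLU, so verify with F4 prompting):

  CRTCLU CLUSTER(AS540)
         NODE((ITCHA4C ('9.5.110.42' '9.5.110.43'))
              (ITCHA4A ('9.5.110.37' '9.5.110.38')))

Because this cluster has more than one node at creation time, the start indicator is ignored; start each node afterwards, for example with STRCLUNOD CLUSTER(AS540) NODE(ITCHA4A).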
2010 IBM Corporation


IBM Power Systems Technical University Las Vegas, NV

Creating an HA/DR environment


The basis for HA in a virtualized environment is an IASP
  Create the IASP on the production system
  Move the data and objects to be highly available from ASP 1 to the IASP

A cluster with geographic mirroring will be used

DS replication may be used to ensure the information in the DS is highly available
  However, there is no integrated solution between the cluster and the DS in a virtualized environment
  Failover of the DS and of the cluster nodes is handled independently
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Steps for implementing geographic mirroring


There are five main steps to create a cross-site mirroring HA solution:

1. Identify the hardware to be used in the IASP or IASP group
2. Create an independent ASP or IASP group
3. Create a cluster and define a device domain
4. Create a device CRG
5. Create the mirror copy of the IASP
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Introduction to device domains (1 of 2)

Systems which will switch hardware must be in the same device domain

(Diagram: cluster AS540, whose nodes are joined in device domain NFL and share an IASP.)
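
On the command line, the equivalent of the GUI steps shown later is Add Device Domain Entry; a sketch using the names from this example, run once per node (keywords as documented for ADDDEVDMNE):

  ADDDEVDMNE CLUSTER(AS540) DEVDMN(NFL) NODE(ITCHA4C)
  ADDDEVDMNE CLUSTER(AS540) DEVDMN(NFL) NODE(ITCHA4A)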

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Introduction to device domains (2 of 2)


(Diagram: NODE1 and NODE2 in a cluster device domain spanning three towers. IASP33 (units 4001-4003 and 4016-4020) resides in Towers 1 and 2, and Tower 3 contains IASP34 (units 4004-4015). Callouts show which virtual address ranges are available or unavailable to each node for IASPs, and which ranges each node uses for objects that are not in an IASP.)

An example of how virtual address space is divided between two nodes in a device domain with three towers. Notice that disk unit identification numbers are unique in the device domain as well.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Adding device domain

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Adding device domain

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Steps for implementing geographic mirroring


There are five main steps to create a cross site mirror HA solution

Identify the hardware to be used in the IASP or IASP group Create an Independent ASP or IASP group Create a cluster and define a device domain Create a Device CRG Create the mirror copy of the IASP
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Create CRG (green screen) (1 of 4)


                    Create Cluster Resource Group (CRTCRG)

Type choices, press Enter.

Cluster . . . . . . . . . . . .   AS540           Name
Cluster resource group  . . . .   GEOMIR          Name
Cluster resource group type . .   *DEV            *DATA, *APP, *DEV
CRG exit program  . . . . . . .   *NONE           Name, *NONE
  Library . . . . . . . . . . .                   Name
User profile  . . . . . . . . .   *NONE           Name, *NONE
Recovery domain node list:
  Node identifier . . . . . . .   ITCHA4C         Name
  Node role . . . . . . . . . .   *PRIMARY        *BACKUP, *PRIMARY, *REPLICATE
  Backup sequence number  . . .   *LAST           Number, *LAST
  Site name . . . . . . . . . .   K206            Name, *NONE
  Data port IP address  . . . .   9.5.110.42
               + for more values
                                                                      Bottom

To create a Device CRG through a CL command in green screen, use CRTCRG.
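
A sketch of the non-prompted form, built from the values shown on these panels (parameter keywords follow the standard CRTCRG interface; prompt with F4 to confirm the element order within RCYDMN and CFGOBJ):

  CRTCRG CLUSTER(AS540) CRG(GEOMIR) CRGTYPE(*DEV)
         EXITPGM(*NONE) USRPRF(*NONE)
         RCYDMN((ITCHA4C *PRIMARY *LAST K206 ('9.5.110.42'))
                (ITCHA4A *BACKUP *LAST L203 ('9.10.30.34')))
         CFGOBJ((APPLE# *DEVD *OFFLINE *NONE))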


2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Create CRG (green screen) (2 of 4)


                    Create Cluster Resource Group (CRTCRG)

Type choices, press Enter.

Recovery domain node list:
  Node identifier . . . . . . .   ITCHA4C         Name
  Node role . . . . . . . . . .   *PRIMARY        *BACKUP, *PRIMARY, *REPLICATE
  Backup sequence number  . . .   *LAST           Number, *LAST
  Site name . . . . . . . . . .   K206            Name, *NONE
  Data port IP address  . . . .   9.5.110.42
               + for more values
  Node identifier . . . . . . .   ITCHA4A         Name
  Node role . . . . . . . . . .   *BACKUP         *BACKUP, *PRIMARY, *REPLICATE
  Backup sequence number  . . .   *LAST           Number, *LAST
  Site name . . . . . . . . . .   L203            Name, *NONE
  Data port IP address  . . . .   9.10.30.34
               + for more values
                                                                      More...

(The site name and data port IP address fields supply the geographic mirroring information for each node.)

Fill in the information for all nodes in this CRG. NOTE: All nodes in the recovery domain must be in the same device domain.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Create CRG (green screen) (3 of 4)


                    Create Cluster Resource Group (CRTCRG)

Type choices, press Enter.

Configuration object list:
  Configuration object  . . . .   APPLE#          Name, *NONE
  Configuration object type . .   *DEVD           *DEVD
  Configuration object online .   *OFFLINE        *OFFLINE, *ONLINE, *PRIMARY
  Server takeover IP address  .   *NONE
  Configuration object  . . . .                   Name, *NONE
  Configuration object type . .   *DEVD           *DEVD
  Configuration object online .   *OFFLINE        *OFFLINE, *ONLINE, *PRIMARY
  Server takeover IP address  .   *NONE
  Configuration object  . . . .   *NONE           Name, *NONE
  Configuration object type . .   *DEVD           *DEVD
  Configuration object online .   *OFFLINE        *OFFLINE, *ONLINE, *PRIMARY
  Server takeover IP address  .   *NONE
                                                                      More...

Fill in the information for all disk pools in this CRG. NOTE: device descriptions must exist for the specified disk pools on all nodes in the recovery domain.
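
Those device descriptions can be created on each node with Create Device Description (ASP); a sketch using the disk pool name from the panel above, where the description and resource names normally match the independent disk pool name:

  CRTDEVASP DEVD(APPLE#) RSRCNAME(APPLE#)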
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Create CRG (green screen) (4 of 4)


Create Cluster Resource Group (CRTCRG): help window displayed over the Configuration object list prompt.

          Configuration object list (CFGOBJ) - Help

  *OFFLINE
      Do not vary the configuration object on and do not start the
      server takeover IP address.

  *ONLINE
      Vary the configuration object on and start the server takeover
      IP address.
                                                              More...
  F2=Extended help   F10=Move to top   F12=Cancel
  F13=Information Assistant   F20=Enlarge   F24=More keys

The value of this parameter will determine whether a disk pool will be varied on upon failover/switchover.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

*ONLINE V6R1 enhancement


In V5R4, if the vary on of the IASP fails for any reason, the switchover/failover is cancelled and the hardware is returned to the original primary system
In V6R1, if the vary on fails, only the vary on is considered failed; the hardware will remain at the new primary location

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Configuring geographic mirroring


In order to configure geographic mirroring the following conditions must exist:
A cluster with two active nodes in the same device domain
An independent disk pool on one of the nodes in the cluster
Disk capacity on the second node that is similar in size to the existing disk pool
An inactive CRG with the disk pool as a configuration object and geographic mirroring site information specified
The disk pool must be varied off

All geographic mirroring functions must be performed using the iSeries Navigator

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Configure geographic mirroring-Web-based GUI (1 of 8)

On the disk pool screen, locate the disk pool to be geographically mirrored, click the right arrow, and select Sessions > New, then Geographic Mirroring > Configure Geographic Mirroring.

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Configure geographic mirroring-Web-based GUI (2 of 8)

Specify the name of the node that will have the mirror copy. Click Next.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Configure geographic mirroring-Web-based GUI (3 of 8)

Provide signon information for the system that will have the mirror copy.

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Configure geographic mirroring-Web-based GUI (4 of 8)

Click Add Disks.


2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Configure geographic mirroring-Web-based GUI (5 of 8)

Select the disks that will be used for the mirror copy and click OK.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Configure geographic mirroring-Web-based GUI (6 of 8)

Click Next.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Configure geographic mirroring-Web-based GUI (7 of 8)

Click Finish.
2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Configure geographic mirroring-Web-based GUI (8 of 8)

Periodically refresh the display.


2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Start the CRG


                    Start Cluster Resource Group (STRCRG)

Type choices, press Enter.

Cluster . . . . . . . . . . . .   AS540           Name
Cluster resource group  . . . .   GEOMIR          Name
Exit program data . . . . . . .   *SAME

                                                                      Bottom
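
The equivalent non-prompted command is a one-liner using the names above:

  STRCRG CLUSTER(AS540) CRG(GEOMIR)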

2010 IBM Corporation

IBM Power Systems Technical University Las Vegas, NV

Summary
Virtualized environments allow many different types of disks to be used by IBM i
High availability in a virtualized environment is available only with geographic mirroring
All virtual HA solutions require option 41 of the operating system and Power HA System Mirror for i

2010 IBM Corporation
