Operations Guide
FASTFIND LINKS
Document revision level
Document organization
Contents
MK-91DF8275-03
© 2012 Hitachi, Ltd., All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying and recording, or stored in a database or retrieval system for any
purpose without the express written permission of Hitachi, Ltd. and Hitachi Data Systems Corporation
(hereinafter referred to as “Hitachi”).
Hitachi, Ltd. and Hitachi Data Systems reserve the right to make changes to this document at any time
without notice and assume no responsibility for its use. Hitachi, Ltd. and Hitachi Data Systems products and
services can only be ordered under the terms and conditions of Hitachi Data Systems' applicable agreements.
All of the features described in this document may not be currently available. Refer to the most recent
product announcement or contact your local Hitachi Data Systems sales office for information on feature and
product availability.
Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of
Hitachi Data Systems’ applicable agreements. The use of Hitachi Data Systems products is governed by the
terms of your agreements with Hitachi Data Systems.
Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data
Systems is a registered trademark and service mark of Hitachi in the United States and other countries.
All other trademarks, service marks, and company names are properties of their respective owners.
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Navigator 2 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Navigator 2 features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Security features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Monitoring features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Configuration management features . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Data migration features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Capacity features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
General features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Navigator 2 benefits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Navigator 2 task flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Navigator 2 functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
Using the Navigator 2 online help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
3 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
Connecting Hitachi Storage Navigator Modular 2 to the Host . . . . . . . . . . . . . . 3-2
Installing Hitachi Storage Navigator Modular 2 . . . . . . . . . . . . . . . . . . . . . 3-2
Preparation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Setting Linux kernel parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
Setting Solaris 8 or Solaris 9 kernel parameters . . . . . . . . . . . . . . . . . . 3-7
Setting Solaris 10 kernel parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
Types of installations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
Installing Navigator 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
Getting started (all users). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
Installing Navigator 2 on a Windows operating system . . . . . . . . . . . . . . 3-11
If the installation fails on a Windows operating system . . . . . . . . . . . . . . 3-15
Installing Navigator 2 on a Sun Solaris operating system. . . . . . . . . . . . . 3-16
Installing Navigator 2 on a Red Hat Linux operating system . . . . . . . . . . 3-18
Preinstallation information for Storage Features . . . . . . . . . . . . . . . . . . . . . . 3-19
Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Storage feature requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Requirements for installing and enabling features. . . . . . . . . . . . . . . . . . 3-20
Account Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
Audit Logging requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
Cache Partition Manager requirements . . . . . . . . . . . . . . . . . . . . . . . 3-20
Data Retention requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
LUN Manager requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
Password Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
SNMP Agent requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
Modular Volume Migration requirements . . . . . . . . . . . . . . . . . . . . . . 3-22
Installing storage features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22
Enabling storage features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22
Disabling storage features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-23
Uninstalling storage features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-23
Starting Navigator 2 host and client configuration. . . . . . . . . . . . . . . . . . . . . 3-24
Host side . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-24
Client side . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-24
For Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-24
For Linux and Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
Starting Navigator 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-27
Setting an attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-28
Additional guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-29
Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-29
Understanding the Navigator 2 interface . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-31
Menu Panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-31
Explorer Panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-31
Button panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-32
Page panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-32
4 Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Provisioning overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Provisioning wizards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Provisioning task flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Hardware considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Verifying your hardware installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Connecting the management console . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Logging in to Navigator 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Selecting a storage system for the first time. . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Running the Add Array wizard. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Running the Initial (Array) Setup wizard . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Registering the Array in the Hitachi Storage Navigator Modular 2 . . . . . . 4-8
Initial Array (Setup) wizard — configuring email alerts . . . . . . . . . . . . . 4-9
Initial Array (Setup) wizard — configuring management ports . . . . . . . 4-11
Initial Array (Setup) wizard — configuring host ports. . . . . . . . . . . . . . 4-12
Initial Array (Setup) wizard — configuring spare drives . . . . . . . . . . . . 4-14
Initial Array (Setup) wizard — configuring the system date and time . . 4-14
Initial Array (Setup) wizard — confirming your settings . . . . . . . . . . . . 4-14
Running the Create & Map Volume wizard . . . . . . . . . . . . . . . . . . . . . . . 4-15
Manually creating a RAID group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Using the Create & Map Volume Wizard to create a RAID group. . . . . . . . 4-17
Create & Map Volume wizard — defining volumes. . . . . . . . . . . . . . . . 4-18
Create & Map Volume wizard — defining host groups or iSCSI targets . 4-19
Create & Map Volume wizard — connecting to a host . . . . . . . . . . . . . 4-20
Create & Map Volume wizard — confirming your settings . . . . . . . . . . 4-21
Provisioning concepts and environments . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
About DP-Vols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
Changing DP-Vol Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
About volume numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
About Host Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23
Creating Host Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23
Displaying Host Group Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-24
About array management and provisioning . . . . . . . . . . . . . . . . . . . . . . 4-24
About array discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-24
Understanding the Arrays screen. . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-24
Add Array screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Adding a Specific Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Adding Arrays Within a Range of IP Addresses . . . . . . . . . . . . . . . . . . 4-25
Using IPv6 Addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26
5 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
Security overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Security features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Account Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Audit Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Data Retention Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Security benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Account Authentication overview . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
Account Authentication features . . . . . . . . . . . . . . . . . . . . . . . . 5-4
Account Authentication benefits . . . . . . . . . . . . . . . . . . . . . . . . 5-4
Account Authentication caveats . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Account Authentication task flow . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Account Authentication specifications . . . . . . . . . . . . . . . . . . . . 5-8
Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Account types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
Session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Session types for operating resources . . . . . . . . . . . . . . . . . 5-12
Advanced Security Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Changing Advanced Security Mode . . . . . . . . . . . . . . . . . . . 5-14
Account Authentication procedures . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Initial settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Managing accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Displaying accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Adding accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
Changing the Advanced Security Mode . . . . . . . . . . . . . . . . . . . 5-18
Modifying accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Deleting accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21
Changing session timeout length . . . . . . . . . . . . . . . . . . . . . . . 5-22
Forcibly logging out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-23
Setting and deleting a warning banner . . . . . . . . . . . . . . . . . . . 5-23
Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-25
Audit Logging overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Audit Logging features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Audit Logging benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Audit Logging task flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Audit Logging specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
What to log? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Security of logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Pulling it all together . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
Audit Logging procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
Initial settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
Optional operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
LUN Manager benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
LUN Manager task flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
For Fibre Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
For iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
LUN Manager feature specifications . . . . . . . . . . . . . . . . . . . . . . 6-5
Understanding preconfigured volumes . . . . . . . . . . . . . . . . . . . . 6-5
LUN Manager specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-6
About iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
Design configurations and best practices . . . . . . . . . . . . . . . . . . . . 6-9
Fibre Channel configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9
Fibre Channel design considerations . . . . . . . . . . . . . . . . . . . . . 6-11
Fibre system configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11
iSCSI system design considerations . . . . . . . . . . . . . . . . . . . . . . 6-11
iSCSI network port and switch considerations . . . . . . . . . . . . 6-12
Additional system design considerations . . . . . . . . . . . . . . . . 6-13
System topology examples . . . . . . . . . . . . . . . . . . . . . . . . . . 6-14
Assigning iSCSI targets and volumes to hosts . . . . . . . . . . . . . . . 6-18
Preventing unauthorized SAN access . . . . . . . . . . . . . . . . . . . . . 6-20
Avoiding RAID Group Conflicts . . . . . . . . . . . . . . . . . . . . . . . . . 6-21
SAN queue depth setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-22
Increasing queue depth and port sharing . . . . . . . . . . . . . . . 6-23
Increasing queue depth through path switching . . . . . . . . . . 6-23
LUN Manager procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-24
Using Fibre Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-25
Using iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-26
Fibre Channel operations using LUN Manager . . . . . . . . . . . . . . . . . 6-29
About Host Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-29
Adding host groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-30
Enabling and disabling host group security . . . . . . . . . . . . . . . . . 6-30
Creating and editing host groups . . . . . . . . . . . . . . . . . . . . . 6-31
Initializing Host Group 000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-35
Deleting host groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-35
Changing nicknames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-36
Deleting World Wide Names . . . . . . . . . . . . . . . . . . . . . . . . . 6-36
Copy settings to other ports . . . . . . . . . . . . . . . . . . . . . . . . . 6-37
iSCSI operations using LUN Manager . . . . . . . . . . . . . . . . . . . . . . . 6-38
Creating an iSCSI target . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-39
Using the iSCSI Target Tabs . . . . . . . . . . . . . . . . . . . . . . . . . 6-39
Setting the iSCSI target security . . . . . . . . . . . . . . . . . . . . . . 6-41
Editing iSCSI target nicknames . . . . . . . . . . . . . . . . . . . . . . . 6-42
Adding and deleting targets . . . . . . . . . . . . . . . . . . . . . . . . . 6-43
About iSCSI target numbers, aliases, and names . . . . . . . . . . 6-47
Editing target information . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-48
Editing authentication properties . . . . . . . . . . . . . . . . . . . . . 6-49
Initializing Target 000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-50
7 Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
Capacity overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Cache Partition Manager overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Cache Partition Manager features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Cache Partition Manager benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Cache Partition Manager feature specifications . . . . . . . . . . . . . . . . . . . . . 7-3
Cache Partition Manager task flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Operation task flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
Stopping Cache Partition Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
Pair cache partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-5
Partition capacity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-6
Supported partition capacities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-7
Segment and stripe size restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Specifying partition capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Using a large segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Using load balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-10
Using ShadowImage, Dynamic Provisioning, or TCE . . . . . . . . . . . . . . 7-10
Installing Dynamic Provisioning when Cache Partition Manager is Used. . . 7-10
Adding or reducing cache memory . . . . . . . . . . . . . . . . . . . . . . . . . . 7-12
Cache Partition Manager procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
Initial settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
Stopping Cache Partition Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
Working with cache partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-14
Adding cache partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-14
Deleting cache partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-15
Assigning cache partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-16
Setting a pair cache partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-18
Changing cache partitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-19
Changing cache partitions owner controller . . . . . . . . . . . . . . . . . . . . 7-20
Installing SnapShot or TCE or Dynamic . . . . . . . . . . . . . . . . . . . . . . . . . 7-21
VMWare and Cache Partition Manager . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Cache Residency Manager overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Cache Residency Manager features . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Cache Residency Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-23
Cache Residency Manager task flow . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-23
Cache Residency Manager Specifications . . . . . . . . . . . . . . . . . . . . . . . . 7-24
Termination Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-25
Disabling Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-25
Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26
Volume Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26
Supported Cache Residency capacities . . . . . . . . . . . . . . . . . . . . . . 7-26
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-28
Cache Residency Manager procedures . . . . . . . . . . . . . . . . . . . . . . 7-29
Initial settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-29
Stopping Cache Residency Manager . . . . . . . . . . . . . . . . . . . . . 7-29
Setting and canceling residency volumes . . . . . . . . . . . . . . . . . . 7-29
NAS Unit Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-30
VMware and Cache Residency Manager . . . . . . . . . . . . . . . . . . . 7-31
dfCommandExecutionInternalCondition group . . . . . . . . . . . . . . . . . . 9-55
Additional resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-57
10 Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-1
Virtualization overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Virtualization features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Virtualization task flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Virtualization benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Virtualization and applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-4
Storage Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-5
A sample approach to virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-5
Hitachi Dynamic Provisioning software . . . . . . . . . . . . . . . . . . . . . . . . . 10-7
Storage configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-8
Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-8
Zone configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-8
Host Group configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-10
One Host Group per cluster, cluster host configuration . . . . . . . . . . . .10-10
Host Group options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-10
Virtual Disk and Dynamic Provisioning performance . . . . . . . . . . . . . .10-11
Virtual disks on standard volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-11
Advanced Interactive eXecutive (AIX) . . . . . . . . . . . . . . . . . . . . . . . . . .11-44
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-44
Hewlett Packard UNIX (HP-UX) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-44
Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-44
Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-44
Viewing Power Saving status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-46
Powering down . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-48
Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-49
Powering up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-50
Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-51
Viewing volume information in a RAID group . . . . . . . . . . . . . . . . . . . . . . . .11-51
Failure notes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-52
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-53
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-53
HDPS AUX-Copy plus aging and retention policies . . . . . . . . . . . . . . . . . . . .11-54
HDPS Power Saving vaulting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-55
HDPS sample scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-57
Windows scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-58
Power down and power up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-58
Using a Windows power up and power down script. . . . . . . . . . . . . . .11-58
Powering down . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-60
Powering up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-60
UNIX scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-60
Power down . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-60
Power up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-61
Using a UNIX power down and power up script . . . . . . . . . . . . . . . . .11-62
Powering down . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-63
Powering up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-63
A Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Glossary
Index
Intended audience
Product version
Document organization
Related documents
Document conventions
Getting help
Comments
Intended audience
This document is intended for system administrators, Hitachi Data Systems
representatives, and authorized service providers who install, configure,
and operate Hitachi Unified Storage systems.
This document assumes the following:
• The user has a background in data processing and understands storage
systems and their basic functions.
• The user has a background in data processing and understands
Microsoft Windows and its basic functions.
• The user has a background in data processing and understands Web
browsers and their basic functions.
Product version
This document applies to Hitachi Unified Storage firmware version
0920/B and to HSNM2 version 22.02 or later.
Document organization
Thumbnail descriptions of the chapters are provided in the following table.
Click the chapter title in the first column to go to that chapter. The first page
of every chapter or appendix contains links to the contents.
Related documents
This documentation set consists of the following documents.
Hitachi Unified Storage Firmware Release Notes, RN-91DF8304
Contains late-breaking information about the storage system firmware.
Hitachi Storage Navigator Modular 2 Release Notes, RN-91DF8305
Contains late-breaking information about the Navigator 2 software.
Read the release notes before installing and using this product. They
may contain requirements and restrictions not fully described in this
document, along with updates and corrections to this document.
Hitachi Unified Storage Getting Started Guide, MK-91DF8303
Describes how to get Hitachi Unified Storage systems up and running in
the shortest period of time. For detailed installation and configuration
information, refer to the Hitachi Unified Storage Hardware Installation
and Configuration Guide.
Hitachi Unified Storage Hardware Installation and Configuration
Guide, MK-91DF8273
Contains initial site planning and pre-installation information, along with
step-by-step procedures for installing and configuring Hitachi Unified
Storage systems.
Hitachi Unified Storage Hardware Service Guide, MK-91DF8302
Provides removal and replacement procedures for the components in
Hitachi Unified Storage systems.
Hitachi Unified Storage Operations Guide, MK-91DF8275 — this
document
Describes the following topics:
- Adopting virtualization with Hitachi Unified Storage systems
- Enforcing security with Account Authentication and Audit Logging.
- Creating DP-Vols, standard VOLs, Host Groups, provisioning
storage, and utilizing spares
- Tuning storage systems by monitoring performance and using
cache partitioning
- Monitoring storage systems using email notifications and Hi-Track
- Using SNMP Agent and advanced functions such as data retention
and power savings
- Using functions such as data migration, VOL Expansion and VOL
Shrink, RAID Group expansion, DP pool expansion, and Mega VOLs
Hitachi Unified Storage Replication User Guide, MK-91DF8274
Describes how to use the four types of Hitachi replication software to
meet your needs for data recovery:
- ShadowImage In-system Replication
- Copy-on-Write SnapShot
- TrueCopy Remote Replication
- TrueCopy Extended Distance
Hitachi Unified Storage Command Control Interface Installation and
Configuration Guide, MK-91DF8306
Describes Command Control Interface installation, operation, and
troubleshooting.
Hitachi Unified Storage Dynamic Provisioning Configuration Guide,
MK-91DF8277
Describes how to use virtual storage capabilities to simplify storage
additions and administration.
Hitachi Unified Storage Command Line Interface Reference Guide,
MK-91DF8276
Describes how to perform management and replication activities from a
command line.
Document conventions
The following typographic conventions are used in this document.
Bold — Indicates text on a window, other than the window title, including
menus, menu options, buttons, fields, and labels. Example: Click OK.
Italic — Indicates a variable, which is a placeholder for actual text provided
by you or the system. Example: copy source-file target-file
(Angled brackets < > are also used to indicate variables.)
screen or code — Indicates text that is displayed on screen or entered by you.
Example: # pairdisplay -g oradb
< > angled brackets — Indicates a variable, which is a placeholder for actual
text provided by you or the system. Example: # pairdisplay -g <group>
This document uses the following symbols to draw attention to important
safety and operational information.
The following abbreviations for Hitachi Program Products are used in this
document.
ShadowImage — ShadowImage In-system Replication
SnapShot — Copy-on-Write SnapShot
TrueCopy — A term used when the following terms do not need to be
distinguished: TrueCopy, TrueCopy Extended Distance, and TrueCopy
Remote Replication
TCE — TrueCopy Extended Distance
Volume Migration — Modular Volume Migration
Navigator 2 — Hitachi Storage Navigator Modular 2
Logical storage capacity values (for example, logical device capacity) are
calculated based on the following values:
Getting help
The Hitachi Data Systems customer support staff is available 24 hours a
day, seven days a week. If you need technical support, please log on to the
HDS Support Portal for contact information: https://portal.hds.com
Comments
Please send us your comments on this document: doc.comments@hds.com.
Include the document title, number, and revision, and refer to specific
sections and paragraphs whenever possible.
Thank you!
1
Introduction
Navigator 2 overview
Navigator 2 functions
Navigator 2 overview
Hitachi Data Systems Navigator 2 lets you take full advantage of your
Hitachi storage systems. Using Navigator 2, you can configure and manage
your storage assets from a local host or from a remote host across an
intranet or TCP/IP network to ensure maximum data reliability, network
uptime, and system serviceability.
The Navigator 2 management console provides views of the feature settings
on the storage system and enables you to configure and manage those
features. The following sections describe the features Navigator 2 provides
to optimize your experience with the Hitachi Unified Storage system.
Navigator 2 features
Navigator 2 provides the features detailed in the following sections.
Security features
• Account Authentication - Account authentication and audit logging
provide access control to management functions.
• Audit Logging - Records all system changes.
• SAN Security - SAN security software helps ensure security in open
systems storage area networking environments through restricted
server access.
Monitoring features
• Performance Monitor - Performance monitoring software allows you
to see performance within the storage system.
• Cache Residency Manager - This feature allows you to "lock" and
"unlock" data into a cache in real time for optimal access to your most
frequently accessed data.
Capacity features
• Cache Partition Manager - This feature allows the application to
partition the cache for improved performance.
• RAID Group Expansion - Online RAID group expansion feature
enables dynamic addition of HDDs to a RAID group.
General features
• Point and click GUI - Point-and-click graphical interface with initial
set-up wizards that simplifies configuration, management, and
visualization of Hitachi storage systems.
• Real-time view of environment - An immediate view of available
storage and current usage.
• Deployment efficiency - Efficient deployment of storage resources to
meet business and application needs, optimize storage productivity,
and reduce the time required to configure storage systems and balance
I/O workloads.
• Access protection - Protection of access to information by restricting
storage access at the port level, requiring case-sensitive password
logins, and providing secure domains for application-specific data.
• Data redundancy - Protection of the information itself by letting you
configure data-redundancy and assign hot spares.
• System management - functions for Hitachi storage systems, such as
storage system status, event logging, email alert notifications, and
statistics.
• Major platform compatibility - Compatibility with Microsoft®
Windows®, UNIX, and Linux environments.
• Online help - Online help to enable easy access to information about
use of features.
• Command Line Interface - A full featured and scriptable command
line interface. For more information, refer to the Hitachi Unified Storage
Command Line Interface Reference Guide.
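As a brief, hedged sketch of scripted use: the commands auunitref and
auluref are Navigator 2 CLI commands for referencing registered units and
volumes, but confirm the exact options and output in the CLI reference
guide; the array name "Array1" below is a placeholder.
   # List the storage systems registered with Navigator 2.
   auunitref
   # List the volumes on the registered array named "Array1".
   auluref -unit Array1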
Navigator 2 benefits
Navigator 2 provides the following benefits:
• Simplification - Simplifies storage configuration and management for
the HUS family of storage systems.
• Access protection - Protects access to information by allowing secure
permission to assigned storage
• Performance enhancement - Enhances data access performance to
key applications and protects data availability of mission-critical
information
• Optimization of data retrieval - Optimizes storage administrator
productivity by reducing the time required to configure storage systems
and balance I/O workloads
• Enables integration - Facilitates integration of Hitachi storage
systems with enterprise management products
• Cost reduction - Reduces storage costs.
• Long-term planning enabler - Improves the organization’s long-term
sustainable business strategy.
• Establishment of metrics - Identifies clear metrics with a full analysis
of the payback period and savings potential.
• Capacity provisioning - Provisions content storage capacity to
organizations and to post production end users.
Figure 1-1 shows how Navigator 2 connects directly to the front-end
controller of the HUS family storage system.
Navigator 2 functions
Table 1-1 details the various functions.
Table 1-1: Function details
Components
• Component status display — Displays the status of a component, such as
a tray. (Online usage: Yes)
Groups
• RAID Groups — Creates, deletes, or displays a RAID group. (Online
usage: Yes)
• VOL creation — Adds a volume. A new volume is added by specifying its
capacity. (Online usage: Yes)
• VOL deletion — Deletes the defined volume. User data is deleted. (Online
usage: Yes)
• VOL formatting — Required to make a defined volume accessible by the
host. Writes null data to the specified volume, and deletes user data.
(Online usage: Yes)
• Host Groups — Review, operate, and set host groups. (Online usage: Yes)
• iSCSI Targets — Review, operate, and set iSCSI targets. (Online usage:
Yes)
Settings
• iSCSI Settings — View and configure iSCSI ports. (Online usage: Yes)
• FC Settings — View and configure FC ports. (Online usage: Yes)
• Port Options — View and configure port options. (Online usage: Yes)
• Spare Drives — View, add, or remove spare drives. (Online usage: Yes)
• Licenses — View, install, or de-install licensed storage features. (Online
usage: Yes)
• Command devices — View and configure command devices. (Online
usage: Yes)
• DMLU — View and configure the Differential Management volumes for
replication and migration. (Online usage: Yes)
• SNMP Agent — View and configure the SNMP Agent Support Function.
(Online usage: Yes)
• LAN — View and configure the LAN. (Online usage: Yes)
• Drive Recovery — View and configure options to recover drives. (Online
usage: Yes)
• Constitute Array — Input and output array constitution parameters.
(Online usage: Yes)
• System Parameters — View and configure system parameters. (Online
usage: Yes)
• Verification Settings — View and configure verification for the drive and
cache. (Online usage: Yes)
• Parity Correction — Recover the parity status of the volumes. (Online
usage: Yes)
• Mapping Guard — View and configure Mapping Guard for the volumes.
(Online usage: Yes)
• Mapping Mode — View and configure the mapping mode. (Online usage:
Yes)
• Boot Options — View and configure boot options. (Online usage: Yes.
Note: the array must be restarted to enable the settings.)
• Format Mode — View and configure the format mode for the volume.
(Online usage: Yes)
• Firmware — Refer to and update firmware. (Online usage: Yes. Note: the
array must be restarted to enable the settings.)
Security
• Secure LAN — Set the SSL certificate and validity/invalidity of the normal
port. (Online usage: Yes)
Performance
• Monitoring — View and output the monitored performance in the array.
(Online usage: Yes)
• Tuning Parameter — Configure performance tuning parameters in the
array. (Online usage: Yes)
Alerts & Events
• Displays the alerts and events. (Online usage: Yes)
Error Monitoring
• Failure reporting and controller status display — Polls the array and
displays the status. If an error is detected, it is output to the log. (Online
usage: Yes. Note: contact your maintenance personnel.)
[Figure: the Navigator 2 online help window, showing the Help menu,
Contents, Index tab, and Search tab.]
2
System theory of operation
RAID features
RAID levels
Host volumes
RAID features
To put RAID to practical use, techniques such as striping, mirroring, and
parity disks are used.
• Striping - Stores data by spreading it across several disk drives. The
technique segments logically sequential files so that sequential segments
are accessed from different physical storage devices. Striping is useful
when a processing device requests data more quickly than a single
storage device can supply it. Because segments on multiple devices can
be accessed concurrently, striping provides more data access
throughput and avoids leaving the processor idle while it waits for data.
Spreading accesses across disk drives also shortens the time required to
access each drive, and thus the time required for reading or writing.
• Mirroring - Copies the entire contents of one disk drive to one or more
other disk drives at the same time, to enhance reliability.
• Parity disk - A data-writing method used when RAID is configured with
three or more disk drives. Parity is generated from the data in the
corresponding positions of two or more disk drives and stored on
another disk drive.
When a data block is updated, parity is recalculated as follows (EOR
denotes exclusive OR):
RAID 5:
[New parity] = ([Data before update] EOR [Data after update]) EOR [Parity before update]
RAID 6:
[New P parity] = ([Data before update] EOR [Data after update]) EOR [P parity before update]
[New Q parity] = ([Coefficient parity] AND ([Data before update] EOR [Data after update])) EOR [Q parity before update]
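A minimal sketch of the RAID 5 read-modify-write parity update, using
shell integer arithmetic; the one-byte values are hypothetical stand-ins
for whole blocks.
   # ^ is exclusive OR (EOR) in bash arithmetic.
   data_before=0x5A     # old contents of the data block
   data_after=0x3C      # new contents being written
   parity_before=0x77   # old contents of the parity block
   # [New parity] = ([Data before] EOR [Data after]) EOR [Parity before]
   parity_after=$(( (data_before ^ data_after) ^ parity_before ))
   printf 'new parity = 0x%02X\n' "$parity_after"   # prints 0x11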
RAID levels
Your Hitachi storage system supports various RAID configurations. Review
the information in this section to determine the best RAID configuration for
your requirements.
The Hitachi Unified Storage systems support RAID 0 (2D to 16D), RAID 1,
RAID 5 (2D+1P to 15D+1P), RAID 6 (2D+2P to 28D+2P) and RAID 1+0
(2D+2D to 8D+8D).
Note that some usage replaces chunk with “stripe size,” “stripe depth” or
“interleave factor,” and stripe size with “stripe width,” “row width” or “row
size.” The chunk is the primary unit of protection management for either
parity or mirror RAID mechanisms.
When a host issues a small I/O, such as an 8KB logical block, to an address
within a volume, the storage system locates that address at a particular
disk sector address, and then reads or writes only that amount of data, not
the entire chunk. Also note that such a request could require physical I/O
to two disks if the host 8KB logical block spans two chunks: it could have
2KB at the end of one chunk and 6KB at the beginning of the next chunk in
that stripe.
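The boundary arithmetic can be sketched in shell, assuming a hypothetical
64KB chunk size and the 2KB/6KB split described above.
   # Hypothetical numbers: a 64 KB chunk and an 8 KB host block
   # starting 62 KB into the stripe.
   chunk_size=$(( 64 * 1024 ))
   io_offset=$(( 62 * 1024 ))
   io_size=$(( 8 * 1024 ))
   first_chunk=$(( io_offset / chunk_size ))
   last_chunk=$(( (io_offset + io_size - 1) / chunk_size ))
   echo "I/O touches chunk $first_chunk through chunk $last_chunk"
   # Chunk 0 holds the first 2 KB and chunk 1 the remaining 6 KB, so
   # this single host I/O becomes physical I/O to two disks.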
Because of the variations of file system formatting and such, there is no way
to determine where a particular block may lie on the raw space presented
by a volume. Each file system will create a unique variety of metadata in a
quantity and distribution pattern that is related to the size of that volume.
Most file systems also typically scatter writes around within the LBA range
— an outdated holdover from long ago when file systems wanted to avoid a
common problem of the appearance of bad sectors or tracks on disks. What
this means is that attempts to align application block sizes with RAID chunk
sizes are a pointless exercise.
Logical volume managers (LVMs) also have a native “stripe size” that is
selectable when creating a logical volume from several physical storage
volumes. In this case, the LVM stripe size should be a multiple of the RAID
chunk size because of various interactions between the LVM and the
volumes.
One example is the case of large block sequential I/O. If the LVM stripe size
is equal to the RAID chunk size, then a series of requests will be issued to
different volumes for that same I/O, making the request appear to be
several random I/O operations to the storage system. This can defeat the
system’s sequential detect mechanisms and turn off sequential prefetch,
slowing down these types of operations.
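A hedged example of applying the multiple-of rule with Linux LVM2; the
volume group and volume names, sizes, and the assumed 64KB RAID
chunk size are all illustrative.
   # Create a logical volume striped across 4 physical volumes with a
   # 256 KB LVM stripe size -- a multiple of an assumed 64 KB RAID chunk.
   # "datavg" and "appvol" are hypothetical names.
   lvcreate --stripes 4 --stripesize 256k --size 100G --name appvol datavg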
It is also true that, if many small volumes are carved out of a single RAID
Group, their simultaneous use will create maximum seek times on each
disk, reducing the maximum sustainable small block random IOPS rate to
the disk’s minimum.
On nearly every midrange storage system from any vendor, the individual
volumes are tightly bound to an “owning” controller. This is because there
is no global sharing between the controllers of either the data or its
metadata. Each controller is independently responsible for managing these
two objects. On enterprise storage systems, there is no concept of either a
“controller” or “volume ownership.” All data and metadata on most
enterprise systems are globally shared by all front-end processors.
The HUS family is the successor to the AMS 2000, the midrange Hitachi
modular storage family that was the current price-list modular family
during the past three years. The HUS family systems have still higher
performance, incorporate significant features that were present in the
AMS 2000 family, and introduce new features that were not present in the
previous generation of modular devices. The HUS 110, 130, and 150
models comprise the current generation.
Load Balancing - The HUS family uses the Hitachi Dynamic Load Balancing
Controller. These are proprietary purpose-built Hitachi designs, not (like so
many others) generic Intel OEM small server boards with a Windows/Linux
operating system, generic Fibre Channel disk adapters, and a storage
software package.
Host group — A group that virtualizes access to the same port by multiple
hosts, since host settings for a volume are made not at the physical port
level but at a virtual port level.
Profile — A set of attributes that are used to create a storage pool. The
system has a predefined set of storage profiles. You can choose a profile
suitable for the application that is using the storage, or you can create a
custom profile.
Pool — A collection of volumes with the same configuration. A storage pool
is associated with a storage profile, which defines the storage properties
and performance characteristics of a volume.
Snapshot — A point-in-time copy of a primary volume. The snapshot can be
mounted by an application and used for backup, application testing, or
data mining without requiring you to take the primary volume offline.
Storage domain — A logical entity used to partition storage.
Volume — A container into which applications, databases, and file systems
store data. Volumes are created from virtual disks, based on the
characteristics of a storage pool. You map a volume to a host or host group.
RAID — Redundant Array of Independent Disks: a disk array in which part
of the physical storage capacity is used to store redundant information
about user data stored on the remainder of the storage capacity. The
redundant information enables regeneration.
Parity Disk — A RAID-3 disk that provides redundancy. RAID-3 distributes
the data in stripes across all but one of the disks in the array. It then writes
the parity in the corresponding stripe on the remaining disk. This disk is
the parity disk.
Volume (formerly called LUN) — Logical unit number (LUN): an address for
an individual disk drive, and by extension, the disk device itself. Used in
the SCSI protocol as a way to differentiate individual disk drives within a
common SCSI target device, like a disk array. Volumes were formerly
called LUNs.
iSCSI — Internet-Small Computer Systems Interface: a TCP/IP protocol for
carrying SCSI commands over IP networks.
iSCSI Target — A system component that receives an iSCSI I/O command.
The command is sent to the iSCSI bus address of the target device or
controller.
iSCSI Initiator — The component that transmits an iSCSI I/O command to
the iSCSI bus address of the target device or controller.
Firewall considerations
A firewall's main purpose is to block incoming unsolicited connection
attempts to your network. If the HUS storage system is used within an
environment that uses a firewall, there will be times when the storage
system’s outbound connections will need to traverse the firewall.
The storage system's incoming indication ports are ephemeral; the system
selects the first available open port that is not being used by another
Transmission Control Protocol (TCP) application.
outbound connections from the storage system, you must either disable the
firewall or create or revise a source-based firewall rule (not a port-based
rule), so that items coming from the storage system are allowed to traverse
the firewall.
Firewalls should be disabled when installing Navigator 2 (refer to the
documentation for your firewall). After the installation completes, you can
turn on your firewall.
NOTE: For outgoing traffic from the storage system’s management port,
there are no fixed port numbers (ports are ephemeral), so all ports should
be open for traffic from the storage system management port.
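As an illustration, a source-based rule on a Linux iptables firewall might
look like the following; the address is a placeholder for your storage
system's management IP.
   # Hedged sketch of a source-based (not port-based) rule: allow all
   # traffic originating from the array's management port.
   # 192.168.0.16 is illustrative -- substitute your array's address.
   iptables -A INPUT -s 192.168.0.16 -j ACCEPT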
Types of installations
Installing Navigator 2
Operations
Connecting Hitachi Storage Navigator Modular 2 to the Host
You can connect Hitachi Storage Navigator Modular 2 to a host through a
LAN with or without a switch.
When two or more LAN cards are installed in a host and each card is set to
a different segment, Hitachi Storage Navigator Modular 2 can access the
array only through the LAN card specified by the installer. To access the
array unit from another segment, configure the network so that a router is
used. Install only one LAN card in the host on which the software is to be
installed.
Preparation
Before starting installation, verify the following on the host where Hitachi
Storage Navigator Modular 2 is to be installed. If the preparation items are
not done correctly, the installation may not complete. Installation usually
completes in about 30 minutes; if it has not completed after an hour or
more, terminate the installer forcibly and check that the preparation items
were done correctly.
• For Windows, when you install Hitachi Storage Navigator Modular 2 to
the C: partition, make sure that no file named “program” is placed
directly under the C: partition.
• For Windows, you are logged on to Windows as an Administrator or a
member of the Administrators group.
For Linux and Solaris, you are logged on as a root user.
• To install Hitachi Storage Navigator Modular 2, the following free disk
capacity is required.
3–2 Installation
Hitachi Unified Storage Operations Guide
Table 3-1: Free disk capacity
Directory            Free disk capacity
/var/opt/HiCommand   1.0 GB
/var/tmp             1.0 GB
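As a quick, illustrative check of the available capacity in the file systems
holding the directories listed in Table 3-1 (Linux/Solaris):
   # Show free space for the file systems containing the directories.
   df -h /var/opt /var/tmp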
• No other windows should be displayed during installation. When a
window is displayed, you may not be able to install Hitachi Storage
Navigator Modular 2. If the installation has not completed after one
hour, terminate the installer forcibly and check whether a window is
displayed.
• Services (daemon processes) such as process monitoring and virus
monitoring must not be operating.
When such a service (daemon process) is operating, you may not be
able to install Hitachi Storage Navigator Modular 2. If the installation
has not completed after one hour, terminate the installer forcibly and
check which services (daemon processes) are operating.
• When third-party firewall software other than the Windows firewall is
used, it must be disabled during installation or uninstallation.
If you are using third-party firewall software and the installation of
Hitachi Storage Navigator Modular 2 has not completed after one hour,
terminate the installer forcibly and check that the third-party firewall
software is disabled.
• For Linux and Solaris environments, the firewall must be disabled.
To disable the firewall, see the manual for your firewall.
• Some of the firewall functions provided by the OS might terminate
socket connections in the local host. You cannot install and operate
Hitachi Storage Navigator Modular 2 in an environment in which socket
connections are terminated in the local host. When setting up the
firewall provided by the OS, configure the settings so that socket
connections cannot be terminated in the local host.
• Windows must be set to produce 8.3-format (MS-DOS-compatible) file
names.
In its standard setting, Windows creates 8.3-format file names, so this
is normally not a problem. However, a Windows tuning tool may have
changed the standard setting. In that case, return the setting to the
standard one.
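To verify the setting, a hedged check from a Windows command prompt
(the exact fsutil syntax varies by Windows version):
   rem Query whether 8.3 short-name creation is disabled; a value of 0
   rem means 8.3 names are being created (the standard setting).
   fsutil behavior query disable8dot3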
• Hitachi Storage Navigator Modular 2 for Windows supports the Windows
Remote Desktop functionality. Note that the Microsoft terms used for
this functionality differ depending on the Windows OS. The following
terms can refer to the same functionality:
• Terminal Services in the Remote Administration mode
• Remote Desktop for Administration
• Remote Desktop connection
When using the Remote Desktop functionality to perform Hitachi
Storage Navigator Modular 2 operations (including installation or
uninstallation), you need to connect to the console session of the target
server in advance. However, even if you have successfully connected to
the console session, the product might not work properly if another user
connects to the console session.
• Windows must not be used in the application server (execution) mode
of the terminal service.
When installing Hitachi Storage Navigator Modular 2, do not use the
application server mode of the terminal service. If the installer is
executed in such an environment, the installation may fail or the
installer may stop responding.
To disable DEP
1. Choose Start, Settings, Control Panel, and then System.
The System Properties dialog box appears.
2. Select the Advanced tab, and under Performance click Settings.
The Performance Options dialog box appears.
3. Select the Data Execution Prevention tab, and select the Turn on
DEP for all programs and services except those I select radio
button.
4. Click Add and specify the Hitachi Storage Navigator Modular 2 installer
(HSNM2-xxxx-W-GUI.exe), where the "xxxx" portion of the file name varies
with the version of Hitachi Storage Navigator Modular 2.
The Hitachi Storage Navigator Modular 2 installer (HSNM2-xxxx-W-GUI.exe)
is added to the list.
5. Select the checkbox next to Hitachi Storage Navigator Modular 2
installer (HSNM2-xxxx-W-GUI.exe) and click OK.
Automatic exception registration with the Windows firewall:
When the Windows firewall is used, the installer for Hitachi Storage
Navigator Modular 2 automatically registers the Hitachi Storage
Navigator Modular 2 files, and those included in Hitachi Storage Command
Suite Common Components, as exceptions to the firewall. Confirm that this
poses no security problems before executing the installer.
Installation 3–5
Hitachi Unified Storage Operations Guide
Setting Linux kernel parameters
When you install Hitachi Storage Navigator Modular 2 on Linux, set the Linux
kernel parameters first. Otherwise, the installer ends without installing the
software. The only exception is if Navigator 2 has already been installed
and used in a Hitachi Storage Command Suite Common Component
environment; in this case, you do not need to set the Linux kernel
parameters.
To set the Linux kernel parameters
1. Back up the kernel parameters setting file (/etc/sysctl.conf and /etc/
security/limits.conf).
2. Ascertain the IP address of the management console (for example, using
ipconfig in a Windows command prompt). Then change its IP address to
192.168.0.x, where x is a number from 1 to 254, excluding 16 and 17.
Record this IP address; you will be prompted for it during the Storage
Navigator Modular 2 installation procedure.
3. Disable popup blockers in your Web browser. We also recommend that
you disable anti-virus and proxy settings on the management console
when installing the Storage Navigator Modular 2 software.
4. To log in to Storage Navigator Modular 2 with a Red Hat Enterprise Linux
(RHEL) operating system, modify the kernel settings as follows:
• SHMMAX parameter. This parameter defines the maximum size, in
bytes, of a single shared memory segment that a Linux process
can allocate in its virtual address space. If the RHEL default
parameter is larger than both the SNM2 and Database values, you
do not need to change it.
• SHMALL parameter. This parameter sets the total amount of
shared memory pages that can be used system wide. For SNM2,
this value must equal the sum of the default value, SNM2, and
Database values.
• Other parameters. The following parameters follow the same rule
as SHMALL: set each to the higher of (current RHEL value + the
value in the Navigator 2 column) or the value in the Database
column.
• kernel.shmmni
• kernel.threads-max
• kernel.msgmni
• kernel.sem (second parameter)
• kernel.sem (fourth parameter)
• fs.file-max
• nofile
• nproc
Table 3-2 details recommended values for Linux kernel parameters.
Table 3-2: Linux kernel parameters

Parameter Name       Standard      Sample       Storage      SNM2        Required
                     RHEL 5.x      Customer     Navigator    Database    New Value
                     Values        Values       Modular 2
kernel.shmmax        4294967295    4294967295   115425280    20000000    4294967295
kernel.shmall        268435456     268435456    22418432     22418432    22418432
kernel.threads-max   65536         122876       184          574         123060
kernel.msgmni        32            32           32           32          64
kernel.sem           32000         32000        80           7200        32080
(second parameter)
kernel.sem           128           128          9            1024        1024
(fourth parameter)
fs.file-max          205701        387230       53898        53898       441128
nofile               0             0            572          1344        1344
nproc                0             0            165          512         512
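For illustration only, the following is a minimal sketch of /etc/sysctl.conf
entries that uses the Required New Values from Table 3-2; compute your own
values from your current settings as described above. The first and third
kernel.sem fields shown are common RHEL defaults, not values taken from this
guide.

# Sketch of /etc/sysctl.conf entries based on Table 3-2 (illustrative only)
kernel.shmmax = 4294967295
kernel.shmall = 22418432
kernel.threads-max = 123060
kernel.msgmni = 64
# kernel.sem fields are semmsl semmns semopm semmni; the second and fourth
# fields come from Table 3-2, the first and third are common RHEL defaults
kernel.sem = 250 32080 32 1024
fs.file-max = 441128

The nofile and nproc rows of Table 3-2 belong in /etc/security/limits.conf,
for example:

# Sketch of /etc/security/limits.conf entries based on Table 3-2
*    -    nofile    1344
*    -    nproc     512

Apply the sysctl settings with the sysctl -p command (or by rebooting) before
running the installer.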
Setting Solaris kernel parameters

When you install Hitachi Storage Navigator Modular 2 on Solaris, set the
Solaris kernel parameters in one of the following ways.

To set the parameters in the /etc/system file:
1. Back up the kernel parameter settings file (/etc/system).
2. Open the kernel parameter settings file (/etc/system) with a text editor
and add the required parameter lines at the bottom of the file.
When a value has already been set in the file, revise the existing value
by adding the required value, within the limit that the result does not
exceed the maximum value that the OS specifies. For the maximum value,
refer to the manual for your OS.
3. Reboot the Solaris host and then install Hitachi Storage Navigator
Modular 2.

To set the parameters from the console:
1. From the console, execute the command that sets the parameters.
When a value has already been set, revise the existing value by adding
the required value, within the limit that the result does not exceed the
maximum value that the OS specifies. For the maximum value, refer to the
manual for your OS.
The parameters must be set for both projects, user.root and system.
2. Reboot the Solaris host and then install Hitachi Storage Navigator
Modular 2.
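As a hypothetical illustration only (the actual parameter names and values
appear in the listings referenced above, which are not reproduced here), an
/etc/system entry and the console resource-control commands might look like
the following. The projmod command is standard on Solaris 10, but the
parameter and value shown are placeholders:

# Hypothetical /etc/system line (placeholder parameter and value)
set semsys:seminfo_semmni=1024

# Hypothetical console commands: set the same resource control for
# both the user.root and system projects, as required above
projmod -s -K "project.max-sem-ids=(priv,1024,deny)" user.root
projmod -s -K "project.max-sem-ids=(priv,1024,deny)" system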
NOTE: If the kernel parameter settings do not take effect on Solaris 10,
open the /etc/system file with a text editor and change the settings as
described above before rebooting the host.
Types of installations
Navigator 2 supports two types of installations:
• Interactive installations — attended installation that displays graphical
windows and requires user input.
• Silent installations — unattended installation using command-line
parameters that do not require any user input.
This chapter describes the interactive installation procedure. For
information about performing silent installations using CLI commands, refer
to the Hitachi Storage Navigator Modular 2 Command Line Interface (CLI)
Reference Guide or the Navigator 2 online help.
Before proceeding, be sure you reviewed and completed all pre-installation
requirements described earlier in this chapter in Preinstallation information
for Storage Features on page 3-19.
Installing Navigator 2
The following sections describe how to install Navigator 2 on a management
console running one of the Windows, Solaris, or Linux operating systems
that Navigator 2 supports (see Preinstallation Information on page 2-1).
During the Navigator installation procedure, the installer creates the
directories _HDBInstallerTemp and StorageNavigatorModular. You can
delete these directories if necessary.
To perform this procedure, you need the IP address (or host name) and port
number that will be used to access Navigator 2. If port number 1099 is
already in use, specify a different port number, such as 2500.
3. Proceed to the appropriate section for the operating system running on
your management console:
• Microsoft Windows. See Installing Navigator 2 on a Windows
operating system, below.
• Solaris. See Installing Navigator 2 on a Sun Solaris operating
system on page 3-16.
• Red Hat Enterprise Linux. See Installing Navigator 2 on a Red Hat
Linux operating system on page 3-18.
The installation process takes about 15 minutes to complete. During the
installation, the progress bar can pause for several seconds. This is
normal and does not mean the installation has stopped.
4. After you insert the Hitachi Storage Navigator Modular 2 installation CD-
ROM into the management console’s CD/DVD-ROM drive, the installation
starts automatically and the Welcome window appears.
6. Install Navigator 2 in the default destination folder shown or click the
Browse button to select a different destination folder.
7. Click Next. The Input the IP address and port number of the PC window
appears.
Figure 3-3: Input the IP address and port number of the PC window
8. Enter the following information:
• IP Addr. Enter the IP address or host name used to access
Navigator 2 from your browser. Do not specify 127.0.0.1 or
localhost.
• Port No. Enter the port number used to access Navigator 2 from
your browser. The default port number is 1099.
Figure 3-4: InstallShield wizard - Start Copying Files
10.Review the settings to make sure they are correct. To change a setting,
click Back until you return to the appropriate window, make the change,
and then click Next until you return to the Start Copying Files window.
11.In the Start Copying Files window, click Next to start the installation.
During the installation, windows show the progress of the installation.
When installation is complete, the InstallShield Wizard Complete window
appears. You cannot stop the installation after it starts.
Figure 3-5: InstallShield Wizard Complete window
12.In the InstallShield Wizard Complete window, click Finish to complete
the installation. Then proceed to Understanding the Navigator 2
interface for a description of the Navigator 2 interface.
13.Proceed to Starting Navigator 2 host and client configuration on page 3-
24 for instructions about logging in to Navigator 2.
If your Navigator 2 installation fails, see If the Installation Fails on a
Windows Operating System on page 11-2.
5. Click Turn on DEP for all programs and services except those I
select.
6. Click Add and specify the Navigator 2 installer HSNM2-xxxx-W-GUI.exe,
where xxxx varies with the version of Navigator 2. The Navigator 2
installer HSNM2-xxxx-W-GUI.exe is added to the list.
7. Click the checkbox next to the Navigator 2 installer HSNM2-xxxx-W-
GUI.exe and click OK.
NOTE: If the CD-ROM cannot be read, copy the files install-hsnm2.sh
and HSNM2-XXXX-S-GUI.tar.gz to a file system that the host can
recognize.
2. Mount the CD-ROM on the file system. The mount destination is /cdrom.
3. Create a temporary directory with sufficient free space (more than 600
MB) on the file system and expand the compressed files. The temporary
directory is /temporary here.
4. In the console, issue the following command lines. In the last command,
XXXX varies with the version of Navigator 2.
mkdir /temporary
cd /temporary
gunzip < /cdrom/HSNM2-XXXX-S-GUI.tar.gz | tar xf -
• [port number] is the port number used to access Navigator 2
from your browser. The default port number is 1099. If you use it,
you can omit the –p option from the command line.
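As a sketch only, assuming the installer script takes the IP address as an
argument and the port number through the –p option described above (the
exact command line appears in the original listing), an invocation might
look like this:

# Hypothetical example: install using IP address 192.168.0.100 and port 2500
cd /temporary
./install-hsnm2.sh 192.168.0.100 -p 2500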
TIP: For environments using DHCP, enter the host name (computer name)
for the IP address.
2. Mount the CD-ROM on the file system. The mount destination is /cdrom.
3. In the console, issue the following command line:
• [IP address] is the IP address used to access Navigator 2 from
your browser. When entering an IP address, do not specify
127.0.0.1 and localhost. For DHCP environments, specify the host
name (computer name).
• [port number] is the port number used to access Navigator 2
from your browser. The default port number is 1099. If you use it,
you can omit the –p option from the command line.
4. Proceed to Chapter 4, Starting Navigator 2 for instructions about logging
in to Navigator 2.
Environments
Your system should be updated to the most recent firmware version and
Navigator 2 software version to expose all the features currently available.
The current firmware, Navigator 2, and CCI versions applicable for this
guide are as follows:
• Firmware version 0916/A (1.6A) or higher for the HUS storage system.
• Navigator 2 version 21.60 or higher for your computer.
• When using the command control interface (CCI), version 01-27-03/02
or higher is required for your computer.
• The primary volume (P-VOL) size must equal the secondary volume (S-
VOL) size.
Account Authentication
• Account Authentication cannot be used with Password Protection. If
Account Authentication is installed or enabled, Password Protection
must be uninstalled or disabled.
• Password Protection cannot be used with Account Authentication. If
Password Protection is installed or enabled, Account Authentication
must be uninstalled or disabled.
• Move the VOLs to the master partitions on the side of the default owner
controller.
• Delete all of the sub-partitions and reduce the size of each master
partition to one half of the user data area (the user data capacity
after installing SS/TCE/HDP).
• If you uninstall or disable this storage feature, sub-partitions, except
for the master partition, must be deleted and the capacity of the
master partition must be the default partition size (see Table 5-1 on
page 5-2).
Password Protection
• Password Protection cannot be used with Account Authentication. If
Password Protection is installed or enabled, Account Authentication
must be uninstalled or disabled.
monitored using the SNMP manager, monitor controller 0 and note the
following:
• Drive blockades detected by controller 1 are not reported with a
trap.
• A failure of controller 1 is not reported with a trap. A controller
failure is reported as a systemDown trap by the controller that
went down.
• After controller 0 is blocked, the SNMP Agent Support cannot be used.
4. In the Licenses list, click the Key File or Key Code button, then enter
the file name or key code for the feature you want to install. You can
browse for the Key File.
5. Click OK.
6. Follow the on-screen instructions. A message displays confirming the
optional feature installed successfully. Mark the checkbox and click
Reboot Array.
7. To complete the installation, restart the storage system. The feature
closes while the storage system restarts. The host cannot access the
storage system until the reboot completes and the system restarts.
Restarting usually takes from 6 to 25 minutes.
NOTE: The storage system may require more time to respond, depending
on its condition. If it does not respond after 25 minutes, check the condition
of the system.
3. If Password Protection is installed and enabled, log in with the registered
user ID and password for the array.
4. In the tree view, click Settings, and select Licenses.
5. Select the appropriate feature in the Licenses list.
6. Click Change Status. The Change License window appears.
7. Select the Enable check box.
8. Click OK.
9. Follow the on-screen instructions.
3. On the Licenses screen, select your feature in the Licenses list and click
De-install License.
4. To uninstall the option using a key code, click the Key Code radio
button, and then enter the key code. To uninstall the option using a
key file, click the Key File radio button, and then enter the path to
the key file.
5. Click OK.
6. Follow the on-screen instructions.
7. To complete the uninstallation of the option, restart the storage
system. The feature closes while the storage system restarts. The host
cannot access the storage system until the reboot completes and the
system restarts. Restarting usually takes 6 to 25 minutes.
NOTE: The storage system may require more time to respond, depending
on its condition. If it does not respond after 25 minutes, check the condition
of the system.
8. Log out from the disk array.
Uninstallation of the feature is now complete.
Client side
For Windows
If you use JRE 1.6.0_10 or newer, you do not need to set the Java Runtime
Parameters on the client to start Navigator 2. If you use a JRE older than
1.6.0_10, you must set the Java Runtime Parameters on the client to start
Navigator 2.
5. Click OK.
6. Click OK in the Java tab.
7. Close the Control Panel.
Starting Navigator 2
To start Navigator 2
1. Activate the browser and specify the URL as follows.
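The URL format, as also shown in the login procedure in the Provisioning
chapter (nonsecure http on the default port 23015), is:

http://IP address:23015/StorageNavigatorModular/Login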
NOTE: https connections are not available immediately after installation.
To connect with https, you must first set the server certificate and
private key (see Setting the certificate and private key).
Figure 3-6: Navigator 2 login screen
2. Enter your login information and click Login.
When logging in to Navigator 2 for the first time after a new installation,
log in with the built-in system account. The default password of the
system account is manager. If another user has been registered, log in as
that registered user. Enter the user ID and password, and click Login.
To prevent unauthorized access, we recommend changing the default
password of the system account. You cannot delete the system account
or change the authority of the system account. The system account is
the built-in account common to the Hitachi Storage Command Suite
products.
The system account can use all the functions of Hitachi Storage
Command Suite Common Component, including Navigator 2, and can access
all the resources that each application manages. If Hitachi Storage
Command Suite Common Components are already installed on the PC on
which Navigator 2 is installed and the password of the system account
has been changed, log in with the changed password.
Although you can log in with a user ID registered in Hitachi Storage
Command Suite Common Component, you cannot operate Navigator 2 with it
until you add the Navigator 2 operation authority. Add the operation
authority after logging in to Navigator 2, and then log in again.
Navigator 2 starts and the Arrays screen is displayed.
Navigator 2 starts and the Arrays screen displays.
3. When the Arrays screen is displayed, register the array unit in
Navigator 2 before using it.
Operations
Navigator 2 screens consist of Web screens and Applet screens. When you
start Navigator 2, the login screen is displayed. When you log in, the Web
screen that shows the Arrays list is displayed. Operations on the Web
screen are performed through the screen itself and its dialog boxes. When
you execute Advanced Settings on the Arrays screen, or when you select the
HUS system in the Arrays list on the Web screen, the Applet screen is
displayed.
Only one user at a time can operate the Applet screen to manage the HUS;
two or more users cannot access it at the same time.
The following figure displays settings that appear in the Applet dialog box.
The following table shows the troubleshooting steps to take when the Applet
screen does not display.
Setting an attribute
To set an attribute
1. Start Navigator 2.
2. Log in as a registered user to Navigator 2.
3. Select the storage system in which you will set up an attribute.
4. Click Show & Configure Array.
5. Select the feature icon in the Security tree view. SNM2 displays the
home feature window.
6. Consider the following fields and settings in the Data Retention window.
Additional guidelines
• Navigator 2 is used by service personnel to maintain the arrays;
therefore, be sure they have accounts. Assign the Storage
Administrator (View and Modify) for service personnel accounts.
• The Syslog server log may have omissions because log entries are not
re-sent when a failure occurs on the communication path.
• The audit log is sent to the Syslog server and conforms to the Berkeley
Software Distribution (BSD) syslog protocol (RFC 3164) standard (see
the illustrative message format after this list).
• If you are auditing multiple arrays, synchronize the Network Time
Protocol (NTP) server clock. For more details on setting the time on the
NTP server, see the Hitachi Storage Navigator Modular 2 online help.
• Reboot the array when changing the volume cache memory or
partition.
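For reference only, RFC 3164 messages carry a priority tag, a timestamp, a
hostname, and a message body. The following line is a hypothetical
illustration of that framing; the array name, tag, and fields are
placeholders, not the actual HUS audit log format:

<134>Mar  5 14:21:07 HUS110_91234567 AuditLog: [Login] user=system result=Success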
Help
Navigator 2 describes the functions of the Web screen in its online Help.
You can display Help in either of the following two ways:
• From the Help menu in the Arrays screen.
• With the Help button in an individual screen.
When you start Help from the Help menu in the Arrays screen, the beginning
of Help is displayed.
Understanding the Navigator 2 interface
Now that you have installed Navigator 2, you may want to develop a general
understanding of the interface design. Review the following sections for a
quick primer on the Navigator 2 interface.
Figure 3-11 shows the Navigator 2 interface with the Arrays window
displayed. This window appears when you log in to Navigator 2. It also
appears when you click Arrays in the Explorer panel.
Menu Panel
The Menu Panel appears on the left side of the Navigator 2 user interface.
The Menu Panel always contains the following menus, regardless of the
window displayed in the Page Panel:
• File — contains commands for closing the Navigator 2 application or
logging out. These commands are functionally equivalent to the Close
and Logout buttons in the Button Panel, described on the next page.
• Go — lets you start the ACE tool, a utility for configuring older AMS
1000 family systems.
• Help — displays the Navigator 2 online help and version information.
Explorer Panel
The Explorer Panel appears below the Menu Panel. The Explorer Panel
displays the following commands, regardless of the window shown in the
Page Panel.
• Resource — contains the Arrays command for displaying the Arrays
window.
• Administration — contains commands for accessing users,
permissions, and security settings. We recommend you use
Administration > Security > Password > Edit Settings to change
the default password after you log in for the first time. See Changing
the default system password on page 6-6.
• Settings — lets you access user profile settings.
Button panel
The Button Panel appears on the right side of the Navigator 2 interface and
contains two rows of buttons:
• Buttons on the top row let you close or log out of Navigator 2. These
buttons are functionally equivalent to the Close and Logout
commands in the File menu, described on the previous page.
• Buttons on the second row change according to the window displayed
in the Page Panel. In the example above, the buttons on the second
row appear when the Arrays window appears in the Page Panel.
Page panel
The Page Panel is the large area below the Button Panel. When you click an
item in the Explorer Panel or the Arrays Panel (described later in this
chapter), the window associated with the item you clicked appears in the
Page Panel.
Information can appear at the top of the Page Panel and buttons can appear
at the bottom for performing tasks associated with the window in the Page
Panel. When the Arrays window in the example above is shown, for
example:
• Error monitoring information appears at the top of the Page Panel.
• Buttons at the bottom of the Page Panel let you reboot, show and
configure, add, edit, remove, and filter Hitachi storage systems.
of the storage system you selected to be managed from the Arrays window.
If you click the type and serial number, common storage system tasks
appear in the Page Panel.
Arrays Panel
Figure 3-13: Example of volume information
Table 3-4: Description of Navigator 2 activities
4
Provisioning
Provisioning overview
Provisioning wizards
Hardware considerations
Logging in to Navigator 2
Provisioning overview
To successfully establish a storage system that runs properly, you first
must provision it. Provisioning refers to the preparation of a storage
system, before it goes into active use, to carry out desired storage tasks
and functions and to make it available to administrators. Provisioning HUS
storage systems is easy and convenient because provisioning wizards
automatically step you through the stages of preparing the storage system
for rollout. The following section details the main HUS SNM2 wizards.
Provisioning wizards
The following wizards are provided for provisioning in Navigator 2.
• Add Array Wizard
Whenever Navigator 2 is launched, it searches the database for listings
of existing arrays. If there are arrays listed in the database, the
platform displays them in the Subsystems dialog box. If there are no
arrays, Navigator 2 automatically launches the Add Array wizard.
This wizard works with only one array at a time. It guides users through
the steps to set up e-mail alerts, management ports, iSCSI ports and
setting the date and time.
• Create & Map Volume Wizard
This wizard helps you create a volume and map it to an iSCSI target. It
includes the following steps: 1) Create a new volume or select an
existing one. 2) Create a new iSCSI target or select an existing one.
3) Connect to the host. 4) Confirm. 5) Back up a volume to another
volume in the same array.
• LUN Wizard
Enables you to configure volumes and corresponding unit numbers, and
to assign segments of stored data to the volumes.
• Create Local Backup Wizard
This wizard helps you create a local backup of a volume. The wizard
includes the following steps: 1) Select the volume to be backed up.
2) Select a volume to contain the copied data; you will have the option
to allocate this volume to a host. 3) Name the pair (original volume and
its backup), and set copy parameters.
• User Registration Wizard
The User Registration Wizard is available when using the Account
Authentication feature, which secures selected arrays with roles-based
authentication.
• Simple DR Wizard
This wizard helps you create a remote backup of a volume. The purpose
is to duplicate the data and prevent data loss in case of a disaster such
as the complete failure of the array on which the source volume is
mounted. The wizard includes the following steps: 1) Introduction 2) Set
up a Remote Path 3) Set Up Volumes 4) Confirm
Provisioning task flow
The following details the task flow of the provisioning process:
1. A storage administrator determines a new storage system needs to be
added to the storage network for which he is responsible.
2. The administrator launches the wizard to discover arrays on the storage
network to add them to the Navigator 2 database.
3. If this is the first time you are configuring the array, the Add Array
Wizard launches automatically. If you are modifying an existing array
configuration, launch the wizard manually.
NOTE: If the wizard does not launch, disable the browser’s popup
blockers, then click the Add Array button at the bottom of the Array List
dialog box to launch the wizard.
4. If you know the IP address of a specific array that you want to add, click
either Specific IP Address or Array Name to Search: and enter the IP
address of the array. The default IP addresses for each controller are as
follows:
• 192.168.0.16 - Controller 0
• 192.168.0.17 - Controller 1
5. If you know the range of IP addresses that includes one or more arrays
that you want to add, click Range of IP Addresses to Search and enter
the low and high IP addresses of that range. The range of addresses
must be located on a connected local area network (LAN).
6. This screen displays the results of the search that was specified in the
Search Array screen. Use this screen to select the arrays you want to add
to Navigator 2.
7. If you entered a specific IP address in the Search Array screen, that
array is automatically registered in Navigator 2.
8. If you entered a range of IP addresses in the Search Array screen, all of
the arrays within that range are displayed in this screen. To add an array
whose name is displayed, click on the area to the left of the array name.
Hardware considerations
Before you log in to Navigator 2, observe the following considerations.
Every controller on a Hitachi storage system has a 10/100BaseT Ethernet
management port labeled LAN. Hitachi storage systems equipped with two
controllers have two management ports, one for each controller. The
management ports let you configure the controllers using an attached
management console and the Navigator 2 software.
Your management console can connect to the management ports directly
using an Ethernet cable or through an Ethernet switch or hub. The
management ports support Auto-Medium Dependent Interface/Medium
Dependent Interface Crossover (Auto-MDI/MDIX) technology, allowing you
to use either standard (straight-through) or crossover Ethernet cables.
TIP: You can attach a portable (“pocket”) hub between the management
console and storage system to configure both controllers in one procedure,
similar to using a switch.
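As a quick check, assuming the management console sits on the same
192.168.0.x subnet as the factory-default management addresses listed in
this chapter, you can verify connectivity from the console before logging
in:

# Verify that the console can reach each controller's management port
ping 192.168.0.16    # Controller 0 (factory default)
ping 192.168.0.17    # Controller 1 (factory default)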
Logging in to Navigator 2
The following procedure describes how to log in to Navigator 2. When
logging in, you can specify an IPv4 address or IPv6 address using a
nonsecure (http) or secure (https) connection to the Hitachi storage
system.
To log in to Navigator 2
1. Launch a Web browser on the management console.
2. In the browser’s address bar, enter the IP address of the storage
system’s management port using IPv4 or IPv6 notation. You recorded
this IP address in Table C-1 on page C-1:
• IPv4 http example:
http://IP address:23015/StorageNavigatorModular/Login
• IPv4 https example:
https://IP address:23016/StorageNavigatorModular/Login
• IPv6 https example (the IP address must be entered in brackets):
https://[IP address]:23016/StorageNavigatorModular/Login
You cannot make a secure connection immediately after installing
Navigator 2. To connect using https, set the server certificate and
private key (see Setting the certificate and private key on page 10-
8).
3. At the login page (see Figure 4-1), type system as the default User ID
and manager as the default case-sensitive password.
Figure 4-1: Login page
4. Click Login. Navigator 2 starts and the Arrays dialog box appears, with
a list of Hitachi storage systems (see Figure 4-2 on page 4-5).
Selecting a storage system for the first time
With primary goals of simplicity and ease of use, Navigator 2 has been
designed to make things obvious for new users from the start. To that end,
Navigator 2 runs a series of first-time setup wizards that let you define
the initial configuration settings for Hitachi storage systems.
Configuration is as easy as pointing and clicking your mouse.
The following first-time setup wizards run automatically when you select a
storage system from the Arrays dialog box. Use these wizards to define the
basic configuration for a Hitachi storage system.
• Add Array wizard - lets you add Hitachi storage systems to the
Navigator 2 database. See page 4-6.
• Initial (Array) Setup wizard - lets you configure e-mail alerts,
management ports, Internet Small Computer Systems Interface
(iSCSI) ports and setting the date and time. See page 4-8.
• Create & Map Volume wizard - lets you create a volume and map it to a
Fibre Channel or iSCSI target. See page 4-15.
After you use these wizards to define the initial settings for your Hitachi
storage system, you can use Navigator 2 to change the settings in the future
if necessary.
Navigator 2 also provides the following wizard, which you can run manually
to further configure your Hitachi storage system:
• Backup Volume wizard - lets you create a local backup of a volume. See
page 4-21.
them in Appendix C for future reference. Use the navigation buttons at the
bottom of each dialog box to move forward or backward, cancel the wizard,
and obtain online help.
Field                     Description
IP Address or Array Name  Discovers storage systems using a specific IP address or
                          storage system name in the Controller 0 and 1 fields. The
                          default IP addresses are:
                          • Controller 0: 192.168.0.16
                          • Controller 1: 192.168.0.17
                          For directly connected consoles, enter the default IP
                          address just for the port to which you are connected; you
                          will configure the other controller later.
Range of IP Addresses     Discovers storage systems using a starting (From) and
                          ending (To) range of IP addresses. Check Range of
                          IPv4 Address and/or Search for IPv6 Addresses
                          automatically to widen the search if desired.
Using Ports               Select whether communications between the console
                          and management ports will be secure, nonsecure, or
                          both.
Running the Initial (Array) Setup wizard
After you complete the Add Array wizard at initial log in, the Initial (Array)
Setup wizard starts automatically.
Using this wizard, you can configure:
• E-mail alerts — see page 4-9
• Management ports — see page 4-11
• Host ports — see page 4-12
• Spare drives — see page 4-14
• System date and time — see page 4-14
Initially, an introduction page lists the tasks you complete using this wizard.
Click Next > to continue to the Search Array dialog box (see Figure 4-5 on
page 4-10 and Table 4-2 on page 4-10) and begin the configuration. Use
the navigation buttons at the bottom of each dialog box to move forward or
backward, cancel the wizard, and obtain online help.
The following sections describe the Initial (Array) Setup wizard dialog
boxes.
NOTE: To change these settings in the future, run the wizard manually by
clicking the name of a storage system under the Array Name column in
the Arrays dialog box and then clicking Initial Setup in the Common
Array Tasks menu.
The Add Array wizard registers the storage system in the following steps:
1. Searches the storage system.
2. Registers the storage system.
3. Displays the name of the storage system. Note the name of the storage
system.
NOTE: This procedure assumes your Simple Mail Transfer Protocol (SMTP)
server is set up correctly to handle email. If desired, you can send a test
message to confirm that email notifications will work.
Figure 4-5: Set up E-mail Alert page
Field                 Description
E-mail Error Report   To enable email notifications, click Enable and complete
Disable / Enable      the remaining fields.
Domain Name           Domain appended to addresses that do not contain one.
Mail Server Address   Email address or IP address that identifies the storage
                      system as the source of the email.
From Address          Each email sent by the storage system will be identified
                      as being sent from this address.
Send to Address       Up to 3 individual email addresses or distribution lists
                      where notifications will be sent.
Reply To Address      Email address where replies can be sent.
Initial Array (Setup) wizard — configuring management ports
The Set up Management Ports dialog box lets you configure the
management ports on the Hitachi storage system. These are the ports you
use to manage the system using Navigator 2.
To configure the management ports
1. Complete the fields in Figure 4-6 (see Table 4-3).
2. Click Next and go to Initial Array (Setup) wizard — configuring host
ports on page 4-12.
Field Description
IPv4/IPv6 Select the IP addressing method you want to use. For more
information about IPv6, see Using Internet Protocol Version
6 on page 10-2.
Use DHCP Configures the management port automatically, but requires
a Dynamic Host Control Protocol (DHCP) server. IPv6 users:
note that IPv6 addresses are based on Ethernet addresses.
If you replace the storage system, the IP address changes.
Therefore, you may want to assign static IP addresses to the
storage system using the Set Manually option instead of
having them auto-assigned by a DHCP server.
Table 4-3: Management Ports dialog box (Continued)
Field Description
Set Manually Lets you complete the remaining fields to configure the
management port manually.
IPv4 Address Static Internet Protocol address that matches the subnet
where the storage system will be used.
IPv4 Subnet Mask Subnet mask that matches the subnet where the storage
system will be used.
IPv4 Default Gateway Default gateway that matches the gateway where the
storage system will be used.
Negotiation Use the default setting (Auto) to auto-negotiate speed and
duplex mode, or select a fixed speed/duplex combination.
Figure 4-7: Set up Host Ports dialog box for Fibre Channel host ports
Table 4-4: Set up Host Ports dialog box for Fibre Channel host ports
Field Description
Port Address Enter the address for the Fibre Channel ports.
Transfer Rate Select a fixed data transfer rate from the drop-down list that
corresponds to the maximum transfer rate supported by the device
connected to the storage system, such as the server or switch.
Topology Select the topology in which the port will participate:
• Point-to-Point = port will be used with a Fibre Channel
switch.
• Loop = port is directly connected to the Fibre Channel port
of an HBA installed in a server.
Table 4-5: Set up Host Ports dialog box for iSCSI host ports
Field Description
IP Address Enter the IP address for the storage system iSCSI host
ports. The default IP addresses are:
Controller 0, Port A: 192.168.0.200
Controller 0, Port B: 192.168.0.201
Controller 1, Port A: 192.168.0.208
Controller 1, Port B: 192.168.0.209
Subnet Mask Enter the subnet mask for the storage system iSCSI host
port.
Default Gateway If a router is required for the storage system host port to
reach the initiator(s), the default gateway must have the IP
address of that router. In a network that requires a router
between the storage system and the initiator, enter the
router's IP address. In a network that uses only direct
connection, or a switch between the storage system and the
initiator(s), no entry is required.
Initial Array (Setup) wizard — configuring spare drives
Using the Set up Spare Drive dialog box, you can select a spare drive from
the available drives. If a drive in a RAID group fails, the Hitachi storage
system automatically uses the spare drive you select here. The spare drive
must be the same type, for example, Serial Attached SCSI (SAS), or Solid
State Disk (SSD), as the failed drive and have the same capacity as or
higher capacity than the failed drive. When you finish, click Next and go to
Initial Array (Setup) wizard — configuring the system date and time on page
4-14.
Figure 4-8: Initial Array (Setup) wizard: Set up Spare Drive dialog box
Initial Array (Setup) wizard — configuring the system date and time
Using the Set up Date & Time dialog box, you can select whether the Hitachi
storage system date and time are to be set automatically, manually, or not
at all. If you select Set Manually, enter the date and time (in 24-hour
format) in the fields provided. When you finish, click Next.
Figure 4-9: Set up Date & Time dialog box
NOTE: To change these settings in the future, run the wizard manually by
clicking the storage system in the Arrays dialog box, and then clicking
Create Volume and Mapping in the Common Array Tasks menu.
Use this function when you create, expand, delete, or refer to a RAID
group. This function can be used when the device is in the Ready state.
The unit does not need to be rebooted.
4. Click the RAID Groups tab to display the RAID Groups list as shown in
Figure 4-10. RAID groups and volumes defined for the storage system
display.
• Combination
• Number of Parity Groups
7. In the Drives region, select one of the following radio buttons:
• Automatic Selection to direct the system to automatically select
a drive. Select a drive type and a drive capacity in the two list
boxes in this region.
• Manual Selection to manually select a desired drive in the
Assignable Drives list. Select an assignable drive in the list.
8. Click OK.
Using the Create & Map Volume Wizard to create a RAID group
Using the Search RAID Group dialog box, create a new RAID group for the
Hitachi storage system or make it part of an existing RAID group.
Create & Map Volume wizard — defining volumes
Using the next dialog box in the Create & Map Volume wizard, you can
create new volumes or use existing volumes for the Hitachi storage system.
Create & Map Volume wizard — defining host groups or iSCSI targets
Using the next dialog box in the Create & Map Volume wizard, you can
select:
• A physical port for a Fibre Channel host group or iSCSI target.
• Host groups for storage systems with Fibre Channel ports.
• iSCSI targets for storage systems with iSCSI ports.
To create a new iSCSI target or select an existing one for iSCSI
storage systems
1. Next to Port, select a port to map to from the available ports options.
2. Create a new iSCSI target or select an existing one:
To create a new iSCSI target:
a. Click Create a new iSCSI target.
b. Enter an iSCSI Target No (from 1 to 127).
c. Enter an iSCSI target Name (up to 32 characters).
d. Select Platform and Middleware settings from the drop-down lists
(refer to the Navigator 2 online help).
To select an existing iSCSI target:
a. Click Use an existing iSCSI target.
b. Select an iSCSI target from the iSCSI Target drop-down list.
3. Click Next and go to Create & Map Volume wizard — connecting to a
host, below.
3. When you finish, click Next.
About DP-VOLs
The DP-VOL is a virtual volume that consumes and maps physical storage
space only for areas of the volume to which data has been written. In
Dynamic Provisioning, you must associate the DP-VOL with a DP pool.
When creating a DP-VOL, you specify a DP pool number, the DP-VOL logical
capacity, and a DP-VOL number. Many DP-VOLs can be defined for one pool,
but a given DP-VOL cannot be defined to multiple DP pools. The HUS can
register up to 4,095 DP-VOLs. The maximum number of DP-VOLs is reduced by
the number of RAID groups.
About volume numbers
A volume number is a number used to identify a volume, which is a device
addressed by the Fibre Channel or iSCSI protocol. A volume may be any
device that supports read/write operations, such as a tape drive, but the
term is most often used to refer to a logical disk as created on a SAN.
Though not technically correct, the term "volume" is often also used to
refer to the drive itself.
Another example is a single disk drive with one physical SCSI port. It
usually provides just a single target, which in turn usually provides just
a single volume whose volume number is zero. This volume represents the
entire storage of the disk drive.
In current SCSI, a volume number is a 64-bit identifier. It is divided into
four 16-bit pieces that reflect a multilevel addressing scheme, and it is
unusual to see any but the first of these used.
Volume number vs. SCSI Device ID: The volume number is not the only way to
identify a volume. There is also the SCSI Device ID, which identifies a
volume uniquely in the world. Labels or serial numbers stored in a volume's
storage often serve to identify the volume. However, the volume number is
the only way for an initiator to address a command to a particular volume,
so initiators often create, via a discovery process, a mapping table of
volume numbers to other identifiers.
Context sensitive: The volume number identifies a volume only within the
context of a particular initiator. So two computers that access the same
disk volume may know it by different volume numbers.
Volume 0: There is one volume number that is required to exist in every
target: zero. The volume with volume number zero is special in that it must
implement a few specific commands through which an initiator can find out
all the other volumes in the target. However, volume zero need not provide
any other services, such as a storage volume.
Many SCSI targets contain only one volume (so its volume number is
necessarily zero). Others have a small number of volumes that correspond to
separate physical devices and have fixed volume numbers. A large storage
system may have up to thousands of volumes, defined logically by
administrative command, and the administrator may choose the volume number
or the system may choose it.
Using the LUN Manager storage feature, you can add, modify, or delete
iSCSI targets during system operation. For example, if an additional disk is
installed or an additional host is connected in your iSCSI network, an
additional iSCSI target can be created for them with LUN Manager.
Displaying Host Group Properties
In the Array List dialog box, select an array of interest and click Show and
Configure Array.
In the Arrays tree, expand the Groups menu and select Host Groups. The
Host Groups dialog box displays. It contains a table that lists the host
groups that exist for the array.
The table includes the following data for each host group:
• Host group number and name, for example, 000-G000.
• Port number to which the host group belongs.
• Platform configured in the host group.
• Middleware configured in the host group.
In the Host Groups dialog box, click the name of the host group you want
to view. The Properties dialog box for the selected host group is displayed.
If the wizard does not launch, disable your browser's pop-up blockers, then
click the Add Array button at the bottom of the Array List dialog box to
launch the wizard.
The Add Array wizard is used to discover arrays on a storage network and
add them to the Navigator 2 database. The first time you configure the
array, the Add Array wizard launches automatically.
Each time Navigator 2 starts after the initial startup, it searches its
database for existing storage systems and displays them in the Arrays
dialog box. If another Navigator 2 dialog box is displayed, you can
redisplay the Arrays dialog box by clicking Resource in the Explorer pane.
The Arrays dialog box provides a central location for you to view the settings
and status of the HUS Family storage systems that Navigator 2 is managing.
Buttons at the top left side of the dialog box let you run, stop, and edit error
monitoring.
There is also a Refresh Information button you can click to update the
contents of the window. Below the buttons are fields that show the storage
system array status and error monitoring status.
Below the status indications are a drop-down list for selecting the number
of rows per page (25, 50, or 100), and buttons for moving to the next,
previous, first, last, or a specific page in the Arrays dialog box. Buttons
at the bottom of the Arrays dialog box let you perform various tasks
involving the storage systems shown in the dialog box. Table 7-1 describes
the tasks you can perform with these buttons.
This screen displays the results of the search that was specified in the
Search Array screen. Use this screen to select the arrays you want to add
to Navigator 2.
If you entered a specific IP address in the Search Array screen, that array
is automatically registered in Navigator 2. Click Next to continue to the
Finish screen. A message box confirming that the array has been added is
displayed.
If you entered a range of IP addresses in the Search Array screen, all of the
arrays within that range are displayed in this screen. To add an array whose
name is displayed:
1. Click the area to the left of the array name.
2. Click Next to add the arrays and continue to the Finish screen.
3. If any of the IP addresses entered are incorrect, when you click Next,
Navigator 2 displays the following message:
Failed to connect with the subsystem. Confirm the subsystem
status and the LAN environment, and then try again.
4. When configuring the management port settings, be sure the subnet you
specify matches the subnet of the management server or allows the
server to communicate with the port via a gateway. Otherwise, the
management server will not be able to communicate with the
management port.
5
Security
Security overview
Security overview
Storage security is the group of parameters and settings that make storage
resources available to authorized users and trusted networks, and
unavailable to other entities. These parameters can apply to hardware,
programming, communications protocols, and organizational policy.
Security features
Navigator 2 uses the following features to create a security solution:
• Account Authentication
• Audit Logging
• Data Retention Utility
Account Authentication
The Account Authentication feature enables your storage system to verify
the authenticity of users attempting to access the system. You can use this
feature to provide secure access to your site and leverage the database of
many accounts.
Hitachi provides you with the information needed to track the user on the
system. If the user does not have an account on the array, the information
provided will be sufficient to identify and interact with the user.
Audit Logging
When an event occurs, it creates a piece of information that indicates the
user, the operation, the location of the event, and the results produced.
This information is known as an Audit Log entry. For example, when a user
accesses the storage system from a computer running HSNM2 and creates a
RAID group during a setting operation, the storage system creates a log
entry. The log indicates the exact time, in hours and minutes, and the day
of the month, that the operation occurred. It also indicates whether the
operation succeeded or failed.
Security benefits
Security on your storage system provides the following benefits:
• User access control - Only authorized parties can communicate with
each other. Consequently, a management station can interact with a
device only if the administrator configured the device to allow the
interaction.
• Fast transmission and receipt - Messages are received promptly;
users cannot save messages and replay them to alter content. This
prevents users from sabotaging SNMP configurations and operations.
For example, users can change configurations of network devices only if
authorized to do so.
Account Authentication overview
Account Authentication is the process of determining who the user is, then
determining whether to grant that user access to the network. The primary
purpose is to bar intruders from networks. RADIUS authentication uses a
database of users and passwords.
A user of the storage system registers an account (user ID, password, and
so on) before beginning to configure Account Authentication. When a user
accesses the storage system, the Account Authentication feature verifies
whether the user is registered. From this information, users of the
storage system can be identified and restricted.
A user who has registered an account is given authority (role information)
to view and modify the storage system resources according to the purpose
of system management, and the user can access each resource of the storage
system within the range of that authority (access control).
• Authorized communication - Only authorized parties can
communicate with each other. Consequently, a management station
can interact with a device only if the administrator configured the
device to allow the interaction.
• High performance of message transmission - Messages are
received promptly; users cannot save messages and replay them to
alter content. This prevents users from sabotaging SNMP configurations
and operations. For example, users can change configurations of
network devices only if authorized to do so.
• Role customization convenience - You can tailor access to the role
of the user. Typical roles are storage administrator, account
administrator and audit log administrator. This protects and secures
data from unauthorized access internally and externally. It also
provides focus for the user.
A user will not have to provide login information if you use the same user
name and password for both Navigator 2 and the Account Authentication
secured array.
The "built-in" or default root user account should only be used to create user
names and passwords. We recommend that you change it immediately after
enabling Account Authentication. Store your administration passwords
according to your organization's security policies. There is no "back door"
key to access an array in the event of a misplaced, lost, or forgotten
password.
The following steps detail the task flow of the Account Authentication
configuration process:
1. You determine that selected users need to have access to your storage
system and that all other users should be blocked from access to it.
2. You identify all users for access and all for denial, creating separate lists.
3. Configure the license for Account Authentication.
4. Log into HSNM2.
5. Go to the Access Control area in HSNM2 that controls the Authentication
database.
6. Set a role-based permission for the administrator to whom you are
granting access to the storage system. The three administrator roles
supported on HSNM2 are:
• Account administrator. This role manages and provisions
secure settings for individual accounts set for the storage system.
• Audit Log administrator. This role manages, retrieves, and
provisions the Audit Log environment, which is a record of all
actions involving the storage system.
• Storage administrator. This role manages and provisions
storage configurations on the storage system.
7. The newly configured administrator sends a request (a security query
packet) to a storage switch.
8. The storage switch forwards the packet to a location on the storage
system that contains one of the following types of information.
In the instance of the account administrator
• User account information
• User role information
In the instance of the Audit Log administrator
• Audit Log information
• Array configuration information
In the instance of the storage administrator
• General data
• Storage configuration information
9. The packet travels either to a storage area network or directly to the
storage system, where the packet's header is evaluated for its source.
10.If the source is allowed to obtain the data the packet is attempting to
locate, it is granted permission to reach and retrieve the data.
Figure 5-1: Account Authentication task flow
Account Authentication specifications
Table 5-1 details account authentication specifications.
Item                          Description
Account creation              The account information includes a user ID, password,
                              role, and whether the account is enabled or disabled.
                              The password must have at least six (6) characters.
Number of accounts            You can register 200 accounts.
Number of users               256 users can log in. This includes duplicate log ins
                              by the same user.
Number of roles per account   6 roles can be assigned to an account:
                              • Storage Administrator (View and Modify)
                              • Storage Administrator (View)
                              • Account Administrator (View and Modify)
                              • Account Administrator (View)
                              • Audit Log Administrator (View and Modify)
                              • Audit Log Administrator (View)
Time before you are logged    A log in can be set for 20-60 minutes in units of
out                           five minutes, 70-120 minutes in units of ten minutes,
                              one day, or indefinitely (OFF).
Security mode                 The Advanced Security Mode. Refer to Advanced
                              Security Mode on page 5-14 for more details.
Accounts
The account is the information (user ID, password, role, and validity/
invalidity of the account) that is registered in the array. An account is
required to access arrays where Account Authentication is enabled. The
array authenticates a user at the time of login, and can allow the user to refer to, or update, the resources after login. Table 5-2 details registered account specifications.
Table 5-2: Registered account specifications
• Built-in — Initial user ID: root (cannot be changed). Initial password: storage (may be changed). Initially assigned role: Account Administrator (View and Modify). An account that has been registered with Account Authentication beforehand.
• Public — The user ID, password, and role are defined by the administrator (the user ID cannot be changed after creation). An account that can be created after Account Authentication is enabled.
Account types
There are two types of accounts:
• Built-in
• Public
The built-in default account is a root account that has been originally
registered with the array. The user ID, password, and role are preset.
Administrators may create “public” accounts and define roles for them.
When operating the disk array, create a public account as the normally used
account, and assign the necessary role to it. See Table 5-3 for account types
and permissions that may be created.
The built-in default account may only have one active session and should be used only to create accounts and users. If you attempt to log in again under this account, the current session is terminated.
Roles
A role defines the permission level for operating array resources (View and Modify, or View Only). You can place restrictions on an account by assigning it a role. Table 5-4 details role types and permissions.
Table 5-4: Role types and permissions
Resources
A resource is a store of information (a repository) on which a role grants operations (for example, the function to create a volume or to delete an account). Table 5-5 details authentication resources.
Table 5-5: Resources
The relationship between the roles and resource groups is shown in the following list. For example, an account assigned the Storage Administrator (View and Modify) role can perform the operations to view and modify the key repository and the storage resource. Table 5-6 details role and resource group relationships.
Table 5-6: Role and resource group relationships
• Storage Administrator (View and Modify) — Can view and modify the key repository and the storage resource. No access to the other resource groups.
• Storage Administrator (View Only) — Can view the key repository and the storage resource. No access to the other resource groups.
• Account Administrator (View and Modify) — Can view and modify the account, role mapping, and account setting repositories. No access to the other resource groups.
• Account Administrator (View Only) — Can view the account, role mapping, and account setting repositories. No access to the other resource groups.
• Audit Log Administrator (View and Modify) — Can view and modify the audit log settings, and can view the audit log. No access to the other resource groups.
• Audit Log Administrator (View Only) — Can view the audit log settings and the audit log. No access to the other resource groups.
Session
A session is the period between when you log in to and log out of an array. Every login starts a session, so the same user can have more than one session. When the user logs in, the array issues a session ID to the program they are operating. 256 users can be logged in to a single array at the same time (including multiple logins by the same user).
The session ID is deleted when any of the following occurs (after the session ID is deleted, the array can no longer be operated in that session):
• A user logs out
• A user is forced to log out
• The idle time without an operation exceeds the login validity period
• A planned shutdown is executed
The built-in account for the Account Administrator role always logs in with the Modify mode. Therefore, after the built-in account logs in, a public account that has the same View and Modify role is forced into the View mode.
Advanced Security Mode
The Advanced Security Mode is a feature that improves the strength of the password encryption registered in the array. Enabling the Advanced Security Mode encrypts passwords with a stronger, 128-bit method.
Advanced Security Mode can only be operated with a built-in account. Also,
it can be set only when the firmware of version 0890/A or later is installed
in the storage system and Navigator 2 of version 9.00 or later is installed in
the management PC.
When you change the Advanced Security Mode, the following information is deleted or initialized. As necessary, record the current settings in advance and set them again after changing the mode:
• All logged-in sessions (accounts that are logged in are logged out)
• All public accounts registered in the storage system
• The role and password of the built-in account
Account Authentication procedures
The following sections describe Account Authentication procedures.
Initial settings
To configure initial settings
1. Verify that you have the environments and requirements for Account
Authentication (see Preinstallation information on page 2-2).
2. Install the license.
3. Log in to Navigator 2.
4. Change the default password for the “built-in” account (see Account
types on page 5-9).
5. Register an account (see Adding accounts on page 5-17).
6. Register an account for service personnel (see Adding accounts on page 5-17).
Managing accounts
The following sections describe how to:
• Display accounts — see Displaying accounts, below.
• Add accounts — see Adding accounts, below.
• Modify accounts — see Modifying accounts on page 5-19.
• Delete accounts — see Deleting accounts on page 5-21.
Displaying accounts
To display accounts, you must have an Account Administrator (View and
Modify or View Only) role. See Table 5-3 on page 5-9 for accounts types and
permissions that may be created.
To display accounts
1. Start Navigator 2 and log in. The Arrays dialog box appears.
2. Select the appropriate array and click Show & Configure Array.
3. Log in as an Account Administrator (View and Modify) or an Account Administrator (View Only).
4. Select the Account Authentication icon in the Security tree view.
5. The account information appears, as shown in Figure 5-2 on page 5-16.
Figure 5-2: Account Information window
6. When the Session Count value is one or more, you can refer to the session list. Click the numeric characters for the Session Count. The logged sessions list appears.
Adding accounts
To add accounts, you must have the Account Administrator (View and Modify) role. After installing Account Authentication, log in with the built-in account and then add the account. When adding accounts, register a user ID and password of your choice, but avoid the following strings: Built_in_user, Admin, Administrator, Administrators, root, Authentication, Authentications, Guest, Guests, Anyone, Everyone, System, Maintenance, Developer, Supervisor.
To add accounts
1. Start Navigator 2 and log in. The Arrays dialog box appears.
2. Select the appropriate array and click Show & Configure Array.
3. Log in as an Account Administrator (View and Modify).
4. Select the Account Authentication icon in the Security tree view. Expand the Account Authentication list, and click Account. The Account screen is displayed.
5. Click Add Account.
8. Type the old password in the Old password field, type the new password in the New password field, and then retype the new password in the Retype password field. To skip the password change, clear the Change Password checkbox.
9. Click Next. The Confirm wizard appears.
Modifying accounts
If you are an Account Administrator (View and Modify), you can modify the
account password, role, and whether the account is enabled or disabled.
Note the following:
• You cannot modify your own account unless you are using the built-in account.
• A public account cannot modify the built-in account.
• The user IDs of public accounts and the built-in account cannot be changed.
To modify accounts
1. Start Navigator 2 and log in. The Arrays dialog box appears.
2. Select the appropriate array and click Show & Configure Array.
3. Log in as an Account Administrator (View and Modify).
4. Select the Account Authentication icon in the Security tree view. Expand the Account Authentication list, and click Account. The Account screen is displayed.
5. Select the account from the Account list you want to modify, and then
click Edit Account as shown in Figure 5-8.
8. Click OK.
9. Review the information in the Confirmation screen and any additional
messages, then click Close.
10. Follow the on-screen instructions.
Deleting accounts
If you are an Account Administrator (View and Modify), you can delete
accounts. Note that you cannot delete the built-in account or your own account.
NOTE: A user with an active session is automatically logged out if you delete their account while they are logged in.
To delete accounts
1. Start Navigator 2 and log in. The Arrays dialog box appears.
2. Select the appropriate array and click Show & Configure Array.
3. Log in as an Account Administrator (View and Modify).
4. Select the Account Authentication icon in the Security tree view.
Expand the Account Authentication list, and click Account. The Account
Screen is displayed.
5. Select the account from the Account list to be deleted, then click
Delete Account as shown in Figure 5-10.
Changing session timeout length
If you are an Account Administrator (View and Modify or View Only), you
can change how long a user can be logged in.
To change the session length
1. Start Navigator 2 and log in. The Arrays dialog box appears.
2. Select the appropriate array and click Show & Configure Array.
3. Log in as an Account Administrator (View and Modify) or an Account Administrator (View Only).
4. Select the Account Authentication icon in the Security tree view.
Expand the Account Authentication list, and click Account. The Account
Screen is displayed.
5. Click the Option tab. The Account Authentication - Option tab displays as shown in Figure 5-11.
Forcibly logging out
Use forced logout when you want to log out users other than the built-in account user.
NOTE: When a controller failure occurs in the array during a login, a session ID can remain. In that case, forcibly log out all accounts.
4. Select the Warning Banner option in the Security menu. The Warning Banner screen displays. Click Edit Message in the Warning Banner screen, as shown in Figure 5-14.
6. Review the preview contents and click OK. The message you set displays in the Warning Banner view as shown in Figure 5-16.
Troubleshooting
Problem: The permission to modify (View and Modify) cannot be obtained for a user who has the proper privileges.
Description and Solution: The account may have been forced into View Only mode. Log out of the account and then log back in.
If this problem occurs, the login status of the array is retained until the array session times out or while the login to Navigator 2 remains valid (up to 17 minutes when Navigator 2 is terminated by pressing the Logout button, or up to 34 minutes when Navigator 2 is terminated by clicking the Close or X button).
When a change to the array settings is required immediately after the logout, return to the Arrays screen by clicking the Resources button on the left side of the screen, and then terminate Navigator 2 by clicking the Logout button.
See the section Displaying accounts on page 5-15 and confirm the account has update permissions. When the number of sessions is more than one, you can confirm update permissions and IP addresses per session. Because you cannot identify which user is holding the session that requires updated permissions, issue a forced logout to log the account out forcibly.
Audit Logging overview
When an event occurs, it creates a piece of information that indicates the user, the operation, the location of the event, and the results produced. When a user accesses the storage system from a computer running HSNM2 and performs a setting operation, such as creating a RAID group, the storage system creates a log entry. The log indicates the exact time (day of the month, hours, and minutes) that the operation occurred, and whether the operation succeeded or failed.
Similarly, when a status change (system event) occurs inside the storage system, such as entering the Ready status, the storage system creates a log indicating the exact time and success state of the Array Ready operation. It then sends the log to the Syslog server.
to be addressed, for example, investigating causal factors of failed jobs, resource utilization, trending, and so on.
• Creates an audit trail — Enables you to problem-solve and trace back to where a potential mistake was made.
Figure 5-17 details the sequence of events that occur when an audit log is created. Table 5-10 details Audit Logging specifications.
Table 5-10: Audit Logging specifications
• Number of external Syslog servers — Two. IPv4 or IPv6 IP addresses can be registered.
• External Syslog server transmission method — UDP port number 514 is used. The log conforms to the BSD syslog protocol (RFC 3164). A sketch of this framing follows the table.
• Audit log length — Less than 1,024 bytes per log. For a log of 1,024 bytes or more, only the first 1,024 bytes are output, so the message may be incomplete.
• Audit log format — The end of a log is expressed with the LF (Line Feed) code. For more information, see the Hitachi Storage Navigator Modular 2 Command Line Interface (CLI) User's Guide (MK-97DF8089).
• Audit log occurrence — The audit log is sent when any of the following occurs in the array:
• Starting and stopping the array.
• Logging in and out using an account created with Account Authentication.
• Changing an array setting (for example, creating or deleting a volume).
• Initializing the log.
• Sending the log to the external Syslog server — The log is sent when an audit event occurs. However, depending on the network traffic, there can be a delay of some seconds.
• Number of events that can be stored — 2,048 events (fixed). When the number of events exceeds 2,048, they are wrapped around. The audit log is stored on the system disk.
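The BSD syslog transport described above is easy to exercise from any host on the management LAN. The following is a minimal Python sketch, not the array's implementation, showing how an RFC 3164-style message is framed and sent over UDP port 514; the server address and the audit-style message text are hypothetical, and the array's actual log format is documented in the CLI User's Guide (MK-97DF8089).

import socket

SYSLOG_SERVER = "192.0.2.10"   # hypothetical Syslog server address
SYSLOG_PORT = 514              # UDP port per the specification above

def send_rfc3164(message: str, facility: int = 16, severity: int = 6) -> None:
    """Frame and send one RFC 3164-style syslog message over UDP."""
    pri = facility * 8 + severity                  # e.g., local0.info -> <134>
    frame = f"<{pri}>{message}".encode()[:1024]    # the array truncates at 1,024 bytes
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(frame, (SYSLOG_SERVER, SYSLOG_PORT))

# Hypothetical audit-style entry, for illustration only.
send_rfc3164("Oct 11 22:14:15 array01 AUDIT user=admin op=CreateVolume result=Success")

Because the transport is UDP, delivery is not acknowledged, which is why a NOTE later in this chapter recommends also exporting the internal log.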
What to log?
Essentially, for each system monitored and likely event condition there must
be enough data logged for determinations to be made. At a minimum, you
need to be able to answer the standard who, what and when questions.
The data logged must be retained long enough to answer questions, but not
indefinitely. Storage space costs money and at a certain point, depending
on the data, the cost of storage is greater than the probable value of the log
data.
Security of logs
For the log data to be useful, it must be secured from unauthorized access
and integrity problems. This means there should be proper segregation of
duties between those who administer system/network accounts and those
who can access the log data.
The idea is that no one person can do both; otherwise the risk, real or perceived, is that an account could be created for malicious purposes, activity performed, the account deleted, and the logs then altered to hide what happened. Bottom line, access to the logs must be restricted to ensure their integrity. This necessitates access controls as well as the use of hardened systems.
Consideration must be given to the location of the logs as well – moving logs to a central spot, or at least off the same platform, can give added security
in the event that a given platform fails or is compromised. In other words,
if system X has catastrophic failure and the log data is on X, then the most
recent log data may be lost. However, if X’s data is stored on Y, then if X
fails, the log data is not lost and can be immediately available for analysis.
This can apply to hosts within a data center as well as across data centers
when geographic redundancy is viewed as important.
The trick is to understand what will be logged for each system. Log review
is a control put in place to mitigate risks to an acceptable level. The intent
is to only log what is necessary and to be able to ensure that management
agrees, which means talking to each system’s stakeholders. Be sure to
involve IT operations, security, end-user support, the business and the legal
department.
Work with the stakeholders and populate a matrix wherein each system is
listed and then details are spelled out in terms of: what data must be logged
for security and operational considerations, how long it will be retained, how
it will be destroyed, who should have access, who will be responsible to
review it, how often it will be reviewed and how the review will be
evidenced. The latter is from a compliance perspective – if log reviews are
a required control, how can they be evidenced to auditors?
Summary
Audit logs are beneficial to have for a number of reasons. To be effective,
IT must understand log requirements for each system, then document what
will be logged for each system and get management’s approval. This will
reduce ambiguity over the details of logging and facilitate proper
management.
The audit log for an event has the format shown in Figure 5-18.
Audit Logging procedures
The following sections describe the Audit Logging procedures.
Initial settings
To configure initial settings
1. Verify that you have the environments and requirements for Audit
Logging (see Preinstallation information on page 2-2).
2. Set the Syslog Server (see Table 5-10 on page 5-29).
Optional operations
To configure optional operations
1. Export the internal logged data.
2. Initialize the internal logged data (see Initializing logs on page 5-35).
NOTE: Exporting the internal log is recommended: because the log sent to the Syslog server uses UDP, some events may not be recorded if there is a failure along the communication path. See the Storage Navigator Modular 2 Command Line Interface (CLI) User's Guide (MK-97DF8089) for information on exporting the internal log.
9. Click OK.
If the Syslog server is successfully configured, a confirmation message is
sent to the Syslog server. If that confirmation message is not received at
the server, verify the following:
• The IP address of the destination Syslog server
• The management port IP address
• The subnet mask
• The default gateway
Viewing Audit Log data
This section describes how to view audit log data.
NOTE: The output can only be executed by one user at a time. If the output fails due to a LAN or controller failure, wait 3 minutes and then execute the output again.
Initializing logs
When logs are initialized, the stored logs are deleted and cannot be
restored. Be sure you export logs before initializing them. For more
information, see Storage Navigator Modular 2 Command Line Interface
(CLI) User’s Guide (MK-97DF8089).
To initialize logs
1. Start Navigator 2 and log in. The Arrays dialog box appears.
2. Select the appropriate array and click Show & Configure Array.
3. Log in to Navigator 2. If the array is secured with Account
Authentication, you must log on as an Account Administrator (View
and Modify) or an Account Administrator (View Only).
4. Select the Audit Logging icon in the Security tree view. The Audit
Logging dialog box is displayed (see Figure 5-23).
NOTE: All stored internal log information is deleted when you initialize the
log. This information cannot be restored.
Data Retention Utility overview
The Data Retention Utility feature protects data in your disk array from I/O
operations performed at open-systems hosts. Data Retention Utility enables
you to assign an access attribute to each logical volume. If you use the Data Retention Utility, you can use a logical volume as a read-only volume. You can also protect a volume against both read and write operations.
Once data has been written, it can be retrieved and read only by authorized
applications or users.
Data Retention Utility specifications
Table 5-11 shows the specifications of the Data Retention Utility.
Table 5-11: Data Retention Utility specifications
• Unit of setting — The setting is made for each volume. (However, the Expiration Lock is set for each disk array.)
• Number of settable volumes — HUS 110: 2,048 volumes; HUS 130/150: 4,096 volumes.
• Kinds of access attributes — Defines the following types of attributes:
• Read/Write (default setting)
• S-VOL Disable
• Read Only
• Protect
• Read Capacity 0 (can be set or reset by CCI only)
• Invisible from Inquiry Command (can be set or reset by CCI only)
• Guard against a change of an access attribute — A change from Read Only, Protect, Read Capacity 0, or Invisible from Inquiry Command to Read/Write is rejected when the Retention Term has not expired or the Expiration Lock is set to ON.
• Relation with ShadowImage/SnapShot/TrueCopy/TCE — If S-VOL Disable is set for a volume, creating a volume pair that uses the volume as an S-VOL (data pool) is suppressed. Setting S-VOL Disable for a volume that has already become an S-VOL (V-VOL or data pool) is not suppressed only when the pair status is Split. Also, when S-VOL Disable is set for a P-VOL, restoration of SnapShot and restoration of ShadowImage are suppressed, but a swap of TrueCopy is not suppressed.
• Powering off/on — An access attribute that has been set is retained even when the power is turned off and on.
• Controller detachment — An access attribute that has been set is retained even following a controller detachment.
• Relation with drive restoration — A correction copy, dynamic sparing, and copy back are performed as for a usual volume.
• Volume detachment — An access attribute that has been set for a volume is retained even when the volume is detached.
• Restriction of firmware replacement — When the Data Retention Utility is enabled, initial setup and initialization of the feature's settings (Configuration Clear) are suppressed.
• Restriction of access attribute setting — The following operations are suppressed for a volume whose access attribute is other than Read/Write, and for a RAID group that includes the volume:
• Volume deletion
• Volume formatting
• RAID group deletion
• Setting by Navigator 2 — Navigator 2 can set an access attribute for one volume at a time.
• Unified VOL — A unified volume whose access attribute is a value other than Read/Write can be neither composed nor dissolved.
• Deleting, growing, or shrinking of VOL — A volume for which an access attribute has been set cannot be deleted, grown, or shrunk. An access attribute can be set for a volume that is being grown or shrunk.
• Expansion of RAID group — You can expand a RAID group to which volumes with access attributes belong.
• Cache Residency Manager — A volume for which an access attribute has been set can be used with Cache Residency Manager. Likewise, an access attribute can be set for a volume being used by Cache Residency Manager.
• Concurrent use of LUN Manager — Available.
• Concurrent use of Volume Migration — Available. A migrated volume carries over the access attribute and retention term set by the Data Retention Utility to the migration destination volume, and releases the access attribute and retention term of the migration source (see Note below). When the access attribute is other than Read/Write, the volume cannot be specified as an S-VOL of Volume Migration.
NOTE: Figure 5-24 shows a migration performed on a volume with the Read Only attribute. When VOL0, which has the Read Only attribute, is migrated to VOL1 in RAID group 1, the Read Only attribute carries over with the data to the migration destination. VOL0 therefore remains in the Read Only state regardless of the migration; the Read Only attribute is not copied to VOL1 itself. When the migration pair is released and VOL1 is deleted from the reserved volumes, a host can read and write to VOL1.
3. You define time intervals, or retention periods, for which you want data protected.
4. You configure the Data Retention Utility to apply to volumes that contain
volatile data.
5. You enable the Data Retention Utility.
Read/Write
If a logical volume has the Read/Write attribute, open-systems hosts can
perform both read and write operations on the logical volume.
ShadowImage, SnapShot, TrueCopy, and TCE can copy data to logical volumes that have the Read/Write attribute. However, if necessary, you can prevent data from being copied to logical volumes that have the Read/Write attribute.
The Read/Write attribute is set by default for every volume.
Read Only
If a logical volume has the Read Only attribute, open-systems hosts can
perform read operations but cannot perform write operations on the
volume.
ShadowImage, SnapShot, TrueCopy, and TCE cannot copy data to volumes that have the Read Only attribute.
Protect
If a logical volume has the Protect attribute, open-systems hosts cannot access the logical volume. Open-systems hosts can perform neither read nor write operations on the volume.
ShadowImage, SnapShot, TrueCopy, and TCE cannot copy data to logical volumes that have the Protect attribute.
Invisible (Mode)
The Invisible mode can be set or reset by CCI only. When the Invisible mode is set for a volume, the Read Capacity of the volume becomes zero and the volume is hidden from the Inquiry command. The host becomes unable to access the volume; it can neither read nor write data.
ShadowImage, SnapShot, TrueCopy, and TCE cannot copy data to a volume with an attribute that is in Invisible mode.
Retention terms
When the access attribute is changed to Read Only, Protect, Read Capacity
0, or Invisible from Inquiry Command, another change to Read/Write is
prohibited for a certain period. In the Data Retention Utility, the prohibited
change period is called the Retention Term. When the Retention Term of a volume is "2,190 days," the access attribute of the volume cannot be changed for the next 2,190 days (six years).
The Retention Term is specified when the access attribute changes from Read/Write to Read Only, Protect, Read Capacity 0, or Invisible from Inquiry Command. A Retention Term that has been specified can be extended, but cannot be shortened.
When the Retention Term expires, the access attribute of a volume that is Read Only, Protect, Read Capacity 0, or Invisible from Inquiry Command can be changed to Read/Write.
NOTE: The Retention Term interval is updated only when the disk array is in the Ready status. Therefore, the Retention Term may become longer than the specified term when the disk array power is turned on and off by a user. Also, the Retention Term interval may be subject to timing errors depending on the environment.
NOTE: In the ShadowImage, TrueCopy, and TCE manuals, the term "S-
VOL" is used in place of the term "secondary volume".
NOTE: SnapShot has two types of secondary volumes: a virtual volume
(V-VOL) and an area where differential data is stored (DP pool).
Usage
This section provides notes on using Data Retention.
Unified volumes
You cannot unify logical volumes that do not have the Read/Write attribute, and you cannot dissolve a unified volume whose access attribute is not Read/Write.
Host-side application example
Uses IXOS-eCONserver.
Windows 2000
A volume with a Read Only access attribute cannot be mounted.
UNIX
When mounting a volume with a Read Only attribute, mount it as Read Only (using the mount -r command).
If a write is performed on a volume with a Read Only attribute, it may result in no response; therefore, do not issue write commands (for example, the dd command).
If a read/write is performed on a volume with a Protect attribute, it may result in no response; therefore, do not issue read or write commands (for example, the dd command).
HA Cluster Software
At times, a volume cannot be used as a resource for the HA cluster software
(such as the MSCS), because the HA cluster software periodically writes
management information in the management area to check resource
propriety.
Notes on usage
The access attribute of a volume should not be modified while an operation is being performed on the data residing on the volume; the operation may terminate abnormally.
Logical volumes for which the access attribute cannot be changed: The Data Retention Utility does not enable you to change the access attributes of the following logical volumes:
• A volume assigned to a command device
• A volume assigned to a DMLU
• An uninstalled volume
• An unformatted volume
Notes and restrictions for each operating system
• Using a volume whose access attributes have been set, from the OS:
• If access attributes are set from the OS, they must be set before mounting the volume. If the access attributes are set on the volume after it is mounted, the system may not operate properly.
• If a command (create partition, format, and so on) is issued from the operating system to a volume with access attributes, it appears as if the command ended normally. However, the information is written only to the host cache memory; the new information is not reflected in the volume.
• An OS may not recognize a volume when the volume is larger than the one on which the Invisible mode was set.
• Microsoft Windows® 2000:
• A volume with a Read Only access attribute cannot be mounted.
• Microsoft Windows Server 2003/Windows Server 2008:
• When mounting a volume with a Read Only attribute, do not use the diskpart command to mount and unmount the volume. Use the -x mount and -x umount commands of CCI.
• Using Windows® 2000/Windows Server 2003/Windows Server 2008:
• When setting a volume used by Windows® 2000/Windows Server 2003/Windows Server 2008 as a Data Retention Utility volume, the Data Retention Utility can be applied to a basic disk only. When the Data Retention Utility is applied to a dynamic disk, the volume is not correctly recognized.
• UNIX® OS:
• When mounting a volume with a Read Only attribute, mount it as Read Only (using the mount -r command).
• HP-UX®:
• If there is a volume with a Read Only attribute, host shutdown might not be possible. When shutting down the host, change the attribute of the volume from Read Only to Protect in advance.
• With a volume that has a Protect attribute, host startup time may be lengthy. When starting the host, either change the attribute of the volume from Protect to Read Only, or use mapping functions to make the volume unrecognizable from the host.
• If a write is performed on a volume with a Read Only attribute, it can result in no response; therefore, do not issue write commands (for example, the dd command).
• If a read/write operation is performed on a volume with a Protect attribute, it may result in no response; therefore, do not issue read or write commands (for example, the dd command).
• Using LVM:
• If you change the LVM configuration, including a Data Retention volume, the specified volume must be temporarily blocked by the raidvchkset -vg command. Return the volume to the status in which it is checked when the LVM configuration change is completed.
• Using HA cluster software:
• A volume to which the Data Retention Utility is applied might not be usable as a resource of the HA cluster software (such as MSCS). This is because the HA cluster software writes management information to the management area periodically to check the propriety of the resource.
Operations example
The operating procedures for the Data Retention Utility are shown in the following sections.
Initial settings
Table 5-12 indicates what chapters contain topics on initial settings.
Data Retention Utility procedures
To configure initial settings for the Data Retention Utility
1. Verify that you have the environments and requirements for Data
Retention (see Preinstallation information on page 2-2).
2. Set the command device using the CCI. Refer to documentation for more
information on the CCI.
3. Set the configuration definition file using the CCI. Refer to the
appropriate CCI end-user document (see list above).
4. Set the environment variable using the CCI. Refer to the appropriate CCI
end-user document (see list above).
Optional procedures
To configure optional operations
1. Set an attribute (see Setting S-VOLs on page 5-50).
2. Change the retention term (see Setting S-VOLs on page 5-50).
3. Set an S-VOL (see Setting S-VOLs on page 5-50).
4. Set the expiration lock (see Setting expiration locks on page 5-50).
• Capacity - Volume size
• S-VOL - Whether the volume can be set to S-VOL (Enable) or not
(Disable)
• Mode - The retention mode
• Retention Term - How long the data is retained
NOTE: When the attribute Read Only or Protect is set, the S-VOL is
disabled.
5. Select the volume and click Edit Retention. The Edit Retention screen
displays as shown in Figure 5-26.
Setting S-VOLs
To set S-VOLs
1. Select a volume, and click Edit Retention. The Edit Retention screen
displays as shown in Figure 5-27.
Setting an attribute
To set an attribute
1. Start Navigator 2.
2. Log in as a registered user to Navigator 2.
3. Select the storage system in which you will set up an attribute.
4. Click Show & Configure Array.
5. Select the Data Retention icon in the Security tree view.
6. Consider the fields and settings in the Data Retention dialog box as shown in Table 5-12.
Table 5-12: Data Retention dialog box fields and settings
• VOL — Displays the volume number.
• Retention Attribute — Displays the attribute associated with managing the data. Values: Read/Write, Read Only, Protect, Can't Guard.
• Capacity — Displays the volume capacity.
• Secondary Volume Available — Displays whether the volume can be set as an S-VOL (Enable) or is prevented from being set as an S-VOL (Disable).
• Retention Term — Displays the length of time associated with the retention. Values: Unlimited or N/A.
• Retention Mode — Displays the mode associated with retaining data. This field is for reference only. Values: Read Capacity 0 (Zero), Hiding from Inquiry Command Mode (Zero/Inv), or unspecified (N/A).
NOTE: When Read Only or Protect is set as the attribute, S-VOL will be disabled.
NOTE: The Data Retention Utility cannot shorten the Retention Term.
The retention term is the length of time that the storage system keeps the desired content. It can be either Unlimited or an integer value. If no retention term is specified, three dotted lines (---) are displayed as output.
To change the retention term
1. Select the volume, and then click Edit Retention.
The Edit Retention dialog box appears as shown in Figure 5-29.
2. Select Term or Unlimited from Retention Term. If you select Term, set
a Retention Term in years (0 to 60) and days (0 to 21,900).
A term of six years is entered by default. (A worked check of the input ranges follows this procedure.)
3. Click OK to display a confirmation message. Click Confirm and follow
the screen instructions.
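As a quick check on the ranges in step 2, note that 21,900 days equals 60 years at 365 days per year (60 × 365 = 21,900), which matches the 2,190-day (six-year) example earlier in this chapter. The Python sketch below only validates an entry against the stated ranges; combining the years and days fields into one total is an assumption for illustration, not documented array behavior.

# Minimal sketch validating an Edit Retention entry against the stated ranges.
# Combining the fields as years*365 + days is an assumption, for illustration.
def validate_retention(years: int, days: int) -> int:
    if not 0 <= years <= 60:
        raise ValueError("years must be 0-60")
    if not 0 <= days <= 21_900:
        raise ValueError("days must be 0-21,900")
    return years * 365 + days   # assumed combined term, in days

print(validate_retention(6, 0))   # the six-year default -> 2190 days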
2. Click Change Lock.
The Change Expiration Lock dialog box displays.
6
Provisioning volumes
LUN Manager manages access paths between hosts and volumes for each
port. With LUN Manager, two or more systems or operating systems (also
called host groups) may be connected to one port of a Hitachi disk array,
and volumes may be freely assigned to each host system.
With LUN Manager, illegal access to volumes from any host system may be
prevented, and each host system may safely use a disk array as if it were
connected to several storage systems.
NOTE: The term volume was previously referred to as a logical unit (LU). Most references to the term "logical unit" have been changed to the term "volume," although in some instances the earlier term persists, especially in many of the figures in this chapter. These references will be changed progressively over the next several releases of HSNM2.
You can connect additional hosts to one port, although more connections
increases traffic on the port. When you use LUN Manager, design the system
configuration appropriately to evenly distribute traffic at the port, controller,
and drive.
The following steps detail the task flow of the LUN Manager configuration
process:
1. A system administrator determines that volumes are required for
operating on a currently configured storage system in the data center.
2. Determine which protocol is being used in the storage system: either
Fibre Channel or iSCSI.
3. Configure the license for LUN Manager.
4. Log into HSNM2.
Figure 6-1 illustrates a port being shared by multiple host systems with
volumes created in the host:
Figure 6-1: Setting access paths between hosts and volumes for Fibre
Channel
While LAN switches and Network Interface Cards (NICs) are viewed in networks as equivalent nodes, some important differences exist between them in the LAN connection when you use iSCSI. Pay attention to the following:
The host I/O load affects the iSCSI response time. Expect that when the host I/O load increases, your iSCSI environment performance will degrade.
Create a backup path between the host and the iSCSI target so that the active connection can switch to another path, letting you update the firmware without stopping the system. Table 6-4 details LUN Manager iSCSI specifications.
Table 6-5 detail the acceptable combinations of operating systems and Host
Bus Adapter (HBA) iSCSI entities.
Table 6-5: Operating System (OS) and host bus adapter (HBA)
iSCSI combinations
When connecting multiple hosts to one port of the storage system, the
storage system must be designed to accommodate the following:
System design. For proper system design, ensure the following tasks have
been performed:
• Assign volumes to hosts
• Assign volumes to RAID groups
• Determine the system configuration
• Determine the method of illegal access prevention
• Determine queue depth
Identify which volumes you want to use with a host, and then define a host
group on that port for them (see Figure 6-2 on page 6-10).
NOTES: If the queue depth is increased, array traffic also increases, and host and switch traffic can increase. The formula for defining queue depth on the host side varies depending on the type of operating system or HBA. When determining the overall queue depth settings for hosts, consider the port limit.
Note that the maximum queue depth for a SAS LU is 32 and the maximum queue depth for a SATA LU is 68. A rough budgeting sketch follows.
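The exact formula depends on your OS and HBA, but a common budgeting approach is to keep the sum of all hosts' queue depths within the port's command limit. The following is a minimal sketch under that assumption; the port limit value is a hypothetical placeholder, not a documented HUS figure.

# Rough per-host queue depth budgeting under the port-limit assumption.
PORT_QUEUE_LIMIT = 512   # hypothetical per-port command limit -- check your model
MAX_QD_SAS_LU = 32       # per the note above
MAX_QD_SATA_LU = 68

def per_host_queue_depth(num_hosts: int, lu_max: int = MAX_QD_SAS_LU) -> int:
    """Divide the port budget evenly across hosts and clamp to the LU maximum."""
    return min(PORT_QUEUE_LIMIT // num_hosts, lu_max)

print(per_host_queue_depth(8))    # 8 hosts sharing one port -> 32
print(per_host_queue_depth(32))   # 32 hosts -> 16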
NOTE: We recommend that you execute any ping command tests when
there is no I/O between hosts and controllers.
Using iSCSI
The procedure flow for iSCSI below. For more information, see the Hitachi
iSCSI Resource and Planning Guide (MK-97DF8105).
To configure iSCSI
1. Verify that you have the environments and requirements for LUN
Manager (see Preinstallation information on page 2-2).
For the array:
2. Set up the iSCSI port (see iSCSI operations using LUN Manager on page
6-38).
3. Create a target (see Adding and deleting targets on page 6-43).
4. Set the iSCSI host name (see Setting the iSCSI target security on page
6-41).
To set a data input/output path, the hosts authorized for a volume must be classified into a host group, and the host group is then set to the port. For example, if a Windows host and a Linux host are connected to port A, you must create a separate host group for each operating system, containing the volumes it can access.
A host group option (host connection mode) may be set for each host group you create. Hosts connected to different ports cannot share the same host group. Even if the volume to be accessed is the same, separate host groups should be created for each port to which the hosts are connected.
Figure 6-26: Setting access paths between hosts and volumes for Fibre
Channel
NOTE: The number of ports displayed in the Host Groups and Host Group
Security windows can vary. SMS systems may display only four ports.
6. Select the port whose security you are changing, and click Change Host
Group Security.
7. In the Enable Host Group Security field, select the Yes checkbox to
enable security, or clear the checkbox to disable security.
8. Follow the on-screen instructions.
• After enabling host group security, Detected Hosts is displayed.
• The WWN of the HBA connected to the selected port is displayed in
the Detected Hosts field.
NOTE: HBA WWNs are set to each host group, and are used for identifying
hosts. When a port is connected to a host, the WWNs appear in the
Detected WWNs pane and can be added to the host group. 128 WWNs can
be assigned to a port. If you have more than 128 WWNs, delete one that is
not assigned to a host group. Occasionally, the WWNs may not appear in
the Detected WWNs pane, even though the port is connected to a host.
When this happens, manually add the WWNs (host information).
5. Click the Volumes tab. Figure 6-29 appears.
NOTE: If iSCSI target security is enabled, the iSCSI host name specified
in your iSCSI initiator software must be added to the Hosts tab in Storage
Navigator Modular 2.
1. From the iSCSI Targets screen, check the name of an iSCSI target and
click Edit Target.
2. When the Edit iSCSI Target screen appears, go to the Hosts tab and
select Enter iSCSI Name Manually.
3. When the next Edit iSCSI Target window appears, enter the iSCSI host
name in the iSCSI Host Name field of the Hosts tab.
4. Click the Add button followed by the OK button.
Adding targets
When you add targets and click Create Target without selecting a port, multiple ports are listed in the Available Ports list. This allows you to use the same settings for multiple ports and, by editing the targets afterward, to omit the procedure of creating the target separately for each port.
To create targets for each port
1. In the iSCSI Targets tab, click Create Target. The iSCSI Target
Property screen is displayed.
Note that the Hosts tab displays only when iSCSI Target Security is enabled.
NOTES: Up to 256 hosts can be assigned to a port. The total of the hosts already assigned (Selected Hosts) and the hosts yet to be assigned is 256 per port. If the number of hosts assigned to a port reaches 256 and further input is impossible, delete a host that is not assigned to a target.
In some cases, a host is not listed in the Detected Hosts list even though the port is connected to the host. When the host to be assigned to a target is not listed in the Detected Hosts list, enter and add it manually.
Depending on the HBA in use, not all targets may display when executing Discovery on the host, due to the restriction on the number of characters in the iSCSI Name.
• iSCSI Target No. — Enter a numeral from 1 through 254.
• Alias — Enter the alias of the target using 32 or fewer ASCII characters. Alphabetic characters, numerals, and the following symbols can be used: ! # $ % & ' + - . = @ ^ _ { } ( ) [ ] and the space. Leading spaces are ignored. The same name cannot be used twice in the same port.
• iSCSI Name — When entering an iSCSI Name manually, enter 223 or fewer alphanumeric characters. A period (.), hyphen (-), and colon (:) can also be used. Both the iqn and eui types are supported. (A format-check sketch follows this list.)
• iqn (iSCSI qualified name) — Consists of the type identifier "iqn", the date of domain acquisition, the domain name, and a character string given by the person who acquired the domain. Example: iqn.1994-04.jp.co.hitachi:rsd.d9b.t.00026.1a000
• eui (64-bit extended unique identifier) — Consists of the type identifier "eui" and an ASCII-coded hexadecimal EUI-64 identifier. Example: eui.0123456789abcdef
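To make the two formats concrete, the sketch below checks a candidate name against the character and length rules just described. It enforces only the rules stated here (223-character limit; alphanumerics plus period, hyphen, and colon; the iqn and eui shapes shown in the examples), not the full iSCSI naming grammar from the standards.

import re

# Patterns reflecting only the rules in the list above, not the full RFC grammar.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[0-9a-z.\-]+(:[0-9a-z.\-:]+)?$")
EUI_RE = re.compile(r"^eui\.[0-9a-fA-F]{16}$")

def is_valid_iscsi_name(name: str) -> bool:
    """Check length and shape per the requirements listed above."""
    if len(name) > 223:
        return False
    return bool(IQN_RE.match(name) or EUI_RE.match(name))

print(is_valid_iscsi_name("iqn.1994-04.jp.co.hitachi:rsd.d9b.t.00026.1a000"))  # True
print(is_valid_iscsi_name("eui.0123456789abcdef"))                             # True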
Deleting Targets
NOTE: Target 000 cannot be deleted. When deleting all the hosts and all
the Volumes in Target 000, initialize Target 000 (see section Initializing
Target 000).
To delete a target
1. Select the Target to be deleted and click Delete Target.
2. Click OK. The confirmation message appears.
3. Click Confirm. A deletion complete message appears.
4. Click Close.
Changing a nickname
To change a nickname
1. From the iSCSI Targets window, click the Hosts tab as shown in
Figure 6-45 on page 6-50.
CHAP users
CHAP is a security mechanism that one entity uses to verify the identity of
another entity, without revealing a secret password that is shared by the
two entities. In this way, CHAP prevents an unauthorized system from using
an authorized system's iSCSI name to access storage.
User authentication information can be set on the target to authorize access to the target and to increase security. A minimal sketch of the CHAP computation follows.
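CHAP as used by iSCSI follows RFC 1994: the target sends a one-octet identifier and a random challenge, and the initiator proves knowledge of the shared secret by returning MD5(identifier || secret || challenge), so the secret itself never crosses the wire. The sketch below shows that computation; the secret and values are illustrative only and are not tied to any HSNM2 screen.

import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP: MD5 over the one-octet identifier, secret, and challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Illustrative exchange: the target issues the challenge, the initiator answers,
# and the target recomputes the same digest to verify without seeing the secret.
secret = b"example-chap-secret"    # shared out of band
identifier = 1
challenge = os.urandom(16)         # random challenge from the target

initiator_answer = chap_response(identifier, secret, challenge)
target_expected = chap_response(identifier, secret, challenge)
print(initiator_answer == target_expected)   # True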
2. Click Create CHAP User. The Create CHAP User window appears as
shown in Figure 6-46 on page 6-51.
Capacity overview
Partition capacity
Capacity overview
The cache memory on a disk array is a gateway for receiving and sending data from and to a host. In the disk array, the cache memory is divided into a system control area and a user data area; the user data area is used for sending and receiving data.
You can specify the size of a partition, and you can also change a partition's segment size (the size of a unit of data management). You can therefore optimize data reception and sending to and from a host by assigning the most suitable partition to a volume according to the kind of data received from the host.
use and performance of the cache. Enables applications to have access to data in the cache, reducing content retrieval time and improving performance. Ordinarily, applications are swapped in and out of the cache.
• Volume independence - Cache division enhances the independence
between the volumes that use each cache partition and can make the
volume less affected by the condition of I/O loads on the other
volumes.
• Supported cache memory — HUS 110: 4 GB/controller; HUS 130: 8 GB/controller; HUS 150: 8 or 16 GB/controller.
• Number of partitions — HUS 110 (4 GB/controller): 2 to 7; HUS 130 (8 GB/controller): 2 to 11; HUS 150 (8 GB/controller): 2 to 15; HUS 150 (16 GB/controller): 2 to 27.
• Partition capacity — The partition capacity depends on the array model and the capacity of the cache memory installed in the controller. For more information, see Cache Partition Manager settings on page 5-15.
• Memory segment size — Master partition: fixed 16 KB. Sub partition: 4, 8, 16, 64, 256, or 512 KB. When changing the segment size, make sure you refer to Specifying Partition Capacity on page 5-16.
• Pair cache partition — The default setting is "Auto" and you can specify the partition. It is recommended that you use Load Balancing in the "Auto" mode. For more information, see Restrictions on page 5-15.
• Partition mirroring — Always On (it is always mirrored).
5. Launch Cache Partition Manager.
6. Create a series of partitions that you will map to applications.
7. Create a system of pairing that you apply to the partitions.
The following steps detail the flow of tasks for stopping Cache Partition Manager:
1. Change partition assignments so that all volumes belong to the master partitions.
2. Delete the sub-partitions.
3. Return the partition sizes of the master partitions to the default size.
4. Restart the disk array. The restart deletes the sub-partitions and validates the partition size changes.
5. Uninstall Cache Partition Manager.
Pair cache partition
The pair cache partition is the partition to which a volume is moved in Load Balancing mode. By configuring controllers as detailed in Figure 7-1, partitions can be used consistently, with partition numbers 0 and 1 for the SAS drives and partition numbers 2 and 3 for the SAS 7.2K drives, even if Load Balancing occurs.
Figure 7-1: Cache Partition Manager Task Flow
Partition capacity
The partition capacity depends on the following entities.
• User data area - The user data area depends on the array model,
controller configuration (dual or single), and the controller cache
memory. You cannot create a partition that is larger than the user data
area.
• Default partition size - The tables in the partitioning sections show
partition sizes in MB for Cache Partition Manager. When you stop using
Cache Partition Manager, you must set the partition size to the default
size. The default partition size is equal to one half of the user data area
for dual controller configurations, and the whole user data area for
single controller configurations.
• Partition size for small segments - This applies to partitions using 4 KB or 8 KB segments, and the value depends on the array model. Sizes of partitions using all 4 KB or 8 KB segments must meet specific criteria for the maximum partition size of small segments. A worked illustration of the default partition size rule above follows this list.
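As a worked illustration of the default-size rule above, the sketch below computes the default partition size from a given user data area; the 3,000 MB figure is a hypothetical placeholder, since the actual user data area for your model and cache size comes from the capacity tables that follow.

# Default partition size per the rule above: half the user data area for a
# dual-controller configuration, the whole user data area for a single one.
def default_partition_size_mb(user_data_area_mb: int, dual_controller: bool) -> int:
    return user_data_area_mb // 2 if dual_controller else user_data_area_mb

print(default_partition_size_mb(3000, dual_controller=True))    # -> 1500
print(default_partition_size_mb(3000, dual_controller=False))   # -> 3000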
Supported partition capacities
The supported partition capacity is determined by the user data area of the cache memory and the specified segment size (when the hardware revision is 0100). All units are in megabytes (MB). Table 7-3 describes the supported partition capacities for a dual controller configuration with Dynamic Provisioning disabled.
Table 7-6 details segment and stripe size combinations.
The sum of the capacities of all the partitions cannot exceed the capacity of the user data area. The maximum partition capacity above is the value obtained when the capacity of the other partition is set to the minimum, in a configuration with only the master partitions. You can calculate the residual capacity by using Navigator 2. Also, sizes of partitions using all 4 KB and 8 KB segments must be within the limits of the relational values shown in the next section.
Table 7-7: Cache Partition Manager restrictions
• Modifying settings — If you delete or add a partition, or change a partition or segment size, you must restart the array.
• Pair cache partition — The segment size of a volume's partition must be the same as that of the specified partition. When a cache partition is changed to a pair cache partition, the other partition cannot be specified as a change destination.
• Changing single or dual configurations — The configuration cannot be changed when Cache Partition Manager is enabled.
• Concurrent use of ShadowImage — When using ShadowImage, see Using ShadowImage, Dynamic Provisioning, or TCE on page 7-10.
• Concurrent use of Dynamic Provisioning — When Dynamic Provisioning is enabled, the partition status is initialized. When using Dynamic Provisioning, see Using ShadowImage, Dynamic Provisioning, or TCE on page 7-10.
• Concurrent use of a unified volume — All the default partitions of the volume must be the same partition.
Table 7-7: Cache Partition Manager restrictions (Continued)
• Volume Expansion — You cannot expand volumes while making changes with Cache Partition Manager.
• Concurrent use of RAID group Expansion — You cannot change the Cache Partition Manager configuration for volumes belonging to a RAID group that is being expanded, and you cannot expand RAID groups while making changes with Cache Partition Manager.
• Concurrent use of Cache Residency Manager — Only the master partition can be used together. The segment size of a partition to which a Cache Residency volume belongs cannot be changed.
• Concurrent use of Volume Migration — A volume that belongs to a partition cannot carry the partition over. When the migration is completed, the volume is changed to the destination partition.
• Copy of partition information by Navigator 2 — Not available. Cache partition information cannot be copied.
• Load Balancing — Load balancing is not available for volumes where there is no cache partition with the same segment size available on the destination controller.
• DP-VOLs — DP-VOLs can be assigned to a partition in the same way as normal volumes. The DP pool cannot be assigned to a partition.
NOTE: You can only make these changes when the cache is empty. Restart the array after the cache is empty.
As the number of drives in a RAID group to which volumes belong increases, the cache capacity used also increases. When the number of disk drives configuring a volume's RAID group exceeds 17 (15D+2P or more), using a partition with a capacity of at least the minimum partition capacity plus 100 MB is recommended.
Table 7-8: Partition capacity when changing segment size
You must satisfy one of the following conditions when using these features
with Cache Partition Manager to pair the volumes:
• The P-VOL and S-VOL (V-VOL in the case of Dynamic Provisioning)
belong to the master partition (partition 0 or 1).
• The volume partitions that are used as the P-VOL and S-VOL are
controlled by the same controller.
You can check the partition to which each volume belongs, and the controllers that control the partitions, in the setup window of Cache Partition Manager. Details are explained in Chapter 4. For the pair creation procedures and so forth, refer to the Hitachi ShadowImage In-system Replication User's Guide or the Hitachi Dynamic Provisioning User's Guide.
The P-VOL and S-VOL/V-VOL partitions that you want to specify as volumes must be controlled by the same controller. See page 4-17 for more information.
After creating the pair, monitor the partitions for each volume to ensure
they are controlled by the same controller.
Make sure that the cache partition information is initialized as shown below when Dynamic Provisioning is installed while Cache Partition Manager is already in use.
• All the volumes are moved to the master partitions on the side of the
default owner controller.
• All the sub-partitions are deleted and the size of each master partition is reduced to half of the user data area after installing Dynamic Provisioning.
Figure 7-3: Case where Dynamic Provisioning is installed for use with
Cache Partition Manager
Adding or reducing cache memory
You can add or reduce the cache memory used by Cache Partition Manager unless one of the following conditions applies:
• A sub-partition exists or is reserved.
• For dual controllers, the sizes of master partitions 0 and 1 differ, or the partition sizes reserved for the change differ.
Cache Partition Manager procedures
The following sections describe Cache Partition Manager settings.
When you set, delete, or change Cache Partition Manager settings while the storage system is used as the remote side of TrueCopy or TCE, the following occurs when you restart the system:
• Both paths of TrueCopy or TCE are blocked. When a path is blocked, the system generates a trap to the SNMP Agent Support function. The TrueCopy or TCE path automatically recovers from the blockage after the system restarts.
• When the pair status of TrueCopy or TCE is Paired or Synchronizing, it changes to Failure.
Initial settings
NOTE: 1. When you modify partition settings, the change is validated after the array is restarted.
2. You only have to restart the array once to validate multiple partition setting modifications.
3. To create a volume with the partition you created, determine the partition beforehand. Then, add the volume after the array is restarted and the partition is validated.
1. Change the volume partitions to the master partition.
2. Delete the sub-partitions.
3. Return the master partition sizes (#0 and #1) to their default sizes.
4. Restart the array.
5. Disable or remove Cache Partition Manager.
After making changes to cache partitions, you must restart the array.
3. In the Navigation bar, click Performance, then click Cache Partition. Figure 7-4 appears.
Before deleting a cache partition, move the volume that has been assigned to it to another partition.
To delete cache partitions
1. Start Navigator 2 and log in. The Arrays window appears.
2. Click the appropriate array.
3. In the Navigation bar, click Performance, then click Cache Partition. Figure 7-4 on page 7-15 appears.
4. Click Set. The Cache Partition dialog box appears, as shown in Figure 7-5 on page 7-15.
5. Select the cache partition number that you are deleting, and click
Delete as shown in Figure 7-6.
3. Click Show & Configure Array. The Show and Set Reservation window
displays as shown in Figure 7-7.
6. Select a volume from the volume list, and click Edit Cache Partition.
The Edit Cache Partition window displays as shown in Figure 7-9.
NOTE: The rebooting process executes after you change the settings.
NOTE: The owner controller must be different for the partition where the volume is located and the partition where the pair cache is located.
7. Select a partition number from the Pair Cache Partition drop-down list
and click OK.
8. Click Close after successfully creating the pair cache partition.
6. To change capacity, double-click the Size (x10MB) field and make the
desired change as shown in Figure 7-9.
Figure 7-11: Edit Cache Partition Property window with segment size
selection
7. To change the segment size, select segment size from the drop-down
menu to the left of Segment Size.
8. Follow the on-screen instructions.
The controller that processes the I/O of a volume is referred to as the owner
controller.
6. Select the Cache Partition number and the controller (CTL) number (0
or 1) from the drop-down menu and click OK as shown in Figure 7-12.
Figure 7-12: Edit Cache Partition Property window with new cache
partition owner controller selected
7. Follow the on-screen instructions.
8. The Automatic Pair Cache Partition Confirmation message box
displays.
Depending on the type of change you make, the setting of the pair cache
partition may be switched to Auto. Verify this by checking the setting
after restarting the storage system.
Click OK to continue. The Restart Array message is displayed. You must restart the storage system to validate the settings; however, you do not have to do it at this time. Restarting the storage system takes approximately seven to 25 minutes.
9. To restart now, click OK. Restarting the storage system takes
approximately seven to 25 minutes. To restart later, click Cancel.
Your changes will be retained and implemented the next time you restart
the array.
• All the volumes are moved to the master partitions on the side of the default owner controller.
• All the sub-partitions are deleted and the size of each master partition is reduced to half of the user data area after the installation of SnapShot, TCE, or Dynamic Provisioning.
When you set, delete, or change Cache Residency Manager settings while the storage system is used as the remote side of TrueCopy or TCE, the following occurs when you restart the system:
• Both paths of TrueCopy or TCE are blocked. When a path is blocked, the system generates a trap to the SNMP Agent Support function. The TrueCopy or TCE path automatically recovers from the blockage after the system restarts.
• When the pair status of TrueCopy or TCE is Paired or Synchronizing, it changes to Failure.
Cache Residency benefits
The following are Cache Residency Manager benefits:
The internal controller operation is the same as that for commands issued to other volumes, except that read/write commands to a volume with Cache Residency Manager can be transferred from/to cache memory without accessing the disk drives.
A delay can occur in the following cases even if Cache Residency Manager
is applied to the volumes.
1. The command execution may wait for the completion of commands
issued to other volumes.
2. The command execution may wait for the completion of commands
other than read/write commands (such as the Mode Select command)
issued to the same volume.
3. The command execution may wait for the completion of processing for
internal operation such as data reconstruction, etc.
Figure 7-13 shows how part of cache memory installed in the controller is
used for the Cache Residency Manager function. Cache memory utilizes a
battery backup on both controllers, and the data is duplicated on each
controller for safety against power failure and cache package failure.
Table 7-9: Cache Residency Specifications
Controller configuration: Dual controller configuration, and the controller is not blockaded.
RAID level: RAID 5, RAID 6, or RAID 1+0.
Cache partition: Only a volume belonging to a master partition.
Table 7-9: Cache Residency Specifications (Continued)
Number of volumes with the Cache Residency function: 1 per controller (2 per array).
Termination Conditions
The array is turned off: Normal case.
The cache capacity is changed and the available capacity of the cache memory is less than the volume size: Cache uninstallation.
A controller failure: Failure.
The battery alarm occurs: Failure.
A battery backup circuit failure: Failure.
The number of PIN data (data that cannot be written to disk drives because of failures) exceeds the threshold value: Failure.
Disabling Conditions
The Cache Residency Manager setting is cleared: Caused by the user.
The Cache Residency Manager is disabled or uninstalled (locked): Caused by the user.
The Cache Residency Manager volume or RAID group is deleted: Caused by the user.
The controller configuration is changed (Dual/Single): Caused by the user.
NOTE: When the controller configuration is changed from single to dual
after setting up the Cache Residency volume, the Cache Residency volume
is cancelled. You can open the Cache Residency Manager in single
configuration, but neither setup nor operation can be performed.
Equipment
Controller configuration: Dual controller configuration, and the controller is not blockaded.
RAID level: RAID 5, RAID 6, or RAID 1+0.
Cache partition: Only a volume belonging to a master partition.
Number of volumes with the Cache Residency function: 1 per controller (2 per array).
Volume Capacity
The maximum size of the Cache Residency Manager volume depends on the cache memory. Note that the Cache Residency volume can only be assigned to a master partition.
The capacity varies with Cache Partition Manager and SnapShot or TCE. There are three scenarios:
• Cache Partition Manager and Dynamic Provisioning are disabled.
• Cache Partition Manager is disabled and Dynamic Provisioning is enabled.
• Cache Partition Manager is enabled (whether or not Dynamic Provisioning is valid).
Table 7-13 details supported capacity for Cache Residency Volume where
Cache Partition Manager is disabled and Dynamic Provisioning is enabled.
Table 7-13: Supported capacity of Cache Residency Volume (Cache Partition Manager is disabled and Dynamic Provisioning is enabled)
Array Model | Installed Cache Memory | Maximum Capacity of Cache Residency Volume
HUS 110 | 4 GB/CTL | 806,400 blocks (approx. 393 MB)
HUS 130 | 8 GB/CTL | 3,245,760 blocks (approx. 1,584 MB)
HUS 150 | 8 GB/CTL | 2,116,800 blocks (approx. 1,033 MB)
HUS 150 | 16 GB/CTL | 8,789,760 blocks (approx. 4,291 MB)
NOTE: 1. The size becomes effective the next time you start the array. Use the smaller of this value and the master partition size in a formula.
NOTE: 2. One (1) block = 512 bytes, and a fraction less than 2,047 MB is omitted.
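The approximate megabyte figures in Table 7-13 follow from Note 2: one block is 512 bytes and the fraction of a megabyte is omitted. A quick, illustrative Python check of the table values:

BLOCK_BYTES = 512  # per Note 2

def blocks_to_mb(blocks):
    # Integer division drops the fraction, matching the table's rounding.
    return (blocks * BLOCK_BYTES) // (1024 * 1024)

for blocks in (806400, 3245760, 2116800, 8789760):
    print(blocks, "->", blocks_to_mb(blocks), "MB")
# 806400 -> 393 MB, 3245760 -> 1584 MB, 2116800 -> 1033 MB, 8789760 -> 4291 MB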
Restrictions
Table 7-16 details Cache Residency Manager restrictions.
Concurrent use of SnapShot: Cache Residency Manager and SnapShot can be used together at the same time, but the volume specified for Cache Residency Manager (volume cache residence) cannot be set to a P-VOL or V-VOL.
Concurrent use of Cache Partition Manager: You cannot change a partition affiliated with the Cache Residency volume. After you cancel the Cache Residency volume, you must set it up again.
Concurrent use of Volume Migration: The Cache Residency Manager volume (volume cache residence) cannot be set to a P-VOL or S-VOL. After you cancel the Cache Residency volume, you must set it up again.
Concurrent use of Power Saving: A RAID group volume that has powered down can be specified as the Cache Residency volume. However, if a host accesses a Cache Residency RAID group volume that has powered down, an error occurs.
Concurrent use of TCE: The volume specified for Cache Residency Manager (volume cache residence) cannot be set to a P-VOL or S-VOL.
Initial settings
To configure initial settings
1. Verify that you have the environments and requirements for Cache
Residency Manager (see Preinstallation information on page 2-2).
2. Set the Cache Residency Manager (see Setting and canceling residency
volumes on page 7-29).
4. Click Change Residency. The Change Residency screen displays as
shown in Figure 7-15.
• The NAS unit is in operation (*2).
• A failure has not occurred on the NAS unit (*3).
• Confirm with the storage system administrator whether the NAS unit is connected.
• Confirm with the NAS unit administrator whether the NAS service is operating.
• Ask the NAS unit administrator to check whether a failure has occurred by checking with the NAS administration software, NAS Manager GUI, List of RAS Information, etc. In case of failure, execute the maintenance operation together with the NAS maintenance personnel.
• When the NAS unit is connected:
If the NAS unit is connected, ask the NAS unit administrator to terminate the NAS OS and perform a planned shutdown of the NAS unit.
• Points to be checked after completing this operation:
Ask the NAS unit administrator to reboot the NAS unit. After rebooting, ask the NAS unit administrator to refer to "Recovering from FC path errors" in the Hitachi NAS Manager User's Guide, check the status of the Fibre Channel path, and recover the FC path if it is in a failure status.
In addition, if there are NAS unit maintenance personnel, ask them to reboot the NAS unit.
8
Performance Monitor
Monitoring features
• Graphing utility - Performance Monitor provides a mechanism to
create graphs that represent activity that occurs using a specific system
trend or event as a criterion. An example of a trend that you can
generate a graph from is CPU usage.
• Flexible data collecting criteria - Performance Monitor enables you
to change data collecting criteria like interval time and using
combinations of criteria objects.
• Multiple output types - Performance Monitor enables you to display
monitored data in various forms in addition to a graph, including bar
and pie charts.
• Tree view - Performance Monitor provides its own menuing system in
the form of a navigation tree called a Tree View. The various items you
can display in the Tree View include volumes, data pools, and ports.
• Collection status utility - Performance Monitor provides a mechanism where data generated by the monitor displays according to the Change Measurement Items utility. It provides a status of the current snapshot of the trend or event.
• Ability to save monitored data - Performance Monitor enables you to
save data generated through monitoring sessions by exporting it to
various file types.
Monitoring benefits
The following are benefits of the Performance Monitor system.
• Adjustment elimination - Eliminates ongoing adjustment of storage
system and storage network.
• Rapid diagnosis - Enables users to more rapidly diagnose performance capabilities of host-based systems and applications.
• Increased efficiency - Enables increased efficiency by locating and recommending solutions to impasses in the storage system and SAN performance. Decreases problem determination time and diagnostic analysis.
Information: Acquires array performance and resource utilization.
Graphic display: Information is displayed with line graphs. The displayed information can be near-real time.
Information output: The information can be output to a CSV file.
Management PC disk capacity: Navigator 2 creates a temporary file in the directory where it is installed to store the monitor output data. A maximum disk capacity of 2.4 GB is required.
Note that these limitations are measured during normal operation when
hardware failures have not occurred.
Graph Item: The objects of information acquisition and graphic display appear as icons. When you click a radio button, details of the icon display in the Detailed Graph Item.
Detailed Graph Item: Details of items selected in the Graph Item display. The most recent performance information of each item displays for the array configuration and the defined configuration.
Graph Item Information: Specify items to be graphically displayed by selecting them from the listed items. Items to be displayed are determined according to the selection made in the Graph Item.
Interval Time: Specify an interval for acquiring the information, in units of minutes, within a range from one minute to 23 hours and 59 minutes. The default interval is one minute. At the specified interval time, data for a maximum of 1,440 samples can be stored. Beyond 1,440 samples, the oldest data is overwritten.
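The overwrite-from-oldest behavior described for Interval Time is a simple ring buffer. A minimal Python sketch of the retention rule, for illustration only (the monitor itself implements this internally):

from collections import deque

MAX_SAMPLES = 1440  # oldest data is overwritten beyond 1,440 samples

samples = deque(maxlen=MAX_SAMPLES)
for value in range(2000):   # pretend 2,000 intervals were collected
    samples.append(value)   # a full deque silently drops the oldest entry

print(len(samples))   # 1440
print(samples[0])     # 560 -- the oldest surviving sample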
Initial settings
To configure initial settings
1. Verify that you have the environments and requirements for
Performance Monitor (see Preinstallation information on page 2-2).
2. Collect the performance monitoring data (see Obtaining information on
page 8-8).
Optional operations
1. Use the graphic displays (see Using graphic displays on page 8-8).
2. Output the performance monitor information to a file.
3. Optimize the performance (see Performance troubleshooting on page 9-6).
Obtaining information
The information is obtained for each controller.
To obtain information for each controller
1. Start Navigator 2 and log in. The Arrays window opens.
2. Click the appropriate array.
3. Click Performance and click Monitoring. The Monitor - Performance
Measurement Items window displays.
4. Click Show Graph.
5. Specify the interval time.
6. Select the items (up to 8) that you want to appear in the graph.
7. Click Start. When the interval elapses, the graph appears.
Note that procedures in this guide frequently refer to the Tree View as a list,
for example, the Volume Migration list.
Table 8-7 and Table 8-8 detail items in the Volume, Cache, and Processor objects.
For a cache hit on a write command, the command performs a write-after operation, responding to the host with status when the write to cache memory completes. Because of this response type, two cases exist where a write to cache memory is viewed by the application as either a hit or a miss:
• A case where the write to cache memory is performed immediately is defined as a hit.
• A case where the write to cache memory is delayed because of heavy cache memory use is defined as a miss.
Table 8-11 details Y axis values for the RAID Groups DP Pools item.
Once you have exported content to CSV files, the files take default filenames, each with a .CSV extension. The following tables detail the filenames for each object type.
Table 8-15 lists filenames for the Port object.
Table 8-16 details CSV filenames for list items for RAID Groups and DP Pool
objects.
Table 8-17 details CSV filenames for list items associated with Volumes and
Processor objects.
Table 8-18 details CSV filenames for list items associated with Cache, Drive,
and Drive Operation objects.
Port Information: Displays information about the port.
RAID Group, DP VOL and Volume Information: Displays information about RAID groups, Dynamic Provisioning pools, and volumes.
Cache Information: Displays information about cache on the storage system.
Processor Information: Displays information about the storage system processor.
Drive Information: Displays information about the administrative state of the storage system disk drive.
Drive Operation Information: Displays information about the operation of the storage system disk drive.
Back-end Information: Displays information about the back-end of the storage system.
Management Area Information: Displays cache hit rates and access count of management data in stored drives acquired by the array. This information is used only for acquiring performance data and cannot be graphed.
Controller imbalance
The controller load information can be obtained from the processor
operation rate and its cache use rate.
The volume load can be obtained from the I/O and transfer rate of each
volume.
When the loads between controllers differ considerably, the array disperses
the loads (load balancing). However, when this does not work, change the
volume by using the tuning parameters.
Port imbalance
The port load in the array can be obtained from the I/O and transfer rate of
each port.
If the loads between ports differ considerably, transfer the volume that
belongs to the port with the largest load, to a port with a smaller load.
SNMP overview
Supported configurations
Operational guidelines
MIBs
Additional resources
The SNMP agent provided for the HUS systems is designed to provide SAN information to MIB browsers that support SNMP v1.x. Using Hitachi SNMP Agent Support, you can monitor inventory, configuration, service indicators, and environmental and fault reporting on Hitachi modular storage arrays using SNMP network management systems such as IBM Tivoli, CA Unicenter, and HP OpenView.
SNMP features
• Availability of MIBs - All SNMP-compliant devices include a specific
text file called a Management Information Base (MIB). A MIB is a
collection of hierarchically organized information that defines what
specific data can be collected from that particular device.
• Common language of network monitoring - SNMP (Simple Network
Management Protocol) is the common language of network
monitoring–it is integrated into most network infrastructure devices
today, and many network management tools include the ability to pull
and receive SNMP information.
• Data collection services - SNMP extends network visibility into
network-attached devices by providing data collection services useful to
any administrator. These devices include switches and routers as well
as servers and printers. The following information is designed to give
the reader a general understanding of what SNMP is, the benefits of
SNMP, and the proper usage of SNMP as part of a complete network
monitoring and management solution.
• Standard application layer protocol - The Simple Network
Management Protocol (SNMP) is a standard application layer protocol
(defined by RFC 1157) that allows a management station (the software
that collects SNMP information) to poll agents running on network
devices for specific pieces of information. What the agents report is
dependent on the device. For example, if the agent is running on a
server, it might report the server’s processor utilization and memory
usage. If the agent is running on a router, it could report statistics such as interface utilization, priority queue levels, and congestion notifications.
SNMP benefits
The following are SNMP benefits:
• Distributed model of management - Enables a centralized, distributed way to manage nodes on a network across multiple domains. This provides an efficient way to manage devices, where one administrator can have visibility into many locations.
• System portability - Enables portability, allowing other vendors to develop applications for the main platform.
• Industry-wide common compliance - SNMP delivers management
information in a common, non-proprietary manner, making it easy for
an administrator to manage devices from different vendors using the
same tools and interface. Its power is in the fact that it is a standard:
one SNMP-compliant management station can communicate with
agents from multiple vendors, and do so simultaneously. Illustration 1
shows a sample SNMP management station screen displaying key
network statistics.
• Data transparency - The type of data that can be acquired is
transparent. For example, when using a protocol analyzer to monitor
network traffic from a switch's SPAN or mirror port, physical layer
errors are invisible. This is because switches do not forward error
packets to either the original destination port or to the analysis port.
After the manager receives the event, the manager displays it and can
choose to take an action based on the event. For instance, the manager can
poll the agent directly, or poll other associated device agents to get a better
understanding of the event.
SNMP versions
Like other Internet standards, SNMP is defined by a number of Requests for
Comments (RFCs) published by the Internet Engineering Task Force (IETF).
The MIB is a tree-like data dictionary used to assemble and interpret SNMP
messages. The manager accesses the MIB content using Get and Set
operations.
Figure 9-2 shows an example of the Hitachi SNMP Agent Support MIB-II
hierarchy that defines all OIDs residing below the series of integers
beginning with 1.3.6.1.2.1.
The SNMP manager sends a Set to change a managed object to a new value. The agent's GetResponse message confirms the change if allowed, or returns an error indication as to why the change cannot be made.
The agent sends a Trap when a specific event occurs. The Trap message
allows the agent to spontaneously inform the manager about an important
event.
Figure 9-3 shows the core PDUs that the SNMP Agent Support Function
supports and Table 9-1 on page 9-7 summarizes them.
PDU | Description
GetRequest | A manager-to-agent request to retrieve the value of a MIB object. A Response with current values is returned.
GetResponse | If an error in a request from the SNMP manager is detected, the storage array sends a GetResponse to the manager, together with the error status, as shown in Table 9-2 on page 9-8.
GetNextRequest | A manager-to-agent request to discover available MIB objects continuously. The entire MIB of an agent can be walked by iterative application of GetNextRequest, starting at OID 0.
GetNextResponse | SNMP agent response to a GetNextRequest operation.
Trap | An asynchronous notification from the agent to the manager. If an event occurs, the agent sends a Trap to the manager, regardless of the SNMP manager's request. A trap notifies the manager about status changes and error conditions that may not be able to wait until the next interrogation cycle. The SNMP Agent Support Function supports standard and extended traps (see SNMP traps on page 9-8).
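Any SNMP v1-capable manager can poll the array with these PDUs. As an illustration only, not a documented Hitachi tool, the following minimal Python sketch issues a GetRequest using the third-party pysnmp library; the array address, community name, and OID are placeholder assumptions:

from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                          UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity)

# mpModel=0 selects SNMP v1, which the agent supports.
iterator = getCmd(
    SnmpEngine(),
    CommunityData('tagmastore', mpModel=0),    # placeholder community
    UdpTransportTarget(('192.0.2.10', 161)),   # placeholder array address
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')))  # sysDescr.0 (MIB-II)

error_indication, error_status, error_index, var_binds = next(iterator)
if error_indication:           # e.g., a timeout; the array does not respond
    print(error_indication)    # to an incorrect community name
elif error_status:
    print(error_status.prettyPrint())
else:
    for var_bind in var_binds:
        print(' = '.join(x.prettyPrint() for x in var_bind))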
If the following errors are detected in the SNMP manager's request, the
Hitachi modular storage array does not respond.
• The community name does not match the setting. The array does not
respond and sends the standard trap Authentication Failure (incorrect
community name) to the manager.
• The SNMP request message exceeds 484 bytes. The array cannot send
or receive SNMP messages larger than 484 bytes, and does not
respond to received SNMP messages that exceed this limit.
SNMP traps
Traps are the method an agent uses to report important, unsolicited
information to a manager. Trap responses are not defined in SNMP v1, so
each managed element must have one or more trap receivers defined for
the trap to be effective.
The SNMP Agent Support Function reports SNMP v1 standard traps and
SNMP v2 extended traps. The following list shows the standard traps that
are supported.
• Start up SNMP Agent Support Function (when installing or enabling
SNMP Agent Support Function)
• Changing SNMP Agent Support Function setting
• Incorrect community name when acquiring MIB information
Figure 9-4 shows an example of an SNMP trap within the Hitachi modular
storage array. For more information, see SNMP traps on page 9-8.
Legend:
1: Depending on the contents of the failure, this trap might not be reported.
2: If a controller blockage occurs, the storage array issues Traps that show
the blockage. The controller blockage may recover automatically,
depending on the cause of the failure.
3: The Trap that shows the warning status of the storage array may be
issued via preventive maintenance, periodic part replacement, or field
work conducted by Hitachi service personnel.
Supported configurations
The SNMP Agent Support Function can be used in two configurations.
• Direct-connect — where a local computer or workstation acting as an
SNMP manager is directly connected to the Hitachi modular storage
array being managed within a private Local Area Network (LAN).
Figure 9-5 shows an example of this configuration.
• Public network — where gateways allow a remote computer or
workstation acting as an SNMP manager to connect to the Hitachi
modular storage array being managed. Figure 9-6 shows an example of
this configuration.
License key
The SNMP Agent Support Function requires a license key before it can be
used. To obtain the required license key, please contact your Hitachi
representative.
NOTE: Hitachi SNMP Agent Support can also be installed from a command
line. Refer to the Hitachi Unified Storage Command Line Interface
Reference Guide.
1. Start Navigator 2 and log in as a registered user.
2. From the Arrays page, check the check box in the left column that
corresponds to the Hitachi modular storage array on which you want to
install the SNMP Agent Support Function.
3. At the bottom of the page, click Show & Configure Array.
4. Under Common Array Task, click the Install License icon.
This completes the procedure for installing Hitachi SNMP Agent Support.
Proceed to Hitachi SNMP Agent Support procedures, below, to confirm that
Hitachi SNMP Agent Support is enabled.
NOTE: Hitachi modular storage arrays with dual controllers require only
one operating environment file and one storage array name file. You cannot
have separate environment information files for each controller.
4. Using Navigator 2, take the SNMP environment information file created
in step 3 and register it with the storage array. See Registering the SNMP
environment information on page 9-18.
The operating environment file Config.txt is a text file you create using a
text editor such as Notepad or WordPad. Figure 9-8 and Figure 9-9 show
examples of this file using different IP addressing methods. Instructions for
creating this file appear after the figures.
COMMUNITY tagmastore
ALLOW ALL OPERATIONS
MANAGER 123.45.67.89
SEND ALL TRAPS TO PORT 162
WITH COMMUNITY "HITACHI DF800"
MANAGER 123.45.67.90
SEND ALL TRAPS TO PORT 162
WITH COMMUNITY "HITACHI DF800"
COMMUNITY tagmastore
ALLOW ALL OPERATIONS
MANAGER 123.45.67.89
SEND ALL TRAPS TO PORT 162
WITH COMMUNITY "HITACHI DF800"
MANAGER 2001::1::20a:87ff:fec6:1928
SEND ALL TRAPS TO PORT 162
WITH COMMUNITY "HITACHI DF800"
2. Add a sysLocation value by adding a line beginning with INITIAL. See the following example and the description in Table 9-3 on page 9-17.
INITIAL sysLocation user set information
When entering the information in steps 1 and 2:
• Do not exceed 255 alphanumeric characters.
• To add special characters, such as a space, tab, hyphen, or quotation mark, enclose them in double quotation marks (for example, "-").
• Do not type line feeds when entering this information.
3. Below the sysContact value, add a line beginning with COMMUNITY to specify the community name with which the disk array allows receiving of requests. See the following example and the description in Table 9-3 on page 9-17.
The storage array name file named Name.txt is a text file you create using
a text editor such as Notepad or WordPad. Table 9-4 lists the contents of
this file.
Table 9-4: Storage array name file
After you register the SNMP information in the Hitachi modular storage
array, refer to that information.
If the results of steps 1 and 2 succeed, all SNMP managers can communicate with the array via SNMP.
3. Gather dfRegressionStatus. For example, a returned value of
dfRegressionStatus = 69
indicates that a failure (drive blockade) is detected.
Operational guidelines
When using SNMP Agent Support Function, observe the following
guidelines:
• Like other SNMP applications, SNMP Agent Support Function uses the
UDP protocol. UDP might prevent error traps from being reported
properly to the SNMP manager. Therefore, it is recommended that the
SNMP manager acquire MIB information periodically.
• If the interval for collecting MIB information is set too short, it can
adversely impact the Hitachi modular storage array’s performance.
• If failures occur in a Hitachi modular storage array after the SNMP
manager starts, the failures are not reported with a trap. In this case,
acquire the MIB objects dfRegressionStatus after starting the SNMP
manager and check whether failures occur.
• The SNMP Agent Support Function stops if the controller is blocked and
the SNMP managers receive no response.
LEGEND:
YES = GET and TRAP are possible. Drive blockages and occurrences detected by the other controller in a dual-controller configuration are excluded.
* = A trap is reported only for its own controller blockade (drive extraction is not included) detected by its own controller.
Generic Trap Code | Trap | Description | Supported?
0 | coldStart | Reset from power-off (P/S on). The SNMP agent started online. | YES
1 | warmStart | Management module restarted. The SNMP information file was reset. | YES
2 | linkDown | Link goes down. | NO
3 | linkUp | Link goes up. | NO
4 | authenticationFailure | Illegal SNMP access. | YES
5 | egpNeighborLoss | EGP error is detected. | NO
6 | enterpriseSpecific | Enterprise extended trap. | YES
Trap Code | Trap | Meaning
1 | systemDown | Array down occurred. If a controller is blocked, the array issues TRAPs that show the blockage. The array may recover from a controller blockade automatically, depending on the cause of the failure.
2 | driveFailure | Drive blocking occurred.
3 | fanFailure | Fan failure occurred.
4 | powerSupplyFailure | Power supply failure occurred.
5 | batteryFailure | Battery failure occurred.
6 | cacheFailure | Cache memory failure occurred.
7 | upsFailure | UPS failure occurred.
10 | otherControllerFailure | Other controller failure occurred. If a controller is blocked, the array issues TRAPs that show the blockage. The array may recover from a controller blockade automatically, depending on the cause of the failure.
11 | warning | Warning occurred. The array warning status can be set automatically in the warning information via preventive maintenance, periodic part replacement, or field work conducted by Hitachi service personnel.
12 | SpareDriveFailure | Spare drive failure occurred.
254 | hostIoModuleFailure | Host I/O module failure occurred.
255 | driveIoModuleFailure | Drive I/O module failure occurred.
256 | managementModuleFailure | Management module failure occurred.
257 | recoverableControllerFailure | Recoverable CTL alarm by the maintenance procedures of the blocked component.
300 | psueShadowImage | Failure occurred [ShadowImage].
301 | psueSnapShot | Failure occurred [SnapShot].
302 | psueTrueCopy | Failure occurred [TrueCopy].
303 | psueTrueCopyExtendedDistance | Failure occurred [TrueCopy Extended Distance].
304 | psueModularVolumeMigration | Failure occurred [Modular Volume Migration].
307 | cycleTimeThresholdOver | Cycle time threshold over occurred.
308 | luFailure | Data pool has no free space.
310 | dpPoolEarlyAlert | DP Pool consumed capacity early alert.
311 | dpPoolDepletionAlert | DP Pool consumed capacity depletion alert.
312 | dpPoolCapacityOver | DP Pool consumed capacity over.
313 | overProvisioningWarningThreshold | Over Provisioning Warning Threshold.
314 | overProvisioningLimitThreshold | Over Provisioning Limit Threshold.
319 | replicationDepletionAlert | Over replication depletion alert threshold.
320 | replicationDataReleased | Over replication data released threshold.
321 | ssdWriteCountEarlyAlert | SSD write count early alert.
322 | ssdWriteCountExceedThreshold | SSD write count exceeds threshold.
323 | sideCardFailure | Side Card failure occurred.
MIB installation
This section provides installation specifications for MIBs supported by
Hitachi modular storage arrays. The following conventions are used in this
section:
• Standard = the standard shown on the subject standard document.
• Content = the content of the subject extended MIB.
• Access = shows whether the item is read/write (RW), read only (R), or not accessible (N/A).
• Installation = the specifications for mounting the subject MIB in the array.
• Supported status = can be YES, Partially, or NO.
[Installation] 100000000
at group
icmp group
tcp group
udp group
egp group
dfSystemParameter group
The bits are assigned as follows (bit 7 down to bit 0 within each byte):
Byte 0: 0, I/F board, 0, Host connector, 0, 0, 0, Cache
Byte 1: Management Module, Host Module, 0, Fan, BK, 0, PS, Battery
Byte 2: False CTL, Drive Module, 0, 0, 0, Path, 0, UPS
Byte 3: CTL, Warning, 0, 0, ENC, D-Drive, S-Drive, Drive
When no part is regressed, all bits of bytes 0 through 2 are 0.
A subject bit is "on" if the corresponding part is in the regressed state. This value can be fixed as "0," depending on the array type and the firmware revision. Table 9-18 shows this object value for each failure status.
If two or more components fail, the object value is the sum of the individual object values.
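Because failed components simply add their bit values, decoding a returned value is plain bit testing. A minimal Python sketch, using byte 0's assignments above purely as an illustrative map (the exact layout can vary by array type and firmware revision):

# Illustrative map of byte 0 only; bit positions follow the table above.
BYTE0_BITS = {6: "I/F board", 4: "Host connector", 0: "Cache"}

def decode_byte0(value):
    # Return the names of regressed parts encoded in a byte-0 value.
    return [name for bit, name in BYTE0_BITS.items() if value & (1 << bit)]

# Two failed components add (OR) their bit values together:
print(decode_byte0((1 << 6) | (1 << 0)))   # ['I/F board', 'Cache']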
dfCommandExecutionCondition group
• HUS110: 0 to 2,047
• Other HUS130/HUS150 models: 0 to 4,095
dfPort group
[Installation] Ditto.
[Installation] Ditto. (1 to 4)
Port Number | Controller Number | Fibre
0 | 0 | 0A
1 | 0 | 0B
2 | 0 | 0C
3 | 0 | 0D
4 | 0 | 0E
5 | 0 | 0F
6 | 0 | 0G
7 | 0 | 0H
8 | 1 | 1A
9 | 1 | 1B
10 | 1 | 1C
11 | 1 | 1D
12 | 1 | 1E
13 | 1 | 1F
14 | 1 | 1G
15 | 1 | 1H
Port types
For ports that are not applicable, None is set.
Value | Meaning
1 | Fabric (on) & FCAL
2 | Fabric (off) & FCAL
3 | Fabric (on) & Point to Point
4 | Fabric (off) & Point to Point
5 | Not Fibre
Port Number | Controller Number | Fibre
0 | 0 | *0A*
1 | 0 | *0B*
2 | 0 | *0C*
3 | 0 | *0D*
4 | 0 | *0E*
5 | 0 | *0F*
6 | 0 | *0G*
7 | 0 | *0H*
8 | 1 | *1A*
9 | 1 | *1B*
10 | 1 | *1C*
11 | 1 | *1D*
12 | 1 | *1E*
13 | 1 | *1F*
14 | 1 | *1G*
15 | 1 | *1H*
Port WWN
dfCommandExecutionInternalCondition group
dfCommandExecutionInternalCondition OBJECT IDENTIFIER ::= {dfraidLanExMib 7}
[Installation] Same as above (refer to the lower hierarchical level)
1.1.1 dfInternalLun {dfCommandEntry 1}, R: [Content] Volume number (index). YES
[Installation] Same as above.
• HUS110: 0 to 2,047
• Other HUS130/HUS150 models: 0 to 4,095
1.1.2 dfInternalReadCommandNumber {dfCommandEntry 2}, R: [Content] Number of read command receptions. YES
[Installation] Same as above.
1.1.3 dfInternalReadHitNumber {dfCommandEntry 3}, R: [Content] Number of cache read hits. YES
[Installation] Number of read commands whose host request range completely hits that of the cache.
1.1.4 dfInternalReadHitRate {dfCommandEntry 4}, R: [Content] Cache read hit rate (%). YES
[Installation] (Number of cache read hits / Number of read command receptions) x 100.
1.1.5 dfInternalWriteCommandNumber {dfCommandEntry 5}, R: [Content] Number of write command receptions. YES
[Installation] Same as above.
1.1.6 dfInternalWriteHitNumber {dfCommandEntry 6}, R: [Content] Number of cache write hits. YES
[Installation] Number of write commands that were not restricted to write data (not made to wait for writing data) in cache by the dirty threshold value manager.
1.1.7 dfInternalWriteHitRate {dfCommandEntry 7}, R: [Content] Cache write hit rate (%). YES
[Installation] (Number of cache write hits / Number of write command receptions) x 100.
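The two hit-rate objects are straight percentages of the corresponding counters. A small Python sketch of the documented formulas; the sample counter values are made up:

def hit_rate(hits, commands):
    # (Number of cache hits / Number of command receptions) x 100
    return 100.0 * hits / commands if commands else 0.0

# e.g., values polled from dfInternalReadHitNumber and
# dfInternalReadCommandNumber:
print(hit_rate(4210, 5000))   # 84.2 (% cache read hit rate)
print(hit_rate(900, 1000))    # 90.0 (% cache write hit rate)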
Additional resources
For more information about SNMP, refer to the following resources and to
the IETF Web site http://www.ietf.org/rfc.html.
SNMP Version 1
• RFC 1155 – structure and identification of management information for TCP/IP-based internets.
• RFC 1157 – simple protocol by which management information for a network element can be inspected or altered by logically remote users.
• RFC 1212 – format for producing MIB modules.
• RFC 1213 – v2 of MIB-II for network management of TCP/IP-based internets.
• RFC 1215 – the TRAP-TYPE macro for defining traps for use with experimental MIBs.
SNMP Version 2
• RFC 2578 – adapted subset of OSI's Abstract Syntax Notation One,
ASN.1 (1988) and associated administrative values.
• RFC 2579 – initial set of textual conventions available to all MIB
modules.
• RFC 2580 – notation used to define the acceptable lower-bounds of
implementation, along with the actual level of implementation
achieved.
• RFC 3416 – syntax and elements of procedure for sending, receiving,
and processing SNMP PDUs.
SNMP Version 3
• RFC 3410 – overview of SNMP v3.
Virtualization overview
Virtualization overview
Most data centers use less than 15 percent of available compute, storage, and memory capacity. By underutilizing these resources, companies deploy
more servers than necessary to perform a given amount of work. Additional
servers increase costs and create a more complex and disparate
environment that can be difficult to manage.
Tiered storage designs are a natural fit for both the enterprise Hitachi
Universal Storage Platform™ family and the midrange Hitachi Adaptable
Modular Storage systems with their ability to support a mix of drive types,
sizes and speeds along with advanced RAID options. Solutions based
around a Universal Storage Platform add the ability to virtualize both
internal and external heterogeneous storage into a single pool with well
defined tiers and the ability to transparently move data at will between
them.
Virtualization features
The following are Virtualization features:
• Premium storage reserved for critical applications - Deploy
premium storage for critical applications and data that need premium
storage services
• Cost prioritization model - Assign lower cost, relatively slower
storage for less critical data (like backups or archived data)
• Data portability - Move data across tiers as needed to meet
application and business requirements
You begin the process of virtualization using the Hitachi Virtual Storage Platform. Figure 10-1 details the approach to virtualization.
Virtualization benefits
The following are Virtualization benefits:
• Basic task improvement - Improves backup, recovery and archiving;
utilization and availability.
• Transparency - Allows seamless transparent data volume movement
among any storage systems attached to a Virtual Storage Platform
• Data volume portability - Enables movement of data volumes
between custom storage tiers without requiring administrators to pause
or halt applications
• Complexity reduction - Masks the underlying complexity of tiered
storage data migration and does not require the administrator to
master the operation of complex storage analysis
• Cost and efficiency – You can't keep throwing more storage as point
solutions for each user or business need. You need to balance high
business demands with low budgets, contain costs, and "do more with
less". Virtualization helps you reclaim, utilize and optimize your storage
assets.
• Data and technology management – You have more and more data
to manage, and you're dealing with a multi-vendor environment as a
result of data growth and business change. It's time to rein in all those
assets and manage them to drive your business.
• Improve customer service – You're under pressure to meet SLAs,
align IT with business strategy, and support users and customers.
Virtualizing enables you to deliver storage in right-sized, right-
performing slices—slices of what you have now, but weren't maximizing
before.
• Stay competitive – Business is always looking for ways to be better,
faster, cheaper. Hitachi Storage Virtualization increases business agility
and lets you do more with less so that you can ramp up fast to meet
changing business needs.
• Enhance performance – The best way you can support your users
and customers is to improve speed and access to their data.
Virtualizing gives new life to your existing infrastructure because it lets
you optimize all your multi-vendor storage and match storage to
application requirements.
For example, consider an email system. A small amount of "Primary Tier" storage can be configured for performance, but the bulk of the storage for the mailboxes themselves can be mapped to the less expensive but still performing "Lower Cost" tier for business data. A small amount of storage space is also mapped in from "Less Critical" for development purposes. With stringent retention policies and an expanding amount of emails with large attachments, a large amount of "Archive" tier storage is needed.
The NAS Head File and Print functions need some "Primary Tier" storage for
several critical image processing applications. However, the bulk is file
sharing used for shared directories within the company and print spooling
and can use inexpensive "Low Critical" tier.
Additionally, the company's web server uses the "Lower Cost" Tier for
business data for the core set of often accessed pages. The bulk of what is
online is infrequently accessed and can be kept on "Less Critical" storage.
Storage Options
Now that we have designed our tiers from a requirements standpoint, how
do you configure a system to match? There are a variety of ways to
configure tiered storage architectures.
You can dedicate specific storage systems for each tier, or you can use
different types of storage within a storage system for an "in-the-box" tiered
storage system. The Hitachi best practice is to use the virtualization
capabilities of the Hitachi Virtual Storage Platform (VSP) and the Hitachi
Universal Storage Platform (USP) family to eliminate the inflexible nature of
dedicated tiered storage silos and seamlessly combine both. This allows for
the best overall solution possible.
For example, for the highest tier you could start with a VSP configured with
Fibre Channel drives and a high performance RAID configuration. Here the
highest levels of performance and availability for mission critical
applications are required. As a second tier you could add the USP with Fibre
Channel drives, which are configured at a RAID level that is more cost-
effective and still highly reliable but with a little less performance.
The Hitachi HUS systems are the only midrange storage systems with the
Hitachi Dynamic Load Balancing Controller that provide integrated,
automated hardware-based front to back end I/O load balancing. This
eliminates many complex and time-consuming tasks that storage
administrators typically face.
This type of approach ensures that I/O traffic to back-end disk devices is dynamically managed, balanced, and shared equally across both controllers.
The point-to-point backend design virtually eliminates I/O transfer delays
and contention associated with Fibre Channel arbitration and provides
significantly higher bandwidth and I/O concurrency.
Figure 10-2: View of a Hitachi HUS 110 in a controller
The active-active Fibre Channel ports mean the user does not have to be concerned with controller ownership. I/O is passed to the managing controller through cross-path communication.
Any path can be used as a normal path. The Hitachi Dynamic Load Balancing
controllers assist in balancing microprocessor load across the storage
systems. If a microprocessor becomes excessively busy, the volume
management automatically switches to help balance the microprocessor
load. Table 10-1 lists some of the differences between the 2000 family
storage systems.
Table 10-1: Hitachi Adaptable Modular Storage 2000 Family overview
vSphere 4
This sample approach uses vSphere 4 as a Virtualization example. vSphere
4 is a highly efficient virtualization platform that provides a robust, scalable
and reliable infrastructure for the data center. vSphere features provide an
easy to manage platform. These features include:
• Distributed Resource Scheduler
• High Availability
• Fault Tolerance
Use of ESX 4’s round robin multipathing policy with the symmetric active-
active controllers’ dynamic load balancing feature distributes load across
multiple host bus adapters (HBAs) and multiple storage ports. Use of
VMware Dynamic Resource Scheduling (DRS) with Hitachi Dynamic
Provisioning software automatically distributes loads on the ESX host and
on the storage system’s back end. For more information, see VMware's
vSphere web site.
For more information, see the Hitachi Dynamic Provisioning data sheet.
Storage configuration
The following sections describe configuration considerations to keep in mind
when optimizing a 2000 family storage infrastructure to meet your
performance, scalability, availability, and ease of management
requirements.
Redundancy
Figure 10-1 shows that when one HBA is down with either hardware or link
failure, another HBA on the host can still provide access to the storage
resources. When ESX 4 hosts are connected in this fashion to a 2000 family
storage system, hosts can take advantage of using round robin multipathing
algorithm where the I/O load is distributed across all available paths. Hitachi
Data Systems recommends a minimum of two HBA ports for redundancy.
Zone configuration
Zoning divides the physical fabric into logical subsets for enhanced security
and data segregation. Incorrect zoning can lead to volume presentation
issues to ESX hosts, inconsistent paths, and other problems. Two types of
zones are available, each with advantages and disadvantages:
• Port — Uses a specific physical port on the Fibre Channel switch. Port
zones provide better security and can be easier to troubleshoot than
WWN zones. This might be advantageous in a smaller static
environment. The disadvantage of this is ESX host’s HBA must always
be connected to the specified port. Moving an HBA connection results in
loss of connectivity and requires rezoning.
• WWN — Uses nameservers to map an HBA’s WWN to a target port’s
WWN. The advantage of this is that the ESX host’s HBA can be
connected to any port on the switch, providing greater flexibility. This
might also be advantageous in a larger dynamic environment. However, the disadvantages are reduced security and more complexity in troubleshooting.
When zoning, it’s also important to consider all the paths available to the
targets so that multipathing can be achieved. Table 10-2 shows an example
of a single-initiator zone with multipathing.
In this example, each ESX host has two HBAs with one port on each HBA.
Each HBA port is zoned to one port on each controller with single initiator
and two targets in one zone. The second HBA is zoned to another port on
each controller. As a result, each HBA port has two paths and one zone. With
a total of two HBA ports, each host has four paths and two zones.
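The zone and path arithmetic scales linearly with the number of HBA ports. A small Python sketch of the single-initiator zoning math described above:

hba_ports = 2          # one port on each of two HBAs
targets_per_zone = 2   # one port on each controller

zones = hba_ports                       # one zone per initiator port
paths = hba_ports * targets_per_zone    # paths seen by the host

print(zones, "zones,", paths, "paths")  # 2 zones, 4 paths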
Host Group configuration
Configuring host groups on the Hitachi Adaptable Modular Storage 2000
family involves defining which HBA or group of HBAs can access a volume
through certain ports on the controllers. The following sections describe
different host group configuration scenarios.
On a 2000 family storage system, host groups are created using Hitachi
Storage Navigator Modular 2 software. In the Available Ports box, select all
ports. This applies the host group settings to all the ports that you select.
Choose VMware from the Platform drop-down menu. Choose Standard
Mode from the Common Setting drop-down menu. In the Additional
Settings box, uncheck the check boxes. These settings automatically apply
the correct configuration.
Hitachi Dynamic Provisioning software with vSphere 4
The following sections describe best practices for using Hitachi Dynamic
Provisioning Software with vSphere 4.
Dynamic Provisioning Space Saving and Virtual Disks
Two of vSphere's virtual disk formats are thin-friendly, meaning they only allocate chunks from the Dynamic Provisioning pool as required. Thin and zeroedthick format virtual disks are thin-friendly; eagerzeroedthick format virtual disks are not. The eagerzeroedthick format virtual disk allocates 100 percent of the DP-VOL's space in the Dynamic Provisioning pool. While the
eagerzeroedthick format virtual disk does not give the benefit of cost
savings by over provisioning of storage, it can still assist in the wide striping
of the DP-VOL across all disks in the Dynamic Provisioning pool.
Keep in mind that this operation does not zero out the VMFS datastore space
that was freed by the Storage vMotion operation, meaning that Hitachi
Dynamic Provisioning software cannot reclaim the free space.
Hitachi Dynamic Provisioning software can balance I/O load in pools of RAID groups. VMware's Distributed Resource Scheduling (DRS) can balance computing capacity in CPU and memory pools. When you use them together, ESX hosts pool CPU and memory into a DRS resource pool, while Hitachi Dynamic Provisioning pools RAID groups into a Dynamic Provisioning pool. Figure 3 shows how these resource pools relate.
11
Special functions
Number of pairs: Migration can be performed for the following numbers of pairs per array, per system:
• 1,023 (HUS 110)
• 2,047 (HUS 130 and HUS 150)
Types of P-VOL/S-VOL drives: Volumes consisting of SAS drives can be assigned to any P-VOLs and S-VOLs. You can specify a volume consisting of SAS drives for the P-VOL and the S-VOL.
Host interface: Fibre Channel or iSCSI.
Canceling and resuming migration: Migration cannot be stopped or resumed. When the migration is canceled and executed again, Volume Migration copies the data again.
Handling of reserved volumes: You cannot delete volumes or RAID groups while they are being migrated.
Handling of volumes: You cannot format, delete, expand, or reduce volumes while they are being migrated. You also cannot delete or expand the RAID group. You can delete the pair after the migration, or stop the migration.
Formatting restrictions: You cannot specify a volume as a P-VOL or an S-VOL while it is being formatted. Execute the migration after the formatting is completed.
Volume restrictions: Data pool volumes, DMLUs, and command devices (CCI) cannot be specified as a P-VOL or an S-VOL.
Concurrent use of unified volumes: Unified volumes migrate after the unification. See Using unified volumes on page 11-13.
Concurrent use of Data Retention: When the access attribute is not Read/Write, the volume cannot be specified as an S-VOL. The volume which executed the migration carries over the access attribute and the retention term. For more information, see Using with the Data Retention Utility on page 11-14.
Concurrent use of SNMP Agent: Available.
Concurrent use of Password Protection: Available.
Concurrent use of LUN Manager: Available.
Concurrent use of Cache Residency Manager: The Cache Residency volume cannot be set to a P-VOL or S-VOL.
Concurrent use of Cache Partition Manager: Available. Note that a volume that belongs to a partition and stripe size cannot carry over, and cannot be specified as a P-VOL or an S-VOL.
Concurrent use of Power Saving: When a P-VOL or an S-VOL is included in a RAID group for which Power Saving has been specified, you cannot use Volume Migration.
Concurrent use of ShadowImage: A P-VOL and an S-VOL of ShadowImage cannot be specified as a P-VOL or an S-VOL of Volume Migration unless their pair status is Simplex.
Concurrent use of SnapShot: A SnapShot P-VOL cannot be specified as a P-VOL or an S-VOL when the SnapShot volume (V-VOL) is defined.
Concurrent use of TrueCopy or TCE: A P-VOL and an S-VOL of TrueCopy or TCE cannot be specified as a P-VOL or an S-VOL of Volume Migration unless their pair status is Simplex.
Concurrent use of Dynamic Provisioning: Available. The DP-VOLs created by Dynamic Provisioning and normal volumes can be set as a P-VOL, an S-VOL, or a reserved volume.
Failures: The migration fails if the copying from the P-VOL to the S-VOL stops. The migration also fails when a volume blockade occurs. However, the migration continues if a drive blockade occurs.
Memory reduction: To reduce the memory being used, you must disable Volume Migration and the SnapShot, ShadowImage, TrueCopy, or TCE function.
Requirements
Table 11-3 shows requirements for Modular Volume Migration Manager.
Specifications: Number of controllers: 2 (dual configuration).
Supported capacity
Table 11-4 shows the maximum capacity of the S-VOL by the DMLU
capacity. The maximum capacity of the S-VOL is the total value of the S-
VOL capacity of ShadowImage, TrueCopy, and Volume Migration.
NOTE: The maximum capacity shown in Table 11-4 is smaller than the pair creatable capacity displayed in Navigator 2. This is because the pair creatable capacity in Navigator 2 is treated not as the actual capacity, but as a value rounded up in 1.5 TB units when calculating the S-VOL capacity. The maximum capacity (the capacity for which a pair can be created), reduced by the capacity capable of rounding up by the number of S-VOLs, becomes the capacity shown in Table 11-4.
Reserved Volume
Before executing a migration, Volume Migration registers the volume that is the migration destination of the data as a reserved volume, in order to shut off host Read/Write access to the S-VOL beforehand. When you execute a migration using Navigator 2, only a reserved volume can be selected as the S-VOL. Because a reserved volume is the destination for the migrated data, the data it contains is not guaranteed.
DMLU
DMLU (Differential Management Logical Unit) refers to a volume used exclusively for storing the differential information of a P-VOL and an S-VOL of a Volume Migration pair. To create a Volume Migration pair, you must prepare one DMLU in the array.
DMLU precautions
This section details DMLU precautions for setting, expanding, and removing.
VxVM
Do not allow the P-VOL and S-VOL to be recognized by the host at the same
time.
MSCS
Do not allow the P-VOL and S-VOL to be recognized by the host at the same
time.
• Do not place the MSCS Quorum Disk in CCI.
• Shut down MSCS before executing the CCI sync command.
AIX
• Do not allow the P-VOL and S-VOL to be recognized by the host at the
same time.
Performance
• Migration affects the performance of host I/O to the P-VOL and other volumes. The recommended Copy Pace is Normal, but if the host I/O load is heavy, select Slow. Select Prior to shorten the migration time; however, this can affect performance. The Copy Pace can be changed during the migration.
• The RAID configuration of the P-VOL and S-VOL affects host I/O performance. For example, the write I/O performance of a volume that migrates from a disk area consisting of SAS drives to a disk area consisting of SAS 7.2K or SAS (SED) drives is lower than that of a volume that consists of the lower-cost drives from the start.
• Do not concurrently migrate logical volumes that are in the same RAID
group.
• Do not run Volume Migration from or to volumes that are in the same RAID group as volumes in Synchronizing status due to a ShadowImage initial copy or resynchronization. Likewise, do not execute a ShadowImage initial copy or resynchronization when the volumes involved are in the same RAID group as a migration in progress.
• It is recommended that Volume Migration be run during periods of low system I/O load.
NOTE: When both the P-VOL and the S-VOL are DP-VOLs, a pair cannot be created by combining DP-VOLs that have different Enabled/Disabled settings for Full Capacity Mode.
• Usable combination of DP pool and RAID group
The following table (Table 11-6) shows the usable combinations of DP pool and RAID group for a Volume Migration P-VOL and S-VOL.
Pair Statuses
The table lists, for the Executing, Splitting, and Canceling operations, the pair statuses before DP pool capacity depletion and after depletion of the DP pool to which the P-VOL or the S-VOL belongs, across the DP pool statuses Normal, Regressed, Blocked, Capacity in Growth, Capacity Depletion, and DP in Optimization.
Executing-Normal: Refer to the status of the DP pool to which the DP-VOL of the S-VOL belongs. If the Volume Migration operation would exceed the capacity of the DP pool to which the S-VOL belongs, the operation cannot be executed.
Executing-Capacity Depletion: Refer to the status of the DP pool to which the DP-VOL of the P-VOL belongs. If the Volume Migration operation would exceed the capacity of the DP pool to which the P-VOL belongs, the operation cannot be executed.
Also, when a DP pool is created or capacity is added to it, the DP pool is formatted. If Volume Migration is performed during the formatting, the usable capacity may become depleted. Because the formatting progress is displayed when you check the DP pool status, confirm that sufficient usable capacity is secured according to the formatting progress, and then start the Volume Migration operation.
Executing-DP in Optimization
• Operation of the DP-VOL during Volume Migration use
When a DP-VOL created by Dynamic Provisioning is used as a P-VOL or an S-VOL of Volume Migration, none of the following operations can be executed on the DP-VOL in use: growing its capacity, shrinking its capacity, deleting the volume, or changing its Full Capacity Mode. To execute such an operation, split the Volume Migration pair that uses the DP-VOL, and then execute the operation again.
• Operation of the DP pool during Volume Migration use
When a DP-VOL created by Dynamic Provisioning is used as a P-VOL or an S-VOL of Volume Migration, the DP pool to which the DP-VOL belongs cannot be deleted. To delete it, split the Volume Migration pair that uses a DP-VOL belonging to the DP pool, and then execute the operation again. Editing the attributes of a DP pool and adding capacity to it can be executed as usual, regardless of Volume Migration pairs.
NOTE: You cannot migrate while the reserved volume is being formatted.
NOTE: When the mapping mode is disabled, the host cannot access a volume that has been allocated as a reserved volume. Also, when the mapping mode is enabled, the host cannot access a mapped volume that has been allocated as a reserved volume.
NOTE: Be careful when the host recognizes the volume that has been used
by Volume Migration. After releasing the Volume Migration pair or canceling
Volume Migration, delete the reserved volume or change the volume
mapping.
Migrating volumes
To migrate volumes
1. Start Navigator 2 and log in. The Arrays window appears.
2. Click the appropriate array.
3. In the navigation tree view, click the Replication list, and then click Volume Migration.
4. Click Migration Pairs. The Migration Pairs screen displays as shown in
Figure 11-18.
7. Select the volume for the S-VOL and the Copy Pace, and then click OK.
NOTE: Normal mode is the default for the Copy Pace. If the host I/O load
is heavy, performance can degrade. Use the Slow mode to prevent
performance degradation. Use the Prior mode only when the P-VOL is rarely
accessed and you want to shorten the copy time.
If you cancel a migration pair, you may have to wait up to five seconds before the following tasks can be performed:
• Creating a ShadowImage pair that specifies the S-VOL of the canceled pair as its S-VOL.
• Creating a TrueCopy pair that specifies the S-VOL of the canceled pair.
• Executing a Volume Migration that specifies the S-VOL of the canceled pair.
• Deleting the volume specified as the S-VOL of the canceled pair.
• Removing the DMLU.
• Expanding the capacity of the DMLU.
To cancel a migration
1. Start Navigator 2 and log in. The Arrays window appears.
2. Click the appropriate array.
3. In the navigation tree view, click the Replication list, and then click Volume Migration.
4. Click Migration Pairs. The Migration Pairs screen displays as shown in
Figure 11-24.
5. Select the Volume Migration pair that you want to cancel and click
Cancel Migrations.
Note that if you cancel the migration pair, you will not be able to perform any of the following tasks for up to five seconds after the cancel operation:
• Create a ShadowImage pair that specifies the S-VOL of the canceled pair as its S-VOL.
• Create a TrueCopy pair that specifies the S-VOL of the canceled pair.
• Create a migration pair that specifies the S-VOL of the canceled pair.
• Delete the volume specified as the S-VOL of the canceled pair.
• Shrink the volume specified as the S-VOL of the canceled pair.
• Remove the DMLU.
• Expand the DMLU capacity.
If you cancel the migration pair, you will not be able to perform any tasks related to migration pairs for up to five minutes.
This feature is different from the volume "grow" (expand) feature, which
allows you to expand the size of an existing volume using available free
space in a RAID group to which it belongs.
Add Volumes
To add a volume to a unified volume
1. In the unified volume properties window, click Add Volumes. The Add
Volumes dialog box is displayed. The dialog box includes a table that
displays the parameters of the selected unified volume, and a table that
lists the available volumes that can be added to the existing unified
volume.
2. Click the check box to the left of the name of the volume that you want to add to the unified volume.
3. Click OK. A warning message regarding RAID levels and drive types is displayed. The warning message also states that the data in the volume being added will be destroyed.
4. To add the selected volume to the unified volume, select the check box to confirm that you have read the warning message, and then click Confirm. A message box confirming that the volume has been added is displayed.
5. Click Close to exit the message box and return to the unified volume
properties window.
6. Observe the contents of the window and verify that the volume has been
added.
Power Saving specifications
RAID level: Any RAID level supported by the array.
Start of the spin-down operation: When spinning down the drives, instruct the spin-down for the RAID group from Navigator 2, and specify the command monitoring time at the same time. During the instructed command monitoring time, the array monitors commands (I/O issued from the host or application) to the RAID group for which the spin-down was instructed. The spin-down is performed when no command is issued during the command monitoring. When a command is issued during the command monitoring, the disk array judges the RAID group to be in use, and the spin-down fails.
Command monitoring time: Can be set in the range of 0 to 720 minutes, in units of one minute. The default command monitoring time is one minute. If you can control use of the RAID group and want to spin it down immediately, specify a command monitoring time of 0 minutes; the command monitoring terminates immediately and processing moves to the spin-down. Even when the command monitoring time is specified as 0 minutes, the spin-down fails if an uncompleted command for the target RAID group remains in the array. When a drive failure has occurred, the spin-down is executed after the drive reconstruction completes.
When an instruction to spin down is issued to two or more RAID groups at the same time: The RAID groups are spun down in ascending order of RAID group number. The command monitoring is done for the specified minutes for the first RAID group. For the second and following RAID groups, the command monitoring continues until the spin-down occurs.
Instructing spin-down during command monitoring:
• If the spin-down is instructed again during the command monitoring, the command monitoring time is reset according to the newly instructed command monitoring time, and the command is monitored again.
• When the RAID group status is Normal (Command Monitoring), do not turn OFF the array. If the power is turned OFF while the RAID group status is Normal (Command Monitoring), then even after the power is turned ON again, the command monitoring is considered to have been suspended by the power-OFF, the RAID group status becomes Normal (Spin Down Failure: PS OFF/ON), and the group does not spin down. To spin down, instruct the spin-down again.
• If a controller failure or a failure between the host and the array occurs during the command monitoring time, a command may be issued from the host to the array and the spin-down may be canceled. Likewise, if the controller failure or the failure between the host and the array is recovered during the command monitoring time, a command may also be issued to the array and the spin-down may be canceled.
How to cancel the command monitoring: Instruct the target RAID group to spin up, or instruct the spin-down with a short command monitoring time such as 0.
RAID groups to which a spin-down instruction cannot be issued:
• The RAID group that includes the system drives (drives #0 to #4 of the basic cabinet for AMS2100/AMS2300; drives #0 to #4 of the first expansion cabinet for AMS2500). The system drive is the drive where the firmware is stored.
• A RAID group configured with SSDs.
• A RAID group that includes a ShadowImage, TrueCopy, or TCE P-VOL or S-VOL in a pair status other than Simplex, Split, or Takeover.
• A RAID group that includes a volume whose pair is not released during or after the Volume Migration.
• A RAID group that includes a volume being formatted.
• A RAID group that includes a volume on which parity correction is being performed.
• A RAID group that includes a data pool volume.
• A RAID group that includes a DMLU volume.
• A RAID group that includes a command device volume.
• A RAID group that is being expanded.
• A RAID group whose drive firmware is being replaced.
• When Turbo LU Warning is enabled through the System Parameter option, the de-staging does not proceed for a RAID group that includes a volume using Cache Residency Manager, and the spin-down may fail. Disable Turbo LU Warning and instruct the spin-down again.
Items that restrain the operation during the spin-down or command monitoring:
• I/O commands from a host
• ShadowImage pair operations that include a copy process: creating pairs, resynchronizing pairs, restoring pairs
• SnapShot pair operations that include a copy process: restoring pairs
• TrueCopy or TCE pair operations that include a copy process: creating pairs (including no copy), resynchronizing pairs, swapping pairs (pair status changes to Takeover)
• Executing Volume Migration
• Creating a volume
• Deleting the RAID group
• Formatting a volume
• Executing parity correction of a volume
• Setting a volume for DP
• Setting a volume for DMLU
• Setting a volume for a command device
• Expanding a RAID group
• Growing a volume
Number of times the same RAID group can be spun down: Up to seven times a day.
Two or more instructions to the same RAID group: The last instruction takes effect. If the spin-down is instructed during the command monitoring, the command monitoring time is reset according to the newly instructed time, and the command is monitored again. To cancel the command monitoring, instruct the RAID group to spin up.
Scheduling function: An instruction to spin down or spin up can be issued using a scheduling function provided by JP1, etc. (a cron sketch follows the note after this table).
Action taken for a long spin-down (health check): To prevent the drive heads from sticking to the disk surfaces, a RAID group that has been kept spun down for 30 days is spun up for six minutes and then spun down again. While the drives are temporarily spun up, no host I/O can be accepted.
Unified volume: A unified volume is put in the same status as being spun down if one of its constituent RAID groups has been spun down; the same restrictions as for a volume in the spun-down status apply, to prevent host I/O, etc.
NOTE: When you refer to the Power Saving Mode and Normal (Spin Up) appears, the power-up is complete. If the host uses a volume, it must mount it.
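As a simple illustration of the scheduling function row above, the sample Power Saving scripts described later in this chapter could be driven by cron as well as JP1. This is only a sketch: the script path is an assumption, and the mount point comes from the samples later in this section.
# Hypothetical crontab entries: spin the backup RAID group down each
# weekday morning after the nightly backup, and spin it up again each
# evening before the backup window.
0 7 * * 1-5 /opt/scripts/PowerOff.ksh /backup01
0 19 * * 1-5 /opt/scripts/PowerOn.ksh /backup01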
Table 11-10 details the Power Saving effects. Note that the percentage of electric power consumption saved varies by drive type.
Expansion tray type / During I/O operation (VA) / During Power Saving (VA) / Drives spun down
Drive tray for 2.5-inch drives: 320 / 140 / 24 of 24
Drive tray for 3.5-inch drives: 280 / 90 / 12 of 12
Dense drive tray for 3.5-inch drives: 1,000 / 420 / 48 of 48
For each tray type, the saving in electric power consumption is 60% to 70%.
Power down
To power down
1. Make sure every volume is unmounted.
2. When LVM is used for the disk management, deport the volume or disk
groups.
3. Using Navigator 2, power down the RAID group.
4. Using Navigator 2, confirm the RAID group status for the specified number of minutes after powering down.
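A minimal polling sketch of step 4 follows; check_rg_status is a hypothetical placeholder for however you query the Power Saving Status (for example, through the Navigator 2 CLI), not a real command:
#!/bin/ksh
# Poll the Power Saving Status of RAID group 0 once a minute until it
# leaves command monitoring. Replace check_rg_status with your own
# status query; it is a placeholder, not an actual Navigator 2 command.
while :; do
    status=$(check_rg_status 0)
    [ "$status" != "Normal (Command Monitoring)" ] && break
    sleep 60
done
echo "RAID group 0 status: $status"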
Power up
To power up
1. Using Navigator 2, power up the RAID group.
2. Using Navigator 2, confirm the RAID group status for several minutes after powering up.
3. When you refer to the Power Saving Status and Normal (Spin Up) is displayed after a while, the power-up is complete. Have the host mount the volumes included in the RAID group (if the host uses them).
The following notes apply to specific operating systems:
Linux
• When LVM is used, power down after taking the volume group offline and exporting it; when LVM is not used, power down after unmounting the volumes (see the sketch after this list).
• When middleware such as Veritas Storage Foundation is used, power down after deporting the disk group.
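A minimal sketch of the LVM case above, assuming a volume group named vg_backup mounted at /backup01 (both names are examples only):
# Unmount the file system, then take the volume group offline and
# export it before powering down the RAID group from Navigator 2.
umount /backup01
vgchange -a n vg_backup
vgexport vg_backup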
Windows
• Mount or unmount the volume using the command control interface
(CCI) command.
For example:
pairdisplay -x umount D:\hd1
• When middleware such as Veritas Storage Foundation for Windows is
used, deport the disk group. Do not use the mounting or unmounting
function of CCI.
Solaris
• When Sun Volume Manager is used, perform the power down after
releasing the disk set from Solaris.
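For example, with Solaris Volume Manager, a disk set can be released before the power down (the set name is an example):
# Release ownership of the disk set so the host no longer holds it
# during the power down.
metaset -s backupset -r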
NOTE: For more information, see the Hitachi Adaptable Modular Storage
and Workgroup Modular Storage Command Control Interface (CCI) User
and Reference Guide, and the Hitachi Simple Modular Storage Command
Control Interface (CCI) User’s Guide.
Uninstalling
Enabling or disabling
RAID Group: The RAID group appears.
Remaining I/O Monitoring Time: The remaining time of the command monitoring is displayed. N/A is displayed when monitoring does not apply.
Power Saving Status: The power saving information appears.
NOTE: The Power Saving Mode reflects the power up and down of the drives that make up the RAID group. The mode of each individual drive is not shown.
To power down
1. Start Navigator 2.
2. Log in as a registered user.
3. Select the system you want to view information about.
4. Click Show & Configure Array.
5. Select the RG Power Saving icon in the Power Saving tree view.
6. Select the RAID group that you will spin down and click Execute Spin
Down. The Spin Down Property screen displays.
7. Enter an I/O monitoring time and click OK.
11. After you power down one RAID group, check the power saving status after the specified number of minutes has passed. When you power down two or more RAID groups, check the status after several minutes have passed. Refer to Table 11-12 if a phrase other than Normal (Spin Down Failure: Host Command), Normal (Spin Down Failure: Non-Host Command), Normal (Spin Down Failure: Error), or Normal (Spin Down Failure: PS OFF/ON) is displayed.
Notes
• Only one power-down instruction per minute can be issued. Before powering down, make sure that all volumes are unmounted. After taking the LVM volume group offline, power down the RAID group.
• Do not use RAID group volumes that are going to be powered down.
• If there is a mounted volume, unmount it.
• When a logical volume manager (LVM; for example, Veritas) is used for disk management, unmount the volume or disk groups.
• Before issuing a power-down instruction, verify that all previously issued power-down instructions are completed. If the power down fails,
Powering up
Power up a RAID group after it has been powered down. You can specify
more than one RAID group.
To power up
1. Start Navigator 2.
2. Register the array where you are powering up the RAID group, and
connect to it.
3. Click the Logical Status tab.
4. Log in as a registered user.
5. Select the system and RAID group you want to power up.
6. Click Show & Configure Array.
7. Select the RG Power Saving icon in the Power Saving tree view.
8. Select the RAID group that you will spin up.
9. Click Execute Spin Up.
10.The volume information included in the specified RAID group is
displayed. Verify that the spin-up does not cause a problem, and click
Confirm.
Notes
• Depending on the status of the array, more time may be required to
complete the power up.
• An instruction to power up in the middle of a power down cancels the original instruction. Only the final instruction takes effect.
NOTE: When you refer to the Power Saving Mode and Normal (Spin Up) appears, the power-up is complete. If the host uses a volume, it must mount it.
Failure notes
• When the system drive or the spare drive at the position of the FC SES drive is used, you must perform the backup in the same way as when the Spare Drive Operation Mode is Fixed, even if the Spare Drive Operation Mode is set to Variable.
• When a failure occurs during the power down in a RAID group other than RAID 0, the array spins the RAID group up and then spins it down again after the failure is restored. However, if a failure occurs while a RAID group is spun down, the drives being spun down are spun up and the power down fails. The drives are not spun down automatically after the failed drive is replaced.
• The drives in the power-down status in a cabinet where an FC SES failure occurs are spun up. After the SENC failure is restored, the RAID group that was instructed to power down is spun down.
This section provides use case examples of implementing Power Saving in the Hitachi Data Protection Suite (HDPS) using the Navigator 2 CLI and Account Authentication in Windows and UNIX environments.
These use cases are examples only and should be used only as a reference. Your particular use case may vary.
Overview
Security
This use case provides two levels of security. The first level is the array's built-in security provided by Hitachi Account Authentication. Account Authentication is required, and provides role-based array security for the Navigator GUI and protection from rogue scripts.
The second level of security is provided by the HDPS (CommVault) console. Only authorized users can log in to the CommVault console and schedule backups.
Account authentication requires that external scripts obtain the appropriate
credentials (usernames/passwords). After the appropriate credentials are
obtained, the scripts run in the context of that user. The scripts are stored
on the MediaAgent and their permissions are dictated by the host operating
system.
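Because the scripts embed credentials and their permissions are dictated by the host operating system, it is worth restricting them. A minimal sketch, assuming the scripts are stored in /opt/scripts and owned by a dedicated backup account (both are assumptions):
# Make the Power Saving scripts readable and executable only by their
# owner so the embedded credentials are not exposed to other users.
chown backup:backup /opt/scripts/PowerOff.ksh /opt/scripts/PowerOn.ksh
chmod 700 /opt/scripts/PowerOff.ksh /opt/scripts/PowerOn.ksh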
Set the Account Authentication password by using the Storage Navigator Modular (SNM) CLI to specify the following environment parameters and commands. The CLI asks for confirmation:
Are you sure you want to set the account information? (y/n [n]): y
To bypass having to answer the confirmation questions, enable Confirming Command Execution:
% set STONAVM_RSP_PASS=on
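For the UNIX (ksh) samples later in this section, the equivalent environment setup might look as follows; STONAVM_HOME and STONAVM_RSP_PASS come from this document, while the installation path is only an example:
# Point the CLI at its installation directory and suppress the
# interactive y/n confirmation prompts for scripted use.
export STONAVM_HOME=/opt/snm7.11
export STONAVM_RSP_PASS=on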
Windows scripts
This is a snapshot of a Power Saving sample script for Windows, and does not include the whole script.
setlocal
REM ############################################################
REM RUN POWER ON SCRIPT HERE
REM ############################################################
set PATH=%PATH%;%GALAXY_BASE%
set tmpfile="aux_script.bat.tmp"
REM (snapshot: the commands whose exit codes are checked below are
REM omitted from this excerpt)
if %errorlevel% NEQ 0 (
goto :EOF )
if %errorlevel% NEQ 0 (
goto end )
'@Module Name:
' hds-ps-script.vbs
'@Description:
' Script to power up and power down raid groups for a given set of volumes.
'@Revision History:
'--*/
'///////////////////////////////////////////
'//
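'// Set the following constants to the Account Authentication user ID
'// and password defined for your account (left empty in this sample).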
const HDS_DFUSER=""
const HDS_DFPASSWD=""
Powering down
This is an example of how to use the sample script when powering down in Windows. This unmounts the listed volumes and powers down the RAID group that supports them. For example:
cscript -nologo hds-ps-script.vbs -powerdown y: c:\mount
Powering up
This is an example of how to use the sample script when powering up in
Windows.
This mounts the list of volumes (separated by spaces) and powers up the RAID group that supports them. The list of volumes can be drive letters or mount points.
cscript -nologo hds-ps-script.vbs -powerup <list of volumes>
For example:
cscript -nologo hds-ps-script.vbs -powerup y: c:\mount
UNIX scripts
This is only a snapshot of a Power Saving sample script for UNIX, and does
not include the whole script.
Power down
This is a snapshot of the sample script when powering down in UNIX.
#!/bin/ksh
# PowerOff.ksh
# Arguments:
# Prerequisites:
# Version History:
export STONAVM_HOME=/opt/snm7.11
SNMUserID=jpena
SNMPasswd=sac1sac1
# (snapshot: the argument check below is reconstructed; the original
# test was omitted from this excerpt)
if [ $# -lt 1 ]; then
exit 1
fi
MntPoint=$1
# (snapshot: the unmount and spin-down logic is omitted)
exit 2
Power up
This is a snapshot of the sample script when powering up in UNIX.
#!/bin/ksh
# PowerOn.ksh
# Arguments:
# Prerequisites:
# Version History:
export STONAVM_HOME=/opt/snm7.11
SNMUserID=jpena
SNMPasswd=sac1sac1
# (snapshot: the argument check below is reconstructed; the original
# test was omitted from this excerpt)
if [ $# -lt 1 ]; then
exit 1
fi
MntPoint=$1
# (snapshot: the spin-up and mount logic is omitted)
exit 2
SNMUserID
Set to the user ID you defined when you created your account.
SNMPasswd
Set to the password you defined when you created your account.
6. Make sure that all the file systems that are going to be mounted and
unmounted are in the mount tab file for your operating system. For
example:
Solaris - /etc/vfstab
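For instance, a Solaris /etc/vfstab entry for the /backup01 file system used in the examples below might look like the following (the device names are assumptions):
# device to mount   device to fsck      mount point  FS   pass  at boot  options
/dev/dsk/c2t0d0s6   /dev/rdsk/c2t0d0s6  /backup01    ufs  2     no       -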
Powering down
This is an example of how to use the sample script when powering down in UNIX. This unmounts the file system and powers down the RAID group. For example:
PowerOff.ksh /backup01
Powering up
This is an example of how to use the sample script when powering up in UNIX. This mounts the file system and powers up the RAID group.
PowerOn.ksh <mount point>
For example:
PowerOn.ksh /backup01
Navigator 2 specifications
The following sections detail the Navigator 2 specifications for various operating systems:
• Windows
• Red Hat Linux
• Solaris
• HP UX
Microsoft Windows
Operating System
Browser: IE 6.0 (SP1, SP2, SP3) or IE 7.0. The 64-bit IE 6.0 (SP1, SP2, SP3) on Windows Server 2003 R2 (x64) and the 64-bit IE 7.0 on Windows Server 2008 (x64) are supported. Only IE 8.0 (x86, x64) is supported on Windows 7 and Windows Server 2008 R2.
Red Hat Linux
Sun Solaris
Table A-7 details Solaris client specifications.
Volume formatting
The total size of volumes that can be formatted at the same time is restricted. If the configuration exceeds the possible formatting size, the firmware of the array does not execute the formatting (error messages are displayed). Moreover, when volumes are expanded, the expanded unit size is automatically formatted and counts toward the restriction on the total size that can be formatted at the same time.
Note that the possible formatting size differs depending on the array type. Keep the total size of volumes formatted at one time at or below the recommended batch formatting size shown in Table A-9.
Table A-9: Batch formatting size by platform
The formatting is executed in the following three operations; however, it has no effect on DP volumes using the Dynamic Provisioning function. Table A-10 details the formatting capacity by operation.
Table A-10: Formatting capacity by operation
The restriction on the possible formatting size applies to the total of the three operations. Ensure that the combined total across the operations is at or below the recommended batch formatting size.
When one of these operations is executed and the restriction on the possible formatting size is exceeded, messages are displayed. Table A-11 details the messages that display when the formatting size is exceeded.
Table A-11: Messages for exceeded size
Field Description
Storage System Name
Management console static IP
address (used to log in to
Navigator 2)
Email Notifications
Email Notifications [ ] Disabled
[ ] Enabled (record your settings below)
Domain Name
Mail Server Address
From Address
Send to Address
Address 1:
Address 2:
Address 3:
Reply To Address
IP Address
Subnet Mask
Default Gateway
Controller 1
Configuration [ ] Automatic (Use DHCP)
[ ] Manual (record your settings below)
IP Address
Subnet Mask
Default Gateway
Controller 0/ Port A
IP Address
Subnet Mask
Default Gateway
Negotiation
Controller 0/ Port B
IP Address
Subnet Mask
Default Gateway
Negotiation
Controller 1/ Port A
IP Address
Subnet Mask
Default Gateway
Negotiation
Controller 1/ Port B
IP Address
Subnet Mask
Default Gateway
Negotiation
VOL Settings
RAID Group
Free Space
VOL
Capacity
Stripe Size
Format the Volume [ ] Yes
[ ] No
Glossary
A
Arbitrated loop
A Fibre Channel topology that requires no Fibre Channel switches.
Devices are connected in a one-way loop fashion. Also referred to as
FC-AL.
Array
A set of hard disks mounted in a single enclosure and grouped logically
together to function as one contiguous storage space.
B
bps
Bits per second. The standard measure of data transmission speeds.
C
Cache
A temporary, high-speed storage mechanism. It is a reserved section of
main memory or an independent high-speed storage device. Two types
of caching are found in computers: memory caching and disk caching.
Memory caches are built into the architecture of microprocessors and
often computers have external cache memory. Disk caching works like
memory caching; however, it uses slower, conventional main memory
that on some devices is called a memory buffer.
Capacity
The amount of information (usually expressed in megabytes) that can
be stored on a disk drive. It is the measure of the potential contents of
a device. In communications, capacity refers to the maximum possible
data transfer rate of a communications channel under ideal conditions.
CBL
3U controller box.
CBXS
Controller box. Two types of CBXS controller boxes are available:
• A 2U CBXSS Controller Box that mounts up to 24 2.5-inch drives.
• A 3U CBXSL Controller Box that mounts up to 12 3.5-inch drives.
CBS
Controller box. There are two types of CBS controller boxes available:
• A 2U CBSS Controller Box that mounts up to 24 2.5-inch drives.
• A 3U CBSL Controller Box that mounts up to 12 3.5-inch drives.
CCI
See command control interface.
CHAP
See Challenge Handshake Authentication Protocol.
CLI
See command line interface.
Cluster
A group of disk sectors. The operating system assigns a unique number
to each cluster and then keeps track of files according to which clusters
they use.
Cluster capacity
The total amount of disk space in a cluster, excluding the space
required for system overhead and the operating system. Cluster
capacity is the amount of space available for all archive data, including
original file data, metadata, and redundant data.
Command devices
Dedicated logical volumes that are used only by management software
such as CCI, to interface with the storage systems. Command devices
are not used by ordinary applications. Command devices can be shared
between several hosts.
CRC
Cyclic Redundancy Check. An error-correcting code designed to detect
accidental changes to raw computer data.
D
Disaster recovery
A set of procedures to recover critical application data and processing
after a disaster or other failure. Disaster recovery processes include
failover and failback procedures.
DMLU
See Differential Management-Logical Unit.
Drive Box
Chassis for mounting drives that connect to the controller box. The
following drive boxes are supported:
• DBS, DBL: 2U drive box
• DBX: 4U drive box
Duplex
The transmission of data in either one or two directions. Duplex modes are full-duplex and half-duplex. Full-duplex is the simultaneous transmission of data in two directions. For example, a telephone is a full-duplex device, because both parties can talk at once. In contrast, a walkie-talkie is a half-duplex device because only one party can transmit at a time.
E
Ethernet
A computer networking technology for local-area networks.
Extent
A contiguous area of storage in a computer file system that is reserved
for writing or storing a file.
F
Fabric
Hardware that connects workstations and servers to storage devices in a Storage-Area Network (SAN). The SAN fabric enables any-server-to-any-storage-device connectivity through the use of Fibre Channel switching technology.
Failover
The automatic substitution of a functionally equivalent system
component for a failed one. The term failover is most often applied to
intelligent controllers connected to the same storage devices and host
computers. If one of the controllers fails, failover occurs, and the
survivor takes over its I/O load.
Fallback
Refers to the process of restarting business operations at a local site
using the P-VOL. It takes place after the storage systems have been
recovered.
Fault tolerance
A system with the ability to continue operating, possibly at a reduced
level, rather than failing completely, when some part of the system
fails.
FC
See Fibre Channel.
FC-AL
See Arbitrated Loop.
FCoE
Fibre Channel over Ethernet.
Fibre Channel
A gigabit-speed network technology primarily used for storage
networking.
Firmware
Software embedded into a storage device. It may also be referred to as
Microcode.
Full-duplex
Transmission of data in two directions simultaneously. For example, a
telephone is a full-duplex device because both parties can talk at the
same time.
G
Gbps
Gigabit per second.
Gigabit Ethernet
A version of Ethernet that supports data transfer speeds of 1 gigabit
per second. The cables and equipment are very similar to previous
Ethernet standards. Abbreviated GbE.
GUI
Graphical user interface.
H
HA
High availability.
Half-duplex
Transmission of data in just one direction at a time. For example, a
walkie-talkie is a half-duplex device because only one party can talk at
a time.
HBA
See Host bus adapter.
Host
A server connected to the storage system via Fibre Channel or iSCSI
ports.
I
IEEE
Institute of Electrical and Electronics Engineers (read “I-Triple-E”). A
non-profit professional association best known for developing standards
for the computer and electronics industry. In particular, the IEEE 802
standards for local-area networks are widely followed.
I/O
Input/output.
IOPS
Input/output per second. A measurement of hard disk performance.
initiator
See iSCSI initiator.
iSCSI
Internet-Small Computer Systems Interface. A TCP/IP protocol for
carrying SCSI commands over IP networks.
iSCSI initiator
iSCSI-specific software installed on the host server that controls
communications between the host server and the storage system.
iSNS
Internet Storage Naming Service. An automated discovery,
management and configuration tool used by some iSCSI devices. iSNS
eliminates the need to manually configure each individual storage
system with a specific list of initiators and target IP addresses. Instead,
iSNS automatically discovers, manages, and configures all iSCSI
devices in your environment.
L
LAN
Local-area network. A computer network that spans a relatively small
area, such as a single building or group of buildings.
Load
In UNIX computing, the system load is a measure of the amount of
work that a computer system is doing.
Logical
Describes a user's view of the way data or systems are organized. The
opposite of logical is physical, which refers to the real organization of a
system. A logical description of a file is that it is a quantity of data
collected together in one place. The file appears this way to users.
Physically, the elements of the file could live in segments across a disk.
M
MIB
Management information base.
Microcode
The lowest-level instructions directly controlling a microprocessor.
Microcode is generally hardwired and cannot be modified. It is also
referred to as firmware embedded in a storage subsystem.
P
Pair
Refers to two volumes that are associated with each other for data
management purposes (for example, replication, migration). A pair is
usually composed of a primary or source volume and a secondary or
target volume as defined by you.
Pair status
Internal status assigned to a volume pair before or after pair
operations. Pair status transitions occur when pair operations are
performed or as a result of failures. Pair statuses are used to monitor
copy operations and detect system failures.
Parity
The technique of checking whether data has been lost or corrupted
when it's transferred from one place to another, such as between
storage units or between computers. It is an error detection scheme
that uses an extra checking bit, called the parity bit, to allow the
receiver to verify that the data is error free. Parity data in a RAID array
is data stored on member disks that can be used for regenerating any
user data that becomes inaccessible.
Parity groups
RAID groups can contain single or multiple parity groups where the
parity group acts as a partition of that container.
Point-to-Point
A topology where two points communicate.
Port
An access point in a device where a link attaches.
R
RAID
Redundant Array of Independent Disks. A storage system in which part
of the physical storage capacity is used to store redundant information
about user data stored on the remainder of the storage capacity. The
redundant information enables regeneration of user data in the event
that one of the storage system's member disks or the access path to it
fails.
RAID group
A set of disks on which you can bind one or more volumes.
Remote path
A route connecting identical ports on the local storage system and the
remote storage system. Two remote paths must be set up for each
storage system (one path for each of the two controllers built in the
storage system).
S
SAN
See Storage-Area Network
SAS
Serial Attached SCSI. An evolution of parallel SCSI into a point-to-point
serial peripheral interface in which controllers are linked directly to disk
drives. SAS delivers improved performance over traditional SCSI
because SAS enables up to 128 devices of different sizes and types to
be connected simultaneously.
Snapshot
A term used to denote a copy of the data and data-file organization on
a node in a disk file system. A snapshot is a replica of the data as it
existed at a particular point in time.
SNM2
See Storage Navigator Modular 2.
Storage-Area Network
A dedicated, high-speed network that establishes a direct connection
between storage systems and servers.
Striping
A way of writing data across drive spindles.
Subnet
In computer networks, a subnet or subnetwork is a range of logical
addresses within the address space that is assigned to an organization.
Subnetting is a hierarchical partitioning of the network address space of
an organization (and of the network nodes of an autonomous system)
into several subnets. Routers constitute borders between subnets.
Communication to and from a subnet is mediated by one specific port
of one specific router, at least momentarily. SNIA.
Switch
A network infrastructure component to which multiple nodes attach.
Unlike hubs, switches typically have internal bandwidth that is a
multiple of link bandwidth, and the ability to rapidly switch node
connections from one to another. A typical switch can accommodate
several simultaneous full link bandwidth transmissions between
different pairs of nodes. SNIA.
T
Target
The receiving end of an iSCSI conversation, typically a device such as a
disk drive.
TCP
Transmission Control Protocol. A common Internet protocol that
ensures packets arrive at the end point in order, acknowledged, and
error-free. Usually combined with IP in the phrase TCP/IP.
10 GbE
10 gigabit Ethernet computer networking standard, with a nominal data rate of 10 Gbit/s, 10 times as fast as gigabit Ethernet.
U
URL
Uniform Resource Locator. A standard way of writing an Internet
address that describes both the location of the resource, and its type.
W
World Wide Name (WWN)
A unique identifier used to allow or deny a host access to a specified logical unit or a group of logical units.
Z
Zoning
A logical separation of traffic between hosts and resources. By breaking traffic up into zones, processing activity is distributed evenly.
Hitachi Unified Storage Operations Guide
Hitachi Data Systems
Corporate Headquarters
750 Central Expressway
Santa Clara, California 95050-2627
U.S.A.
www.hds.com
Regional Contact Information
Americas
+1 408 970 1000
info@hds.com
Europe, Middle East, and Africa
+44 (0)1753 618000
info.emea@hds.com
Asia Pacific
+852 3189 7900
hds.marketing.apac@hds.com
MK-91DF8275-03