Dell EMC VPLEX 6.2 Service Pack 1
Product Guide
6.2.1
January 2024
Rev. A00
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2016-2024 Dell Inc. or its subsidiaries. All rights reserved. Dell, Dell EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Figures
1 VPLEX active-active
2 VPLEX family: Local and Metro
3 Configuration highlights
4 Claim storage using the GUI (for Flash)
5 Claim storage using the GUI (for HTML5)
6 Moving data with VPLEX
7 VPLEX technology refresh
8 High availability infrastructure example
9 RecoverPoint architecture
10 Replication with VPLEX Local and CLARiiON
11 Replication with VPLEX Metro and CLARiiON
12 Support for Site Recovery Manager
13 Unisphere Performance Monitoring Dashboard (for Flash)
14 Unisphere Performance Monitoring Dashboard (for HTML5)
15 Unisphere Performance Monitoring Dashboard - select information to view (for Flash)
16 Unisphere Performance Monitoring Dashboard - select information to view (for HTML5)
17 Unisphere Performance Monitoring Dashboard - sample chart (for Flash)
18 Unisphere Performance Monitoring Dashboard - sample chart (for GUI)
19 Path redundancy: different sites
20 Path redundancy: different engines
21 Path redundancy: different ports
22 Path redundancy: different directors
23 High level VPLEX Witness architecture
24 VS6 Quad Engine cluster - Front view
25 VS2 Quad Engine cluster
Preface
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its software and hardware.
Therefore, some functions that are described in this document might not be supported by all versions of the software or
hardware currently in use. The product release notes provide the most up-to-date information about product features.
Contact your Dell EMC technical support professional if a product does not function properly or does not function as described
in this document.
NOTE: This document was accurate at publication time. Go to Dell EMC Online Support (https://www.dell.com/support)
to ensure that you are using the latest version of this document.
Purpose
This document is part of the VPLEX documentation set, and includes conceptual information about managing your VPLEX
system.
Audience
This guide is intended for use by customers and service providers to configure and manage a storage environment.
Related Documentation
The related documents in the VPLEX documentation set (available on Dell EMC Online Support and SolVe) include:
● VPLEX Release Notes for GeoSynchrony Releases
● VPLEX Product Guide
● VPLEX Hardware Environment Setup Guide
● VPLEX Configuration Worksheet
● VPLEX Configuration Guide
● VPLEX Security Configuration Guide
● VPLEX CLI Reference Guide
● VPLEX Administration Guide
● Unisphere for VPLEX Help
● VPLEX Element Manager API Guide Version 2 (REST API v2)
● VPLEX Open-Source Licenses
● VPLEX GPL3 Open-Source Licenses
● Procedures provided through the SolVe Desktop
● Dell EMC Host Connectivity Guides
● Dell EMC VPLEX Hardware Installation Guide
● Various best practice technical notes available on Dell EMC Online Support
Typographical conventions
Table 1. Typographical conventions used
● Bold: Used for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks).
● Italic: Used for full titles of publications referenced in text.
● Monospace: Used for system code; system output, such as an error message or script; pathnames, filenames, prompts, and syntax; and commands and options.
● Monospace italic: Used for variables.
● Monospace bold: Used for user input.
● [ ]: Square brackets enclose optional values.
● |: A vertical bar indicates alternate selections; the bar means "or".
● { }: Braces enclose content that the user must specify, such as x or y or z.
● ...: Ellipses indicate nonessential information that is omitted from the example.
Your comments
Your suggestions help to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of
this document to: vplex.doc.feedback@dell.com.
1 Introducing VPLEX
This chapter introduces the Dell EMC VPLEX product family.
Topics:
• VPLEX overview
• VPLEX product family
• VPLEX hardware platforms
• Configuration highlights
• Grow your VPLEX without disruption
• Management interfaces
VPLEX overview
Dell EMC VPLEX federates data that is located on heterogeneous storage arrays to create dynamic, distributed and highly
available data centers.
Use VPLEX to:
● Move data nondisruptively between Dell EMC and third-party storage arrays without any downtime for the host.
VPLEX moves data transparently and the virtual volumes retain the same identities and the same access points to the host.
There is no need to reconfigure the host.
● Protect data in the event of disasters or failure of components in your data centers.
With VPLEX, you can withstand failures of storage arrays, cluster components, an entire site failure, or loss of
communication between sites (when two clusters are deployed) and still keep applications and data online and available.
With VPLEX, you can transform the delivery of IT to a flexible, efficient, reliable, and resilient service.
● Availability: VPLEX creates a high-availability storage infrastructure across varied geographies with unmatched resiliency.
VPLEX offers the following unique innovations and advantages:
● VPLEX distributed/federated virtual storage enables new models of application and Data Mobility.
VPLEX is optimized for virtual server platforms (VMware ESX, Hyper-V, Oracle Virtual Machine, AIX VIOS).
VPLEX can streamline or accelerate transparent workload relocation over distances, including moving virtual machines.
● Size VPLEX to meet your current needs. Grow VPLEX as your needs grow.
A VPLEX cluster includes one, two, or four engines.
Add an engine to an operating VPLEX cluster without interrupting service.
Add a second cluster to an operating VPLEX cluster without interrupting service.
The scalable architecture of VPLEX ensures maximum availability, fault tolerance, and performance.
● Every engine in a VPLEX cluster can access all the virtual volumes presented by VPLEX.
Every engine in a VPLEX cluster can access all the physical storage connected to VPLEX.
● In a Metro configuration, VPLEX AccessAnywhere provides cache-consistent active-active access to data across two
VPLEX clusters.
VPLEX pools the storage resources in multiple data centers so that the data can be accessed anywhere. With VPLEX, you can:
● Provide continuous availability and workload mobility.
● Replace your tedious data movement and technology refresh processes with VPLEX’s patented simple, frictionless two-way
data exchange between locations.
● Create an active-active configuration for the active use of resources at both sites.
● Provide instant access to data between data centers. VPLEX allows simple, frictionless two-way data exchange between
locations.
● Combine VPLEX with virtual servers to enable private and hybrid cloud computing.
VPLEX product family
Figure 2. VPLEX family: Local and Metro
VPLEX Local
VPLEX Local consists of a single cluster. VPLEX Local:
● Federates Dell EMC and non-Dell EMC storage arrays.
Federation allows transparent data mobility between arrays for simple, fast data movement and technology refreshes.
● Standardizes LUN presentation and management using simple tools to provision and allocate virtualized storage devices.
● Improves storage utilization using pooling and capacity aggregation across multiple arrays.
● Increases protection and high availability for critical applications.
Mirrors storage across mixed platforms without host resources.
Leverage your existing storage resources to deliver increased protection and availability for critical applications.
VPLEX Metro
VPLEX Metro consists of two VPLEX clusters connected by inter-cluster links with no more than 10 ms round-trip time (RTT).
VPLEX Metro:
● Transparently relocates data and applications over distance, protecting your data center against disaster.
Manage all of your storage in both data centers from one management interface.
● Mirrors your data to a second site, with full access at near local speeds.
Deploy VPLEX Metro within a data center for:
● Additional virtual storage capabilities beyond those of a VPLEX Local.
● Higher availability.
Metro clusters can be placed up to 100 km apart, allowing them to be located at opposite ends of an equipment room, on different floors, or in different fire suppression zones; any of which might make the difference between riding through a local fault or fire without an outage and suffering one.
Deploy VPLEX Metro between data centers for:
● Mobility: Redistribute application workloads between the two data centers.
● Availability: Keep applications running in the presence of data center failures.
● Distribution: One data center lacks space, power, or cooling.
Combine VPLEX Metro virtual storage and virtual servers to:
● Transparently move virtual machines and storage across synchronous distances.
● Improve utilization and availability across heterogeneous arrays and multiple sites.
Distance between clusters is limited by physical distance and by host and application requirements. VPLEX Metro clusters contain additional I/O modules to enable inter-cluster WAN communication over IP or Fibre Channel.
Configuration highlights
A VPLEX cluster primarily consists of:
● One, two, or four VPLEX engines.
Each engine contains two directors.
In a VPLEX cluster, all engines must be of the same platform: either VS2 or VS6.
Dual-engine or quad-engine clusters contain:
○ One pair of Fibre Channel switches on VS2 hardware, or dual InfiniBand switches on VS6 hardware, for communication between the directors.
○ Two Uninterruptible Power Supplies (UPS) for battery backup of the switches and the management server on VS2 hardware, or of the switches on VS6 hardware.
● A management server that acts as the management interface to other VPLEX components in the cluster. The management
servers in VS2 and the VS6 hardware are as follows:
○ VS2 hardware: One management server in a cluster.
○ VS6 hardware: Two management servers, which are called Management Module Control Stations (MMCS-A and MMCS-
B), in the first engine. All the remaining engines will have Akula management modules for the management connectivity.
The management server has a public Ethernet port, which provides cluster management services when connected to your
network.
Figure 3. Configuration highlights
VPLEX conforms to established World Wide Name (WWN) guidelines that can be used for zoning. It also supports Dell EMC storage and arrays from other storage vendors, such as HDS, HP, and IBM. VPLEX provides storage federation for operating systems and applications that support clustered file systems, including both physical and virtual server environments with VMware ESX and Microsoft Hyper-V. VPLEX supports network fabrics from Brocade and Cisco.
See the Dell EMC Simple Support Matrix, Dell EMC VPLEX and GeoSynchrony, available at http://elabnavigator.EMC.com
under the Simple Support Matrix tab.
Management interfaces
In a VPLEX Metro configuration, both clusters can be managed from either management server.
Inside VPLEX clusters, management traffic traverses a TCP/IP based private management network.
In a VPLEX Metro configuration, management traffic traverses a VPN tunnel between the management servers on both
clusters.
Web-based GUI
The web-based graphical user interface (GUI) of VPLEX provides an easy-to-use point-and-click management interface.
The following figures show the screen to claim storage:
Figure 4. Claim storage using the GUI (for Flash)
Figure 5. Claim storage using the GUI (for HTML5)
The GUI supports most of the VPLEX operations, and includes Dell EMC Unisphere for VPLEX Online help to assist new users in
learning the interface.
VPLEX operations that are not available in the GUI are supported by the Command Line Interface (CLI), which supports full functionality.
NOTE: Starting with VPLEX GeoSynchrony 6.2, the Flash GUI is deprecated.
VPLEX CLI
The VPLEX CLI supports all VPLEX operations.
The CLI is divided into command contexts:
● Global commands are accessible from all contexts.
● Other commands are arranged in a hierarchical context tree, and can be executed only from the appropriate location in the
context tree.
The following example shows a CLI session that performs the same tasks as shown in Claim storage using the GUI (for Flash).
Example 1. Claim storage using the CLI
In the following example, the claimingwizard command finds unclaimed storage volumes, claims them as thin storage, and assigns names from a CLARiiON hints file:

VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claimingwizard --file /home/service/clar.txt --thin-rebuild
Found unclaimed storage-volume VPD83T3:6006016091c50e004f57534d0c17e011 vendor DGC: claiming and naming clar_LUN82.
Found unclaimed storage-volume VPD83T3:6006016091c50e005157534d0c17e011 vendor DGC: claiming and naming clar_LUN84.
Claimed 2 storage-volumes in storage array car
Claimed 2 storage-volumes in total.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes>
The Dell EMC VPLEX CLI Guide provides a comprehensive list of VPLEX commands and detailed instructions on using those
commands.
2 VPLEX use cases
This chapter describes the general features, benefits, and the important use cases of VPLEX.
Topics:
• General use cases and benefits
• Mobility
• Availability
• Redundancy with RecoverPoint
• MetroPoint
Mobility
Use VPLEX to move data between data centers, relocate a data center, or consolidate data, without disrupting host application access to the data.
Figure 6. Moving data with VPLEX
The source and target arrays can be in the same data center (VPLEX Local) or in different data centers separated by up to 10 ms RTT (VPLEX Metro). The source and target arrays can be heterogeneous.
When you use VPLEX to move data, the data retains its original VPLEX volume identifier during and after the mobility operation. Because the volume identifier does not change, no application cutover is required. The application continues to use the same data, even though the data has been moved to a different storage array.
There are many types of data movement and many reasons to move data:
● Move data from a hot storage device.
● Move the data from one storage device to another without moving the application.
● Move operating system files from one storage device to another.
● Consolidate data or database instances.
● Move database instances.
● Move storage infrastructure from one physical location to another.
With VPLEX, you no longer need to spend significant time and resources preparing to move data and applications. You do not have to plan for application downtime or restart the applications as part of the data movement activity. Instead, a move can
or downtime. Considerations before moving the data include the business impact, type of data to be moved, site locations, total
amount of data, and schedules.
The data mobility feature of VPLEX is useful for disaster avoidance, planned upgrade, or physical movement of facilities.
Technology refresh
In typical IT environments, migrations to new storage arrays (technology refreshes) require that the data that is being used by
hosts be copied to a new volume on the new array. The host must then be configured again to access the new storage. This
process requires downtime for the host.
VPLEX makes it easier to replace heterogeneous storage arrays on the back-end. Migrations between heterogeneous arrays can
be complicated and may require additional software or functionality. Integrating heterogeneous arrays in a single environment is
difficult and requires a staff with a diverse skill set.
When VPLEX is inserted between the front-end and back-end redundant fabrics, VPLEX appears as the target to hosts and as
the initiator to storage.
The data resides on virtual volumes in VPLEX, and it can be copied nondisruptively from one array to another without any
downtime. It is not required to reconfigure the host. The physical data relocation is performed by VPLEX transparently, and the
virtual volumes retain the same identities and the same access points to the host.
In the following figure, the virtual disk is made up of the disks of Array A and Array B. The site administrator has determined that
Array A has become obsolete and should be replaced with a new array. Array C is the new storage array. Using Mobility Central,
the administrator:
● Adds Array C into the VPLEX cluster.
● Assigns a target extent from the new array to each extent from the old array.
● Instructs VPLEX to perform the migration.
VPLEX copies data from Array A to Array C while the host continues its access to the virtual volume without disruption.
After the copy of Array A to Array C is complete, Array A can be decommissioned:
Figure 7. VPLEX technology refresh
Because the virtual machine addresses its data to the abstracted virtual volume, its data continues to flow to the virtual volume without any changes to the address of the data store.
Although this example uses virtual machines, the same is true for traditional hosts. Using VPLEX, the administrator can move
data that is used by an application to a different storage array without the application or server being aware of the change.
This allows you to change the back-end storage arrays transparently, without interrupting I/O.
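The following Python sketch is illustrative only; it is not VPLEX code, and the class and object names (Extent, VirtualVolume, vv_database_01) are hypothetical. It models the behavior described above: the host keeps addressing the same virtual volume while existing data is copied to the new array, and writes made during the copy are mirrored to both legs, so the new leg is consistent when the migration is committed.

# Conceptual sketch (not VPLEX code): nondisruptive data migration.
class Extent:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = list(blocks)

class VirtualVolume:
    def __init__(self, name, extent):
        self.name = name          # identity presented to the host never changes
        self.legs = [extent]      # extent(s) currently backing the volume

    def write(self, lba, data):
        for leg in self.legs:     # mirror writes to every leg during migration
            leg.blocks[lba] = data

    def read(self, lba):
        return self.legs[0].blocks[lba]

def migrate(volume, new_extent):
    """Copy the old leg to the new leg while host I/O continues, then commit."""
    old = volume.legs[0]
    volume.legs.append(new_extent)            # start mirroring new writes
    for lba, data in enumerate(old.blocks):   # background copy of existing data
        new_extent.blocks[lba] = data
    volume.legs = [new_extent]                # commit: drop the old leg
    return old                                # old extent can be decommissioned

# Usage: the host-visible volume identity and data never change.
array_a = Extent("array-A-extent", ["a", "b", "c"])
array_c = Extent("array-C-extent", [None, None, None])
vol = VirtualVolume("vv_database_01", array_a)
vol.write(1, "B")                 # host I/O before and during the migration
migrate(vol, array_c)
assert vol.read(1) == "B"         # same identity, same data, new array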
Availability
VPLEX features allow the highest possible resiliency in the event of an outage. The following figure shows a VPLEX Metro
configuration where storage has become unavailable at one of the cluster sites.
Figure 8. High availability infrastructure example
Maintain availability and non-stop access by mirroring across locations. Eliminate storage operations from failover.
VPLEX redundancy provides reduced Recovery Time Objective (RTO) and Recovery Point Objective (RPO). Because VPLEX
GeoSynchrony AccessAnywhere mirrors all data, applications continue without disruption using the back-end storage at the
unaffected site.
With the Federated AccessAnywhere feature of VPLEX, the data remains consistent, online, and always available. VPLEX does
not need to ship the entire file back and forth like other solutions. It only sends the changed updates as they are made, greatly
reducing bandwidth costs and offering significant savings over other solutions.
For more information about high availability with VPLEX, see Chapter 4, Integrity and resiliency.
Redundancy with RecoverPoint
NOTE: RecoverPoint integration is offered for VPLEX Local and VPLEX Metro configurations.
The VPLEX splitter works with a RecoverPoint Appliance (RPA) to orchestrate the replication of data either remotely or locally,
or both.
The VPLEX splitter enables VPLEX volumes in a VPLEX Local or VPLEX Metro to mirror I/O to a RecoverPoint Appliance.
RecoverPoint/VPLEX configurations
RecoverPoint can be configured on VPLEX Local or Metro systems as follows:
● VPLEX Local and local protection
● VPLEX Local and local/remote protection
● VPLEX Metro and RecoverPoint local at one site
● VPLEX Metro and RecoverPoint with both local and remote replications
In VPLEX Local systems, RecoverPoint can replicate local volumes.
In VPLEX Metro systems, RecoverPoint can replicate local volumes and distributed RAID 1 volumes.
Virtual volumes can be replicated locally, remotely, or both.
Distances between production sources and replication volumes vary based on the recovery objectives, inter-site bandwidth,
latency, and other limitations outlined in the Dell EMC Simple Support Matrix (ESSM) for RecoverPoint.
Figure 10. Replication with VPLEX Local and CLARiiON
In the configuration depicted below, host writes to the distributed virtual volumes are written to both legs of the distributed RAID 1 volume. Additionally, a copy of the I/O is sent to the RPA, which then distributes it to the replica on the CLARiiON array at a remote disaster recovery site:
Figure 11. Replication with VPLEX Metro and CLARiiON
When an outage occurs in VPLEX Local or VPLEX Metro configurations, the virtual machines can be restarted at the replication
site with automatic synchronization to the VPLEX configuration when the outage is over.
MetroPoint
VPLEX GeoSynchrony configured with RecoverPoint in a VPLEX Metro provides the MetroPoint topology. This MetroPoint
topology provides a 3-site or 4-site solution for continuous availability, operational and disaster recovery, and continuous data
protection. MetroPoint also supports a 2-site topology with the ability to expand to a third remote site in the future.
The MetroPoint topology provides full RecoverPoint protection of both sides of a VPLEX distributed volume across both sides
of a VPLEX Metro configuration, maintaining replication and protection at a consistency group level, even when a link from one
side of the VPLEX Metro to the replication site is down.
In MetroPoint, VPLEX Metro and RecoverPoint replication are combined in a fully redundant manner to provide data protection
at both sides of the VPLEX Metro and at the replication site. With this solution, data is replicated only once from the active
source site to the replication site. The standby source site is ready to pick up and continue replication even under a complete
failure of the active source site.
MetroPoint combines the high availability of the VPLEX Metro with redundant replication and data protection of RecoverPoint.
MetroPoint protection allows for one production copy of a distributed volume on each Metro site, one local copy at each Metro
site, and one remote copy for each MetroPoint consistency group. Each production copy can have multiple distributed volumes.
MetroPoint offers the following benefits:
● Full high availability for data access and protection.
● Continuous data protection and disaster recovery.
● Operational recovery at all three sites for redundancy.
● Efficient data transfer between VPLEX Metro sites and to the remote site.
● Load balancing across replication links and bi-directional replication.
● Out of region data protection with asynchronous replication.
● Any-Point-in-Time operational recovery in the remote site and optionally in each of the local sites. RecoverPoint provides
continuous data protection with any-point-in-time recovery.
● Full support for all operating systems and clusters normally supported with VPLEX Metro.
● Support for a large variety of Dell EMC and third-party storage arrays.
The Dell EMC VPLEX GeoSynchrony Administration Guide provides additional information on the MetroPoint topologies; the installation, configuration, and upgrade of MetroPoint; and the failover scenarios.
3 Features in VPLEX
ALUA
Asymmetric Logical Unit Access (ALUA) routes I/O for a LUN that is directed to a non-active or failed storage processor to the active storage processor, without changing the ownership of the LUN.
Each LUN has two types of paths:
● Active/optimized paths are direct paths to the storage processor that owns the LUN.
Active/optimized paths are usually the optimal path and provide higher bandwidth than active/non-optimized paths.
● Active/non-optimized paths are indirect paths through the storage processor that does not own the LUN; the I/O is forwarded to the owning storage processor over an interconnect bus.
I/Os that traverse through the active/non-optimized paths must be transferred to the storage processor that owns the
LUN. This transfer increases latency and has an impact on the array.
VPLEX detects the different path types and performs round-robin load balancing across the active/optimized paths (a conceptual sketch follows the list of ALUA flavors below).
VPLEX supports all three flavors of ALUA:
● Explicit ALUA - The storage processor changes the state of paths in response to commands (for example, the Set Target
Port Groups command) from the host (the VPLEX backend).
The storage processor must be explicitly instructed to change a path’s state.
If the active/optimized path fails, VPLEX issues the instruction to transition the active/non-optimized path to active/
optimized.
There is no need to failover the LUN.
● Implicit ALUA - The storage processor can change the state of a path without any command from the host (the VPLEX
back end).
If the controller that owns the LUN fails, the array changes the state of the active/non-optimized path to active/optimized
and fails over the LUN from the failed controller.
On the next I/O, after changing the path’s state, the storage processor returns a Unit Attention “Asymmetric Access State
Changed” to the host (the VPLEX backend).
VPLEX then re-discovers all the paths to get the updated access states.
● Implicit/explicit ALUA - Either the host or the array can initiate the access state change.
Storage processors support implicit only, explicit only, or both.
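The following Python sketch is illustrative only; the Path and AluaPathSelector names are hypothetical and are not part of VPLEX or any multipathing product. It models the behavior described in this section: round-robin load balancing across active/optimized paths, falling back to active/non-optimized paths only when no optimized path survives (at which point the array, implicitly or after an explicit request, transitions a surviving path to active/optimized).

# Conceptual sketch (not VPLEX code): ALUA-aware path selection.
class Path:
    def __init__(self, name, state, alive=True):
        self.name = name
        self.state = state      # "active/optimized" or "active/non-optimized"
        self.alive = alive

class AluaPathSelector:
    def __init__(self, paths):
        self.paths = paths
        self._next = 0          # round-robin cursor

    def _candidates(self):
        optimized = [p for p in self.paths
                     if p.alive and p.state == "active/optimized"]
        if optimized:
            return optimized
        # No optimized path left: use any surviving path until the array
        # promotes one of them to active/optimized.
        return [p for p in self.paths if p.alive]

    def next_path(self):
        candidates = self._candidates()
        if not candidates:
            raise RuntimeError("no usable path to the LUN")
        path = candidates[self._next % len(candidates)]
        self._next += 1
        return path

# Usage: round-robin over optimized paths, then fall back on failure.
paths = [Path("A0", "active/optimized"), Path("A1", "active/optimized"),
         Path("B0", "active/non-optimized")]
selector = AluaPathSelector(paths)
print([selector.next_path().name for _ in range(4)])   # A0, A1, A0, A1
paths[0].alive = paths[1].alive = False                # optimized paths fail
print(selector.next_path().name)                       # falls back to B0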
Other storage
Other storage refers to storage from arrays that are not integrated with VPLEX through AMPs. Because VPLEX cannot access
functionality on the array, you cannot use array functionality such as storage pools. Therefore, you can only provision from
storage volumes discovered on the array. There are two ways to provision from storage volumes: EZ-Provisioning and advanced
provisioning.
NOTE: The Dell EMC Simplified Support Matrix for VPLEX provides more information on the supported storage volumes.
● Discovery of back-end storage volumes capable of thin provisioning - During back-end storage volume discovery, VPLEX gathers all thin provisioning related storage volume properties. VPLEX also performs a consistency check on all the properties related to thin provisioning.
● Reporting thin provisioning enabled VPLEX virtual volumes to hosts - VPLEX shares the details of the thin provisioning-
enabled virtual volumes with the hosts.
● Reclaiming the unused storage blocks - Through a command, VPLEX removes the mapping between a deleted virtual
machine and its storage volumes and reclaims the storage blocks corresponding to the VMFS blocks used by that virtual
machine.
● Handling storage exhaustion - The exhaustion of storage blocks on non-mirrored storage volumes is reported to the host as a space allocation failure. This error notification is posted to the host, and VMware hosts stop the impacted virtual machine.
To prevent potential mapping of all the blocks in storage volumes that are thin capable, VPLEX uses thin rebuilds. The thin-rebuild property can be set or unset for any claimed storage volume on which VPLEX builds virtual volumes, and it controls how VPLEX performs mirror rebuilds.
The unmap feature reclaims the unused VMFS blocks by removing the mapping between the logical blocks and the physical
blocks. This essentially removes the link between a logical block and a physical block that has unknown or unused resources.
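The following Python sketch is illustrative only and is not VPLEX code; the ThinVolume class and its methods are hypothetical. It models the thin-provisioning behavior described above: physical blocks are allocated from a shared pool only on first write, exhaustion of the pool surfaces as a space allocation failure, and unmap removes logical-to-physical mappings so the blocks return to the free pool.

# Conceptual sketch (not VPLEX code): thin provisioning and UNMAP.
class SpaceAllocationFailure(Exception):
    """Raised when the thin pool has no free physical blocks left."""

class ThinVolume:
    def __init__(self, logical_blocks, pool):
        self.logical_blocks = logical_blocks
        self.pool = pool                 # shared set of free physical block ids
        self.mapping = {}                # logical block -> physical block

    def write(self, lba):
        if lba not in self.mapping:      # allocate on first write only
            if not self.pool:
                raise SpaceAllocationFailure(f"no space for LBA {lba}")
            self.mapping[lba] = self.pool.pop()

    def unmap(self, lbas):
        """Reclaim blocks that the file system (for example, VMFS) no longer uses."""
        for lba in lbas:
            physical = self.mapping.pop(lba, None)
            if physical is not None:
                self.pool.add(physical)  # block is free for other volumes again

# Usage: deleting a virtual machine frees its blocks for the rest of the pool.
pool = {0, 1, 2, 3}
vol = ThinVolume(logical_blocks=1024, pool=pool)
vol.write(10); vol.write(11)
vol.unmap([10, 11])                      # mappings removed, pool back to 4 blocks
assert len(pool) == 4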
Performance monitoring
VPLEX performance monitoring provides a customized view into the performance of your system. You decide which aspects of
the system's performance to view and compare.
You can view and assess the VPLEX performance using these methods:
● Unisphere Performance Monitoring Dashboard, which shows real-time performance monitoring data for up to one hour of
history.
● Performance statistics collection using the CLI and the API. These methods let you collect and view the statistics, and export them to an external application for analysis (see the sketch after this list).
● Monitoring with Simple Network Management Protocol (SNMP).
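The following Python sketch is illustrative only. It assumes that statistics have already been exported to a CSV file for external analysis; the file name (perf-export.csv) and the column headers (director, fe-read-latency-us) are hypothetical examples, not the exact format produced by VPLEX.

# Illustrative sketch: post-processing an exported statistics file.
import csv
from collections import defaultdict
from statistics import mean

def average_by_director(csv_path, metric="fe-read-latency-us"):
    """Return the average of one metric per director from an exported CSV."""
    samples = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            samples[row["director"]].append(float(row[metric]))
    return {director: mean(values) for director, values in samples.items()}

# Example: print(average_by_director("perf-export.csv"))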
Figure 14. Unisphere Performance Monitoring Dashboard (for HTML5)
You decide which aspects of the system performance you want to view and compare:
Figure 15. Unisphere Performance Monitoring Dashboard - select information to view (for Flash)
Figure 16. Unisphere Performance Monitoring Dashboard - select information to view (for HTML5)
Performance information is displayed as a set of charts. For example, the following figure shows front-end throughput for a
selected director (for Flash) and all directors (for HTML5):
Figure 17. Unisphere Performance Monitoring Dashboard - sample chart (for Flash)
Figure 18. Unisphere Performance Monitoring Dashboard - sample chart (for GUI)
For additional information about the statistics available through the Performance Monitoring Dashboard, see the Dell EMC
Unisphere for VPLEX online help available in the VPLEX GUI.
● Read Latency (usec)
● Write Latency (usec)
NOTE: Starting with VPLEX GeoSynchrony 6.2, the Flash GUI is deprecated.
4 Integrity and resiliency
This chapter describes how the high availability and the redundancy features of VPLEX provide robust system integrity and
resiliency.
Topics:
• About VPLEX resilience and integrity
• Site distribution
• Cluster
• Metadata volumes
• Backup metadata volumes
• Logging volumes
• Global cache
• High availability and VPLEX hardware
• High Availability with VPLEX Witness
• VPLEX Metro Hardware
Site distribution
When two VPLEX clusters are connected together with VPLEX Metro, VPLEX gives you shared data access between sites.
VPLEX can withstand a component failure, a site failure, or loss of communication between sites and still keep the application
and data online and available.
VPLEX Metro ensures that if a data center goes down, or even if the link to that data center goes down, the other site can
continue processing the host I/O.
In the following figure, despite a site failure at Data Center B, I/O continues without disruption in Data Center A.
Figure 19. Path redundancy: different sites
Cluster
VPLEX is a true cluster architecture. That is, all components are always available, and I/O that enters the cluster from anywhere can be serviced by any node within the cluster, while cache coherency is maintained for all reads and writes.
As you add more engines to the cluster, you get the added benefits of more cache, increased processing power, and more
performance.
A VPLEX cluster provides N–1 fault tolerance, which means that any component failure can be sustained, and the cluster will
continue to operate as long as one director survives.
A VPLEX cluster (running either on VS2 hardware or VS6 hardware) consists of redundant hardware components.
A single engine supports two directors. If one director in an engine fails, the second director in the engine continues to service
I/O. Similarly, if a VPLEX cluster contains multiple engines, VPLEX can handle more than one failure without disrupting any
services as long as quorum (defined by set rules) is not lost.
All hardware resources (CPU cycles, I/O ports, and cache memory) are pooled.
A two-cluster configuration (Metro) offers true high availability. Operations continue and data remains online even if an entire
site fails. It also provides a high availability solution with zero recovery point objective (RPO).
Quorum
Quorum refers to the minimum number of directors required for the cluster to service and maintain operations.
There are different quorum rules for a cluster to become operational and start servicing I/O when it boots up ("gaining quorum"), for an operational cluster that detects director failures to continue servicing operations and I/O after the failure is handled ("maintaining quorum"), and for a cluster to stop servicing operations and I/O ("losing quorum"). These rules are described below, and a conceptual sketch follows the list:
● Gaining quorum - A non-operational VPLEX cluster gains quorum and becomes operational when more than half of the
configured directors restart and come in contact with each other. In a single engine cluster, it refers to all the directors.
● Maintaining quorum - An operational VPLEX cluster seeing failures will continue operating in the following scenarios:
○ Director failures
■ If less than half of the operational directors with quorum fail.
■ If half of the operational directors with quorum fail, then the remaining directors will check the operational status of
the failed directors over the management network and remain alive.
After recovering from this failure, a cluster can tolerate further similar director failures until only one director is
remaining. In a single engine cluster, a maximum of one director failure can be tolerated.
○ Intra-cluster communication failure
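The following Python sketch is illustrative only and is not VPLEX code. It restates the gaining and maintaining rules above for a hypothetical cluster with a given number of configured directors; the function and parameter names are invented for the example.

# Conceptual sketch (not VPLEX code) of the quorum rules described above.
def gains_quorum(configured, running):
    """A non-operational cluster becomes operational when more than half
    of the configured directors restart and contact each other."""
    return running > configured / 2

def maintains_quorum(operational_before, failed, failed_confirmed_dead):
    """An operational cluster decides whether to keep servicing I/O."""
    if failed < operational_before / 2:
        return True
    if failed == operational_before / 2:
        # The remaining directors check the failed directors' status over the
        # management network and remain alive only if the failures are confirmed.
        return failed_confirmed_dead
    return False            # more than half failed: quorum is lost

# Example with a quad-engine cluster (8 configured directors):
assert gains_quorum(configured=8, running=5)
assert maintains_quorum(operational_before=8, failed=3, failed_confirmed_dead=False)
assert maintains_quorum(operational_before=8, failed=4, failed_confirmed_dead=True)
assert not maintains_quorum(operational_before=8, failed=5, failed_confirmed_dead=True)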
Metadata volumes
Meta-volumes store VPLEX metadata, including virtual-to-physical mappings, data about devices, virtual volumes, and system
configuration settings.
Metadata is stored in cache and backed up on specially designated external volumes called meta-volumes.
After the meta-volume is configured, updates to the metadata are written to both the cache and the meta-volume when the
VPLEX configuration is modified.
Each VPLEX cluster maintains its own metadata, including:
● The local configuration for the cluster.
● Distributed configuration information shared between clusters.
At system startup, VPLEX reads the metadata and loads the configuration information onto each director.
When you make changes to the system configuration, VPLEX writes these changes to the metadata volume.
If VPLEX loses access to the metadata volume, the VPLEX directors continue uninterrupted, using the in-memory copy of the
configuration. VPLEX blocks changes to the system until access is restored or the automatic backup meta-volume is activated.
Meta-volumes experience high I/O only during system startup and upgrade.
I/O activity during normal operations is minimal.
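The following Python sketch is illustrative only and is not VPLEX code; the ClusterMetadata class and its attribute names are hypothetical. It models the behavior described above: configuration changes are written to both the in-memory copy and the meta-volume, and if access to the meta-volume is lost, reads continue from memory while further configuration changes are blocked.

# Conceptual sketch (not VPLEX code): metadata cache backed by a meta-volume.
class MetaVolumeUnavailable(Exception):
    """Configuration changes are blocked until meta-volume access is restored."""

class ClusterMetadata:
    def __init__(self):
        self.in_memory = {}              # cached configuration used by the directors
        self.meta_volume = {}            # persistent copy on the meta-volume
        self.meta_volume_accessible = True

    def read(self, key):
        # Lookups continue from the in-memory copy even if the meta-volume
        # is temporarily unreachable.
        return self.in_memory.get(key)

    def change_configuration(self, key, value):
        if not self.meta_volume_accessible:
            raise MetaVolumeUnavailable("restore access or activate the backup meta-volume")
        self.in_memory[key] = value      # update the cache ...
        self.meta_volume[key] = value    # ... and persist the change

# Usage
md = ClusterMetadata()
md.change_configuration("virtual-volume:vv_01", {"size_gb": 100})
md.meta_volume_accessible = False        # access to the meta-volume is lost
assert md.read("virtual-volume:vv_01")   # reads continue from memory
try:
    md.change_configuration("virtual-volume:vv_02", {"size_gb": 50})
except MetaVolumeUnavailable:
    pass                                 # changes are blocked, as described above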
Logging volumes
Logging volumes keep track of blocks written:
● During an inter-cluster link outage.
● When one leg of a DR1 becomes unreachable and then recovers.
After the inter-cluster link or leg is restored, the VPLEX system uses the information in logging volumes to synchronize the
mirrors by sending only changed blocks across the link.
Logging volumes also track changes during loss of a volume when that volume is one mirror in a distributed device.
CAUTION: If no logging volume is accessible, then the entire leg is marked as out-of-date. A full re-
synchronization is required once the leg is reattached.
The logging volumes on the continuing cluster experience high I/O during:
● Network outages or cluster failures
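The following Python sketch is illustrative only and is not VPLEX code; the DistributedMirror class is hypothetical. It models a logging volume as a changed-block log: while one leg of a distributed mirror is unreachable, writes mark the affected blocks, and when the leg is reattached only the marked blocks are copied, rather than the entire leg.

# Conceptual sketch (not VPLEX code): changed-block tracking and incremental resync.
class DistributedMirror:
    def __init__(self, blocks):
        self.legs = {"cluster-1": [0] * blocks, "cluster-2": [0] * blocks}
        self.reachable = {"cluster-1": True, "cluster-2": True}
        self.log = set()                    # changed-block log (the logging volume)

    def write(self, lba, data):
        for leg, reachable in self.reachable.items():
            if reachable:
                self.legs[leg][lba] = data
            else:
                self.log.add(lba)           # remember what the absent leg missed

    def reattach(self, leg, source):
        """Incremental resync: copy only the logged blocks, then clear the log."""
        for lba in sorted(self.log):
            self.legs[leg][lba] = self.legs[source][lba]
        self.log.clear()
        self.reachable[leg] = True

# Usage: a short link outage requires copying only two blocks, not the whole leg.
m = DistributedMirror(blocks=1000)
m.reachable["cluster-2"] = False            # inter-cluster link outage
m.write(7, "x"); m.write(42, "y")
m.reattach("cluster-2", source="cluster-1")
assert m.legs["cluster-2"][42] == "y" and not m.log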
Global cache
The memory systems of the individual directors ensure the durability of user data and critical system data. VPLEX uses write-through cache mode: user data is written through to the back-end array, and an acknowledgment for the written data must be received from the array before the write is acknowledged back to the host.
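The following Python sketch is illustrative only and is not VPLEX code; the function and class names are hypothetical. It models the write-through behavior described above: the host write is acknowledged only after every backing array has acknowledged the data.

# Conceptual sketch (not VPLEX code) of write-through caching.
class BackendArray:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data
        return True                          # array acknowledgment

def write_through(arrays, cache, lba, data):
    """Write to cache and through to all arrays before acknowledging the host."""
    cache[lba] = data
    acks = [array.write(lba, data) for array in arrays]
    return all(acks)                         # host acknowledgment

# Usage: with a mirrored virtual volume, both legs must acknowledge the write.
cache = {}
legs = [BackendArray("array-A"), BackendArray("array-B")]
assert write_through(legs, cache, lba=5, data="payload")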
VPLEX engines
A VPLEX engine contains two directors, I/O modules, fans, and redundant power supplies. A VPLEX cluster can have one
(single), two (dual), or four (quad) engines. A cluster that has multiple engines uses redundant network switches for the
intra-cluster communication. Each switch is backed by a dedicated uninterruptible power supply (UPS). The directors provide
redundant front-end and back-end I/O connections. A redundant standby power supply provides battery backup to each engine
in the event of power outages.
NOTE: In a cluster that runs on VS6 hardware, the first engine contains the Management Module Control Stations (MMCS-A and MMCS-B), which are the management entities in VS6 hardware.
In a dual-engine or quad-engine configuration, if one engine goes down, another engine completes the host I/O processing as
shown in the following figure.
Figure 20. Path redundancy: different engines
In a VPLEX Metro configuration, multi-pathing software plus volume presentation on different engines yields continuous data
availability in the presence of engine failures.
Directors
A VPLEX director is the component that processes the I/O requests from the hosts in a VPLEX environment. It interacts with the back-end storage arrays to service the I/O.
A director has two I/O modules for servicing I/O: one for connectivity with the storage arrays on the back end, and another for connecting to the hosts on the front end. The management module in the director is used for management
connectivity to the directors and for intra-cluster communication. The local communication module is dedicated to intra-cluster
communication.
The front-end ports on all directors can provide access to any virtual volume in the cluster. Include multiple front-end ports in
each storage view to protect against port failures. When a director port fails, the host multi-pathing software seamlessly fails
over to another path through a different port, as shown in the following figure:
Figure 21. Path redundancy: different ports
Combine multi-pathing software with redundant volume presentation for continuous data availability in the presence of port failures.
Back-end ports, local COM ports, and WAN COM ports provide similar redundancy for additional resilience.
Each VPLEX engine includes redundant directors. Each director can service I/O for any other director in the cluster due to the
redundant nature of the global directory and cache coherency.
If one director in the engine fails, another director continues to service I/O from the host.
In the following figure, Director 1-1-A has failed, but Director 1-1-B services the host I/O that was previously being serviced by
Director 1-1-A.
Figure 22. Path redundancy: different directors
Management server
Each VPLEX cluster has one management server. You can manage both clusters in a VPLEX Metro configuration from a single
management server. The management server acts as a management interface to the other VPLEX components in the cluster.
Redundant internal network IP interfaces connect the management server to the public network. Internally, the management
server is on a dedicated management IP network that provides accessibility to all major components in the cluster.
The larger role of the management server includes:
● Coordinating data collection, VPLEX software upgrades, configuration interfaces, diagnostics, event notifications, and some
director-to-director communication.
● Forwarding VPLEX Witness traffic between directors in the local cluster and the remote VPLEX Witness server.
The management servers in VS2 and the VS6 hardware are as follows:
● VS2 hardware: One management server in a cluster
● VS6 hardware: Two management servers, MMCS-A and MMCS-B in the first engine. All the remaining engines will have
Akula management modules for the management connectivity.
Power supplies
The VPLEX cluster is connected to your AC power source. Each cluster is equipped with ample uninterruptible power supplies
that enable the cluster to ride out power disruptions. The power supply units differ between the VS2 and the VS6 hardware.
The power supply architecture of a VPLEX cluster that runs on VS2 hardware contains:
● Power distribution panels (PDPs) that connect to the AC power source of the cluster and transfer power to the VPLEX
components through power distribution units (PDUs). This provides a centralized power interface and distribution control for
the power input lines. The PDPs contain manual on/off power switches for their power receptacles.
● PDUs that connect to the AC power source of the cluster to supply power to the VPLEX components through SPS. The PDUs contain circuit breakers, which protect the hardware components from power fluctuations.
● Standby power supply (SPS) units that have sufficient capacity to ride through transient site power failures. A single
standby power supply provides enough power for the attached engine to ride through two back-to-back 5-minute losses
of power. One SPS assembly (two SPS modules) provides backup power to each engine in the event of an AC power
interruption. Each SPS module maintains power for two five-minute periods of AC loss while the engine shuts down.
● Two uninterruptible power supplies, UPS-A and UPS-B for the dual and quad engine clusters. One UPS provides battery
backup for the first network switch and the management server, and a second UPS provides battery backup for the other
switch. Each UPS module maintains power for two five-minute periods of AC loss while the engine shuts down.
The power supply architecture of a VPLEX cluster that runs on VS6 hardware contains:
● PDUs that connect to the AC power source of the cluster to supply power to the VPLEX components through Power Supply
Units (PSU). The PDUs contain circuit breakers, which protect the hardware components from power fluctuations.
● Backup Battery Units (BBU) that have sufficient capacity to ride through transient site power failures. One BBU assembly
(two BBU modules) provides backup power to each engine in the event of an AC power interruption.
● Two uninterruptible power supplies, UPS-A and UPS-B for the dual and quad engine clusters. One UPS provides battery
backup for the first network switch, and a second UPS provides battery backup for the other switch. Each UPS module
maintains power for two five-minute periods of AC loss while the engine shuts down.
High Availability with VPLEX Witness
Figure 23. High level VPLEX Witness architecture
The VPLEX Witness server must be deployed in a failure domain separate from both of the VPLEX clusters. This deployment enables VPLEX Witness to distinguish between a site outage and an inter-cluster link outage, and to provide the correct guidance.
VPLEX Metro HA
VPLEX Metro High Availability (HA) configurations consist of a VPLEX Metro system deployed in conjunction with VPLEX
Witness. There are two types of Metro HA configurations:
● VPLEX Metro HA can be deployed in places where the clusters are separated by 5 ms latency RTT or less.
● VPLEX Metro HA combined with Cross Connect between the VPLEX clusters and hosts can be deployed where the clusters
are separated by 1 ms latency RTT or less.
Host failure
If hosts at one site fail, then VMware HA restarts the virtual machines on the surviving hosts. Since surviving hosts are
connected to the same datastore, VMware can restart the virtual machines on any of the surviving hosts.
Cluster failure
If a VPLEX cluster fails:
● VPLEX Witness guides the surviving cluster to continue.
● VMware re-routes I/O to the surviving cluster.
● No disruption to I/O.
Higher availability
Combine VPLEX Witness with VMware and cross cluster connection to create even higher availability.
GeoSynchrony
GeoSynchrony is the operating system that runs on the VPLEX directors. GeoSynchrony runs on both the VS2 and the VS6
hardware.
GeoSynchrony is:
● Designed for highly available, robust operation in geographically distributed environments
● Driven by real-time I/O operations
● Intelligent about locality of access
● Designed to provide the global directory that supports AccessAnywhere
Software upgrades
VPLEX is fully redundant for:
● Ports
● Paths
● Directors
● Engines
This redundancy allows GeoSynchrony on VPLEX Local and Metro to be upgraded without interrupting host access to storage; no service window or application disruption is required.
NOTE: VS6 uses fan filler modules only for the drive bay.
Cable management assembly:
● The cable management assembly is the routing tray for the SLiC cables.
● A single engine cluster has two cable management assemblies.
● In dual and quad engine clusters, the cable management assemblies between the engines carry cables from both engines.
Power supply units - The power supply units contain:
● PDUs that connect to the AC power source of the cluster to supply power to the VPLEX components through Power Supply Units (PSU).
● Backup Battery Units (BBU) that have sufficient capacity to ride through transient site power failures or to vault their cache when power is not restored.
● Two uninterruptible power supplies, UPS-A and UPS-B, for the IB switches in a dual or a quad engine cluster.
High availability and VPLEX hardware provides you with more information on the VPLEX components and how they support the
integrity and resiliency features of VPLEX.
Figure 25. VS2 Quad Engine cluster
High availability and VPLEX hardware provides you with more information about the VPLEX components and how they support
the integrity and resiliency features of VPLEX.