VPLEX Fundamentals 6.0 SRG

Welcome to VPLEX Fundamentals.

Copyright ©2016 EMC Corporation. All Rights Reserved. Published in the USA. EMC believes the information in this publication is accurate as of its
publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. The trademarks, logos, and
service marks (collectively "Trademarks") appearing in this publication are the property of EMC Corporation and other parties. Nothing contained in this
publication should be construed as granting any license or right to use any Trademark without the prior written permission of the party that owns the
Trademark.

EMC, EMC², the EMC logo, AccessAnywhere Access Logix, AdvantEdge, AlphaStor, AppSync ApplicationXtender, ArchiveXtender, Atmos, Authentica,
Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-Tech, Captiva, Catalog Solution, C-Clip,
Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker. CIO Connect, ClaimPack, ClaimsEditor, Claralert ,CLARiiON, ClientPak,
CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft,
Connectrix, Constellation Computing, CoprHD, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge , Data Protection Suite. Data Protection Advisor,
DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO, Document Sciences,
Documentum, DR Anywhere, DSSD, ECS, elnput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender , EMC Centera, EMC ControlCenter, EMC
LifeLine, EMCTV, Enginuity, EPFM. eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic
Visualization, Greenplum, HighRoad, HomeBase, Illuminator , InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix,
Isilon, ISIS,Kazeon, EMC LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor , Metro, MetroPoint,
MirrorView, Mozy, Multi-Band Deduplication,Navisphere, Netstorage, NetWitness, NetWorker, EMC OnCourse, OnRack, OpenScale, Petrocloud, PixTools,
Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC
RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO
Smarts, Silver Trail, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI,
SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data
Consistency, Vblock, VCE. Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise
Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso,
Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise Storage.

Revision Date: 08/2016

Revision Number: MR-1WP-VPLEXFD.6.0.0.1.0

Copyright 2016 EMC Corporation. All rights reserved. VPLEX Fundamentals 1


This course covers an overview of the VPLEX architecture, features, and functionality.



This module focuses on explaining the basics of the VPLEX solution and how it can benefit
IT infrastructures. This module also covers the use cases of the VPLEX solution.



This lesson covers the overview of VPLEX and its benefits.



VPLEX Overview

Introduction

EMC VPLEX is a virtual storage technology that federates data located on multiple heterogeneous storage systems. Resources in multiple data centers can be pooled together and accessed from anywhere.
The VPLEX family addresses three primary IT needs:
• Mobility
• Availability
• Collaboration

Click Next to proceed.


Mobility

The ability to non-disruptively move applications and data across different storage installations, whether within the same data center, across a campus, or within a geographical region.
Availability

The ability to create high-availability storage infrastructure across the same varied geographies with unmatched resiliency.
Collaboration

The ability to provide efficient real-time data collaboration over distance for
big data applications, such as video, geographic/oceanographic research, and
others.
VPLEX Benefits

Introduction

VPLEX offers many unique innovations and advantages, including data mobility, availability, and data collaboration features.

Click each term to learn more.


1. Data Mobility

VPLEX technology enables new models of application and data mobility, leveraging distributed/federated virtual storage. For example, VPLEX is specifically optimized for virtual server platforms such as VMware, Hyper-V, Oracle Virtual Machine, and AIX VIOS. It can streamline, and even accelerate, transparent workload relocation over distance, including the movement of virtual machines.
2. Availability

With its unique, highly available, scale-out clustered architecture, VPLEX can be configured with one, two, or four engines - and engines
can be added to a VPLEX cluster without disruption. All virtual
volumes presented by VPLEX are always accessible from every engine
in a VPLEX cluster. Similarly, all physical storage connected to VPLEX
is accessible from every engine in the VPLEX cluster. Combined, this
scale-out architecture uniquely ensures maximum availability, fault
tolerance, and scalable performance.
3. Data Collaboration

Advanced data collaboration, through AccessAnywhere, provides cache-consistent active-active access to data across two VPLEX clusters over synchronous distances with VPLEX Metro.
This lesson covers VPLEX family use cases and VPLEX with RecoverPoint.



VPLEX enables additional use cases under each category described earlier — mobility,
availability, and collaboration.
• For mobility, VPLEX enables disaster avoidance, data center migration, and workload
rebalancing.
• For availability, VPLEX allows for a high availability infrastructure as well as for
eliminating failover from storage operations.
• For collaboration, VPLEX enables instant and simultaneous data access over distance
and streamlined workflows.



There is no application impact during migration between arrays within a data center when
using VPLEX. VPLEX also features online application migration between data centers and
non-disruptive migration of the entire virtual stack between data centers.

The benefits are:


• Non-disruptive migration of the entire virtual stack between data centers
• Flexibility in locating resources
• Simplification of migration process
• Uniform migration process between heterogeneous arrays

The use-cases are:


• Disaster avoidance
• Data center migrations
• Workload rebalancing
• Lease rollover/tech refresh



Here is an example of the migration process using VPLEX. VPLEX is installed during a scheduled maintenance window. At this point, the appropriate storage in the environment is "virtualized". Virtual volumes are assigned to the host, taking into account application and system high-availability requirements. This prepares the environment for the arrival of a new array.

When the new array arrives and is installed, it is time to virtualize the new storage; this is where the actual migration takes place. To do this, VPLEX copies data in the background; note that no external tools are required to run on the hosts or the array. Once the data is copied, VPLEX non-disruptively links the servers to the new array, so no outage takes place during this process. The operation is also fully reversible: VPLEX can simply re-link the servers to the old array if performance or other factors dictate.
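The flow just described, background copy followed by a reversible cutover, can be sketched conceptually. This is an illustrative model only; the class and field names are invented and do not reflect VPLEX internals:

```python
# Conceptual tech-refresh flow: VPLEX mirrors writes to both arrays while
# background-copying existing data; the old leg can then be detached (or
# kept, making the operation reversible). No host outage occurs because
# the virtual volume's identity never changes.
class MigratingVolume:
    def __init__(self, old_array_leg):
        self.legs = [old_array_leg]      # the host always sees this volume

    def start_migration(self, new_array_leg):
        self.legs.append(new_array_leg)  # writes now mirror to both arrays
        new_array_leg["synced"] = False  # background copy in progress

    def finish_copy(self, new_array_leg):
        new_array_leg["synced"] = True   # background copy complete

    def commit(self):
        # Cut over: keep only the newest fully synchronized leg.
        self.legs = [leg for leg in self.legs if leg["synced"]][-1:]

old = {"name": "old_array_lun", "synced": True}
new = {"name": "new_array_lun"}
vol = MigratingVolume(old)
vol.start_migration(new)
vol.finish_copy(new)
vol.commit()
# The host performs I/O against the same virtual volume throughout.
```

Until `commit` runs, dropping the new leg restores the original layout, which mirrors the "fully reversible" property described above.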



Customers can improve availability by extending protection between arrays using VPLEX.
VPLEX provides protection across arrays between data centers. Virtualized servers can
move VMs, applications, and data between arrays or data centers dynamically:
• Server clusters can now operate over distance without alteration
• Infrastructure can be refreshed, both locally and over distance, without downtime due to
remediation, data movement, host reboot, and so on
• VMs restart automatically on the surviving site

The benefits are:


• Non-disruptive to host applications
• Uniform process to protect between heterogeneous arrays

The use-cases are:


• Array maintenance
• Protection against unplanned array failure



Combining RecoverPoint version 4.1 or later with VPLEX version 5.5 or later allows continuous data replication to a third, remote site when using a MetroPoint topology. This provides active-active, multi-site data protection that continues even through a complete failure of one of the Metro sites.



This module covered the basics of the VPLEX solution and how it can benefit IT infrastructure. This module also covered the use cases of the VPLEX solution.



This module focuses on the architecture and components of the VPLEX solution.



This lesson covers VPLEX cluster components, supported configurations and engines.



VPLEX operates in the data path between host and storage. VPLEX virtualizes the storage,
enabling data mobility operations that are non-disruptive to hosts. VPLEX is architected to
scale as the number of hosts and storage arrays scale.

For example, as more storage and hosts are added, we can add another VPLEX engine and
create a dual-engine VPLEX cluster to handle the increased workload. Each engine provides
more processing power, cache, and I/O connectivity to the VPLEX cluster.

VPLEX can be configured with one, two, or four engines in a cluster; the maximum cluster size is a quad-engine configuration.



All supported VPLEX configurations ship in a standard, single rack. The shipped rack contains the selected number of engines, one Management Server, redundant Standby Power Supplies (SPS) for each engine, and any other needed internal components. Each pair of SPS units provides DC power to its engine in case of a loss of AC power.

The dual and quad configurations include redundant internal FC switches for LCOM
connection between the directors. In addition, dual and quad configurations contain
redundant Uninterruptible Power Supplies (UPS) that service the FC switches and the
Management Server.

Engines are numbered 1-4 from the bottom to the top. Any spare space in the shipped rack
is to be preserved for potential engine upgrades in the future. Since the engine number
dictates its physical position in the rack, numbering will remain intact as engines are added
during a cluster upgrade.

Note: GeoSynchrony is pre-installed on the VPLEX hardware, and the system is pre-cabled and pre-tested.



The new VPLEX VS6 technology introduces the 3rd-generation VPLEX hardware. VS6 racks are more densely populated than VS2 racks, leaving space for additional IT equipment, and there are no external management modules. VS6 introduces InfiniBand intra-cluster communication and supports 16 Gb/s FC for FE (Front End), BE (Back End), and WAN COM (cluster-to-cluster) connectivity. The new flash-optimized platform is an ideal match for the performance demands and workload requirements of all-flash arrays.



Just like VS2, VPLEX VS6 clusters come in single, dual, and quad engine configurations. At
initial release of VS6 with GeoSynchrony 6.0 however, there is no procedure to upgrade the
number of engines in a cluster. Those procedures will become available with a later version
of GeoSynchrony.



VPLEX engines can be deployed as a single, dual, or quad engine cluster. The VPLEX advanced data caching algorithms detect sequential reads to disk and prefetch data from disk into cache to improve host read performance.
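The read-ahead behavior can be illustrated with a toy detector. This is conceptual only; the run length and prefetch depth are invented and do not reflect GeoSynchrony's actual caching parameters:

```python
# Toy model of sequential-read detection: if the last few reads form a
# contiguous run of block addresses, prefetch the next blocks into cache
# so subsequent host reads hit cache instead of going to disk.
class PrefetchingCache:
    def __init__(self, run_length=3, prefetch_depth=4):
        self.cache = set()            # blocks currently held in cache
        self.history = []             # recent block addresses read by the host
        self.run_length = run_length
        self.prefetch_depth = prefetch_depth

    def read(self, block):
        hit = block in self.cache
        self.cache.add(block)         # block is cached after servicing the read
        self.history = (self.history + [block])[-self.run_length:]
        # Sequential pattern: history is a run of consecutive addresses.
        if len(self.history) == self.run_length and all(
            b == self.history[0] + i for i, b in enumerate(self.history)
        ):
            for n in range(1, self.prefetch_depth + 1):
                self.cache.add(block + n)   # fetch ahead of the host
        return hit

cache = PrefetchingCache()
hits = [cache.read(b) for b in [10, 11, 12, 13, 14]]
# The first three sequential reads miss; once the run is detected,
# the later reads are served from cache.
```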

VPLEX engines are the brains of a VPLEX system. Each engine contains two directors, each providing front-end (FE) and back-end (BE) I/O connectivity. Engines also contain redundant power supplies, fans, I/O modules, and management modules.

The directors are the workhorse components of the system and are responsible for:
• Processing I/O requests from the hosts
• Serving and maintaining data in the distributed cache
• Providing the virtual-to-physical I/O translations, and
• Interacting with the storage arrays to service I/O



A VPLEX VS2 engine has 10 I/O module slots, but only 8 are populated with I/O modules. Each director has:
• A four-port 8 Gb/s Fibre Channel I/O module used for front-end SAN (host) connectivity
• A four-port 8 Gb/s Fibre Channel I/O module used for back-end SAN (storage array)
connectivity
Each module has 40 Gb/s effective PCI bandwidth to the CPUs of their corresponding
director
• An I/O module, called the WAN COM module, is used for inter-cluster communication.
Two variants of this module are offered, one four-port 8 Gb/s Fibre Channel module and
one two-port 10 Gb/s Ethernet module
• An I/O module provides two ports of 8 Gb/s Fibre Channel connectivity for intra-cluster
communication
• The last slot in each director is a filler panel.



A VPLEX VS6 engine is a 4U DPE (Disk Processor Enclosure) containing two dual-socket directors, four 1100 W hot-swappable power supplies (N+1 per side), and four hot-swappable Battery Backup Unit (BBU) modules, two (paired, not N+1) per director.



This lesson covers VPLEX terminology, configurations, and their comparisons. It also covers the VPLEX architecture, management IP infrastructure, cache coherence, path redundancy, and failure handling.



VPLEX Constructs

Introduction

Here are some of the common VPLEX constructs.


Click each tab to learn more.
Storage Volume

The underlying physical block storage volume that is presented from the
array to VPLEX. Storage volumes are also known as array LUNs. Backend
storage arrays are configured to present LUNs to VPLEX backend ports. Each
presented backend LUN maps to one VPLEX storage volume.
Extent

An extent is a contiguous section of a storage volume, or it may be the entire storage volume. A device is created from one or more extents.
Device

A storage object formed from one or more Extents or Devices. A Device has a
RAID level of 0, 1, or C. Devices can be used to create virtual volumes.
Virtual Volume

A VPLEX volume that is created and presented to the host. This volume has a
one-to-one relation with a device.
Distributed Device

A device with complete, synchronized copies (mirrors) of data in more than one cluster.
Storage View

A view for hosts that defines which hosts can access which virtual volumes
on which VPLEX ports. Once a storage view is properly configured as
described, the host should be able to detect and use virtual volumes after
initiating a bus scan on its HBAs. Every frontend path to a virtual volume is
an active path.
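The layering of these constructs can be sketched in a few lines of Python. This is a conceptual model only; every name, size, and field here is invented for illustration and is not a VPLEX API:

```python
# Conceptual model of the VPLEX provisioning stack:
# storage volume -> extent -> device -> virtual volume -> storage view.
from dataclasses import dataclass, field

@dataclass
class StorageVolume:              # backend array LUN presented to VPLEX
    name: str
    size_gb: int

@dataclass
class Extent:                     # contiguous slice of one storage volume
    volume: StorageVolume
    offset_gb: int
    size_gb: int

@dataclass
class Device:                     # RAID 0, 1, or C built from extents
    extents: list
    raid: str = "raid-0"

    @property
    def size_gb(self):
        return sum(e.size_gb for e in self.extents)

@dataclass
class VirtualVolume:              # one-to-one with its top-level device
    device: Device

@dataclass
class StorageView:                # host initiators + FE ports + volumes
    initiators: list
    fe_ports: list
    volumes: list = field(default_factory=list)

lun = StorageVolume("Array_LUN_0123", size_gb=100)
ext = Extent(lun, offset_gb=0, size_gb=100)
vvol = VirtualVolume(Device([ext]))
view = StorageView(["host1_hba0"], ["FC00", "FC01"], [vvol])
# Once in the view, the host can detect vvol on every listed FE port,
# and every such frontend path is an active path.
```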
VPLEX Terminology

Introduction

Here are the common VPLEX terms.


Click each term to learn more.
Storage Virtualization

A virtual layer between hosts and storage, abstracting the physical elements
into logical views within a single site.
Local Federation

Provides the transparent cooperation and mobility between storage elements to enable data to be shared, accessed, and relocated transparently within a single site.
Distributed Federation

Provides the transparent cooperation and mobility between storage elements to enable data to be shared, accessed, and relocated transparently between two locations at synchronous distances.
AccessAnywhere™

Cache coherency technology that enables a consistent view of data to be presented, shared, accessed, and/or relocated between federated VPLEX Metro clusters.
Meta-volume

VPLEX metadata includes virtual-to-physical mappings, data about devices, virtual volumes, and system configuration settings. Metadata is stored in cache and backed up on specially designated external volumes called meta-volumes.
VPLEX Engine

Consists of two high-availability directors, each with redundant power supplies that are backed up by battery to preserve data in case of a power outage.
VPLEX Local™

Provides transparent cooperation and mobility of physical elements within a site.
VPLEX Metro™

Extends access between two locations at synchronous distances with up to 10 ms round-trip latency between sites.
VPLEX Terminology (Cont.)

Cache Coherency

Cache coherency is the consistency of data stored in local caches of a shared resource.
Distributed Cache Coherency

Distributed cache coherency is the consistency of data stored in local and remote caches of a shared resource. This VPLEX feature guarantees that both remote and local hosts will read the same data from a shared distributed device.
GeoSynchrony™

GeoSynchrony™ is the software that runs on VPLEX hardware and provides all VPLEX intelligence and features, such as volume virtualization, data mobility, cache coherence, data availability, and system monitoring and reporting.
Logging Volume

A volume used to keep track of changes to the mirror legs of a distributed device in the event of an inter-cluster communications failure.
The VPLEX product family currently offers two configuration options: VPLEX Local and VPLEX Metro.

VPLEX Local provides seamless, non-disruptive data mobility and the ability to manage
and mirror data between multiple heterogeneous arrays from a single interface within a
data center. VPLEX Local consists of a single VPLEX cluster. It contains a next-generation
architecture that allows increased availability, simplified management, and improved
utilization and availability across multiple arrays.

VPLEX Metro enables active/active, block level access to data between two sites within
synchronous distances. The distance is limited not only by physical distance but also by
host and application requirements. Depending on the application, VPLEX clusters should be
installed with inter-cluster links that support no more than 10 ms round-trip time (RTT). The combination of virtual storage with VPLEX Metro and virtual servers enables the
transparent movement of virtual machines and storage across synchronous distances. This
technology provides improved utilization and availability across heterogeneous arrays and
multiple sites.

Note: RTT is application dependent.



This table provides a quick comparison of the three different VPLEX VS2 single cluster
configurations available.



This table provides a quick comparison of the VPLEX VS6 and VS2 single cluster
configurations available.

The new platform increases performance and scale when compared to the current VS2
platform:

• 2X IOPS

• 1/3rd the latency of VS2 (70% improvement)

• Support for 12,000 volumes (1.2X for Local and 1.5X for Metro)



EMC VPLEX is a next generation architecture for data mobility and information access. It is
based on unique technology that combines scale-out clustering and advanced data caching,
with the unique distributed cache coherence intelligence to deliver radically new and
improved approaches to storage management. This architecture allows data to be accessed
and shared between locations over distance via a distributed federation of storage
resources.



Internal management of VPLEX is performed over a dedicated IP network. This is a high-level architectural view of the management connections between the Management Server and directors. Note that there are no internal VPLEX IP switches; the directors are daisy-chained together via two redundant Ethernet connections.

The Management Server also connects via two redundant Ethernet connections to the
directors in the cluster. The Management Server is the only VPLEX component that is
configured with a “public” IP on the data center network. From the data center IP network,
the Management Server can be accessed via SSH or HTTPS.



Internal management of the VPLEX VS6 is performed with an integrated management module. The Management Module Control Station (MMCS) in director A of engine 1 serves as the cluster management server. VPLEXCLI commands run on MMCS-A, while MMCS-B is connected to the customer network without VPLEXCLI and provides routing and connectivity for the internal management network.



Cache coherence creates a consistent global view of a volume. Distributed cache coherence is maintained using a directory. There is one directory per user volume, and each directory is split into chunks (4096 directory entries within each). These chunks exist only if they are populated. There is one directory entry per global cache page, and each entry is responsible for:
• Tracking page owner(s) and remembering the last writer
• Managing locking and queuing
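As a rough illustration of these directory mechanics, the sketch below allocates chunks lazily and tracks owners and the last writer per page. The 4096-entry chunk size comes from the text; the entry fields, invalidation behavior, and director names are assumptions for the sake of the example:

```python
# Sketch of a distributed cache-coherency directory: one directory per
# user volume, sparsely allocated in 4096-entry chunks, with one entry
# per global cache page tracking its owners and last writer.
CHUNK_ENTRIES = 4096

class Directory:
    def __init__(self):
        self.chunks = {}              # chunk index -> {entry index -> entry}

    def entry(self, page):
        # Chunks exist only once populated (lazy allocation).
        chunk = self.chunks.setdefault(page // CHUNK_ENTRIES, {})
        return chunk.setdefault(page % CHUNK_ENTRIES,
                                {"owners": set(), "last_writer": None})

    def record_read(self, page, director):
        self.entry(page)["owners"].add(director)

    def record_write(self, page, director):
        e = self.entry(page)
        e["owners"] = {director}      # invalidate other cached copies
        e["last_writer"] = director

d = Directory()
d.record_read(5000, "director-1-1-A")
d.record_write(5000, "director-2-1-B")
# Page 5000 falls in chunk 1 (5000 // 4096); only that chunk exists.
```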



Virtual volumes presented out of VPLEX can tolerate port failures by connecting the host to
multiple directors and by utilizing multi-pathing software to control the paths. Here virtual
volumes are presented out of multiple VPLEX front-end ports on different directors. This
yields continuous data availability in the presence of port or director failure.



This module covered an overview of the VPLEX architecture, including key terminologies,
available topologies, and some of the components within the VPLEX system.



This module focuses on the capabilities of VPLEX and the most commonly used features.



This lesson covers an overview of the VPLEX mobility features and capabilities.



Extent mobility is an EMC VPLEX mechanism to move all data from a source extent to a target extent. After the data is moved, the source extent is ready for reuse. The operation is completely non-disruptive to any layered devices and completely transparent to hosts using virtual volumes built on those devices. Over time, storage volumes can become fragmented by the adding and deleting of extents. Extent mobility can be used to help defragment such storage volumes.



The data on devices can be moved to other devices within the same cluster or at the remote cluster in a VPLEX Metro. VPLEX VS6 further extends VPLEX data mobility to arrays that are NOT permanently attached to VPLEX.



The Data Mobility feature allows you to non-disruptively move data on an Extent or device
to another Extent or device in the same cluster. The procedure for moving Extents and
devices is the same and uses either devices or Extents as the source or target.

You can run up to a total of 25 Extent and device migrations concurrently. The system
allocates resources and queues any remaining mobility jobs as necessary. You can view the
status and progress of a mobility job in Mobility Central, which also provides a central
location to create, view, and manage all Extent and device mobility jobs.



Batched mobility provides the ability to script large-scale mobility operations without having
to specify individual Extent-by-Extent or Device-by-Device mobility jobs. Batched mobility
can only be performed in the CLI.

Batch migrations are run as batch jobs from reusable batch migration plan files. Migration
Plan files are created prior to executing a batch using the create-plan command. A single
batch migration plan can be for either Devices or Extents, but not both.

Use batch migrations to retire storage arrays (off-lease arrays) and bring new ones online,
and to migrate devices to a different class of storage array. Up to 25 local and 25
distributed migrations can be in progress at the same time.
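The 25-job concurrency limit behaves like a simple bounded scheduler. The sketch below is purely illustrative: only the limit of 25 comes from the text, while the FIFO queueing policy and job names are assumptions:

```python
# Toy scheduler mirroring the documented limit: up to 25 migrations run
# concurrently; further jobs wait in a queue until a slot frees up.
from collections import deque

MAX_CONCURRENT = 25

class MigrationScheduler:
    def __init__(self):
        self.running = set()
        self.queued = deque()

    def submit(self, job):
        if len(self.running) < MAX_CONCURRENT:
            self.running.add(job)
        else:
            self.queued.append(job)      # system queues the remainder

    def complete(self, job):
        self.running.discard(job)
        if self.queued:                  # promote the next queued job
            self.running.add(self.queued.popleft())

sched = MigrationScheduler()
for i in range(30):
    sched.submit(f"migrate_dev_{i}")
# 25 jobs run immediately; the remaining 5 wait until slots open.
```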



This lesson covers an overview of the VPLEX distributed device features and capabilities.



A Distributed Device is a RAID-1 device that is created within a VPLEX Metro. The
distributed device is composed of two local devices that exist at each site and are mirrors of
each other. A mirrored volume has the same volume identity at either cluster. Legs of the
mirrored volumes are derived from storage residing at each cluster. VPLEX Metro offers
synchronous updates to the distributed device.

I/O can be issued from either site concurrently. Read and write operations pass through the
VPLEX WAN. Data is protected because writes must travel to the back-end storage of both
clusters before being acknowledged to the host. Distributed coherent shared cache
preserves data integrity of distributed virtual volumes.



Distributed devices are mirrored between clusters in a VPLEX Metro. In order to create a
distributed device, a local device must exist at both sites. The distributed RAID-1 device built from the two local devices can only be as large as the smaller of the two. This is due to the way RAID-1 operates. Distributed devices are created through
the Distributed Devices option.
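The sizing rule is simply the RAID-1 minimum; as a one-line sketch with hypothetical sizes:

```python
# A distributed RAID-1 device mirrors every block across both legs, so
# its usable capacity is bounded by the smaller of the two local devices.
def distributed_device_capacity_gb(leg_site_a_gb, leg_site_b_gb):
    return min(leg_site_a_gb, leg_site_b_gb)

# Example: a 500 GB device at site A mirrored with a 450 GB device at
# site B yields a 450 GB distributed device.
capacity = distributed_device_capacity_gb(500, 450)
```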



VPLEX consistency groups aggregate volumes together to ensure the common application of
a set of properties to the entire group. Consistency groups are created for sets of volumes
that require the same I/O behavior in the event of a link failure, like those from a single
application.

In the event of a director, cluster, or inter-cluster link failure, consistency groups prevent
possible data corruption. Note that the optional VPLEX Witness failure recovery option
applies only to volumes that are in consistency groups. In addition, you can even move a
consistency group from one cluster to another if required.



For VPLEX Metro, an optional component called the VPLEX Witness can be deployed at a
third location to improve data availability in the presence of cluster failures and inter-cluster
communication loss. VPLEX Witness is implemented as a virtual machine and requires a
VMware ESXi server for its operation. This host must be deployed in a failure domain separate from either VPLEX cluster to eliminate the possibility of a single fault affecting both a cluster and VPLEX Witness. VPLEX Witness connects to both VPLEX clusters over the
Management IP network.

VPLEX Witness observes the state of the clusters, and thus can distinguish between an
outage of the inter-cluster link and a cluster failure. VPLEX Witness uses this information to
guide the clusters to either resume or suspend I/O. In Metro systems, VPLEX Witness
provides seamless zero RTO failover for storage volumes in synchronous consistency
groups.
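The arbitration idea can be caricatured as follows. This is a conceptual sketch, not the actual GeoSynchrony algorithm; in particular, the choice of a fixed "preferred" cluster during a link partition is an assumption standing in for the consistency group's configured rules:

```python
# Conceptual sketch of Witness arbitration: because the Witness holds an
# independent management-network view of both clusters, it can tell a
# dead cluster apart from a severed inter-cluster link.
def arbitrate(witness_sees_a, witness_sees_b, clusters_see_each_other):
    """Return the set of clusters that should continue serving I/O."""
    if clusters_see_each_other:
        return {"A", "B"}              # healthy: both clusters continue
    if witness_sees_a and witness_sees_b:
        # Inter-cluster link partition with both clusters alive: guide
        # one side (assumed here to be "A") to continue and the other to
        # suspend, avoiding split-brain writes.
        return {"A"}
    if witness_sees_a:
        return {"A"}                   # cluster B has failed
    if witness_sees_b:
        return {"B"}                   # cluster A has failed
    return set()                       # no safe survivor: suspend I/O

# Inter-cluster link cut, both clusters alive: exactly one side resumes.
survivors = arbitrate(True, True, False)
```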



This module covered the topologies, features, and capabilities of VPLEX.



This module focuses on various VPLEX management options and capabilities.



VPLEX provides two methods of management through the VPLEX management console. The
management console can be accessed through a command line interface (CLI) as well as a
graphical user interface (GUI).

The CLI is accessed by connecting with SSH to the Management Server and then entering
the command vplexcli. This command causes the CLI to telnet to port 49500.

The GUI is accessed by browsing to the Management Server IP address using the HTTPS protocol. The GUI is based on Flash and requires the client to have Adobe Flash installed. Every time the management console is launched, it creates a session log in the /var/log/VPlex/cli/ directory. The log is created when launching the CLI as well as the GUI, which is helpful in determining which commands were run while a user was using VPLEX.



EMC VPLEX software architecture is object-oriented, with various types of objects defined
with specific attributes for each. The fundamental philosophy of the management
infrastructure is based on the idea of viewing, and potentially modifying attributes of an
object.



The VPLEX CLI is organized as a tree, similar to the structure of a Linux file system.
Fundamental to the VPLEX CLI is the notion of "object context," which is determined by the
current location (pwd) within the tree of managed objects. Many VPLEX CLI operations can
be performed from the current context; however, some commands require the user to
change to a different context with the cd command before running them.
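The context tree can be pictured as a small in-memory model. The following Python sketch is purely illustrative; the context names shown are a tiny hypothetical sample, not the real VPLEX tree, and the class is not a VPLEX API. It mimics how cd and pwd establish the current object context.

```python
class ContextTree:
    """Toy model of the VPLEX CLI context tree (illustrative only)."""

    def __init__(self):
        # A small, hypothetical subset of the managed-object tree.
        self.tree = {
            "/": ["clusters", "engines"],
            "/clusters": ["cluster-1"],
            "/clusters/cluster-1": ["virtual-volumes", "storage-elements"],
        }
        self.cwd = "/"

    def cd(self, path):
        """Change context, accepting absolute or relative paths."""
        target = path if path.startswith("/") else \
            (self.cwd.rstrip("/") + "/" + path)
        if target not in self.tree:
            raise ValueError("no such context: " + target)
        self.cwd = target

    def pwd(self):
        return self.cwd

    def ls(self):
        """List child contexts; valid operations depend on where you are."""
        return self.tree[self.cwd]

cli = ContextTree()
cli.cd("clusters")
cli.cd("cluster-1")
print(cli.pwd())  # /clusters/cluster-1
```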


The VPLEX CLI supports both standard UNIX option styles. Some commands can be executed
anywhere in the tree, while others must be executed in specific contexts. Within any
context, pressing the Tab key provides the complete list of valid commands for that
context. This is a useful way to discover the set of valid operations on each type of
EMC VPLEX managed object.

The VPLEX CLI includes detailed online help for every command. Users can find help on any
VPLEX CLI command by entering -h directly after the command. The VPLEX CLI also supports
Linux-style tab completion for commands: if part of a command has been entered, pressing
the Tab key lists the commands that match the letters typed. If more than one command
matches, all matching commands are returned as output.
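Tab completion of this kind behaves like simple prefix matching over the commands valid in the current context. A minimal sketch of that behavior (the command list here is a small, hypothetical sample, not the full VPLEX command set):

```python
def complete(prefix, commands):
    """Return every command that begins with the typed prefix,
    mimicking how Tab completion lists candidate matches."""
    return sorted(c for c in commands if c.startswith(prefix))

# Hypothetical sample of context commands, for illustration only.
COMMANDS = ["cd", "ll", "ls", "virtual-volume create",
            "virtual-volume destroy", "version"]

print(complete("virtual", COMMANDS))  # two matches -> both are listed
print(complete("ve", COMMANDS))       # unique match -> ['version']
```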


EMC Unisphere for VPLEX provides a graphical user interface (GUI) for managing VPLEX.
The GUI offers many of the same features as the VPLEX CLI, is easy to navigate, and does
not require knowledge of VPLEX CLI commands. Operations are accomplished by clicking the
VPLEX icons and selecting the desired values.

System Status on the navigation bar shows a graphical representation of your system. It
allows you to quickly view the status of your system and some of its major components,
such as directors, storage arrays, and storage views. The cluster display also shows the size
of the cluster configuration (single-engine, dual-engine, or quad-engine). System Status is
the default screen when you log into the GUI.


The VPLEX Integrated Array Services (VIAS) feature gives VPLEX a means of interacting
with EMC arrays that support integrated services, enabling storage provisioning. VPLEX
uses Array Management Providers (AMPs) to streamline provisioning, allowing you to
provision a VPLEX virtual volume directly from a pool on the array.

The VIAS feature uses a Storage Management Initiative Specification (SMI-S) provider to
communicate with arrays that support integrated services. An SMI-S provider must be
installed and configured correctly on the same storage area network as VPLEX, and both
VPLEX and the SMI-S provider must be configured to manage the same arrays. After SMI-S is
configured, you can register it as an AMP with VPLEX using the CLI or the Unisphere GUI.
Once registration is complete, the managed arrays and their pools become visible, and you
can provision virtual volumes from those pools. The pools used for provisioning must
already exist on the array; VIAS does not create pools.

Note: XtremIO uses REST instead of SMI-S.
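The workflow above (register an AMP, discover its arrays and pools, then provision from an existing pool) can be sketched as a small Python model. Every class and method name below is hypothetical; this models the sequence of steps only, not a real VPLEX, ViPR, or SMI-S API.

```python
class SmisProvider:
    """Hypothetical stand-in for an SMI-S array management provider."""
    def __init__(self, arrays):
        # array name -> list of pre-existing pools (VIAS never creates pools)
        self.arrays = arrays

class Vplex:
    """Toy model of VIAS-style provisioning through a registered AMP."""
    def __init__(self):
        self.amps = []
        self.virtual_volumes = []

    def register_amp(self, provider):
        # After registration, the AMP's arrays and pools become visible.
        self.amps.append(provider)

    def visible_pools(self):
        return [(array, pool)
                for amp in self.amps
                for array, pools in amp.arrays.items()
                for pool in pools]

    def provision(self, array, pool, name):
        # VIAS provisions only from pools that already exist on the array.
        if (array, pool) not in self.visible_pools():
            raise ValueError("pool must already exist on the array")
        self.virtual_volumes.append(name)
        return name

amp = SmisProvider({"VNX-1": ["pool-gold", "pool-silver"]})
vplex = Vplex()
vplex.register_amp(amp)
vol = vplex.provision("VNX-1", "pool-gold", "vv_app_01")
print(vol)  # vv_app_01
```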


From the EMC perspective, there are three fundamental characteristics to software-defined storage:
simple, extensible, and open.

• To simplify management, the entire storage infrastructure must provide a single control point, so
it can be managed through automation and policies.
• The storage infrastructure must be easy to extend so that new storage capabilities can be added
to the underlying arrays in software.
• The platform must be built in an open manner, so that customers, other vendors, partners, or
startups can write new services and build a community around it.

EMC ViPR is the software-defined storage solution that was built with all of these requirements in mind. It
transforms existing heterogeneous physical storage into a simple, extensible, and open storage platform. ViPR
was built from the ground up to provide a policy-based storage management system for automating standardized
storage offerings in a multi-tenant environment across heterogeneous storage infrastructures.

At the top of the diagram is an exposed API, which provides an access point to platform
management functions and storage services. Out-of-the-box tools, utilities, and
applications use these APIs as an access point to the system, and the APIs also allow
easy development of custom applications for management or data access.

At the bottom of the diagram is a sample of the storage platforms supported by ViPR
today. These platforms service a broad range of applications and business functions but,
when placed in a single environment, present challenges to administrators and data center
management staff. ViPR discovers these devices and creates resource pools that simplify
provisioning, management, and planning, which translates into a better and more efficient
user experience.


ViPR is aware of, and leverages, intelligence such as FAST, snapshot, and cloning
capabilities within individual models of storage arrays. The same applies to protection
technologies such as RecoverPoint (RP) and VPLEX. With VPLEX, ViPR also automates VPLEX
Metro disaster recovery via RecoverPoint, as well as array snapshots of VPLEX volumes.

First, we will discuss automating disaster recovery. With ViPR, customers can reduce
provisioning VPLEX high availability in a Metro configuration to just a few steps in the
ViPR UI. A user creates virtual pools describing what type of storage to provision and,
based on the configuration of the pools, can assign different properties to VPLEX pools.

A customer can set 'Add Disaster Recovery Protection' as an attribute of a pool, which
provides an added level of disaster recovery protection for the VPLEX pool. All of this
can be provisioned and managed in the ViPR layer, instead of users manually orchestrating
the solution in the VPLEX and RecoverPoint UIs.

Now let us talk about the second new VPLEX capability: array snapshots of VPLEX volumes.
A customer can also take snapshots of a VPLEX volume. Restoring a VPLEX volume from a
snapshot is a manual and error-prone process if done in the VPLEX layer; ViPR can
automate the entire process of creating and restoring a snapshot. Snapshots have several
use cases: reducing backup windows, enhancing productivity, and enabling fast restores.
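The create-then-restore sequence that ViPR automates can be modeled in a few lines. This is a conceptual sketch only; the class and method names are invented for illustration and do not correspond to the actual ViPR or VPLEX APIs.

```python
import copy

class VirtualVolume:
    """Stand-in for a VPLEX virtual volume holding some data."""
    def __init__(self, data):
        self.data = data

class SnapshotService:
    """Toy model of automated snapshot create/restore (illustrative)."""
    def __init__(self):
        self.snapshots = {}

    def create_snapshot(self, name, volume):
        # Capture a point-in-time copy of the volume's contents.
        self.snapshots[name] = copy.deepcopy(volume.data)

    def restore(self, name, volume):
        # Roll the volume back to the captured point in time.
        volume.data = copy.deepcopy(self.snapshots[name])

vol = VirtualVolume(data={"orders": 100})
svc = SnapshotService()
svc.create_snapshot("before-batch", vol)
vol.data["orders"] = 0          # simulated corruption / bad batch run
svc.restore("before-batch", vol)
print(vol.data)  # {'orders': 100}
```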


Monitoring and integration with VPLEX are supported through several interfacing protocols
and products. The primary methods of management and monitoring include:
• Basic SNMP monitoring using traps and polling
• VAAI for integration with VMware
• Monitoring, events, and alerting functions may be configured with SRM Suite, including
ProSphere and Watch4net

This is an example of the EMC Storage Resource Management Suite custom dashboard.


This module covered the various VPLEX management options and capabilities.


This course covered an overview of the VPLEX architecture, features, and functionality.

This concludes the training. Proceed to the course assessment on the next slide.
