VPLEX Fundamentals 6.0 SRG
Introduction
VPLEX provides efficient, real-time data collaboration over distance for big data
applications, such as video, geographic/oceanographic research, and others.
VPLEX Benefits
Introduction
When the new array arrives and is installed, it is time to install and virtualize the new
storage; this is where the actual migration takes place. To do this, VPLEX copies data in the
background. Note that no external tools are required on the hosts or the array. Once the
data is copied to the new array, VPLEX non-disruptively links the servers to the new array,
so no outage takes place during this process. The process is also fully reversible: VPLEX
can simply re-link the servers to the old array if performance or other factors dictate.
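As a rough CLI sketch of such a tech refresh (the device names here are invented, and
option spellings vary between GeoSynchrony releases, so treat this as an illustration
rather than exact syntax):

    VPlexcli:/> dm migration start --name mig_1 --from dev_old_array --to dev_new_array   # data copies in the background
    VPlexcli:/> dm migration commit --migrations mig_1   # cut over; before this step, cancel reverts to the old array
    VPlexcli:/> dm migration clean --migrations mig_1    # release the source device after commit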
For example, as more storage and hosts are added, we can add another VPLEX engine and
create a dual-engine VPLEX cluster to handle the increased workload. Each engine adds
processing power, cache, and I/O connectivity to the VPLEX cluster.
A VPLEX cluster can be configured with one, two, or four engines; the maximum cluster size
is the quad-engine configuration.
The dual- and quad-engine configurations include redundant internal FC switches for LCOM
(local COM) connections between the directors. In addition, dual and quad configurations
contain redundant Uninterruptible Power Supplies (UPS) that service the FC switches and
the Management Server.
Engines are numbered 1-4 from the bottom to the top. Any spare space in the shipped rack
is to be preserved for potential engine upgrades in the future. Since the engine number
dictates its physical position in the rack, numbering will remain intact as engines are added
during a cluster upgrade.
Note: GeoSynchrony is pre-installed on the VPLEX hardware, and the system is pre-cabled
and pre-tested.
VPLEX engines are the brains of a VPLEX system. Each engine contains two directors, each
providing front-end (FE) and back-end (BE) I/O connectivity. Engines also contain redundant
power supplies, fans, I/O modules, and management modules.
The directors are the workhorse components of the system and are responsible for:
• Processing I/O requests from the hosts
• Serving and maintaining data in the distributed cache
• Providing the virtual-to-physical I/O translations, and
• Interacting with the storage arrays to service I/O
Introduction
Storage Volume
The underlying physical block storage volume that is presented from the
array to VPLEX. Storage volumes are also known as array LUNs. Backend
storage arrays are configured to present LUNs to VPLEX backend ports. Each
presented backend LUN maps to one VPLEX storage volume.
Extent
All or part (a contiguous range of blocks) of a storage volume. Extents are
the building blocks from which devices are created.
Device
A storage object formed from one or more extents or devices. A device has a
RAID level of 0, 1, or C. Devices can be used to create virtual volumes.
Virtual Volume
A VPLEX volume that is created and presented to the host. This volume has a
one-to-one relationship with a device.
Distributed Device
A RAID 1 device whose mirror legs reside in different VPLEX clusters.
Storage View
A view for hosts that defines which hosts can access which virtual volumes
on which VPLEX ports. Once a storage view is properly configured as
described, the host should be able to detect and use virtual volumes after
initiating a bus scan on its HBAs. Every frontend path to a virtual volume is
an active path.
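These objects stack bottom-up, which shows in a typical VPlexcli provisioning sequence.
The following sketch is illustrative only: the object names are invented, and exact option
spellings vary by GeoSynchrony release.

    VPlexcli:/> storage-volume claim -d VPD83T3:600601...        # claim the array LUN as a storage volume
    VPlexcli:/> extent create -d Symm0487_08C                    # carve an extent from the storage volume
    VPlexcli:/> local-device create --geometry raid-0 --extents extent_Symm0487_08C_1 --name dev_db01
    VPlexcli:/> virtual-volume create --device dev_db01          # one-to-one virtual volume on the device
    VPlexcli:/> export storage-view addvirtualvolume --view esx_view --virtual-volumes dev_db01_vol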
VPLEX Terminology
Introduction
Local Federation
A virtual layer between hosts and storage, abstracting the physical elements
into logical views within a single site.
Cache Coherency
The mechanism that keeps the distributed cache consistent across all
directors, so that a read always returns the most recently written data.
VPLEX Local provides seamless, non-disruptive data mobility and the ability to manage
and mirror data between multiple heterogeneous arrays from a single interface within a
data center. VPLEX Local consists of a single VPLEX cluster. It contains a next-generation
architecture that allows increased availability, simplified management, and improved
utilization across multiple arrays.
VPLEX Metro enables active/active, block-level access to data between two sites within
synchronous distances. The distance is limited not only by physical distance but also by
host and application requirements. Depending on the application, VPLEX clusters should be
installed with inter-cluster links that support no more than 10 ms of round-trip time
(RTT). The combination of virtual storage with VPLEX Metro and virtual servers enables the
transparent movement of virtual machines and storage across synchronous distances. This
technology provides improved utilization and availability across heterogeneous arrays and
multiple sites.
The new platform increases performance and scale when compared to the current VS2
platform:
• 2X IOPS
• Support for 12,000 volumes (1.2X for Local and 1.5X for Metro)
The Management Server also connects via two redundant Ethernet connections to the
directors in the cluster. The Management Server is the only VPLEX component that is
configured with a “public” IP on the data center network. From the data center IP network,
the Management Server can be accessed via SSH or HTTPS.
You can run up to a total of 25 extent and device migrations concurrently. The system
allocates resources and queues any remaining mobility jobs as necessary. You can view the
status and progress of a mobility job in Mobility Central, which also provides a central
location to create, view, and manage all extent and device mobility jobs.
Batch migrations are run as batch jobs from reusable batch migration plan files. Migration
plan files are created prior to executing a batch, using the create-plan command. A single
batch migration plan can be for either devices or extents, but not both.
Use batch migrations to retire storage arrays (off-lease arrays) and bring new ones online,
and to migrate devices to a different class of storage array. Up to 25 local and 25
distributed migrations can be in progress at the same time.
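A sketch of that batch workflow in the VPlexcli follows; the plan file name is invented,
and option spellings should be verified against your GeoSynchrony release:

    VPlexcli:/> batch-migrate create-plan --file migrate.txt --sources dev_old_* --targets dev_new_*
    VPlexcli:/> batch-migrate check-plan --file migrate.txt   # validate the plan before running it
    VPlexcli:/> batch-migrate start --file migrate.txt        # run; excess jobs are queued automatically
    VPlexcli:/> batch-migrate summary --file migrate.txt      # check status and progress
    VPlexcli:/> batch-migrate commit --file migrate.txt       # finalize, then clean and remove the jobs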
I/O can be issued from either site concurrently. Read and write operations pass through the
VPLEX WAN. Data is protected because writes must travel to the back-end storage of both
clusters before being acknowledged to the host. Distributed coherent shared cache
preserves data integrity of distributed virtual volumes.
In the event of a director, cluster, or inter-cluster link failure, consistency groups prevent
possible data corruption. Note that the optional VPLEX Witness failure recovery option
applies only to volumes that are in consistency groups. In addition, you can even move a
consistency group from one cluster to another if required.
VPLEX Witness observes the state of the clusters, and thus can distinguish between an
outage of the inter-cluster link and a cluster failure. VPLEX Witness uses this information to
guide the clusters to either resume or suspend I/O. In Metro systems, VPLEX Witness
provides seamless zero RTO failover for storage volumes in synchronous consistency
groups.
The CLI is accessed by connecting with SSH to the Management Server and then entering
the command vplexcli. This command causes the CLI to telnet to port 49500.
The GUI is accessed by providing Management Server IP using https protocol. The GUI is
based on Flash and requires the client to have Adobe Flash installed. Every time the
management console is launched, it creates a session log in the /var/log/VPlex/cli/
directory. The log is created when launching the CLI as well as the GUI. This is helpful in
determining which commands were run while a user was using VPLEX.
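For example, a typical session looks like the following (the IP address is a placeholder;
service is one of the standard Management Server accounts):

    $ ssh service@192.168.10.65           # SSH to the Management Server's public IP
    service@ManagementServer:~> vplexcli  # start the CLI; it telnets to port 49500
    VPlexcli:/>                           # this session is now logged under /var/log/VPlex/cli/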
The VPLEX CLI includes detailed online help for every command. Users can find help on any
VPLEX CLI command by entering -h directly after the command. The VPLEX CLI also supports
Linux-style tab completion for commands. If part of a command has been entered, the user
can press the Tab key to have the system discover the commands that match the letters
entered. If more than one command matches the letters entered, multiple commands are
returned as output.
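For instance (invocations only, output omitted):

    VPlexcli:/> batch-migrate start -h    # print usage for this one command
    VPlexcli:/> bat<Tab>                  # tab completion expands to batch-migrate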
System Status on the navigation bar shows a graphical representation of your system. It
allows you to quickly view the status of your system and some of its major components,
such as directors, storage arrays, and storage views. The cluster display also shows the size
of the cluster configuration (single-engine, dual-engine, or quad-engine). System Status is
the default screen when you log into the GUI.
The VIAS feature uses a Storage Management Initiative Specification (SMI-S) provider to
communicate with the arrays that support integrated services to enable provisioning. An
SMI-S provider must be installed and configured correctly on the same storage area
network as VPLEX, and both VPLEX and the SMI-S provider must be connected to the same
arrays. After SMI-S is configured, you can register the SMI-S provider as the Array
Management Provider (AMP) with VPLEX using the CLI or the Unisphere GUI. After the
registration is complete, the managed arrays and pools on the array are visible, and you
can provision virtual volumes from those pools. The pools used for provisioning must be
created on the array; VIAS does not create the pools for provisioning.
• To simplify management, the entire storage infrastructure must provide a single control point, so
it can be managed through automation and policies.
• The storage infrastructure must be easy to extend so that new storage capabilities can be added
to the underlying arrays in software.
• The platform must be built in an open manner, so that customers, other vendors, partners, or
startups can write new services and build a community around it.
EMC ViPR is the software-defined storage solution that was built with all of these requirements in mind. It
transforms existing heterogeneous physical storage into a simple, extensible, and open storage platform. ViPR
was built from the ground up to provide a policy-based storage management system for automating standardized
storage offerings in a multi-tenant environment across heterogeneous storage infrastructures.
At the top of the diagram is the exposed API. It provides an access point to platform management functions and
storage services. Out-of-the-box tools, utilities, and applications use these APIs as an access point to the system,
and the APIs also allow easy development of custom applications for management or data access.
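As a sketch of how a custom application might drive this API: ViPR exposes a REST
interface, conventionally on port 4443, where a login returns an X-SDS-AUTH-TOKEN header
that authenticates later calls. The host name, credentials, and resource path below are
assumptions to verify against your ViPR release.

    $ curl -k -D - -u root:ChangeMe https://vipr.example.com:4443/login
    # copy the X-SDS-AUTH-TOKEN response header, then list discovered arrays:
    $ curl -k -H "X-SDS-AUTH-TOKEN: <token>" https://vipr.example.com:4443/vdc/storage-systems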
At the bottom of the diagram is a sample of the storage platforms supported by ViPR today. These platforms
service a broad range of applications and business functions but, when placed in a single environment, present
challenges to administrators and data center management staff. ViPR discovers these devices and creates resource
pools that simplify provisioning, management, and planning, which translates into a better and more efficient user
experience.
First, we will discuss automating disaster recovery. With ViPR, customers can automate the
provisioning of VPLEX high availability in a metro configuration to just a few steps in the
ViPR UI. A user can create virtual pools describing what type of storage to provision. Based
on the configuration of the pools, a user can assign different properties to VPLEX pools.
A customer can have 'Add Disaster Recovery Protection' as an attribute of the pool, which
provides an added level of disaster recovery protection for the VPLEX pool. This can all be
provisioned and managed in the ViPR layer, instead of users manually orchestrating this
solution in the VPLEX UI and RecoverPoint UI.
Now let us talk about the second new VPLEX capability: Array Snaps of VPLEX Volumes. A
customer can also take snapshots of the VPLEX volume. A restore from a snapshot to a
VPLEX volume is a manual and error-prone process if done in the VPLEX layer. ViPR can
automate the entire process of creating and restoring a snapshot. Snapshots have several
use cases: reducing backup windows, enhancing productivity, and enabling fast restores.
This is an example of the EMC Storage Resource Management Suite custom dashboard.
This concludes the training. Proceed to the course assessment on the next slide.