HP 3PAR StoreServ 7000

HP 3PAR

A technical overview of the HP 3PAR Utility Storage


The world’s most agile and efficient Storage Arrays

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Eliminating distinctions between Midrange and Tier 1
Polymorphic Simplicity: Storage Without Boundaries
• New 3PAR StoreServ 7000
• New 3PAR File Services
• New All-SSD Array
• New EVA to 3PAR Upgrade Path
• ONE Architecture – mid to high, only from HP

One product family: HP 3PAR StoreServ 10800, 10400, 7400 and 7200
NEW – Tier 1 Storage at less than $40K USD!
NEW – Redefining the Midrange from $25K USD!
2 © 2012 HP – Peter Mattei
HP 3PAR ASIC
Hardware-based for performance
• Thin Built In – zero detect
• Fast RAID 10, 50 & 60 – integrated XOR engine, rapid RAID rebuild
• Tightly-coupled cluster – high-bandwidth, low-latency interconnect
• Mixed workload – independent metadata and data processing

3 © 2012 HP – Peter Mattei


HP 3PAR StoreServ 7000 Controller Nodes
2 to 4 nodes per system – installed in pairs
Per-node configuration:
• Thin Built In™ Gen4 ASIC
• Intel 1.8 GHz processor – 7200: 4-core; 7400: 6-core
• Data cache – 7200: 4GB; 7400: 8GB
• 8GB control cache (3PAR OS control and backup cache, SSD-backed)
• 2 built-in 8Gb/s FC ports
• Optional PCIe adapter: 4-port 8Gb/s FC HBA or 2-port 10Gb/s CNA
• 6Gb/s SAS expander serving up to 24 internal SFF drives
• PCIe switch and inter-node links

4 © 2012 HP – Peter Mattei


3PAR Mixed Workload Support
Multi-tenant performance

I/O processing in traditional storage: a unified processor and memory handle both control information (metadata) and data between the host interface and the disk interface. When a heavy throughput workload and a heavy transaction workload are applied at the same time, small IOPs wait for large IOPs to be processed.

I/O processing in 3PAR: data flows through the 3PAR ASIC and its memory while control information is handled by the control processor and its memory. Because control information and data are pathed and processed separately, heavy throughput and heavy transaction workloads are sustained concurrently.

5 © 2012 HP – Peter Mattei
HP 3PAR OS™
Virtualization Concepts

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
3PAR Hardware Architecture
Cost-effective, scalable, resilient, meshed, active-active
All models share the same full-mesh, active-active cluster design in which every node contributes host ports, cache and disk ports:
• 7200 with 2 nodes
• 7400 with 2 or 4 nodes
• 10400 with 2 to 4 nodes
• 10800 with 2 to 8 nodes

7 © 2012 HP – Peter Mattei


HP 3PAR virtualization advantage
Traditional array
• Each RAID level requires dedicated drives
• Dedicated spare disks required
• Limited single-LUN performance
HP 3PAR
• All RAID levels can reside on the same drives
• Distributed sparing – no dedicated spare drives
• Built-in wide striping based on Chunklets

8 © 2012 HP – Peter Mattei


Why are Chunklets so Important?
Ease of use and drive utilization
• The same drive spindle can service many different LUNs, RAID types and RAID set sizes at the same time (usable-capacity arithmetic is sketched below)
  − RAID 1
  − RAID 5 – 2:1 to 8:1
  − RAID 6 – 4:2; 6:2; 8:2; 10:2; 14:2
• The array is managed by policies, not by administrative planning
• Enables easy mobility between drives, RAID types and service levels by using Dynamic or Adaptive Optimization
Performance
• Enables wide striping across hundreds of drives
• Avoids hot spots
• Autonomic data restriping after disk installations
High availability – selectable by CPG
• HA Magazine – protects against magazine failure (industry standard)
• HA Cage – protects against a cage (full disk shelf) failure

9 © 2012 HP – Peter Mattei
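
The data-to-parity ratios above map directly onto usable capacity. A minimal sketch of that arithmetic in Python (illustrative only; real usable capacity also depends on sparing, chunklet metadata and the CPG configuration):

```python
# Usable fraction of raw capacity for the RAID set sizes listed above.
# Illustrative arithmetic only; actual usable capacity also depends on
# sparing, chunklet metadata and the CPG configuration.

def usable_fraction(data, parity):
    """Fraction of a RAID set's raw capacity available for data."""
    return data / (data + parity)

if __name__ == "__main__":
    examples = {
        "RAID 1 (1:1 mirror)": (1, 1),
        "RAID 5 3:1":          (3, 1),
        "RAID 5 8:1":          (8, 1),
        "RAID 6 6:2":          (6, 2),
        "RAID 6 14:2":         (14, 2),
    }
    for name, (d, p) in examples.items():
        print(f"{name:20s} -> {usable_fraction(d, p):.1%} usable")
    # RAID 6 14:2 keeps 87.5% of raw capacity; RAID 1 keeps only 50%.
```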


Common Provisioning Groups (CPG)
CPGs are policies that define service and availability levels by
• Drive type (SSD, Fast Class, Nearline)
• Number of drives (striping width)
• RAID level (R10 / R50 2:1 to 8:1 / R60 4:2; 6:2; 8:2; 10:2; 14:2)
Multiple CPGs can be configured and can optionally overlap on the same drives
• e.g. a system with 200 drives can have one CPG containing all 200 drives and other CPGs with overlapping subsets of these 200 drives
CPGs have several functions:
• They are the policies by which free Chunklets are assembled into logical disks
• They are a container for existing volumes and are used for reporting
• They are the basis for service levels and for the optimization products

10 © 2012 HP – Peter Mattei


HP 3PAR Virtualization – the Logical View
From 3PAR autonomy to user-initiated steps: Physical Disks → Chunklets → Logical Disks → CPGs → Virtual Volumes → Exported LUNs

Physical Disks are divided into Chunklets (E-, F-, S-, T-Class: 256MB; P10000: 1GB)
• Most are used to build Logical Disks (LD), some are reserved for distributed sparing
Logical Disks (LD)
• Are collections of Raidlets – Chunklets arranged as rows of RAID sets (RAID 0, 10, 50, 60)
• Are created automatically when required and provide the space for Virtual Volumes, Snapshot and Logging Disks
Common Provisioning Groups (CPG)
• User-created virtual pools of Logical Disks that allocate space to virtual volumes on demand
• The CPG defines RAID level, disk type and number, striping pattern etc. (see the sketch after this slide)
Virtual Volumes (VV) – Exported LUNs
• User-created fat or thin provisioned volumes composed of LDs according to the specified CPG policies
• The user exports a VV to hosts as a LUN
11 © 2012 HP – Peter Mattei
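
To make the chain above concrete, here is a toy Python model of the hierarchy: chunklets are assembled on demand into logical-disk space under a CPG policy, and a thin virtual volume draws from the CPG only as data is written. The class names, sizes and numbers are illustrative assumptions, not the 3PAR OS allocator:

```python
# Toy model of the hierarchy above: physical disks are cut into fixed-size
# chunklets, a CPG policy assembles free chunklets into logical-disk space
# on demand, and a thin virtual volume draws from the CPG only as data is
# written. Sizes and names are illustrative, not the 3PAR OS allocator.
from dataclasses import dataclass

CHUNKLET_GB = 1  # P10000-class chunklet size; older classes use 256MB

@dataclass
class CPG:
    name: str
    raid: str              # e.g. "RAID 6 6+2"
    free_chunklets: int    # free chunklets on the drives this CPG spans
    logical_disk_gb: int = 0

    def grow(self, needed_gb):
        """Assemble free chunklets into logical-disk space on demand."""
        chunklets = needed_gb // CHUNKLET_GB
        if chunklets > self.free_chunklets:
            raise RuntimeError(f"CPG {self.name}: out of free chunklets")
        self.free_chunklets -= chunklets
        self.logical_disk_gb += needed_gb

@dataclass
class ThinVolume:
    name: str
    cpg: CPG
    exported_gb: int       # size the host sees when the VV is exported as a LUN
    written_gb: int = 0    # space actually drawn from the CPG

    def host_write(self, gb):
        self.cpg.grow(gb)  # thin: allocate only what is actually written
        self.written_gb += gb

cpg = CPG("FC_r6", "RAID 6 6+2", free_chunklets=10_000)
vv = ThinVolume("oradata.0", cpg, exported_gb=2_000)
vv.host_write(150)
print(f"{vv.name}: {vv.written_gb} GB consumed of {vv.exported_gb} GB exported")
```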
Rebalancing and Tuning – Tunesys
Optimize QoS levels with autonomic rebalancing without pre-planning

[Chart: used and free chunklets per physical disk (1–96) before and after rebalancing – before, the database application has 6,400 IOPS available on a subset of heavily used disks; after, chunklets are spread evenly and 19,200 IOPS are available.]

• Intelligently ordered
• Policy-abiding
• Throttled
• Rebalances all volumes – base volumes and snapshots, fat and thin, tiered or not
• Intelligent sub-volume rebalance
• Ability to rebalance after node and drive upgrades without a Dynamic Optimization license on the 3PAR StoreServ 7000
• Ability to schedule on a regular basis
12 © 2012 HP – Peter Mattei
HP 3PAR High Availability
Spare Disk Drives vs. Distributed Sparing

Traditional arrays: few-to-one rebuild onto a dedicated spare drive – hotspots and long rebuild exposure.

3PAR StoreServ: spare chunklets are distributed across all drives – many-to-many parallel rebuilds complete in less time.

13 © 2012 HP – Peter Mattei


HP 3PAR High Availability
Write Cache Re-Mirroring

Traditional mid-range arrays (traditional write-cache mirroring between two controllers): when a controller fails, either the write cache is switched off for data security – poor performance due to write-through mode – or it stays on with a risk of write data loss.

3PAR StoreServ (Persistent Write-Cache Mirroring): when a node fails, the write cache stays on thanks to redistribution across the remaining nodes.
• No write-through mode – consistent performance
• Works with 4 and more nodes: F400, T400, T800, 7400, 10400, 10800
14 © 2012 HP – Peter Mattei
Online Firmware Upgrade
3PAR
• Firmware is loaded via the Service Processor
• Firmware is pushed to the master node
• All nodes in the cluster receive the new firmware
• Nodes update to the new firmware independently, one at a time, but keep running the old release until all nodes are updated
• After all nodes have updated, the upgrade finish command points all nodes to the new firmware (userspace); the ordering is sketched after this list
• A copy of the old firmware (userspace) is left in altroot in case of a rollback
• NPIV allows greater failover flexibility during node upgrades

15 © 2012 HP – Peter Mattei
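
The sequencing matters: install on every node first, switch every node last. A small illustrative Python sketch of that ordering (the Node class and version strings are hypothetical; the real upgrade is driven by the 3PAR Service Processor):

```python
# Illustration of the sequencing described above -- install on every node
# first, switch every node last. The Node class and version strings are
# hypothetical; the real upgrade is driven by the 3PAR Service Processor.
from dataclasses import dataclass

@dataclass
class Node:
    ident: int
    running_fw: str
    installed_fw: str = ""   # new image staged/installed but not yet active
    altroot_fw: str = ""     # previous image kept for rollback

def rolling_upgrade(nodes, new_fw):
    # 1. Firmware is loaded via the Service Processor, pushed to the master
    #    node and distributed to all nodes in the cluster.
    # 2. Nodes install it one at a time but keep running the old release
    #    until every node has the new image installed.
    for node in nodes:
        node.installed_fw = new_fw
    # 3. The "upgrade finish" step then points all nodes at the new
    #    userspace; the old copy stays in altroot in case of a rollback.
    for node in nodes:
        node.altroot_fw = node.running_fw
        node.running_fw = node.installed_fw

cluster = [Node(i, "3.1.1") for i in range(4)]
rolling_upgrade(cluster, "3.1.2")
print([(n.ident, n.running_fw, n.altroot_fw) for n in cluster])
```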


HP 3PAR Persistent Ports

Traditional array: when a controller goes down for maintenance or fails, its ports go offline and MPIO path failover is required on the host.

HP 3PAR with Persistent Ports: the partner node takes over the failed node's port identities, so all paths stay online.
• All paths stay online in case of node maintenance or failure
• No user intervention required
• NPIV-based port ID swap – the surviving node presents its partner's native port WWNs (e.g. 0:0:1, 0:0:2) as guest port identities alongside its own
• The server does not "see" the swap of the 3PAR port WWN, so no MPIO path failover is required
16 © 2012 HP – Peter Mattei
HP 3PAR Online Import for EVA
Reduce cost and time to migrate your EVA data
Agile
• Migrate your EVA data over to HP 3PAR StoreServ using the new
Online Import feature
• HP Services available to help with your datacenter transition

Simple
• Start from what you know – everything is driven from the familiar Command View EVA interface
• Avoid the human errors of a manual migration

Efficient
• Get thin with on-the-fly Thin Conversion
• No additional hardware or software needed
− Online Import license included for free (6 months)

17 © 2012 HP – Peter Mattei


EVA to 3PAR Data Migration Options

Alternatives to HP 3PAR Online Import


Should the customer's environment not support the use of Online Import, you can still use existing migration technologies, for example:
• MPX200
• VMware Storage vMotion
• Other host-based solutions (e.g. volume-manager-based mirroring on Unix/Linux)

HP Consulting Services to assess and help with the transition

• HP Consulting offers flexible "EVA to 3PAR Acceleration" consulting services, including assessment and migration offerings for EVA to 3PAR
• HP Consulting can utilize HP 3PAR Online Import as well as all of the above technologies as part of its service offering
• Most importantly, HP Consulting can help with the overall infrastructure refresh that often goes along with a storage migration.

18 © 2012 HP – Peter Mattei


HP 3PAR – Management Options
• 3PAR Management Client (GUI)
  • Fat-client GUI for Windows and RedHat Linux
• CLI
  • 3PAR CLI client or ssh
  • Very rich, complete command set
• SMI-S
  • Management from third-party management tools
• Web Services API
  • RESTful interface (a hedged example follows this slide)
• Service Processor (SP)
  • Physical SP or SP VM instance, connected to the management LAN alongside the 3PAR node management Ethernet ports
  • Health checks by collecting configuration and performance data
  • Reporting to HP 3PAR Central; anomalies reported back to the customer via OSSA
  • Array management
19 © 2012 HP – Peter Mattei
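
As an example of the RESTful interface, a hedged Python sketch that authenticates and lists virtual volumes. The endpoint paths, session-key header and JSON keys shown here are assumptions about the WSAPI of that era; verify them against the HP 3PAR Web Services API reference for your 3PAR OS release:

```python
# Hedged sketch: authenticate against the 3PAR Web Services API and list
# virtual volumes from Python. Endpoint paths, the session-key header and
# JSON keys are assumptions -- verify against the WSAPI reference for your
# 3PAR OS release.
import requests

ARRAY = "https://3par.example.com:8080"   # hypothetical management address

def get_session_key(user, password):
    r = requests.post(f"{ARRAY}/api/v1/credentials",
                      json={"user": user, "password": password},
                      verify=False)        # lab sketch only; use proper certs
    r.raise_for_status()
    return r.json()["key"]

def list_volumes(session_key):
    r = requests.get(f"{ARRAY}/api/v1/volumes",
                     headers={"X-HP3PAR-WSAPI-SessionKey": session_key},
                     verify=False)
    r.raise_for_status()
    return r.json().get("members", [])

if __name__ == "__main__":
    key = get_session_key("3paradm", "secret")   # placeholder credentials
    for vv in list_volumes(key):
        print(vv.get("name"), vv.get("sizeMiB"))
```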
HP 3PAR Virtual Service Processor
Secure Remote Support
Virtual Service Processor
• Cost-efficient, secure gateway for remote
connectivity
• Effortless, one-click configuration
• Supported on VMware vSphere
• Enables
− Remote, online SW upgrade
− Proactive fault detection with remote call home
diagnostics
− Remote serviceability
− Alert notifications
• Optional HW Service Processor available

20 © 2012 HP – Peter Mattei


Introducing the HP 3PAR Arrays

• F-Class
• StoreServ 7000
• StoreServ 10000

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
3PAR StoreServ 7000

Feature                                     StoreServ 7200    StoreServ 7400 (2-node)    StoreServ 7400 (4-node*)
Controller nodes                            2                 2                          4
Max SFF drives                              144               240                        480
Cache                                       24 GB             32 GB                      64 GB
8Gbit/s FC ports, total (built-in/opt.)     12 (4/8)          12 (4/8)                   24 (8/16)
Optional 10Gbit/s iSCSI/FCoE** ports        4                 4                          8
Built-in IP remote copy ports               2                 2                          4
Controller enclosures                       1                 1                          2
  (2U with 24 SFF drive slots each)
Drive enclosures                            0 to 5            0 to 9                     0 to 18
  (2U with 24 SFF and/or 4U with 24 LFF drive slots each)

* Field upgradeable    ** FCoE available after first release
22 © 2012 HP – Peter Mattei
3PAR StoreServ 7000 controller enclosure
Rear view (nodes 0 and 1) and front view; per node:
• Built-in Eth remote copy port and Eth management port
• 2x built-in 8Gbit/s FC ports
• 2x 4-lane 6Gbit/s SAS ports for drive chassis connection
• Optional PCIe card: 4x 8Gbit/s FC HBA or 2x 10Gbit/s CNA
• Controller interconnect (7400 only)
23 © 2012 HP – Peter Mattei


3PAR 7000 disk chassis
Mix and match drives and shelves as required
4U with 24 LFF drive slots
2U with 24 SFF drive slots

24 © 2012 HP – Peter Mattei


3PAR 7200 max. configurations
[Rack diagrams: one node pair plus drive chassis]
• All-SFF: 144 SFF drives
• Mixed: 24 SFF + 120 LFF drives

25 © 2012 HP – Peter Mattei


3PAR 7400 2-node max. configurations
[Rack diagrams: one node pair plus drive chassis, with 2U left free for a later node upgrade]
• All-SFF: 240 SFF drives
• Mixed: 24 SFF + 216 LFF drives

26 © 2012 HP – Peter Mattei


3PAR 7400 4-node max. configurations
[Rack diagrams: two node pairs plus drive chassis]
• All-SFF: 480 SFF drives
• Mixed: 48 SFF + 432 LFF drives

27 © 2012 HP – Peter Mattei


Drive Specification Overview (HP 3PAR StoreServ 7200 2-node, 7400 2-node, 7400 4-node)
• RAID levels: RAID 0, 10, 50, 60 (all models)
• RAID 5 data-to-parity ratios: 2:1 to 8:1 (all models)
• RAID 6 data-to-parity ratios: 4:2; 6:2; 8:2; 10:2; 14:2 (all models)
• Available 2.5" SFF drives (all models): SSD 100GB, 200GB SLC; 15k rpm 300GB SAS; 10k rpm 450GB, 900GB SAS; 7.2k rpm NA
• Available 3.5" LFF drives (all models): SSD 100GB, 200GB SLC; 15k rpm NA; 10k rpm NA; 7.2k rpm 2TB, 3TB MDL SAS
• Density (all models): 2U node chassis 24 SFF drives; 2U drive chassis 24 SFF drives; 4U drive chassis 24 LFF drives
• Number of 24-drive add-on drive chassis: 7200: 0 to 5; 7400 2-node: 0 to 9; 7400 4-node: 0 to 18
• Number of drives: 7200: 8 to 144; 7400 2-node: 8 to 240; 7400 4-node: 8 to 480

28 © 2012 HP – Peter Mattei


HP 3PAR Software and Features

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Two License Models: Spindle and Frame
HP 3PAR 7000 Software Suites

Spindle-based suites (spindle-based software also available standalone):
• 3PAR 7000 OS Suite: Thin Provisioning, Thin Conversion, Thin Persistence, Thin Copy Reclamation, Full Copy, Autonomic Rebalance, Autonomic Groups, Autonomic Replication Groups, Access Guard, Rapid Provisioning, Host Personas, Persistent Cache, Persistent Ports, System Tuner, Web Services API, Management Console, Host Explorer, Multi-Path IO SW, SmartStart, Scheduler, Virtual SP, Online Import license (180 days), VSS Provider, SMI-S, 3PAR OS Admin Tools (CLI client, SNMP)
• Replication Suite: Virtual Copy, Remote Copy, Peer Persistence (note: requires Remote Copy)
• Data Optimization Suite: Dynamic Optimization, Adaptive Optimization (note: requires System Reporter), Peer Motion
• Security Suite: Virtual Domains, Virtual Lock

Frame-based suites (not sold separately):
• Application Suite for VMware: Recovery Manager for VMware*, VASA, Mgmt Plug-In for VMware, Host Explorer for VMware
• Application Suite for Exchange: Recovery Manager for Exchange*, VSS Provider
• Application Suite for Oracle: Recovery Manager for Oracle*
• Application Suite for SQL: Recovery Manager for SQL*, VSS Provider
• Reporting Suite: System Reporter, 3PARinfo

*Note: Recovery Manager requires Virtual Copy
30 © 2012 HP – Peter Mattei
HP 3PAR StoreServ 7000 software & support licensing
Software Suites Licensing
• 9 suites (4 main array software suites, 4 application suites, • Separate software LTUs per model (7200 vs. 7400)
1 reporting suite) • Licenses are enforced by the 3PAR Array
• Standalone software titles still available if needed—suites provide 25+ Drive-based licenses
percent price advantage • Two licenses to buy software title
• 3PAR OS Suite includes Thin Suite, rebalancing, and 180-day Online – Base LTU: one per system
Import license – Drive LTU: one per drive up to the system cap
• Software caps
Licensed per drive Licensed per system
– 48 LTUs for 7200
HP 3PAR 7000 Application Suite – 168 LTUs for 7400
HP 3PAR 7000 OS Suite
for VMware System-based licenses
HP 3PAR 7000 Application Suite • 1 LTU per system
HP 3PAR 7000 Replication Suite
for Exchange
Service and Support
HP 3PAR 7000 Data Optimization HP 3PAR 7000 Application Suite
• Software Installation and Startup (I&S) services keyed to Base
Suite for SQL
LTU SKUs* only; I&S is optional, although highly recommended
HP 3PAR 7000 Application Suite for new array deployment
HP 3PAR 7000 Security Suite
for Oracle • Software support keyed to Base LTU SKUs only (system-based
license)
HP 3PAR 7000 Reporting Suite
• Support contract required to receive support, patches, and
* Software Installation and Startup services available for Replication Suite, Data Optimization Suite, App Suite updates
for VMware, Microsoft® Exchange, SQL, and Reporting Suite.

31 © 2012 HP – Peter Mattei


HP 3PAR
Thin Technologies

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Thin Technologies Leadership Overview

Start Thin – Thin Provisioning (unlike traditional provisioning, physical capacity is consumed only as data is written)
• No pool management or reservations
• No professional services
• Fine capacity allocation units
• Variable QoS for snapshots
Buy up to 75% less storage capacity

Get Thin – Thin Conversion
• Eliminate the time & complexity of getting thin
• Open, heterogeneous migrations from any array to 3PAR
• Service levels preserved during conversion
Reduce tech-refresh costs by up to 60%

Stay Thin – Thin Persistence
• Free stranded capacity
• Automated reclamation based on T10 WRITE SAME or UNMAP operations
• Snapshots and Remote Copies stay thin
Thin deployments stay thin over time
33 © 2012 HP – Peter Mattei


HP 3PAR Optimization
• Dynamic Optimization
• Adaptive Optimization

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Dynamic and Adaptive Optimization
Manual or automatic tiering across Tier 0 (SSD), Tier 1 (Fibre Channel) and Tier 2 (Enterprise SATA)
• Dynamic Optimization: autonomic data movement at the single/full-volume (CPG) level
• Adaptive Optimization: autonomic tiering and data movement at the sub-volume (region) level
35 © 2012 HP – Peter Mattei
HP 3PAR Dynamic Optimization at a Customer
Optimize QoS levels with autonomic rebalancing without pre-planning

[Chart: used and free chunklets per physical disk (1–96) – after two disk upgrades the distribution is uneven; after Dynamic Optimization rebalancing, chunklets are spread evenly across all disks.]
37 © 2012 HP – Peter Mattei


Performance Example with Dynamic Optimization
Volume Tune from R5, 7+1 SATA to R5, 3+1 FC 10K

38 © 2012 HP – Peter Mattei


Online fat-to-thin conversion
Part of Dynamic Optimization

• Non-disruptively migrate
• fat-to-thin
• thin-to-fat
• between CPGs
• Original volume can either be
• kept
• kept and renamed
• deleted

39 © 2012 HP – Peter Mattei


On-Node Adaptive Optimization

A new version of AO which runs entirely on the InServ

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Adaptive Optimization
Create a configuration

• Adaptive Optimization is defined in policies by tiers and schedules
• Up to 128 policies for different workloads can be defined per 3PAR array
• Each policy can be scheduled individually

41 © 2012 HP – Peter Mattei


HP 3PAR Adaptive Optimization
Creating a configuration
• Each Mode is either Cost, Balanced or
Performance based
− Cost: more data is kept in lower tiers
− Performance: more data is kept in higher
tiers
− Balanced (default): balance between the
two above

• 2 to 3 tiers per policy can be defined


• Each tier is defined by a selected CPG
• A CPG defines drive type, RAID level,
redundancy level and step size
42 © 2012 HP – Peter Mattei
HP 3PAR Adaptive Optimization
Creating a configuration
• Tier movement is based on analyzing the following parameters
  − Average tier service times
  − Average tier access rate densities
  − Space available in the tiers

43 © 2012 HP – Peter Mattei


Adaptive Optimization
Best Practices
SSD recommendations and default CPG growth
• For SSDs, the CPG grow size should be set as small as the system will allow so that as little space as possible is left empty (SSD space is expensive!). Minimum: 8GB per node pair.
• For SSDs, set a growth warning so that up to 95% of the capacity can be used.
• Make sure that the default CPG for VV growth (both data/USR and copy/SNP) has plenty of space to grow (the default growth increment is recommended).
• The default growth CPG for VVs in an AO configuration should NOT be an SSD CPG.

44 © 2012 HP – Peter Mattei


Sizing configurations for AO
Always include FC disks. When using AO, locality of IOs matters!
If unsure what the tier distribution should be, use the following rule of thumb (a quick sizing sketch follows this slide):
• SSD: 1% of usable capacity – should be able to handle 1/3 of the workload
• FC: 40% of usable capacity – should be able to sustain 2/3 of the workload
• NL: 59% of usable capacity – not contributing to performance

Always ensure that no less than 1/3 of the overall capacity is on FC or SAS disks and that it can sustain 2/3 of the application workload.

Tiers should be evenly distributed throughout all disk chassis and node pairs.

45 © 2012 HP – Peter Mattei
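
A quick way to apply the rule of thumb above is a small sizing check. The 1/40/59 split and the one-third FC floor come straight from the slide; the rest of this Python sketch is illustrative:

```python
# Quick sizing check for the rule of thumb above: SSD ~1%, FC ~40%, NL ~59%
# of usable capacity, with at least 1/3 of overall capacity on FC/SAS so it
# can sustain ~2/3 of the application workload. Illustrative numbers only.

def size_tiers(usable_tb):
    split = {"SSD": 0.01, "FC": 0.40, "NL": 0.59}
    return {tier: round(usable_tb * frac, 1) for tier, frac in split.items()}

def fc_share_ok(fc_tb, total_tb):
    """At least one third of the overall capacity should sit on FC/SAS."""
    return fc_tb >= total_tb / 3

if __name__ == "__main__":
    total_tb = 100.0                      # usable capacity, illustrative
    tiers = size_tiers(total_tb)
    print(tiers)                          # {'SSD': 1.0, 'FC': 40.0, 'NL': 59.0}
    print("FC share OK:", fc_share_ok(tiers["FC"], total_tb))
```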


HP 3PAR
Full and Virtual Copy

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Full Copy V1 – restorable copy
Part of the base 3PAR OS
(Base volume → intermediate snapshot → full copy)
• Full physical point-in-time copy
• Provisionable after the copy ends
• Independent of the base volume's RAID and physical layout properties
• Fast resynchronization capability
• Thin Provisioning-aware – full copies can consume the same physical capacity as a thinly provisioned base volume

47 © 2012 HP – Peter Mattei


HP 3PAR Full Copy V2 – instantly accessible copy
Part of the base 3PAR OS
(Base volume → intermediate snapshot → full copy)
• Share data quickly and easily
• Full physical point-in-time copy
• Immediately provisionable to hosts
• Independent of the base volume's RAID and physical layout properties
• No resynchronization capability
• Thin Provisioning-aware – full copies can consume the same physical capacity as a thinly provisioned base volume

48 © 2012 HP – Peter Mattei


HP 3PAR Virtual Copy – Snapshot at its best
Smart
• Individually erasable and promotable
• Scheduled creation/deletion
• Consistency groups
Thin
• No reservation, non-duplicative – 100s of snaps of a base volume, but only one CoW required
• Variable QoS
Ready
• Instantaneously readable and/or writeable
• Snapshots of snapshots of …
• Virtual Lock for retention of read-only snaps
• Automated erase option
Integrated
• MS SQL, MS Exchange, Oracle, vSphere, HP Data Protector, …

Up to 8192 snapshots per array. Top 10 arrays worldwide as of July 2012 (number of snapshots / model):
6559 V800; 6172 S800; 6156 S800; 5138 S800; 4666 S400; 4482 S800; 4341 T800; 4295 T800; 3991 T400; 3871 T800
49 © 2012 HP – Peter Mattei
Be careful – keep spinning-disk utilization below 50%
Response time of different drive technologies
Rule of thumb (a simple queueing approximation that reproduces these numbers is sketched below)
• Spinning disks should operate below 50% utilization
    utilization    ~0%        ~50%
    15k            4.7ms      9.4ms
    10k            6.7ms      13.4ms
    7.2k           11.7ms     23.4ms
• FMD & SSD may operate up to 95% utilization
    utilization    ~0%        ~95%
    FMD            0.01ms     0.2ms
    SSD            0.03ms     0.6ms

50 © 2012 HP – Peter Mattei
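
The table values are consistent with a simple single-server queueing approximation, R = S / (1 − utilization): at 50% utilization the response time is double the unloaded service time (4.7 ms → 9.4 ms for a 15k drive). A hedged Python sketch of that approximation (a rough model, not HP sizing guidance):

```python
# Response time vs. utilization using the simple approximation
# R = S / (1 - rho): at 50% utilization the response time is double the
# unloaded service time, matching the table above. Rough model only.

def response_time_ms(service_ms, utilization):
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_ms / (1.0 - utilization)

if __name__ == "__main__":
    service_times = {"15k rpm": 4.7, "10k rpm": 6.7, "7.2k rpm": 11.7}
    for drive, s in service_times.items():
        print(f"{drive}: {s:.1f} ms unloaded, "
              f"{response_time_ms(s, 0.50):.1f} ms at 50%, "
              f"{response_time_ms(s, 0.80):.1f} ms at 80% utilization")
```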


HP 3PAR the right choice!

Thank you

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
