
Storage Optimization

Module 7

© 2014 VMware Inc. All rights reserved


You Are Here

Course modules:
 Course Introduction
 VMware Management Resources
 Performance in a Virtualized Environment
 Network Scalability
 Network Optimization
 Storage Scalability
 Storage Optimization (this module)
 CPU Optimization
 Memory Performance
 Virtual Machine and Cluster Optimization
 Host and Management Scalability

VMware vSphere: Optimize and Scale 7-2



Importance

Storage can limit the performance of enterprise workloads. You should know how to monitor a host’s storage throughput and troubleshoot problems that result in overloaded storage and slow storage performance.



Module Lessons

Lesson 1: Storage Virtualization Concepts


Lesson 2: Solid-State Drive Storage
Lesson 3: Monitoring Storage Activity
Lesson 4: Command-Line Storage Management
Lesson 5: Troubleshooting Storage Performance Problems



Lesson 1:
Storage Virtualization Concepts



Learner Objectives

By the end of this lesson, you should be able to meet the following
objective:
 Describe factors that affect storage performance



Storage Performance Overview

Storage performance factors


 Storage protocols:
• Fibre Channel, Fibre Channel over
Ethernet (FCoE), hardware iSCSI,
software iSCSI, NFS
 Proper storage configuration
 Load balancing
 Queuing and LUN queue depth
 VMware vSphere® VMFS (VMFS)
configuration:
• Choosing between VMFS and
RDMs
• SCSI reservations
 Virtual disk types



Storage Protocol Performance

VMware® ESXi™ supports Fibre Channel, Fibre Channel over Ethernet, hardware iSCSI, software iSCSI, and NFS.
All storage protocols are capable of delivering high throughput performance.
 When CPU is not a bottleneck, software iSCSI and NFS can be part of a high-performance solution.
Hardware performance features:
 16Gb Fibre Channel
 Software and hardware iSCSI and NFS support for jumbo frames:
• Using Gigabit, 10Gb, and 40Gb Ethernet NICs
• 10Gb iSCSI hardware initiators



SAN Configuration

Proper SAN configuration can help to eliminate performance issues.


Each LUN should have the right RAID level and storage
characteristics for the applications in virtual machines that use it.
Use the right path selection policy:
 Most Recently Used (MRU)
 Fixed
 Round Robin (RR)
 Optional third-party path selection plug-ins (PSPs)



Storage Queues

Queuing at the host:
 The device driver queue controls the number of active commands on the LUN at any time.
• The default queue depth is 32.
 The VMkernel queue is an overflow queue for the device driver queue.
Queuing at the storage array:
 Queuing occurs when the number of active commands to a LUN is too high for the storage array to handle.
Latency increases with excessive queuing at the host or the storage array.



Performance Impact of Queuing on the Storage Array

Sequential workloads generate random access at the storage array.
 The performance of sequential reads is similar to the performance of random reads because the array receives reads from multiple hosts.
 Sequential writes are unaffected because of the write cache.
[Chart: aggregate throughput (MBps) against the number of hosts (1 to 64) for sequential reads, sequential writes, random reads, and random writes.]



Device Driver Queue Depth

Device driver queue depth determines how many commands to a given LUN can be active at one time.
Set the device driver queue depth properly to decrease disk latency:
 QLogic adapters have a default queue depth of 64.
 Other brands have a default queue depth of 32.
 The maximum recommended queue depth is 64. Set the LUN queue depth to this maximum.
Set Disk.SchedNumReqOutstanding to the same value as the queue depth.
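The interplay of queue depth, throughput, and latency follows Little's Law, a general queueing result (not a VMware-specific formula). A minimal sketch:

```python
# Little's Law: outstanding I/Os = IOPS * latency (in seconds).
# For a device sustaining a fixed IOPS rate, deeper queues mean
# proportionally higher per-command latency, not more throughput.

def avg_latency_ms(outstanding_ios, iops):
    """Average time a command spends in flight, in milliseconds."""
    return outstanding_ios / iops * 1000.0

# A LUN sustaining 3,200 IOPS with the default queue depth of 32:
print(avg_latency_ms(32, 3200))   # 10.0 ms
# Doubling the queue depth to 64 without more array headroom
# doubles latency instead of throughput:
print(avg_latency_ms(64, 3200))   # 20.0 ms
```

This is why raising the queue depth only helps when the array has unused headroom; otherwise commands simply wait longer in the queue.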



Network Storage: iSCSI and NFS

Avoid oversubscribing your links.


 Using VLANs does not solve the problem of oversubscription.
Isolate iSCSI traffic and NFS traffic.
Applications that write a lot of data to storage should not share
Ethernet links to a storage device.
For software iSCSI and NFS, protocol processing uses CPU
resources on the host.



About VMFS-5

VMFS-5 provides improvements in scalability and performance over VMFS-3:
 The datastore and a single extent can be greater than 2TB.
• The maximum datastore size is 64TB.
• The maximum virtual disk size is 62TB.
 1MB file system block size, which supports files up to 62TB in size:
• The file system subblock size is 8KB.
 Efficient storage of small files:
• Data of small files (less than or equal to 1KB) is stored directly in the file
descriptor.
 Support for the GUID Partition Table format
 Raw device mappings have the following maximum sizes:
• Physical compatibility mode: 64TB
• Virtual compatibility mode: 62TB



VMFS Performance Factors

VMFS partition alignment:


 The VMware vSphere® Web Client properly aligns a VMFS partition on
the 1MB boundary.
 Performance improvement is dependent on workloads and array types.
Spanning VMFS volumes:
 This feature is effective for increasing VMFS size dynamically.
 Predicting performance is not straightforward.
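As an illustrative check (not a VMware tool), the 1MB alignment described above can be verified from a partition's starting offset:

```python
MB = 1024 * 1024

def is_1mb_aligned(start_offset_bytes):
    """True if a partition start offset lies on a 1MB boundary."""
    return start_offset_bytes % MB == 0

# vSphere Web Client starts VMFS-5 partitions at sector 2048
# (2048 sectors * 512 bytes = exactly 1MB):
print(is_1mb_aligned(2048 * 512))   # True
# A legacy partition starting at sector 63 is misaligned:
print(is_1mb_aligned(63 * 512))     # False
```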



SCSI Reservations

A SCSI reservation:
 Causes a LUN to be used exclusively by a single host for a brief period
 Is used by a VMFS instance to lock the file system while the VMFS
metadata is updated
Operations that result in metadata updates:
 Creating or deleting a virtual disk
 Increasing the size of a VMFS volume
 Creating or deleting snapshots
 Increasing the size of a VMDK file
To minimize the impact on virtual machine performance:
 Postpone major maintenance and configuration until off-peak hours.
If the array supports VMware vSphere® Storage APIs – Array Integration (VAAI) hardware-assisted locking, SCSI reservations are not necessary.


VMFS Versus RDMs

VMFS is the preferred option for most enterprise applications. Examples:
 Databases, ERP, CRM, Web servers, and file servers
RDM is preferred when raw disk access is necessary.

I/O Characteristic                                  Choice for Better Performance
Random reads/writes                                 VMFS and RDM yield similar I/O operations per second.
Sequential reads/writes at small I/O block sizes    VMFS and RDM yield similar performance.
Sequential reads/writes at larger I/O block sizes   VMFS



Virtual Disk Types

 Eager-zeroed thick: Space is allocated and zeroed out at the time of creation. Created with vSphere Web Client or vmkfstools. Extended creation time, but best performance from the first write operation; vSphere Storage APIs – Array Integration can offload zeroing out to the array. Use case: quorum drive in an MSCS cluster; Fault Tolerance.
 Lazy-zeroed thick: Space is allocated at the time of creation but zeroed on first write (default). Created with vSphere Web Client or vmkfstools. Shorter creation time, but reduced performance on the first write to a block. Use case: good for most cases.
 Thin: Space is allocated and zeroed on demand. Created with vSphere Web Client or vmkfstools. Shorter creation time, but reduced performance on the first write to a block. Use case: disk space utilization is the main concern.


Review of Learner Objectives

You should be able to meet the following objective:


 Describe factors that affect storage performance



Lesson 2:
Solid-State Drive Storage



Learner Objectives

By the end of this lesson, you should be able to meet the following
objectives:
 Describe benefits of VMware vSphere® Flash Read Cache™
 Explain the benefit of using Virtual Flash Host Swap Cache when
virtual machine swap files are stored on non-SSD datastores
 Describe the interaction between Flash Read Cache, VMware
vSphere® Distributed Resource Scheduler™ (DRS), and VMware
vSphere® High Availability
 Describe limitations of Flash Read Cache
 Explain the purpose of a VMware® Virtual SAN™ datastore
 Describe the architecture and requirements of Virtual SAN



About Flash Read Cache

Key features:
 Hypervisor-based, software-defined flash storage tier solution
 Aggregates local flash devices to provide a flash resource for virtual machine and VMware vSphere® host consumption (Virtual Flash Host Swap Cache)
 Local flash devices are used as a cache.
 Integrated with:
• VMware® vCenter Server™
• vSphere HA
• DRS
• VMware vSphere® vMotion®
[Diagram: flash memory as a new storage tier in ESXi, with a Flash Read Cache infrastructure pooling local SSDs under vSphere.]



Flash Read Cache: Host-Based Flash Tier

Key features:
 Flash Read Cache infrastructure:
• Pools local flash devices
• Provides flash-based resource management
 Cache software:
• ESXi host-based caching
• Provides per-VMDK caching
Benefits:
 Easy to manage as a pooled resource
 Targeted use is per-VMDK.
 Transparent to the application and guest OS



Reasons to Use Flash Read Cache

 Flash Read Cache lets you accelerate virtual machine performance through the use of host-resident flash devices as a cache.
 Flash Read Cache supports write-through read caching.
• Write-back (write) caching is not supported.
[Diagram: write-through caching. A write is committed to the backing storage and acknowledged to the virtual machine; the cache is populated from reads.]

 Data reads are satisfied from the cache, if present.


 The use of Flash Read Cache significantly improves virtual machine
read performance.



Primary Use Cases

Read-intensive workloads:
 Collaboration applications
 Databases
 Middleware applications
[Diagram: flash as a new tier in vSphere, with Flash Read Cache pooling local SSDs.]



Flash Read Cache Requirements

vCenter Server 5.5:
 Central point of management
 vSphere Web Client
At least one vSphere host:
 ESXi version 5.5 or later
 Maximum of 32 hosts in a cluster
Virtual machine hardware:
 Virtual machine version 10 (ESXi 5.5 or later compatibility)
The Flash Read Cache feature is available in VMware vSphere® Enterprise Plus Edition™.
To configure Flash Read Cache, the user’s vCenter Server role must include the following privileges:
 Host.Config.Storage
 Host.Config.AdvancedConfig (for virtual flash resource configuration)



Flash Read Cache Hardware Requirements

 Host on the vSphere HCL
 Only solid-state drives (SSDs), either SAS/SATA SSDs or PCIe flash cards, are used for the read cache.
 SAN storage arrays and local hard-disk drives (HDDs) are used as the persistent store.
 VMware recommends that all hosts in a cluster be identically configured.



Flash Read Cache Configuration

1. Add SSD capacity.
2. Configure a virtual flash resource (repeat as necessary).
3. Configure virtual flash host swap cache (if necessary).
4. Configure Flash Read Cache for each virtual machine (repeat as necessary).



Flash Read Cache Management

All management tasks pertaining to the installation, configuration, and monitoring of Flash Read Cache are done from vSphere Web Client.



Virtual Flash Resource

 Each host creates a virtual flash resource, which contains one or more SSDs.
• Individual SSDs must be exclusively allocated to a virtual flash resource.
 One virtual flash resource per ESXi host
 A virtual flash resource contains one virtual flash volume.



Virtual Flash Volumes

The file system on a virtual flash resource is a derivative of VMFS:


 Optimized for grouping flash devices

vSphere

virtual flash volume virtual flash volume

SSD SSD SSD SSD



Virtual Flash Volume Limits

Parameter                                      Value per Host
Number of virtual flash volumes per host       1 (local only)
Number of SSDs per virtual flash volume        8 or fewer
SSD or flash device size                       4TB or less
Virtual flash volume size                      32TB or less
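These limits can be captured in a small validation sketch; the function name and units are illustrative, not part of any VMware API:

```python
TB = 1024 ** 4

def validate_virtual_flash_volume(ssd_sizes_bytes):
    """Check a proposed virtual flash volume against the per-host limits:
    at most 8 SSDs, each 4TB or less, total volume 32TB or less."""
    if len(ssd_sizes_bytes) > 8:
        return False, "more than 8 SSDs per virtual flash volume"
    if any(size > 4 * TB for size in ssd_sizes_bytes):
        return False, "an SSD exceeds 4TB"
    if sum(ssd_sizes_bytes) > 32 * TB:
        return False, "volume exceeds 32TB"
    return True, "ok"

# Eight 4TB SSDs exactly reach the 32TB volume limit:
print(validate_virtual_flash_volume([4 * TB] * 8))   # (True, 'ok')
print(validate_virtual_flash_volume([4 * TB] * 9))   # (False, 'more than 8 SSDs per virtual flash volume')
```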



Flash Read Cache Use

When a virtual flash resource is created, the following features can use the resource:
 Virtual Flash Host Swap Cache
• Provides the ability to use the virtual flash resource for memory swapping.
• Provides legacy support for the swap-to-SSD option.
 Flash Read Cache for virtual machines
• Provides virtual machine transparent flash access.
• Per-VMDK cache configuration enables fine-grained control.
• Cache block size is 4KB to 1024KB.
• The cache size should be big enough to hold the active working set of the
workload.
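The guidance that the cache should hold the active working set can be sketched numerically. This is a rough rule-of-thumb model with hypothetical function names, assuming uniformly random reads over the working set:

```python
def cache_fits_working_set(cache_size_gb, working_set_gb):
    """A read cache only avoids disk reads for blocks it can actually hold."""
    return cache_size_gb >= working_set_gb

def expected_hit_ratio(cache_size_gb, working_set_gb):
    """Crude upper bound on the cache hit ratio: the fraction of the
    working set the cache can hold, capped at 1.0."""
    return min(1.0, cache_size_gb / working_set_gb)

# A 20GB cache in front of an 80GB active working set:
print(expected_hit_ratio(20, 80))    # 0.25
# A cache at least as large as the working set can serve all repeat reads:
print(expected_hit_ratio(100, 80))   # 1.0
```

Real hit ratios depend on access locality, so an undersized cache can do better or worse than this bound suggests; the safe choice is sizing to the working set.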



Virtual Flash Host Swap Cache

Configure Virtual Flash Host Swap Cache in vSphere Web Client.


vSphere Flash Swap Caching can utilize up to 4TB of vSphere Flash
Resource.



Flash Read Cache in Use

[Diagram: VMDK1 configured without Flash Read Cache and VMDK2 configured with Flash Read Cache, both served by the Flash Read Cache software and the vSphere Flash infrastructure backed by a local SSD.]



Flash Read Cache Interoperability

 Fully integrated
with vSphere
vMotion, DRS,
and vSphere HA.
 vSphere vMotion
migration
workflows give the
option of whether
or not to migrate
the cache
contents.
 Advanced settings
allow individual
VMDK migration
settings.



Flash Read Cache Interoperability: vSphere vMotion

[Diagram: two hosts with Flash Read Cache sharing a storage array. vSphere vMotion migrates a virtual machine, optionally with its cache contents; Cross-Host vSphere Storage vMotion also relocates the virtual machine files.]


Flash Read Cache Interoperability: DRS

 Virtual flash resources are managed at the host level.
• There is no cluster-wide knowledge about Flash Read Cache availability or use.
 DRS selects a host that has available virtual flash capacity to start a virtual machine.
 There is no automatic virtual machine migration for Flash Read Cache optimization.
• DRS migrates a virtual machine only for mandatory reasons or if necessary to correct host overutilization.



Flash Read Cache Interoperability: vSphere HA

If a host fails, virtual machines are restarted on hosts that have sufficient Flash Read Cache resources.
 A virtual machine cannot power on if virtual flash resources are insufficient.
If a virtual machine fails, it is restarted on the same host.
In both scenarios, the Flash Read Cache contents are lost and must be repopulated.



Virtual Flash Resource Management

Virtual machine cache:


 Virtual machine cache is created only when virtual machines are
powered on.
 Space is reclaimed when virtual machines are powered off.
 Virtual machine cache expands and shrinks dynamically, taking
reservation into account.
 Virtual machine cache is migrated (optionally) when virtual machines are
moved to a different host.
Virtual flash resource:
 Virtual flash resource is allocated dynamically across all powered-on
cache-enabled virtual machines.
 Virtual machines fail to power on if virtual flash resource capacity is
insufficient to satisfy the cache reservation.
 Reservation defines the size of the VMDK's cache.
 Shares are not supported.
 Host Flash Cache resource cannot be overcommitted.
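Because the host flash cache resource cannot be overcommitted, power-on admission behaves like a simple reservation check. A sketch of that logic (illustrative only, not the actual ESXi implementation):

```python
def can_power_on(existing_reservations_gb, new_reservation_gb, resource_capacity_gb):
    """A virtual machine powers on only if its cache reservation still
    fits alongside the reservations of already powered-on machines."""
    return sum(existing_reservations_gb) + new_reservation_gb <= resource_capacity_gb

# A 100GB virtual flash resource with 70GB already reserved:
print(can_power_on([40, 30], 30, 100))   # True  (exactly fits)
print(can_power_on([40, 30], 35, 100))   # False (would overcommit)
```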


Flash Read Cache: Performance Statistics Counters

A new set of performance


statistics counters are
available in vCenter Server
advanced performance
charts:
 vFlashCacheIOPS
 vFlashCacheLatency
 vFlashCacheThroughput



Flash Read Cache Limitations

Flash Read Cache has the following limitations:


 Support for only locally attached SSDs
 Not compatible with VMware vSphere® Fault Tolerance
 Cannot share an SSD with Virtual SAN or a VMFS datastore



About Virtual SAN

vSphere 5.5 offers experimental support for Virtual SAN, a software-defined storage solution.
Virtual SAN aggregates the direct-attached storage of ESXi hosts to
create a storage pool that can be used by virtual machines.
Virtual SAN has the following benefits:
 vSphere and vCenter Server integration
 Storage scalability
 Built-in resiliency
 SSD caching
 Converged compute and storage resources



Virtual SAN Architecture

Disks from multiple ESXi hosts are grouped together to form a Virtual SAN datastore.

vSphere

Virtual SAN datastore


Virtual SAN cluster

disk group disk group disk group



Object-Based Storage

Virtual SAN stores and manages data in the form of flexible data
containers called objects.

object object container object


VMDK VMDK virtual machine’s
file file metadata files

vSphere

Virtual SAN datastore


Virtual SAN cluster

disk group disk group disk group



Virtual Machine Storage Policies

Virtual machine storage policies are created before virtual machine deployment to reflect the requirements (capacity, availability, and performance) of the applications running in the virtual machine.
The storage policy is based on the Virtual SAN capabilities.
Based on virtual machine requirements, the appropriate policy is selected at deployment time.
[Diagram: a virtual machine storage policy applied to a virtual machine deployed on the Virtual SAN datastore, which spans the disk groups of the Virtual SAN cluster.]



Virtual SAN Requirements

A Virtual SAN cluster has the following requirements:


 Minimum of three hosts contributing local disks, running ESXi 5.5 or
later, and managed by vCenter Server 5.5 or later:
• Each host must have at least one SSD and one HDD.
 Minimum of 4GB RAM on each host
 Dedicated network connecting hosts in the cluster:
• 10Gb NIC for each host is recommended.
• Two 10Gb NICs are preferred for fault tolerance purposes.
SSD capacity should make up at least 10 percent of the total storage
capacity.
Virtual SAN does not support datastores with greater than 2TB
capacity.
Virtual SAN and Flash Read Cache are not compatible.
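The 10 percent SSD guideline can be checked with a one-line ratio test (an illustrative sketch, not a VMware sizing tool):

```python
def ssd_ratio_ok(ssd_capacity_gb, hdd_capacity_gb, minimum=0.10):
    """SSD capacity should be at least 10 percent of the total storage capacity."""
    total = ssd_capacity_gb + hdd_capacity_gb
    return ssd_capacity_gb / total >= minimum

# 200GB of SSD against 1,800GB of HDD is exactly 10 percent of the total:
print(ssd_ratio_ok(200, 1800))   # True
# 100GB of SSD against 1,900GB of HDD is only 5 percent:
print(ssd_ratio_ok(100, 1900))   # False
```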



Review of Learner Objectives

You should be able to meet the following objectives:


 Describe benefits of Flash Read Cache
 Explain the benefit of using Virtual Flash Host Swap Cache when
virtual machine swap files are stored on non-SSD datastores
 Describe the interaction between Flash Read Cache, DRS, and
vSphere High Availability
 Describe limitations of Flash Read Cache
 Explain the purpose of a Virtual SAN™ datastore
 Describe the architecture and requirements of Virtual SAN



Lesson 3:
Monitoring Storage Activity



Learner Objectives

By the end of this lesson, you should be able to meet the following
objectives:
 Determine which disk metrics to monitor
 Identify metrics in vCenter Server and resxtop
 Demonstrate how to monitor disk throughput



Applying Space Utilization Data to Manage Storage Resources

Select Datastore > Monitor > Performance.

The overview charts on the Performance tab display usage details of the datastore.
By default, the displayed
charts include:
 Space Utilization
 By Virtual Machines
(Top 5)
 1 Day Summary



Disk Capacity Metrics

Identify disk problems.


 Determine available bandwidth and compare with expectations.
What do you do?
 Check key metrics. In a vSphere environment, the most significant
statistics are:
• Disk throughput
• Latency (device, kernel)
• Number of aborted disk commands
• Number of active disk commands
• Number of active commands queued



Monitoring Disk Throughput with vSphere Web Client

vSphere Web Client counters:


 Disk Read Rate, Disk Write Rate, Disk Usage



Monitoring Disk Throughput with resxtop

Adapter view: Type d.

Measure disk throughput with the following:


 READs/s and WRITEs/s
 READs/s + WRITEs/s = IOPS
Or you can use:
 MBREAD/s and MBWRTN/s
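The arithmetic behind these counters is straightforward; for example, converting resxtop rates into total IOPS and throughput (helper names are illustrative):

```python
def iops(reads_per_sec, writes_per_sec):
    """resxtop reports READs/s and WRITEs/s; their sum is total IOPS."""
    return reads_per_sec + writes_per_sec

def throughput_mbps(total_iops, avg_io_size_kb):
    """Throughput follows from IOPS and the average I/O size."""
    return total_iops * avg_io_size_kb / 1024.0

# 1,200 reads/s plus 300 writes/s:
print(iops(1200, 300))            # 1500
# 1,500 IOPS of 8KB I/Os:
print(throughput_mbps(1500, 8))   # 11.71875 MB/s
```

This is why IOPS alone is not enough: the same IOPS figure represents very different throughput depending on the I/O size, which is what MBREAD/s and MBWRTN/s capture directly.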



Disk Throughput Example

Adapter view: Type d.

Device view: Type u.

Virtual machine view: Type v.



Monitoring Disk Latency with vSphere Web Client

vSphere Web Client counters:


 Physical device latency counters and kernel latency counters



Monitoring Disk Latency with resxtop

Adapter view: Type d.

Host bus adapters (HBAs) include SCSI, iSCSI, RAID, and FC-HBA adapters.
Latency statistics are reported for the device, the VMkernel, and the guest:
 DAVG/cmd: Average latency (ms) of the device (LUN)
 KAVG/cmd: Average latency (ms) in the VMkernel, also called queuing time
 GAVG/cmd: Average latency (ms) as seen by the guest.
GAVG = DAVG + KAVG.
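The latency relationship can be worked as a quick calculation. The 2 ms threshold in the comment is a commonly cited rule of thumb, not an official limit:

```python
def gavg(davg_ms, kavg_ms):
    """Guest-observed latency is device latency plus VMkernel queuing time."""
    return davg_ms + kavg_ms

def kavg(gavg_ms, davg_ms):
    """Queuing time falls out by subtraction when GAVG and DAVG are known.
    A healthy host shows KAVG near zero; sustained KAVG above roughly
    2 ms is a common sign of queuing in the VMkernel."""
    return gavg_ms - davg_ms

# A slow device with almost no VMkernel queuing:
print(gavg(12.0, 0.5))   # 12.5 ms
# A guest seeing 25 ms against a 10 ms device points at queuing:
print(kavg(25.0, 10.0))  # 15.0 ms
```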



Monitoring Commands and Command Queuing

Performance Metric                                            Name in vSphere Web Client    Name in resxtop or esxtop
Number of active commands (I/O operations currently active)   Commands issued               ACTV
Number of commands queued (I/O operations that require        Queue command latency         QUED
processing)


Disk Latency and Queuing Example

Adapter view: Type d. (normal VMkernel latency)

Device view: Type u. (queuing at the device)


Monitoring Severely Overloaded Storage

vSphere Web
Client counter:
 Disk Command
Aborts
resxtop counter:
 ABRTS/s



Configuring Datastore Alarms

To configure datastore alarms:


1. Right-click the datastore and select Alarms > New Alarm Definition.
2. Type the condition or event you want to monitor and the action that
must occur as a result.



Analyzing Datastore Alarms

To display triggered alarms for a datastore, select the datastore and select Monitor > Issues > Triggered Alarms.



Introduction to Lab 9: Monitoring Storage Performance

[Diagram: the test virtual machine Linux01 runs the scripts fileserver1.sh, fileserver2.sh, datawrite.sh, and logwrite.sh against Linux01.vmdk (system disk) and Linux01_1.vmdk (local data disk) on a local VMFS datastore, and against a remote data disk on a shared-storage VMFS datastore on your assigned LUN.]


Lab 9: Monitoring Storage Performance

Use a vSphere advanced chart to monitor disk performance across a series of tests:
1. Prepare for the Lab
2. Prepare the Test Virtual Machine
3. Prepare the IP Storage Network for Testing
4. Create a Real-Time Disk I/O Performance Chart
5. Prepare to Run Tests
6. Measure Continuous Sequential Write Activity to a Virtual Disk on a Remote
Datastore
7. Measure Continuous Random Write Activity to a Virtual Disk on a Remote
Datastore
8. Measure Continuous Random Read Activity to a Virtual Disk on a Remote
Datastore
9. Measure Continuous Random Read Activity to a Virtual Disk on a Local
Datastore
10. Analyze the Test Results
11. Clean Up for the Next Lab


Review of Lab 9: Monitoring Storage Performance

Record the latest write rate and the latest read rate for each test:
 Test 1: Sequential Writes to a Virtual Disk on a Remote Datastore
 Test 2: Random Writes to a Virtual Disk on a Remote Datastore
 Test 3: Random Reads from a Virtual Disk on a Remote Datastore
 Test 4: Random Reads from a Virtual Disk on a Local Datastore



Review of Learner Objectives

You should be able to meet the following objectives:


 Determine which disk metrics to monitor
 Identify metrics in vCenter Server and resxtop
 Demonstrate how to monitor disk throughput



Lesson 4:
Command-Line Storage Management



Learner Objectives

By the end of this lesson, you should be able to meet the following
objectives:
 Use VMware vSphere® Management Assistant to manage vSphere
virtual storage
 Use vmkfstools for VMFS operations
 Use the vscsiStats command



Managing Storage with vSphere Management Assistant

Storage Management Task                              vSphere Management Assistant Command
Examine LUNs. esxcli

Manage storage paths. esxcli

Manage NAS storage. esxcli

Manage iSCSI storage. esxcli

Mask LUNs. esxcli

Manage PSA plug-ins. esxcli

Migrate virtual machines to a different datastore. svmotion



Examining LUNs

Use the esxcli command with the storage namespace:


 esxcli conn_options storage core|filesystem cmd_options
Examples:
 To list all logical devices known to a host:
• esxcli --server esxi02 storage core device list
 To list a specific device:
• esxcli --server esxi02 storage core device list -d mpx.vmhba32:C0:T0:L0
 To display host bus adapter (HBA) information:
• esxcli --server esxi02 storage core adapter list
 To print mappings of datastores to their mount points and UUIDs:
• esxcli --server esxi02 storage filesystem list



Managing Storage Paths

Use the esxcli command with the storage core path|adapter namespace:
 esxcli conn_options storage core path|adapter cmd_options
Examples:
 To display mappings between HBAs and devices:
• esxcli --server esxi02 storage core path list
 To list the statistics for a specific device path:
• esxcli --server esxi02 storage core path stats get --path vmhba33:C0:T2:L0
 To rescan all adapters:
• esxcli --server esxi02 storage core adapter rescan --all



Managing NAS Storage

Use the esxcli command with the storage nfs namespace:


 esxcli conn_options storage nfs cmd_options
Examples:
 To list NFS file systems:
• esxcli --server esxi02 storage nfs list
 To add an NFS file system to an ESXi host:
• esxcli --server esxi02 storage nfs add --host=nfs.vclass.local --share=/lun2 --volume-name=MyNFS
 To unmount an NFS file system:
• esxcli --server esxi02 storage nfs remove --volume-name=MyNFS



Managing iSCSI Storage

Use the esxcli command with the iscsi namespace:


 esxcli conn_options iscsi cmd_options
Examples:
 To enable software iSCSI:
• esxcli --server esxi02 iscsi software set --enabled=true
 To bind a network portal to a software iSCSI adapter on an ESXi host:
• esxcli --server esxi02 iscsi networkportal add -n vmk2 -A vmhba33
 To check the software iSCSI adapter status:
• esxcli --server esxi02 iscsi software get
- If the command returns the value true, the adapter is enabled.



Masking LUNs

You can prevent an ESXi host from accessing the following:


 A storage array
 A LUN
 An individual path to a LUN
Use the esxcli command to mask access to storage.
 esxcli --server esxi02 storage core claimrule add
-r claimrule_ID -t type required_option -P MASK_PATH
Example:
 To add rule 429 for the MP claim rule type that claims all paths provided
by an adapter with the mptscsi driver for the MASK_PATH plugin, use:
• esxcli --server esxi02 storage core claimrule add
-r 429 -t driver -D mptscsi -P MASK_PATH
--claimrule-class=MP



Managing PSA Plug-Ins

Use the esxcli command to manage the NMP.


 List the devices controlled by the NMP
• esxcli storage nmp device list
 Set the PSP for a device
• esxcli storage nmp device set --device naa.xxx
--psp VMW_PSP_FIXED
 View paths claimed by NMP
• esxcli storage nmp path list
 Retrieve PSP information
• esxcli storage nmp psp generic deviceconfig get
--device=device
• esxcli storage nmp psp fixed deviceconfig get
--device=device
• esxcli storage nmp psp roundrobin deviceconfig get
--device=device

VMware vSphere: Optimize and Scale 7-75



Modifying the Default Round-Robin Path Selection Settings

You can specify when the path should change by using the
--bytes or --iops argument.
 To set the device to switch to the next path each time 12,345 bytes are
sent along the current path:
esxcli conn_options storage nmp psp roundrobin
deviceconfig set --type=bytes --bytes=12345 --device
naa.xxx
 To set the device to switch after 4,200 I/O operations are performed on
a path:
esxcli conn_options storage nmp psp roundrobin
deviceconfig set --type=iops --iops=4200 --device
naa.xxx
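To make the --iops setting concrete, a toy calculation in plain shell (not an ESXi tool; the I/O counts are illustrative): with --iops=4200, the PSP moves to the next path after every 4,200 commands, so a burst of 12,600 I/Os crosses paths three times.

```shell
#!/bin/sh
# Toy sketch: how often the round-robin PSP switches paths for a
# given --iops setting. Numbers are illustrative only.
iops_limit=4200
total_ios=12600
switches=$((total_ios / iops_limit))
echo "With --iops=$iops_limit, $total_ios I/Os cause $switches path switches"
```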

VMware vSphere: Optimize and Scale 7-76



Performing Storage vMotion from the Command Line

Use the svmotion command to perform VMware vSphere® Storage
vMotion® migrations:
 svmotion conn_options
--datacenter=datacenter_name
--vm=VM_config_datastore_path:new_datastore
[--disks=virtual_disk_datastore_path:new_datastore]
Example:
 svmotion --server vc01 --username administrator
--password vmware1! --datacenter=Training
--vm='[Local01] LabVM1/LabVM1.vmx:SharedVMs'
To run svmotion in interactive mode:
 svmotion conn_options --interactive
The command prompts you for the information it needs to complete the storage
migration.
The virtual machine must be powered on when the command is run.

VMware vSphere: Optimize and Scale 7-77



vmkfstools Overview

vmkfstools is a command for managing VMFS volumes and virtual
disks, available in VMware vSphere® ESXi™ Shell and VMware
vSphere® Command-Line Interface.
 vmkfstools performs many storage operations from the command
line.
 Examples of vmkfstools operations:
• Create a VMFS datastore
• Manage a VMFS datastore
• Manipulate virtual disk files
 Common options, including connection options, can be displayed with
the following command:
vmkfstools --help

VMware vSphere: Optimize and Scale 7-78



vmkfstools Commands

vmkfstools can perform operations on the following:


 Virtual disks
 File systems
 Logical volumes
 Physical storage devices

VMware vSphere: Optimize and Scale 7-79



vmkfstools General Options

General options for vmkfstools :


 connection_options specifies the target server and authentication
information.
 --help prints a help message for each command-specific option.
 --server specifies an ESXi host to run the command against, or a
vCenter Server system.
 --vihost specifies the ESXi host to run the command against when
the target server is a vCenter Server system.
 -v specifies the verbosity level of the command output.

VMware vSphere: Optimize and Scale 7-80



vmkfstools Command Syntax

vmkfstools supports the following command syntax:


 vmkfstools conn_options target
 One or more command-line options specify the activity for
vmkfstools to perform.
 target is the location where the operation is performed.
For example, to create a VMFS-5 datastore on the specified device:
 vmkfstools --createfs vmfs5 --blocksize 1M disk_ID:P
--setfsname volume_name
 vmkfstools -C vmfs5 -b 1M disk_ID:P

VMware vSphere: Optimize and Scale 7-81



vmkfstools File System Options

vmkfstools is used to create and manage VMFS file systems:


 --createfs creates a VMFS file system to a specified partition.
 --blocksize specifies the block size of the VMFS file system.
 --setfsname sets the name of the VMFS file system to create.
 --spanfs extends the VMFS file system with the specified partition.
 --rescanvmfs rescans the host for new VMFS volumes.
 --queryfs lists attributes of a file or directory on a VMFS volume.
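These options combine into a typical create-then-inspect workflow. A hedged sketch follows, with vmkfstools stubbed so the calls can be traced without a host; the device ID and volume name are made up for illustration:

```shell
#!/bin/sh
# Sketch: 'vmkfstools' is stubbed to echo its arguments. The device ID
# (naa.60003ff44dc75adc:1) and datastore name are hypothetical.
vmkfstools() { echo "vmkfstools $*"; }

vmkfstools --createfs vmfs5 --blocksize 1M \
    --setfsname Datastore01 naa.60003ff44dc75adc:1   # create the file system
vmkfstools --queryfs /vmfs/volumes/Datastore01        # list its attributes
```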

VMware vSphere: Optimize and Scale 7-82



vmkfstools Virtual Disk Options

vmkfstools is used to create and manage virtual machine disks:


 --createvirtualdisk
 --adapterType
 --diskformat
 --clonevirtualdisk
 --deletevirtualdisk
 --renamevirtualdisk
 --extendvirtualdisk
 --createrdm
 --createrdmpassthru
 --queryrdm
 --geometry
 --writezeros
 --inflatedisk
VMware vSphere: Optimize and Scale 7-83



vmkfstools Virtual Disk Example

The following example assumes that you are specifying connection
options.
Example:
 Create a virtual disk that is 2GB in size:
• vmkfstools -c 2048m '[Storage1] rh6-2.vmdk'
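Beyond creation, the same tool clones and grows disks. A hedged sketch (vmkfstools is stubbed to echo calls; the paths are illustrative; -i is the short form of --clonevirtualdisk and -X of --extendvirtualdisk):

```shell
#!/bin/sh
# Sketch: clone a disk in thin format, then extend the copy to 4 GB.
# 'vmkfstools' is stubbed for tracing; remove the stub on a real host.
vmkfstools() { echo "vmkfstools $*"; }

vmkfstools -i '[Storage1] rh6-2.vmdk' '[Storage1] rh6-2-copy.vmdk' -d thin
vmkfstools -X 4g '[Storage1] rh6-2-copy.vmdk'
```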

VMware vSphere: Optimize and Scale 7-84



vscsiStats

vscsiStats collects and reports counters on storage activity:


 Data is collected at the virtual SCSI device level in the kernel.
 Results are reported per VMDK, regardless of the underlying storage
protocol.
 The command name is case-sensitive.
 The command is run in vSphere ESXi Shell.
vscsiStats reports data in histogram form. The data includes:
 I/O size
 Seek distance
 Outstanding I/Os
 Latency in microseconds

VMware vSphere: Optimize and Scale 7-85



Reasons to Use vscsiStats

Provides virtual disk latency statistics, with microsecond precision,
for both VMFS and NFS datastores.
Reports data in histogram form instead of averages:
 Histograms of observed data values can be much more informative
than single numbers like mean, median, and standard deviations from
the mean.
Exposes sequentiality of I/O:
 This tool shows sequential versus random access patterns.
 Sequentiality can help with storage sizing, LUN architecture, and
identification of application-specific behavior.
Provides a breakdown of I/O sizes:
 Knowledge of expected I/O sizes can be used to optimize performance
of the storage architecture.

VMware vSphere: Optimize and Scale 7-86



Running vscsiStats

vscsiStats <options>
 -l List running virtual machines and their world IDs
(worldGroupID).
 -s Start vscsiStats data collection.
 -x Stop vscsiStats data collection.
 -p Print histograms, specifying histogram type.
 -c Produce results in a comma-delimited list.
 -h Display help menu for more information about
command-line parameters.
 -w Specify the worldGroupID of the virtual machine.
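The options above are typically used in a fixed sequence. A hedged sketch of a session (vscsiStats is stubbed so the workflow can be traced outside ESXi Shell; world ID 12345 is made up):

```shell
#!/bin/sh
# Sketch: a typical vscsiStats session. 'vscsiStats' is stubbed to
# echo each call; the world ID is hypothetical.
vscsiStats() { echo "vscsiStats $*"; }

vscsiStats -l                      # find the VM's worldGroupID
vscsiStats -s -w 12345             # start collection for that VM
# ... let the workload run for several minutes ...
vscsiStats -p latency -c -w 12345  # print the latency histogram as CSV
vscsiStats -x -w 12345             # stop collection
```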

VMware vSphere: Optimize and Scale 7-87



Lab 10: Command-Line Storage Management

Use various command-line tools to manage and monitor storage


1. Prepare for the Lab
2. Use the ESXCLI Commands to View Storage Configuration Settings
3. Use the vmkfstools Commands to Manage Volumes and Virtual
Disks
4. Generate Disk Activity in a Test Virtual Machine
5. Start the vscsiStats Commands to Collect Virtual Machine Statistics
6. Generate a Disk Latency Histogram
7. Generate the First Seek Distance Histogram
8. Start the Next Test Script and Collect Statistics
9. Generate the Second Seek Distance Histogram and Compare the Data
10. Clean Up for the Next Lab

VMware vSphere: Optimize and Scale 7-88



Review of Learner Objectives

You should be able to meet the following objectives:


 Use vSphere Management Assistant to manage vSphere virtual
storage
 Use vmkfstools for VMFS operations
 Use the vscsiStats command

VMware vSphere: Optimize and Scale 7-89



Lesson 5:
Troubleshooting Storage Performance
Problems

VMware vSphere: Optimize and Scale 7-90



Learner Objectives

By the end of this lesson, you should be able to meet the following
objectives:
 Describe various storage performance problems
 Discuss causes of storage performance problems
 Propose solutions to correct storage performance problems
 Discuss examples of troubleshooting storage performance problems

VMware vSphere: Optimize and Scale 7-91



Review: Basic Troubleshooting Checklist for ESXi Hosts

1. Check for VMware® Tools™ status.
2. Check for resource pool CPU saturation.
3. Check for host CPU saturation.
4. Check for guest CPU saturation.
5. Check for active virtual machine memory swapping.
6. Check for virtual machine swap wait.
7. Check for active virtual machine memory compression.
8. Check for an overloaded storage device.
9. Check for dropped receive packets.
10. Check for dropped transmit packets.
11. Check for using only one vCPU in an SMP virtual machine.
12. Check for high CPU ready time on virtual machines running in under-utilized hosts.
13. Check for slow storage device.
14. Check for unexpected increase in I/O latency on a shared storage device.
15. Check for unexpected increase in data transfer rate on network controllers.
16. Check for low guest CPU utilization.
17. Check for past virtual machine memory swapping.
18. Check for high memory demand in a resource pool.
19. Check for high memory demand in a host.
20. Check for high guest memory demand.

Checks 1–10 indicate definite problems, checks 11–15 likely problems, and checks 16–20 possible problems.

VMware vSphere: Optimize and Scale 7-92



Overloaded Storage

To monitor the number of disk commands aborted on the host:


1. Select the host, click the Monitor tab, and click the Performance tab.
2. On the Chart Options page, select the Commands Aborted counter.
If Commands Aborted > 0 for any LUN, then storage is overloaded on
that LUN.

VMware vSphere: Optimize and Scale 7-93



Causes of Overloaded Storage

Excessive demand is placed on the storage device.


Storage is misconfigured. Check the following:
 Number of disks per LUN
 RAID level of a LUN
 Assignment of array cache to a LUN

VMware vSphere: Optimize and Scale 7-94



Slow Storage

For a host’s LUNs, monitor the Physical Device Read Latency and
Physical Device Write Latency counters.
 If average > 10ms or peak > 20ms for any LUN, then storage might be
slow on that LUN.
Or monitor the device latency (DAVG/cmd) in resxtop.
 If value > 10, a problem might exist.
 If value > 20, a problem exists.
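The resxtop thresholds amount to a three-way classification. A small helper makes them explicit (plain shell, not an ESXi tool; the sample values in the test are made up):

```shell
#!/bin/sh
# Apply the slide's DAVG/cmd rule of thumb (values in milliseconds):
# > 20 ms means a problem exists, > 10 ms means a problem might exist.
classify_davg() {
    if [ "$1" -gt 20 ]; then echo "problem exists"
    elif [ "$1" -gt 10 ]; then echo "possible problem"
    else echo "ok"
    fi
}

classify_davg 15   # between 10 and 20: a possible problem
```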

VMware vSphere: Optimize and Scale 7-95



Factors Affecting Storage Response Time

Three main workload factors that affect storage response time:


 I/O arrival rate
 I/O size
 I/O locality
Use the storage device’s monitoring tools to collect data to
characterize the workload.

VMware vSphere: Optimize and Scale 7-96



Unexpected Increase in I/O Latency on Shared Storage

Applications, when consolidated, share expensive physical
resources such as storage.
Situations might occur when high I/O activity on shared storage
could impact the performance of latency-sensitive applications:
 Very high number of I/O requests are issued concurrently.
 Operations, such as a backup operation in a virtual machine, use the
I/O bandwidth of a shared storage device.
To resolve these situations, use VMware vSphere® Storage I/O
Control to control each virtual machine’s access to I/O resources of a
shared datastore.

VMware vSphere: Optimize and Scale 7-97



Example 1: Bad Disk Throughput

Adapter view: type d.
resxtop output 1: good throughput, low device latency.
resxtop output 2: bad throughput, high device latency (due to disabled cache).

VMware vSphere: Optimize and Scale 7-98



Example 2: Virtual Machine Power On Is Slow

User complaint: Powering on a virtual machine takes longer than
usual.
 Sometimes powering on a virtual machine takes 5 seconds.
 Other times powering on a virtual machine takes 5 minutes!
What do you check?
 Because powering on a virtual machine requires disk activity on the
host, check the disk metrics for the host.

VMware vSphere: Optimize and Scale 7-99



Monitoring Disk Latency with vSphere Web Client

Maximum disk latencies range from 100ms to 1100ms.
This latency is very high.

VMware vSphere: Optimize and Scale 7-100



Monitoring Disk Latency With resxtop

The resxtop output shows very large values for DAVG/cmd and GAVG/cmd.
Rule of thumb:
 GAVG/cmd > 20ms = high latency!
What does this mean?
 Latency when the command reaches the device is high.
 Latency as seen by the guest is high.
 A low KAVG/cmd means the command is not queuing in the VMkernel.
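The three counters are related: the guest-observed latency GAVG/cmd is approximately DAVG/cmd plus KAVG/cmd, which is why high GAVG with near-zero KAVG points at the device rather than the VMkernel. A toy check with illustrative numbers:

```shell
#!/bin/sh
# Illustrative numbers only: guest-observed latency (GAVG) is roughly
# device latency (DAVG) plus VMkernel queuing latency (KAVG).
davg=38   # ms spent at the device
kavg=0    # ms spent queued in the VMkernel
gavg=$((davg + kavg))
echo "GAVG=${gavg}ms: high, and KAVG=0 means the device is the bottleneck"
```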

VMware vSphere: Optimize and Scale 7-101



Solving the Problem of Slow Virtual Machine Power On

Host and datastore events can show that a disk has connectivity
issues. These issues lead to high latencies.

Monitor disk latencies if there is slow access to storage.
The cause of the problem might not be related to virtualization.

VMware vSphere: Optimize and Scale 7-102



Example 3: Logging In to Virtual Machines Is Slow

Scenario:
 A group of virtual machines runs on a host.
 Each virtual machine’s application reads from and writes to the same
NAS device.
• The NAS device is also a virtual machine.
Problem:
 Users suddenly cannot log in to any of the virtual machines.
• The login process is very slow.
Initial speculation:
 Virtual machines are saturating a resource.

VMware vSphere: Optimize and Scale 7-103



Monitoring Host CPU Usage

Chaotic CPU usage: the host is saturated.
Predictable CPU usage: the host is not saturated.

VMware vSphere: Optimize and Scale 7-104



Monitoring Host Disk Usage

Predictable, balanced disk usage.
Uneven, reduced disk usage.

VMware vSphere: Optimize and Scale 7-105



Monitoring Disk Throughput

Steady read and write traffic.
Increased write traffic, zero read traffic.

VMware vSphere: Optimize and Scale 7-106



Solving the Problem of Slow Virtual Machine Login

Problem diagnosis:
 CPU usage increased per virtual machine.
 Write traffic increased per virtual machine.
 Write traffic to the NAS virtual machine significantly increased.
One possible problem:
 An application bug in the virtual machine caused the error condition:
• The error condition caused excessive writes to the NAS virtual machine.
• Each virtual machine is so busy writing that it never reads.

VMware vSphere: Optimize and Scale 7-107



Resolving Storage Performance Problems

Consider the following when resolving storage performance
problems:
 Check your hardware for proper operation and optimal configuration.
 Reduce the need for storage by your hosts and virtual machines.
 Balance the load across available storage.
 Understand the load being placed on storage devices.

VMware vSphere: Optimize and Scale 7-108



Checking Storage Hardware

To resolve the problems of slow or overloaded storage, solutions can
include the following:
 Ensure that hardware is working properly.
 Configure the HBAs and RAID controllers for optimal use.
 Upgrade your hardware, if possible.

VMware vSphere: Optimize and Scale 7-109



Reducing the Need for Storage

Consider the trade-off between memory capacity and storage
demand.
 Some applications, such as databases, cache frequently used data in
memory, thus reducing storage loads.
Eliminate all possible swapping to reduce the burden on the storage
subsystem.

VMware vSphere: Optimize and Scale 7-110



Balancing the Load

Spread I/O loads over the
available paths to the storage.
Configure VMware vSphere®
Storage DRS™ to balance
capacity and IOPS.
For disk-intensive workloads:
 Use enough HBAs to handle
the load.
 If necessary, assign separate storage processors to separate
systems.

VMware vSphere: Optimize and Scale 7-111



Understanding the Workload Placed on Storage Devices

Understand the workload:


 Use storage array tools
to capture workload
statistics.
Strive for complementary
workloads:
 Mix disk-intensive with non-
disk-intensive virtual
machines on a datastore.
 Mix virtual machines with
different peak access times.
 Configure vSphere Storage
I/O Control to prioritize
IOPS when latency
thresholds are reached.

VMware vSphere: Optimize and Scale 7-112



Storage Performance Best Practices

 Configure each LUN with the correct RAID level and storage
characteristics for applications and virtual machines that use the LUN.
 Use VMFS file systems for your virtual machines.
 Avoid oversubscribing paths (SAN) and links (iSCSI and NFS).
 Isolate iSCSI and NFS traffic.
 Applications that write a lot of data to storage should not share Ethernet
links to a storage device.
 Postpone major storage maintenance until off-peak hours.
 Eliminate all possible swapping to reduce the burden on the storage
subsystem.
 In SAN configurations, spread I/O loads over the available paths to the
storage devices.
 Strive for complementary workloads.

VMware vSphere: Optimize and Scale 7-113



Review of Learner Objectives

You should be able to meet the following objectives:


 Describe various storage performance problems
 Discuss causes of storage performance problems
 Propose solutions to correct storage performance problems
 Discuss examples of troubleshooting storage performance problems

VMware vSphere: Optimize and Scale 7-114



Key Points

 Flash Read Cache enables you to accelerate virtual machine
performance through the use of host-resident flash devices as a cache.
 Virtual SAN is a hybrid storage system that leverages and aggregates
local SSDs and local HDDs to provide a clustered, shared datastore.
 Factors that affect storage performance include storage protocols,
storage configuration, queuing, and VMFS configuration.
 Disk throughput and latency are two key metrics to monitor storage
performance.
 Overloaded storage and slow storage are common storage
performance problems.

Questions?

VMware vSphere: Optimize and Scale 7-115

