VMware Inst., Conf. & Man. 7.0
Slide 1
Welcome back! We will now begin Lesson 1: Overview of vSphere and Virtual Machines.
Slide 2
We will begin with the learner objectives for this lesson.
After completing this lesson, you should be able to meet the following objectives:
• Explain basic virtualization concepts,
• Describe how vSphere fits into the software-defined data center and the cloud
infrastructure,
• And describe how to proactively manage your vSphere environment.
Slide 3
Here we have the terminology covered in this lesson.
Virtualization is associated with several key concepts, products, and features, and it is
important to know and understand them.
To begin, an Operating System is software designed to allocate physical resources to
applications, and by extension an Application is software that runs on an operating system,
consuming said physical resources.
A Virtual Machine is a specialized application that abstracts hardware resources into software.
A Guest, or Guest operating system, is the operating system that runs in a VM.
The Hypervisor is a specialized operating system designed to run VMs, examples are ESXi,
Workstation, and Fusion.
And finally the Host, which is the physical computer that provides resources to the ESXi hypervisor.
Slide 4
Next, we have vSphere, which is a server virtualization product of VMware that combines the
ESXi hypervisor and the vCenter Server management platform.
A Cluster is a group of ESXi hosts whose resources are shared by VMs.
vSphere vMotion is a feature that supports the migration of powered-on VMs from host to host
without service interruption.
vSphere HA is a Cluster feature that protects against host hardware failures by restarting VMs
on hosts that are running normally.
And finally, vSphere DRS is also a Cluster feature that uses vSphere vMotion to place VMs on
hosts and ensure that each VM receives the resources that it needs.
Slide 5
Alright, now let’s look at Virtual Machines.
A virtual machine or VM is a software representation of a physical computer and its
components. The virtualization software converts the physical machine and its components into
files. A VM includes a set of specification and configuration files and is supported by the
physical resources of a host. Every VM has virtual devices that provide the same functionality
as physical hardware but are more portable, more secure, and easier to manage. VMs typically
include an operating system, applications, VMware Tools, and both virtual resources and
hardware resources that you manage in much the same way as you manage a physical
computer.
VMware Tools is a bundle of drivers. Using these drivers, the guest operating system can
interact efficiently with the guest hardware. VMware Tools adds extra functionality so that
ESXi can better manage the VM's use of physical hardware.
Slide 6
Here we can see some of the Benefits of Using Virtual Machines.
In physical machines, the operating system (for example, Windows or Linux) is installed
directly on the hardware.
The operating system requires specific device drivers to support specific hardware. If the
computer is upgraded with new hardware, new device drivers are required. If applications
interface directly with hardware drivers, an upgrade to the hardware, drivers, or both can have
significant repercussions if incompatibilities exist.
Because of these potential repercussions, hands-on technical support personnel must test
hardware upgrades against a wide variety of application suites and operating systems.
Such testing costs time and money.
Virtualizing these systems saves on such costs because VMs are 100 percent software. Multiple
VMs are isolated from one another. You can have a database server and an email server
running on the same physical computer.
The isolation between the VMs means that software-dependency conflicts are not a problem.
Even users with system administrator privileges on a VM’s guest operating system cannot
breach this layer of isolation to access another VM. These users must explicitly be granted
access by the ESXi system administrator. As a result of VM isolation, if a guest operating
system running in a VM fails, other VMs on the same host are unaffected and continue to run.
A guest operating system failure in one VM does not affect access or performance for users of the other VMs on the host.
Slide 7
Now we will look at the different types of Virtualization.
Virtualization is the process of creating a software-based representation of something physical,
such as a server, desktop, network, or storage device.
Virtualization is the single most effective way to reduce IT expenses while boosting efficiency
and agility for businesses of all sizes.
Here we see Server virtualization which addresses inefficiencies by allowing multiple operating
systems to run on a single physical server as VMs, each with access to the underlying server’s
computing resources.
Network virtualization is the complete reproduction of a physical network in software.
Applications run on the virtual network exactly as if on a physical network.
Storage virtualization is the process of creating a software-based representation of network
storage devices into what appears to be a single unit. And finally, desktop virtualization: by
deploying desktops as a managed service, you can respond more quickly to changing needs and
opportunities.
Slide 8
Next, we will talk about the Software-Defined Data Center. A software-defined data
center, or SDDC, is deployed with isolated computing, storage, networking, and security
resources that can be provisioned faster than in a traditional, hardware-based data center.
All the resources (CPU, memory, disk, and network) of a software-defined data center are
abstracted into files. This abstraction brings the benefits of virtualization at all levels of the
infrastructure, independent of the physical infrastructure. An SDDC can include the following
components:
• First is Service management and automation: We will use service management and
automation to track and analyze the operation of multiple data sources in the
multiregion SDDC. We deploy vRealize Operations Manager and vRealize Log Insight
across multiple nodes for continued availability and increased log ingestion rates.
• Next is the Cloud management layer: This layer includes the service catalog, which
houses the facilities to be deployed. The cloud management layer also includes
orchestration, which provides the workflows to deploy catalog items, and the self-
service portal for end users to access and use the SDDC.
• Then we have the Virtual infrastructure layer: This layer establishes a robust virtualized
environment that all other solutions integrate with. The virtual infrastructure layer
includes the virtualization platform for the hypervisor, pools of resources, and
virtualization control. Additional processes and technologies build on the infrastructure
to support Infrastructure as a Service or IaaS and Platform as a Service or PaaS.
• Then we have the Physical layer: This is the lowest layer of the solution but includes
compute, storage, and network components.
• And lastly, we have Security: Customers use this layer of the platform to meet
demanding compliance requirements for virtualized workloads and to manage business
risk.
Slide 9
Here we will have a look at vSphere and Cloud Computing.
As defined by the National Institute of Standards and Technology, or NIST, cloud computing is
a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of
configurable computing resources (for example, networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.
vSphere is the foundation for the technology that supports shared and configurable resource
pools. vSphere abstracts the physical resources of the data center to separate the workload from
the physical hardware.
A software user interface can provide the framework for managing and maintaining this
abstraction and allocation.
VMware Cloud Foundation is the unified SDDC platform that bundles vSphere (ESXi and
vCenter Server), vSAN, and NSX into a natively integrated stack to deliver enterprise-ready
cloud infrastructure. VMware Cloud Foundation discovers the hardware, installs the VMware
stack (ESXi, vCenter Server, vSAN, and NSX), manages updates, and performs lifecycle
management.
VMware Cloud Foundation can be self-deployed on compatible hardware or preloaded by
partners and can be used in both private and public clouds such as VMware Cloud on AWS or
VMware cloud providers.
Some use cases are:
• The Cloud infrastructure: We can use this to exploit the high performance, availability,
and scalability of the SDDC to run mission-critical applications such as databases, web
applications, and virtual desktop infrastructure (or VDI).
• Next, we have IT automation: This automates infrastructure and application delivery
with self-service capabilities.
• Then we have VDI: This provides a complete solution for VDI deployment at scale. It
simplifies the planning and design with standardized and tested solutions fully
optimized for VDI workloads.
• And lastly, we have the Hybrid cloud: You can build a hybrid cloud with a common
infrastructure and a consistent operational model, connecting your on-premises and off-
premises data centers in a compatible, stretched, and distributed manner.
To find out more about VMware cloud computing, you can go to vmware.com/cloud-
computing/overview.html.
Slide 10
Now we will look at VMware Skyline.
VMware Skyline shortens the time it takes to resolve a problem so that you can get back to
business quickly. VMware Technical Support engineers can use VMware Skyline to view your
environment's configuration and the specific, data-driven analytics to help speed up problem
resolution. VMware Skyline provides the following benefits:
• First, we have Issue avoidance: This proactively identifies potential issues based on
environment-specific configuration details and usage.
• And it resolves issues before they occur, improving environment reliability and
stability.
• It also shortens time to resolution: Environment-specific, data-driven analytics
accelerate problem resolution.
• It also provides personalized recommendations, so that the resolution is specific to your
environment.
• And, there are no additional costs: You receive additional value with your current
support subscription such as Basic, Production, or Premier support.
Slide 11
Here we see the VMware Skyline Family which includes Skyline Health and Skyline Advisor.
Let’s look at some of the differences between them.
With Basic Support, you can access Skyline findings and recommendations for vSphere and
vSAN by using Skyline Health in the vSphere Client (version 6.7 and later).
With Production or Premier Support, you can use Skyline Advisor and the full functionality of
Skyline (including Log Assist).
With Premier Support, you receive additional Skyline features that are not available with
Production Support, for example:
• You receive an advanced set of proactive findings and recommendations.
• You receive scheduled and custom operational summary reports that provide an
overview of the proactive findings and recommendations.
• And you receive all additional benefits of Premier Support, including the following
services:
o A designated support team
o Direct access to senior-level technical support engineers
o Assistance with multivendor troubleshooting
o And onsite support services, such as Mission Critical Support or MCS,
Healthcare Critical Support or HCS, and Carrier Grade Support or CGS
Skyline supports vSphere, NSX for vSphere, vSAN, VMware Horizon, and vRealize
Operations Manager. A Skyline management pack for vRealize Operations Manager is also
available. If you install this management pack, you can see the Skyline proactive findings and
recommendations within the vRealize Operations Manager client.
The identification and tagging of VxRail and VMware Validated Design deployments help you
and VMware Technical Support to better understand and support multiproduct solutions.
Skyline also identifies all ESXi 5.5 objects within a vCenter Server instance and provides
additional information in VMware knowledge base article 51491 at kb.vmware.com/kb/51491.
This article details the end of general support for vSphere 5.5. For versions of vSphere, vSAN,
NSX for vSphere, VMware Horizon, and vRealize Operations Manager that are supported by
Skyline, see the Skyline Collector Release Notes at docs.vmware.com.
Slide 12
To review, you should now be able to meet the following objectives:
• Explain basic virtualization concepts, such as what a host, guest, VM, and hypervisor
are and how they work together.
• You should also be able to describe how vSphere fits into the software-defined data
center and the cloud infrastructure, and the different layers therein.
• And finally, you should be able to describe how to proactively manage your vSphere
environment with tools such as VMware Skyline and its suite of offerings.
This is the end of the Lesson 1 Lecture. If you have any questions, please contact your
instructor. We will see you in the next lesson, and thank you for watching!
Slide 1
Welcome back! Let’s get started with Lesson 2: vSphere Virtualization of Resources.
Slide 2
The Learner Objective for this lesson is that you should be able to explain how vSphere
interacts with CPUs, memory, networks, and storage.
Slide 3
The main thing to look at regarding the interaction of vSphere components and hardware
components is the fact that VMs, and the applications and operating systems (or guests) on
them, consume host resources (CPU, memory, disk, and network) through the ESXi
Hypervisor. As we have seen, a virtual machine is an abstraction in software of a physical
machine. A VM turns components into files that act like physical components. For the list of all
supported operating systems, see the VMware Compatibility Guide at
vmware.com/resources/compatibility.
Slide 4
You can use virtualization to consolidate and run multiple workloads as VMs on a single
computer. This slide shows the differences between a virtualized and a nonvirtualized host.
In traditional architectures, the operating system interacts directly with the installed hardware.
The operating system schedules processes to run; allocates memory to applications; sends and
receives data on network interfaces; and both reads from and writes to attached storage devices.
In comparison, a virtualized host interacts with the installed hardware through a thin layer of
software called the virtualization layer or hypervisor. The hypervisor provides physical
hardware resources dynamically to VMs as needed to support the operation of the VMs. With
the hypervisor, VMs can operate with a degree of independence from the underlying physical
hardware. For example, a VM can be moved from one physical host to another. In addition, its
virtual disks can be moved from one type of storage to another without affecting the
functioning of the VM.
Slide 5
With virtualization, you can run multiple VMs on a single physical host, with each VM sharing
the resources of one physical computer across multiple environments. VMs share access to
CPUs and are scheduled to run by the hypervisor. In addition, VMs are assigned their own
region of memory to use and they share access to the physical network cards and disk
controllers. Different VMs can run different operating systems and applications on the same
physical computer. When multiple VMs run on an ESXi host, each VM is allocated a portion of
the physical resources. The hypervisor schedules VMs like a traditional operating system
allocates memory and schedules applications. These VMs run on various CPUs. The ESXi
hypervisor can also overcommit memory. Memory is overcommitted when your VMs can use
more virtual RAM than the physical RAM that is available on the host. VMs, like applications,
use network and disk bandwidth. However, VMs are managed with elaborate control
mechanisms to manage how much access is available for each VM. With the default resource
allocation settings, all VMs associated with the same ESXi host receive an equal share of
available resources.
Let’s look at this resource sharing piece by piece.
Slide 6
The first piece we have is CPU virtualization. The virtualization layer runs instructions only
when needed to make VMs operate as if they were running directly on a physical machine.
CPU virtualization is not emulation. With a software emulator, programs can run on a computer
system other than the one for which they were originally written. Emulation provides
portability but might negatively affect performance. CPU virtualization is not emulation
because the supported guest operating systems are designed for x64 processors. Using the
hypervisor, the operating systems can run natively on the host's physical x64 processors. When
many VMs are running on an ESXi host, those VMs might compete for CPU resources.
When CPU contention occurs, the ESXi host time slices the physical processors across all
virtual machines so that each VM runs as if it had a specified number of virtual processors.
Slide 7
Next, we have Memory Virtualization. When an application starts, it uses the interfaces
provided by the operating system to allocate or release virtual memory pages during the
execution. Virtual memory is a decades-old technique used in most general-purpose operating
systems. Operating systems use virtual memory to present more memory to applications than
they physically have access to. Almost all modern processors have hardware to support virtual
memory. Virtual memory creates a uniform virtual address space for applications. With the
operating system and hardware, virtual memory can handle the address translation between the
virtual address space and the physical address space. This technique adapts the execution
environment to support large address spaces, process protection, file mapping, and swapping.
Slide 8
The next piece of Resource Sharing is Virtual Networking. A VM can be configured with one
or more virtual Ethernet adapters. VMs use virtual switches on the same ESXi host to
communicate with one another by using the same protocols that are used over physical
switches, without the need for additional hardware. Virtual switches also support VLANs that
are compatible with standard VLAN implementations from other networking equipment
vendors. With VMware virtual networking, you can link local VMs together and link local
VMs to the external network through a virtual switch. A virtual switch, like a physical Ethernet
switch, forwards frames at the data link layer. An ESXi host might contain multiple virtual
switches. The virtual switch connects to the external network through outbound Ethernet
adapters, called vmnics. The virtual switch can bind multiple vmnics together, like NIC
teaming on a traditional server, offering greater availability and bandwidth to the VMs using
the virtual switch. Virtual switches are similar to modern physical Ethernet switches in many
ways. Like a physical switch, each virtual switch is isolated and has its own forwarding table.
So every destination that the switch looks up can match only ports on the same virtual switch
where the frame originated. This feature improves security, making it difficult for hackers to
break virtual switch isolation. Virtual switches also support VLAN segmentation at the port
level, so that each port can be configured as an access or trunk port, providing access to either
single or multiple VLANs. However, unlike physical switches, virtual switches do not require
the Spanning Tree Protocol because a single-tier networking topology is enforced. Multiple
virtual switches cannot be interconnected, and network traffic cannot flow directly from one
virtual switch to another virtual switch on the same host. Virtual switches provide all the ports
that you need in one switch. Virtual switches do not need to be cascaded because virtual
switches do not share physical Ethernet adapters, and leaks do not occur between virtual
switches.
Slide 9
Here we will look at Virtualized Datastores.
To store virtual disks, ESXi uses datastores, which are logical containers that hide the specifics
of physical storage from VMs and provide a uniform model for storing VM files. Datastores
that you deploy on block storage devices use the VMFS format, a special high-performance file
system format that is optimized for storing virtual machines. VMFS is designed, constructed,
and optimized for a virtualized environment. It is a high-performance cluster file system
designed for virtual machines.
It functions in the following ways:
• It uses distributed journaling of its file system metadata changes for fast and resilient
recovery if a hardware failure occurs
• It also increases resource usage by providing multiple VMs with shared access to a
consolidated pool of clustered storage
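As a quick aside for those following along in a lab: a VMFS datastore can also be created from PowerCLI instead of the vSphere Client. The following is a minimal sketch, not part of the official course labs; the host name, datastore name, and LUN canonical name are placeholders for your environment:
  # Pick the host and list candidate SCSI LUNs (note the CanonicalName, for example naa.xxxx)
  $esx = Get-VMHost -Name "esxi01.example.com"
  Get-ScsiLun -VmHost $esx -LunType disk | Select-Object CanonicalName, CapacityGB
  # Create a VMFS 6 datastore on the chosen LUN
  New-Datastore -VMHost $esx -Name "ds-vmfs01" -Path "naa.xxxxxxxxxxxxxxxx" -Vmfs -FileSystemVersion 6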
Slide 10
And finally, we have GPU Virtualization. GPUs can be used by developers of server
applications. Although servers do not usually have monitors, GPU support is important and
relevant to server virtualization, seeing as GPU graphics devices optimize complex graphics
operations. These operations can run at high performance without overloading the CPU.
Virtual GPUs can be added to VMs for the following use cases:
• Rich 2D and 3D graphics
• VMware Horizon virtual desktops
• Graphics-intensive applications, such as those used by architects and engineers,
• And server applications for massively parallel tasks, such as scientific computation
applications
You can configure VMs with up to four vGPU devices to cover use cases requiring multiple
GPU accelerators. VMware supports AMD and NVIDIA graphics cards.
Slide 11
To review, you should now be able to explain how vSphere interacts with CPUs, memory,
networks, and storage, and also understand what GPU Virtualization is used for.
This is the end of the Lesson 2 Lecture. If you have any questions, please contact your
instructor. We will see you in the next lesson, and thank you for watching!
Slide 1
Welcome Back! Let’s get started with Lesson 3: vSphere User Interfaces!
Slide 2
The Learner Objective for this Lesson is that you recognize the user interfaces for accessing the
vCenter Server system and ESXi hosts.
Slide 3
Here you can see the different vSphere User Interfaces. You can use the vSphere Client,
PowerCLI, VMware Host Client, and ESXCLI to interact with the vSphere environment.
VMware Host Client provides direct management of individual ESXi hosts. VMware Host
Client is generally used only when management through vCenter Server is not possible. With
the vSphere Client, which is an HTML5-based client, you can manage vCenter Server
Appliance and the vCenter Server object inventory. VMware Host Client and the vSphere
Client provide the following benefits:
• A Clean, modern UI
• No browser plug-ins to install or manage
• And it comes integrated into vCenter Server and ESXi
For information on ports and protocols, see ports.vmware.com.
Slide 4
Here we will take a closer look at the VMware Host Client.
Slide 5
Next, we see the vSphere client, which is also an HTML5-based client like the VMware Host
Client.
You manage the vSphere environment with the vSphere Client by connecting to vCenter Server
Appliance. The vSphere Client label in the upper-left corner of the banner helps you
differentiate the vSphere Client from other clients. You access the vSphere Client from a
supported browser at https:// followed by your vCenter Server Appliance's fully qualified
domain name or IP address, followed by /ui (for example, https://your-vcsa-fqdn/ui). When you
use this URL to access the vSphere Client, the URL internally redirects to port 9443 on your
vCenter Server system.
Using the vSphere Client alleviates the need for Adobe Flex.
Slide 6
Now we will talk about PowerCLI and ESXCLI.
PowerCLI is a command-line and scripting tool that is built on Windows PowerShell, as such:
• It provides a PowerShell interface to vSphere API
• And it provides more than 700 cmdlets for managing and automating vSphere
The ESXCLI tool allows for remote management of ESXi hosts by using the ESXCLI
command set:
• ESXCLI can be downloaded from the VMware page at
code.vmware.com/web/tool/7.0/esxcli.
• ESXCLI commands can be run against a vCenter Server system and target any ESXi
system.
You can install ESXCLI on a Windows or Linux system, and you can run ESXCLI commands
from either system to manage ESXi systems.
For more information about ESXCLI, see code.vmware.com/web/tool/7.0/esxcli.
For more information about PowerCLI, see code.vmware.com/web/tool/12.0.0/vmware-
powercli.
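To make this concrete, here is a minimal PowerCLI session sketch. It is not from the course labs, and the server names are placeholders; it assumes PowerCLI is installed and the host is managed by vCenter Server:
  # Connect to a vCenter Server instance
  Connect-VIServer -Server "vcsa01.example.com"
  # List VMs and hosts in the inventory
  Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB
  Get-VMHost | Select-Object Name, ConnectionState, Version
  # Run an ESXCLI command against a host through the PowerCLI V2 interface
  $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.example.com") -V2
  $esxcli.system.version.get.Invoke()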
Slide 7
To review, you should now be able to recognize the user interfaces for accessing the vCenter
Server system and ESXi hosts.
This is the end of the Lesson 3 lecture. If you have any questions, please contact your
instructor. We will see you in the next lesson, and thank you for watching!
Slide 1
Hello and welcome back! We will now begin Lesson 4: Overview of ESXi.
Slide 2
The Learner Objectives for this lesson are as follows:
• Describe the ESXi host architecture
• Navigate the Direct Console User Interface (or DCUI) to configure an ESXi host
• Recognize user account best practices
• Install an ESXi host
• And finally configure ESXi host settings
Slide 3
The first thing to note when working with ESXi 7.0 is that you must ensure that your physical
servers are supported by ESXi 7.0. To do so, simply check the VMware Compatibility Guide at
vmware.com/resources/compatibility. You can obtain a free version of ESXi, called vSphere
Hypervisor, or you can purchase a licensed version with vSphere. ESXi can be installed on a
hard disk, a USB device, or an SD card. ESXi can also be installed on diskless hosts (directly
into memory) with vSphere Auto Deploy. ESXi has a small disk footprint for added security
and reliability.
ESXi provides additional protection with the following features:
• A Host-based firewall: To minimize the risk of an attack through the management
interface, ESXi includes a firewall between the management interface and the network.
• Memory hardening: The ESXi kernel, user-mode applications, and executable
components, such as drivers and libraries, are located at random, nonpredictable
memory addresses. Combined with the nonexecutable memory protections made
available by microprocessors, this makes it difficult for malicious code to use memory
exploits to take advantage of vulnerabilities.
Slide 4
To configure an ESXi Host you use the Direct Console User Interface (or DCUI). The DCUI is
a text-based user interface with keyboard-only interaction, and is a low-level configuration and
management interface, accessible through the console of the server, that is used primarily for
initial basic configuration. You can start customizing system settings by pressing F2.
Slide 5
Administrators use the DCUI to configure an ESXi host's root access settings. The
administrative username for the ESXi host is root. The root password must be configured
during the ESXi installation process. You can also enable or disable lockdown mode from the
DCUI. This:
• Limits management of the host to vCenter Server
• And can be configured only for hosts managed by a vCenter Server instance
Slide 6
You also use the DCUI to configure an ESXi host's management network. You must set up
your IP address before your ESXi host is operational. By default, a DHCP-assigned address is
configured for the ESXi host. To change or configure basic network settings, you use the
DCUI. In addition to changing IP settings, you perform the following tasks from the DCUI (a
short PowerCLI sketch after this list shows remote equivalents):
• Configure VLAN settings.
• Configure IPv6 addressing.
• Set custom DNS suffixes.
• Restart the management network (without rebooting the system).
• Test the management network (using ping and DNS requests).
• And disable a management network.
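As promised, here is a short, hedged PowerCLI sketch of the remote equivalents. It assumes the host already has basic connectivity; the interface name, addresses, and domain are placeholders:
  $esx = Get-VMHost -Name "esxi01.example.com"
  # Assign a static IPv4 address to the vmk0 management interface
  Get-VMHostNetworkAdapter -VMHost $esx -VMKernel | Where-Object {$_.Name -eq "vmk0"} |
    Set-VMHostNetworkAdapter -IP "172.16.20.51" -SubnetMask "255.255.255.0" -Confirm:$false
  # Set DNS servers and the search domain
  $net = Get-VMHostNetwork -VMHost $esx
  Set-VMHostNetwork -Network $net -DnsAddress "172.16.20.10","172.16.20.11" -SearchDomain "example.com"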
Slide 7
From the DCUI, you can change the keyboard layout, view support information, such as the
host’s license serial number, and view system logs. The default keyboard layout is U.S.
English. You can use the troubleshooting options, which are disabled by default, to enable or
disable troubleshooting services:
• Such as vSphere ESXi Shell: which is used for troubleshooting issues locally
• Or SSH: Which is used for troubleshooting issues remotely by using an SSH client, for
example, PuTTY
The best practice is to keep troubleshooting services disabled until they are necessary, for
example, when you are working with VMware technical support to resolve a problem. By
selecting the Reset System Configuration option, you can reset the system configuration to its
software defaults and remove custom extensions or packages that you added to the host.
Slide 8
You can use the vSphere Client to customize essential security settings that control remote
access to an ESXi host. An ESXi host includes a firewall as part of the default installation. On
ESXi hosts, remote clients are typically prevented from accessing services on the host.
Similarly, local clients are typically prevented from accessing services on remote hosts. To
ensure the integrity of the host, few ports are open by default. To provide or prevent access to
certain services or clients, you must modify the properties of the firewall. You can configure
firewall settings for incoming and outgoing connections for a service or a management agent.
For some services, you can manage service details. For example, you can use the Start, Stop, or
Restart buttons to change the status of a service temporarily. Alternatively, you can change the
startup policy so that the service starts with the host or with port use. For some services, you
can explicitly specify IP addresses from which connections are allowed. Also, services such as
the NTP client and the SSH client can be managed by the administrator. And lockdown mode
prevents remote users from logging in to the host directly. The host is accessible only through
the DCUI or vCenter Server.
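For illustration, the same services and firewall rulesets can be inspected and changed from PowerCLI. This is a minimal sketch with placeholder names; TSM-SSH is the standard service key for SSH:
  $esx = Get-VMHost -Name "esxi01.example.com"
  # Review services, their running state, and startup policies
  Get-VMHostService -VMHost $esx | Select-Object Key, Label, Running, Policy
  # Start SSH temporarily, then make it start and stop with the host
  Get-VMHostService -VMHost $esx | Where-Object {$_.Key -eq "TSM-SSH"} | Start-VMHostService
  Get-VMHostService -VMHost $esx | Where-Object {$_.Key -eq "TSM-SSH"} | Set-VMHostService -Policy "On"
  # Review firewall exceptions (rulesets) and their state
  Get-VMHostFirewallException -VMHost $esx | Select-Object Name, Enabled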
Slide 9
On an ESXi host, the root user account is the most powerful user account on the system. The
root user can access all files and all commands. Securing this account is the most important step
that you can take to secure an ESXi host. Whenever possible, use the vSphere Client to log in to
the vCenter Server system and manage your ESXi hosts. In some unusual circumstances, for
example, when the vCenter Server system is down, you use VMware Host Client to connect
directly to the ESXi host. Although you can log in to your ESXi host through the vSphere CLI
or through vSphere ESXi Shell, these access methods should be reserved for troubleshooting or
configuration that cannot be accomplished by using the VMware Host Client. If a host must be
managed directly, avoid creating local users on the host. If possible, join the host to a Windows
domain and log in with domain credentials instead.
To summarize, when assigning user accounts to access ESXi hosts or vCenter Server systems,
ensure that you follow these security guidelines:
• Strictly control root privileges to ESXi hosts.
• Create strong root account passwords that have at least eight characters and use
special characters, case changes, and numbers.
• Make sure you change passwords periodically.
• Manage ESXi hosts centrally through the vCenter Server system by using the
appropriate vSphere client.
• And minimize the use of local users on ESXi hosts.
Slide 10
Network Time Protocol (or NTP) is an Internet standard protocol that is used to synchronize
computer clock times in a network.
The benefits of synchronizing an ESXi host’s time include:
• Performance data can be displayed and interpreted properly.
• Accurate time stamps appear in log messages, which make audit logs meaningful.
• And VMs can synchronize their time with the ESXi host. Time synchronization is
beneficial to applications, such as database applications, running on VMs.
NTP is a client-server protocol. When you configure the ESXi host to be an NTP client, the
host synchronizes its time with an NTP server, which can be a server on the Internet or your
corporate NTP server. For information about NTP, see www.ntp.org. For more information
about timekeeping, see VMware knowledge base article 1318 at kb.vmware.com/kb/1318.
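As a sketch of how this can be scripted (placeholder names again; ntpd is the standard service key for the NTP daemon):
  $esx = Get-VMHost -Name "esxi01.example.com"
  # Point the host at an NTP server
  Add-VMHostNtpServer -VMHost $esx -NtpServer "ntp.example.com"
  # Start the NTP daemon and have it start with the host
  $ntpd = Get-VMHostService -VMHost $esx | Where-Object {$_.Key -eq "ntpd"}
  Start-VMHostService -HostService $ntpd
  Set-VMHostService -HostService $ntpd -Policy "On"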
Slide 11
To review the Learner Objectives, you should now be able to:
• Describe the ESXi host architecture
• Navigate the Direct Console User Interface (or DCUI) to configure an ESXi host
• Recognize user account best practices
• Install an ESXi host
• And Configure ESXi host settings
Slide 12
To tie this back into the Virtual Beans analogy, as a Virtual Beans administrator, you should
now understand essential vSphere terminology.
Your initial takeaways about vSphere should be as follows:
• vSphere is the starting point for building a software-defined data center.
• ESXi hosts are highly secure platforms on which Virtual Beans applications run.
• And you should always check the VMware Compatibility Guide to ensure that your
physical servers support ESXi 7.0.
Slide 13
Some key points from Module 2 are:
• Virtual machines are hardware independent.
• VMs share the physical resources of the ESXi host on which they reside.
• vSphere abstracts CPU, memory, storage, and networking for VM use.
• And the ESXi hypervisor runs directly on the host.
Slide 14
This is the end of the Lesson 4 Lecture and the end of Module 2.
The labs and assignments associated with this module are as follows:
• Lab 1: Navigating the vSphere Web Client
• Lab 2: Installing and Configuring an ESXi Host
• And lastly, we have the Module 2 Quiz: Introduction to vSphere and the Software-
Defined Data Center.
If you have any questions, please contact your Instructor. We will see you in the next Module,
and thanks for watching.
Slide 1
Hello and welcome back! Today we will be going over Lesson 1: Creating Virtual Machines!
Slide 2
The Learner Objectives for this lesson are as follows:
• Create and provision a virtual machine
• Describe how to import a virtual appliance OVF template
• Explain the importance of VMware Tools
• And Install VMware Tools
Slide 3
The optimal method for provisioning VMs for your environment depends on factors such as the
size and type of your infrastructure and the goals that you want to achieve. You can use the
New Virtual Machine wizard to create a single VM if no other VMs in your environment meet
your requirements, such as a particular operating system or hardware configuration. For
example, you might need a VM that is configured only for testing purposes. You can also create
a single VM, install an operating system on it, and use that VM as a template from which to
clone other VMs. You can also deploy VMs, virtual appliances, and vApps stored in Open
Virtual Machine Format (or OVF) to use a preconfigured VM. A virtual appliance is a VM that
typically has an operating system and other software preinstalled. You can deploy VMs from
OVF templates that are on local file systems (for example, local disks such as C:), removable
media (for example, CDs or USB keychain drives), shared network drives, or URLs. In addition
to using the vSphere Client, you can also use VMware Host Client to create a VM by using
OVF files. However, several limitations apply when you use VMware Host Client for this
deployment method. For information about OVF and OVA limitations for the VMware Host
Client, see vSphere Single Host Management - VMware Host Client at this URL
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.hostclient.doc/GUID-
509C12B2-32F2-4928-B81B-DE87C7B2A5F6.html.
Slide 4
Here you can see that accessing the New Virtual Machine Wizard is as easy as right clicking
the desired host and selecting “New Virtual Machine”. You can easily provision VMs
following the prompts in the Wizard.
Slide 5
The New Virtual Machine wizard prompts you for standard information (after this list, a short
PowerCLI sketch shows how the same inputs map to cmdlet parameters):
• The VM name: If using the vSphere Client, you can also specify the folder in which to
place the VM.
• The resource on which the VM runs: If using VMware Host Client, you create the VM
on the host that you are logged in to. If using the vSphere Client, you can specify a host,
a cluster, a vApp, or a resource pool. The VM can access the resources of the selected
object.
• The datastore on which to store the VM's files: Each datastore might have a different
size, speed, availability, and other properties. The available datastores are accessible
from the destination resource that you select.
• The guest operating system to be installed into the VM.
• The number of NICs, the network to connect to, and the network adapter type.
• And the Virtual disk provisioning choice.
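Here is the promised PowerCLI sketch. It is illustrative only, with placeholder names, and maps each wizard prompt to a New-VM parameter:
  New-VM -Name "web01" `
    -VMHost (Get-VMHost -Name "esxi01.example.com") `
    -Datastore "ds-vmfs01" `
    -NumCpu 2 -MemoryGB 4 `
    -DiskGB 20 -DiskStorageFormat Thin `
    -NetworkName "VM Network" `
    -GuestId "windows9Server64Guest"   # guest operating system identifier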
Slide 6
Here we see the “Customize Settings” page in the New VM Wizard. The resources available to
choose from are based on what hosts and datastores you chose in previous steps. This is also
where you can attach your Guest Operating System ISO to your virtual CD/DVD Drive.
Slide 7
To install the guest operating system, you interact with the VM through the VM console. Using
the vSphere Client, you can attach a CD, DVD, or ISO image containing the installation image
to the virtual CD/DVD drive. On the slide, the Windows Server 2008 guest operating system is
being installed. You can use the vSphere Client to install a guest operating system. You can
also install a guest operating system from an ISO image or a CD. Installing from an ISO image
is typically faster and more convenient than a CD installation.
For more information about installing guest operating systems, see vSphere Virtual Machine
Administration at this address: docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-55238059-912E-411F-A0E9-
A7A536972A91.html
For more about the supported guest operating systems, see VMware Compatibility Guide at
vmware.com/resources/compatibility.
Slide 8
A virtual appliance is a preconfigured VM that typically includes a preinstalled guest operating
system and other software. A virtual appliance is usually designed for a specific purpose, for
example, to provide a secure web browser, a firewall, or a backup and recovery utility. A virtual
appliance can be added or imported to your vCenter Server system inventory or ESXi
inventory. Virtual appliances can be imported from websites such as the VMware Virtual
Appliance Marketplace at marketplace.vmware.com/vsx.
Virtual appliances are deployed as OVF templates. OVF is a platform-independent, efficient,
extensible, and open packaging and distribution format for VMs. OVF files are compressed,
resulting in faster downloads. The vSphere Client validates an OVF file before importing it and
ensures that it is compatible with the intended destination server. If the appliance is
incompatible with the selected host, you cannot import it.
Slide 9
VMware Tools improves management of the VM by replacing generic operating system drivers
with VMware drivers tuned for virtual hardware. You install VMware Tools into the guest
operating system.
When you install VMware Tools, you install these items:
• The VMware Tools service: This service synchronizes the time in the guest operating
system with the time in the host operating system.
• A set of VMware device drivers, with additional Perfmon monitoring options.
• And a set of scripts that helps you automate guest operating system operations: You can
configure the scripts to run when the VM's power state changes.
VMware Tools enhances the performance of a VM and makes many of the ease-of-use features
in VMware products possible, Such as:
• Faster graphics performance and Windows Aero on operating systems that support Aero
• Shared folders between host and guest file systems
• Copying and pasting text, graphics, and files between the virtual machine and the host
or client desktop
• And Scripting that helps automate guest operating system operations
Although the guest operating system can run without VMware Tools, many VMware features
are not available until you install VMware Tools. For example, if VMware Tools is not
installed in your VM, you cannot use the shutdown or restart options from the toolbar. You can
use only the power options.
Slide 10
When installing VMware Tools ensure that you select the correct version of VMware Tools for
your guest operating system. To find out which VMware Tools ISO images are bundled with
vSphere 7, see the vSphere 7 Release Notes. The method for installing VMware Tools depends
on the guest operating system type. For more information about using Open VM tools, see
VMware Tools User Guide at docs.vmware.com/en/VMware-Tools/index.html.
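For example, here is a minimal PowerCLI sketch (placeholder VM name; not from the course labs):
  # Attach the VMware Tools installer ISO to the VM's virtual CD/DVD drive
  Mount-Tools -VM "web01"
  # ...run the installer inside the guest operating system, then detach the ISO
  Dismount-Tools -VM "web01"
  # Or upgrade VMware Tools in place on a VM that already has it installed
  Update-Tools -VM "web01" -NoReboot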
Slide 11
Here we see that you can download a specific version of VMware Tools from the VMware
vSphere product download page.
Slide 12
To review the Learner Objectives, you should now be able to:
• Create and provision a virtual machine,
• Describe how to import a virtual appliance OVF template,
• Explain the importance of VMware Tools,
• And Install VMware Tools.
This is the end of the Lesson 1 Lecture. If you have any questions, please contact your
instructor. We will see you in the next lesson, and thank you for watching!
Slide 1
Welcome back! We will now begin Lesson 2: Virtual Machine Hardware Deep Dive!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Identify the files that make up a VM
• Compare VM hardware versions
• Recognize the components of a VM
• Navigate the vSphere Client and examine VM settings
• Identify methods for accessing a VM console
• Identify virtual network adapters, including the enhanced VMXNET3
• And Distinguish between types of virtual disk provisioning
Slide 3
vSphere encapsulates each VM into a few files or objects, making VMs easier to manage and
migrate. The files and objects for each VM are stored in a separate folder on a VMFS, NFS,
vSAN, or vSphere Virtual Volumes datastore.
Slide 4
This slide lists some of the files that make up a VM. Except for the log files, the name of each
file starts with the VM's name, denoted here by VM_name. A VM consists of the following
files:
files:
• A configuration file (denoted by the file extension .vmx).
• Swap files (with file extension .vswp) used to reclaim memory during periods of
contention.
• A file containing the VM's BIOS settings (with extension .nvram).
• A VM's current log file (denoted by .log) and a set of files used to archive old log
entries (denoted by a "-", a number 1-6, and file extension .log). In addition to the
current log file, which is vmware.log, up to six archive log files are maintained at one
time. For example, vmware-1.log to vmware-6.log might exist at first. The next time an
archive log file is created, for example, when the VM is powered off and powered back
on, the following actions occur: vmware-6.log is deleted, vmware-5.log is
renamed to vmware-6.log, and so on. Finally, the previous vmware.log is renamed to
vmware-1.log.
• Next each VM has one or more virtual disk files. The first virtual disk has files
VM_name.vmdk and VM_name-flat.vmdk. If the VM has more than one disk file, the
file pair for the subsequent disk files is called VM_name_#.vmdk and VM_name_#-
flat.vmdk. # is the next number in the sequence, starting with 1. For example, if the VM
called Test01 has two virtual disks, this VM has the Test01.vmdk, Test01-flat.vmdk,
Test01_1.vmdk, and Test01_1-flat.vmdk files.
• And lastly if the VM is converted to a template, a VM template configuration file
(denoted by .vmtx) replaces the VM configuration file (.vmx). A VM template is a
master copy of the VM.
The list of files shown on the slide is not comprehensive. For a complete list of all the types of
VM files, see vSphere Virtual Machine Administration at this address:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-
55238059-912E-411F-A0E9-A7A536972A91.html.
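To see these files for yourself, here is a minimal PowerCLI sketch (placeholder VM name):
  $vm = Get-VM -Name "web01"
  # The virtual disk descriptor files (.vmdk) backing the VM
  Get-HardDisk -VM $vm | Select-Object Name, Filename, CapacityGB
  # The full file layout from the vSphere API: .vmx, .nvram, .log, .vswp, and so on
  $vm.ExtensionData.LayoutEx.File | Select-Object Name, Type, Size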
Slide 5
Each guest OS sees ordinary hardware devices. The guest OS does not know that these devices
are virtual. All VMs have uniform hardware, except for a few variations that the system
administrator can apply. Uniform hardware makes VMs portable across VMware virtualization
platforms.
You can configure VM memory and CPU settings. vSphere supports many of the latest CPU
features, including virtual CPU performance counters. You can add virtual hard disks and
NICs. You can also add and configure virtual hardware, such as CD/DVD drives, and SCSI
devices. Not all devices are available to add and configure. For example, you cannot add video
devices, but you can configure available video devices and video cards.
You can add multiple USB devices, such as security dongles and mass storage devices, to a VM
that resides on an ESXi host to which the devices are physically attached. When you attach a
USB device to a physical host, the device is available only to VMs that reside on that host.
Those VMs cannot connect to a device on another host in the data center. A USB device is
available to only one VM at a time. When you remove a device from a VM, it becomes
available to other VMs that reside on the host.
You can add up to 16 PCI vSphere DirectPath I/O devices to a VM. The devices must be
reserved for PCI passthrough on the host on which the VM runs. Snapshots are not supported
with vSphere DirectPath I/O pass-through devices.
The SATA controller provides access to virtual disks and CD/DVD devices. The SATA virtual
controller appears to a virtual machine as an AHCI SATA controller.
The Virtual Machine Communication Interface (or VMCI) is an infrastructure that provides a
high-speed communication channel between a VM and the hypervisor. You cannot add or
remove VMCI devices.
The VMCI SDK facilitates the development of applications that use the VMCI infrastructure.
Without VMCI, VMs communicate with the host using the network layer. Using the network
layer adds overhead to the communication. With VMCI, communication overhead is minimal
and tasks that require communication can be optimized. VMCI can go up to nearly 10 Gbit/s
with 128 K sized queue pairs.
The following types of communication are available:
• Datagrams: Which are connectionless and similar to UDP queue pairs
• And Connection oriented: Which is similar to TCP
VMCI provides socket APIs that are similar to APIs that are used for TCP/UDP applications. IP
addresses are replaced with VMCI ID numbers.
For example, you can port netperf to use VMCI sockets instead of TCP/UDP. VMCI is disabled
by default. For more information about virtual hardware, see vSphere Virtual Machine
Administration at this address: https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-55238059-912E-411F-A0E9-
A7A536972A91.html.
Slide 6
Each release of a VMware product has a corresponding VM hardware version included. The
table shows the latest hardware version that each ESXi version supports. Each VM
compatibility level supports at least five major or minor vSphere releases.
For a complete list of virtual machine configuration maximums, see VMware Configuration
Maximums at configmax.vmware.com.
Slide 7
You size the VM's CPU and memory according to the applications and the guest operating
system.
You can use the multicore vCPU feature to control the number of cores per virtual socket in a
VM. With this capability, operating systems with socket restrictions can use more of the host
CPU’s cores, increasing overall performance.
A VM cannot have more virtual CPUs than the number of logical CPUs on the host, or more
than 256 vCPUs, whichever is lower. The number of logical CPUs is the number of
physical processor cores, or twice that number if hyperthreading is enabled. For example, if a
host has 128 logical CPUs, you can configure the VM for 128 vCPUs.
You can set most of the memory parameters during VM creation or after the guest operating
system is installed. Some actions require that you power off the VM before changing the
settings.
The memory resource settings for a VM determine how much of the host’s memory is allocated
to the VM.
The virtual hardware memory size determines how much memory is available to applications
that run in the VM. A VM cannot benefit from more memory resources than its configured
virtual hardware memory size.
ESXi hosts limit the memory resource use to the maximum amount useful for the VM, so that
you can accept the default of unlimited memory resources. You can reconfigure the amount of
memory allocated to a VM to adjust to changes in the workload.
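As a brief illustration of reconfiguring CPU and memory, here is a sketch with placeholder names; the VM is assumed to be powered off unless hot add is enabled:
  # Resize a VM's virtual hardware
  Set-VM -VM "web01" -NumCpu 4 -MemoryGB 16 -Confirm:$false
  # Verify the new configuration
  Get-VM -Name "web01" | Select-Object Name, NumCpu, MemoryGB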
Slide 8
Storage adapters provide connectivity for your ESXi host to a specific storage unit or network.
ESXi supports different classes of adapters, including SCSI, iSCSI, RAID, Fibre Channel, Fibre
Channel over Ethernet, and Ethernet. ESXi accesses the adapters directly through device drivers
in the VMkernel.
Some choices in storage adapters are:
• BusLogic Parallel: Which is the latest Mylex (BusLogic) BT/KT-958 compatible host
bus adapter.
• LSI Logic Parallel: Which supports the LSI Logic LSI53C10xx Ultra320 SCSI I/O
controller.
• LSI Logic SAS: Which has a serial interface.
• VMware Paravirtual SCSI: Which is a high-performance storage adapter that can
provide greater throughput and lower CPU use.
• The AHCI SATA controller: Which provides access to virtual disks and CD/DVD
devices. The SATA virtual controller appears to a VM as an AHCI SATA controller.
AHCI SATA is available only for VMs with ESXi 5.5 and later compatibility.
• And Virtual NVMe: NVMe is an Intel specification for attaching and accessing flash
storage devices to the PCI Express bus. NVMe is an alternative to existing block-based
server storage I/O access protocols.
Slide 9
If you remember from Lesson 1, when you create a new Virtual Machine you assign it to a
datastore. You can also determine how that VM is provisioned, or "stored", on that datastore.
There are two main types of VM disk provisioning: Thick-Provisioning, which we talk about in
this slide, and Thin-Provisioning, which we will talk about in the next slide.
Thick provisioning uses all the defined disk space at the creation of the virtual disk (Thin
provisioning does not).
VM disks consume all the capacity, as defined at creation, regardless of the amount of data in
the guest operating system file. Thick-provisioned disk types are eager zeroed or lazy zeroed: In
a lazy-zeroed thick-provisioned disk, space required for the virtual disk is allocated during
creation. Data remaining on the physical device is not erased during creation. Later, the data is
zeroed out on demand on first write from the VM. This disk type is the default. In an eager-
zeroed thick-provisioned disk, the space required for the virtual disk is allocated during
creation. Data remaining on the physical device is zeroed out when the disk is created.
As you can see in the example above, we have created a new VM on the host and given it a
virtual hard disk size of 20GB. It has also been thick-provisioned, so the host will go to the
assigned datastore and allocate a full 20GB for the VM to use as its personal hard drive. If
it was eager zeroed then that entire 20GB would be wiped by putting zeros in every block. If it
was lazy zeroed then the full 20GB would be partitioned and allocated to the VM, but nothing
would be wiped or “zeroed” until the VM started using it, and then it would zero the disk block
by block.
Slide 10
In contrast to Thick-Provisioning, a thin-provisioned disk uses only as much datastore space as
the disk initially needs. If the thin disk needs more space later, it can expand to the maximum
capacity provisioned to it.
Thin provisioning is often used with storage array deduplication to improve storage use and to
back up VMs. Thin provisioning provides alarms and reports that track allocation versus
current use of storage capacity. Storage administrators can use thin provisioning to optimize the
allocation of storage for virtual environments. With thin provisioning, users can optimally but
safely use available storage space through overallocation.
As you can see in the example, we have the original thick-provisioned VM from the previous
slide, and we have created two more VMs, each thin provisioned and assigned to the same
datastore as the first. Now, you may have already noticed that we have overallocated the
datastore by 40GB. This works because the two new VMs are thin provisioned and are
currently only using 60GB (80GB counting the thick-provisioned VM), meaning that there is
still 20GB left. This free 20GB of storage can be freely allocated between
the two VMs until they either reach their maximum allocation, or no longer need any more
storage. For instance, the third VM may only ever need 45GB of storage, which leaves 15GB for
the second VM to use. Or if an administrator deletes a user or a large amount of data off of the
third VM, and uses the unmap command to reclaim that space on the datastore, then that extra
space becomes freely available for the second VM to use.
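A short sketch of both provisioning choices in PowerCLI (placeholder names; illustrative only):
  # Add a 20GB thin-provisioned disk to an existing VM
  New-HardDisk -VM "web01" -CapacityGB 20 -StorageFormat Thin
  # Or allocate and zero every block up front with an eager-zeroed thick disk
  New-HardDisk -VM "db01" -CapacityGB 20 -StorageFormat EagerZeroedThick
  # Compare provisioned capacity with actual datastore usage
  Get-VM -Name "web01","db01" | Select-Object Name, ProvisionedSpaceGB, UsedSpaceGB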
Slide 11
Here we can see the differences between the different provisioning types, and that they differ in
terms of creation time, block allocation, layout, and zeroing out of allocated file blocks.
One thing to note is that while Thin Provisioning has the fastest creation time, and is the most
versatile in resource constrained environments, it, by far, has the most disk fragmentation, and
can cause some latency, especially when working with large datastores that have had several
blocks allocated over a long period of time. While most users will not experience issues with
this, it is something to keep in mind.
Slide 12
VMs and physical machines communicate through a virtual network. When you configure
networking for a VM, you select or change the following settings:
• The Network adapter type
• The Port group to connect to
• The Network connection state
• And whether to connect the adapter to the network when the VM powers on.
For more information about virtual networks, see vSphere Networking at this address:
https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-35B40B0B-0C13-43B2-BC85-
18C9C91BE2D4.html.
Slide 13
The types of network adapters that are available depend on the following factors:
• Your VM's compatibility level (or hardware version), which depends on the host that
created or most recently updated it. For example, the VMXNET3 virtual NIC requires
hardware version 7 (ESX/ESXi 4.0 or later).
• Whether the VM compatibility is updated to the latest version for the current host.
• And what guest operating system is being used.
The following NIC types are supported (a short PowerCLI sketch after this list shows how to
add one):
• The E1000E: Which is an emulated version of the Intel 82574 Gigabit Ethernet NIC.
E1000E is the default adapter for Windows 8 and Windows Server 2012.
• And the E1000: Which is an emulated version of the Intel 82545EM Gigabit Ethernet
NIC, with drivers available in most newer guest operating systems, including Windows
XP and later and Linux versions 2.4.19 and later.
• A Flexible NIC: Which identifies itself as a Vlance adapter when a VM starts, but
initializes itself and functions as either a Vlance or a VMXNET adapter, depending on
which driver initializes it. With VMware Tools installed, the VMXNET driver changes
the Vlance adapter to the higher performance VMXNET adapter.
• The Vlance: Which is an emulated version of the AMD 79C970 PCnet32 LANCE NIC,
an older 10 Mbps NIC with drivers available in 32-bit legacy guest operating systems. A
VM configured with this network adapter can use its network immediately.
• VMXNET2 (Enhanced): Which is based on the VMXNET adapter but provides high-
performance features commonly used on modern networks, such as jumbo frames and
hardware offloads. VMXNET2 (Enhanced) is available only for some guest operating
systems on ESX/ESXi 3.5 and later. It is not supported for ESXi 6.7 and later.
• And VMXNET3: Which is a paravirtualized NIC designed for performance.
VMXNET3 offers all the features available in VMXNET2 and adds several new
features, such as multiqueue support (also known as Receive Side Scaling in Windows),
IPv6 offloads, and MSI/MSI-X interrupt delivery.
• Then we have SR-IOV pass-through: Which is a representation of a virtual function on
a physical NIC with SR-IOV support. This adapter type is suitable for VMs that require
more CPU resources or where latency might cause failure. If VMs are sensitive to
network delay, SR-IOV can provide direct access to the virtual functions of supported
physical NICs, bypassing the virtual switches and reducing overhead. SR-IOV pass-
through is available in ESXi 6.0 and later for Red Hat Enterprise Linux 6 and later, and
Windows Server 2008 R2 with SP2. An operating system release might contain a
default virtual function driver for certain NICs. For others, you must download and
install it from a location provided by the NIC or host vendor.
• Next is vSphere DirectPath I/O which allows a guest operating system on a VM to
directly access physical PCI and PCIe devices connected to a host. Pass-through devices
help your environment use resources efficiently and improve performance. You can
configure a pass-through PCI device on a VM by using the vSphere Client. VMs
configured with vSphere DirectPath I/O do not have the following features:
o Hot adding and removing of virtual devices
o Suspend and resume
o Record and replay
o Fault tolerance
o High availability
o Snapshots
o And vSphere DRS, which has limited availability, since the VM can be part of a
cluster but cannot migrate across hosts
• And finally, with PVRDMA, multiple guests can access the RDMA device by using
verbs API, an industry-standard interface. A set of these verbs was implemented to
expose an RDMA-capable guest device (PVRDMA) to applications. The applications
can use the PVRDMA guest driver to communicate with the underlying physical device.
PVRDMA supports RDMA, providing the following functions:
o OS bypass
o Zero-copy
o Low latency and high bandwidth
o And less power use and faster data access
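For illustration, this hedged pyVmomi fragment adds a VMXNET3 adapter backed by a standard port group. It reuses the vm object and session from the earlier sketch, and the port group name is a placeholder.

    # Add a VMXNET3 adapter attached to the (placeholder) "VM Network" port group.
    nic_spec = vim.vm.device.VirtualDeviceSpec()
    nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nic_spec.device = vim.vm.device.VirtualVmxnet3()
    nic_spec.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        deviceName="VM Network")
    nic_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic_spec]))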
Slide 14
Virtual CPU (or vCPU) and virtual memory are the minimum required virtual hardware.
Having a virtual hard disk, virtual NICs, and other virtual devices makes the VM more useful.
Some additional devices are:
• CD/DVD drives: For connecting to a CD, DVD, or ISO image.
• USB 3.0 and 3.1: Which are supported with host-connected and client-connected devices.
• Floppy drives: For connecting a VM to a floppy drive or a floppy image.
• Generic SCSI devices: A VM can be connected to additional SCSI adapters.
• And vGPUs: A VM can use GPUs on the physical host for high-computation activities.
For information about adding virtual devices to a VM, see vSphere Virtual Machine
Administration at this address: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-55238059-912E-411F-A0E9-A7A536972A91.html.
Slide 15
You can open the VM console from the vSphere Client.
The VM console provides the mouse, keyboard, and screen features to control the VM.
You can use the standalone VMware Remote Console Application (or VMRC) to connect to
client devices.
You use the VM console to access the BIOS of the VM, install an operating system on a VM,
power the VM on and off, and reset the VM.
The VM console is normally not used to connect to the VM for daily tasks. Remote Desktop
Connection, Virtual Network Connection, or other options are normally used to connect to a
virtual desktop. The VM console is used for tasks such as power cycling, configuring hardware,
and troubleshooting network issues.
Slide 16
You should now be able to meet the following objectives:
• Identify the files that make up a VM
• Compare VM hardware versions
• Recognize the components of a VM
Slide 1
Welcome back! We will now begin Lesson 3: Introduction to Containers!
Slide 2
The Learner Objectives for this lesson are as follows:
• Describe the benefits and use cases for containers
• Identify the parts of a container system
• And Differentiate between containers and virtual machines
Slide 3
In data centers, traditional applications are enhanced with modern application capabilities and
models. But traditional application development is different from modern application
development. The Traditional Application Development model is described as follows:
Waterfall development: Waterfall development cycles take from 6 to 12 months to deliver a
product. Because this cycle is relatively long in the context of software development,
requirements are at risk of changing. In addition, initial requirements might be misunderstood,
but this misunderstanding might be realized only at the end of the project.
Handover to the operations team: When a product is ready for production, it is handed over to
the operations team. The operations team deploys and manages the software from that point.
Without proper training and documentation, the team can find it difficult to skill up and
effectively manage the software.
Monolithic applications: Traditional applications are developed to run as a single large
monolithic process. Large does not refer to the lines of code but to the large number of
functionalities and responsibilities. Typically, traditional applications are deployed to a single
VM using manual processes. And they are not typically designed to be scalable. The only
option is to increase CPU, disk, and memory to achieve higher performance.
Separate environments: Developers start developing on their workstations. Eventually, code
moves to testing, staging, and production environments. Each environment is manually
configured, which takes a relatively large amount of effort to keep all the environments
identical. Each environment ends up with different software libraries, packages, and
configurations. This variation causes issues for developers, who must determine why the
application works in one environment but not in the others.
Slide 4
Modern Application Development aims to streamline and replace the traditional methods in
that it typically uses microservices-style architectures. This means that monolithic applications
are broken into many smaller, standalone, modular functions or services, which makes it easier
for developers to innovate when producing and changing code. It should also ideally minimize
time to market by: streamlining the process of deploying new code into a staging environment
for testing; identifying and addressing bugs almost immediately; and quickly deploying small,
incremental changes in the production environment that are easily withdrawn if problems arise.
Modern App Development should deliver updates and features quickly by minimizing the time
it takes to build, test, and release new features. It should increase product quality and avoid risk
by automating tests, getting user feedback, and improving software iteratively.
And finally Modern App Development should have fewer resource requirements and more
productivity by applying continuous development and continuous integration in small iterations
to reduce labor.
Slide 5
Containers are an ideal technology for supporting microservices because the goals of containers
(they're lightweight, easily packaged, can run anywhere) align well with the goals of a
microservices architecture. Applications that run on cloud-based environments are designed
with failure in mind. They are built to be resilient, to tolerate network or database outages, and
to degrade gracefully.
Typically, cloud-native applications use microservice-based architectures. The term micro does
not correlate to lines of code. It refers to functionality and responsibility. Each microservice
should be responsible for specific parts of the system. In the example, the application is broken
into multiple services, including a UI and user, order, and product services. Each service has its
own database. With this architecture, each service can be scaled independently. For example,
during busy times, the order service might need to be scaled to handle high throughput.
The Twelve-Factor App principles describe characteristics of microservice and cloud-native
applications.
Slide 6
Here we have our Container Terminology.
A Container is an application packaged with dependencies.
A Container Engine is a runtime engine that manages containers.
Docker is the most recognized runtime engine for container support, and it is often used as a
synonym for many aspects of container technologies.
A Container Host is a virtual machine or physical machine on which the containers and
container engine run.
And Kubernetes is a Google-developed orchestration platform for containers.
Slide 7
Similar to how a VM is encapsulated into files and objects on a datastore, a container is an
encapsulation of an application and its dependent binaries and libraries. The application is
decoupled from the operating system and becomes a serverless function.
Among the reasons that containers were popularized by software developers are:
• They make coding easier, locally, and anywhere.
• And you can deploy and test applications quickly in a staging environment, because no
operating system installation or load is required.
Slide 8
Containers are a new format of virtualized workload. They require CPU, memory, network,
security, and storage.
Containers satisfy developers’ need for speed by removing dependencies on underlying
operating systems in that they:
• Change the paradigm on security by using a discard and restart approach to patching
and upgrades.
• They use structured tooling to fully automate updates of application logic running
inside.
• And they provide an easy user experience for developers that is infrastructure-agnostic
(meaning that it can run on any cloud).
Containers present many opportunities, but they also introduce infrastructure and operational
complexity.
Slide 9
Administrators provide container hosts, which are the base structure that developers use to run
their containers. A robust microservices system includes more deliverables, many of which are
built using containers.
For developers to focus on providing services to customers, operations must provide a reliable
container host infrastructure. In vSphere with Kubernetes, the container hosts are Photon-based
VMs.
Slide 10
Containers have the following characteristics:
• A container can run on any container host with the same operating system kernel that is
specified by that container.
• A running container is accessed using its FQDN or its unique IP address.
• Each container can access only its own resources in the shared environment.
o When you log in to a container using a remote terminal (such as SSH), you see
no indication that other containers are running on the same container host.
Slide 11
A container engine is a control plane that is installed on each container host. The control plane
manages the containers on that host. Docker is the most commonly used container engine. The
container engine runs as a daemon process on the container host OS. When a user requests that
a container is run, the container engine gets the container image from an image registry (or
locally, if already downloaded) and runs the container as a process.
Container engines perform several functions (see the short example after this list). They:
• Build container images from source code (for example, a Dockerfile), or
alternatively load container images from a repository.
• Create running containers based on a container image.
• Commit a running container to an image.
• Save an image and push it to a repository.
• Stop and remove containers.
• Suspend and restart containers.
• And report container status.
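These functions map directly onto scripting tools. Here is a minimal sketch using the Docker SDK for Python (installed with pip install docker); the image tag and Dockerfile path are placeholders, and a local, running Docker daemon is assumed.

    import docker

    client = docker.from_env()                        # connect to the local Docker daemon
    image, _ = client.images.build(path=".", tag="myapp:1.0")  # build an image from a Dockerfile
    container = client.containers.run("myapp:1.0", detach=True)  # create and start a container
    print(container.status)                           # report container status
    container.stop()                                  # stop the container
    container.remove()                                # and remove it from the host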
Slide 12
With virtualization, multiple physical machines can be consolidated into a single physical
machine that runs multiple VMs. Each VM provides virtual hardware that the guest OS uses to
run applications. Multiple applications run on a single VM, but these applications are still
logically separated and isolated. A concern about VMs is that they are hundreds of megabytes
to gigabytes in size and contain many binaries and libraries that are not relevant to the main
application running on them. With containers, developers take a streamlined base OS file
system and layer on only the required binaries and libraries that the application depends on.
When a container is run as a process on the container host OS, the container can see its
dependencies and base OS packages. The container is isolated from all other processes on the
container host OS. The container processes are the only processes that run on a minimal system.
From the container host OS perspective, the container is another process that is running, but it
has a restricted view of the file system and potentially restricted CPU and memory.
Slide 13
The best way to summarize the differences between VMs and Containers as shown here is that,
while VMs virtualize the Host Hardware through the Hypervisor, Containers virtualize the Host
OS through the Container Engine. Containers are the ideal technology for microservices
because the goals of containers (lightweight, easily packaged, can run anywhere) align with the
goals and benefits of the microservices architecture.
Operators get modularized application components that are small and can fit into existing
resources. Developers can focus on the logic of modularized application components, knowing
that the infrastructure is reliable and supports the scalability of modules.
Slide 14
Kubernetes is the preeminent container management and orchestration software. It was
originally developed by Google but was donated to, and is now governed and standardized by,
the Cloud Native Computing Foundation (or CNCF). Kubernetes automates many key
operational responsibilities, providing the developer with a reliable environment.
Kubernetes performs the following functions. It:
• Groups containers that make up an application into logical units for easy management
and discovery
• Automatically places containers based on their resource requirements
• Restarts failed containers, replaces and reschedules containers when hosts fail, and
stops containers that do not respond to your user-defined health check
• Progressively rolls out changes to your application, ensuring that it does not stop all
your instances at the same time and enabling zero downtime
• And allocates IP addresses, mounts the storage system of your choice, load balances,
and generally looks after the containers
Kubernetes manages containers across multiple container hosts, similar to how vCenter Server
manages all ESXi hosts in a cluster. Running Docker without Kubernetes is like running ESXi
hosts without vCenter Server to manage them.
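To make the analogy concrete, here is a short sketch using the official Kubernetes Python client (pip install kubernetes); it assumes an existing cluster and a kubeconfig file, and simply lists the pods, and therefore the grouped containers, that Kubernetes is managing across all container hosts.

    from kubernetes import client, config

    config.load_kube_config()            # read cluster credentials from ~/.kube/config
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        # Each pod groups one or more containers that Kubernetes schedules and heals.
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)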
Slide 15
Kubernetes orchestrates containers that support the application. However, running Kubernetes
in production is not easy, especially for operations teams. The top challenges of running
Kubernetes are related to reliability, security, networking, scaling, logging, and complexity.
How do you monitor Kubernetes and the underlying infrastructure?
How do you build a reliable platform to deploy your applications?
How do you handle the complexity that this layer of abstraction introduces?
For years, VMware has helped to solve these types of problems for IT. VMware can offer its
expertise and solutions in this area.
Slide 16
Application developers prefer using Kubernetes rather than programming to the infrastructure.
For example, an application developer must build an ELK stack. The developer prefers to deal
with the Kubernetes API. The developer wants to use the resources, load balancer, and all the
primitives that Kubernetes constructs, rather than worry about the underlying infrastructure.
But the infrastructure is still there. It must be mapped for Kubernetes to use it. Usually, that
mapping is done by a platform operator so the developer can use the Kubernetes constructs.
The slide shows how the mapping is done with the VMware software-defined data center.
The resources and availability zones map to vSphere clusters, security policy and load-
balancing map to NSX, persistent volumes map to vSphere datastores, and metrics map to
Wavefront. Each of these items provides value.
Slide 17
You should now be able to meet the following objectives:
• Describe the benefits and use cases for containers
• Identify the parts of a container system
• And Differentiate between containers and virtual machines
Slide 18
As a Virtual Beans administrator, you want to start creating VMs with different configurations
and testing your applications. Your key takeaways are that:
• The VMware Compatibility Guide can help you determine what versions of Windows
and Linux guest operating systems are supported in ESXi 7.0.
• That virtual machines support a wide selection of virtual hardware devices, for example,
vGPUs and NVMe adapters.
• And that vSphere provides the underlying infrastructure on which containers and
Kubernetes run.
Slide 19
The Key Points of Module 3 are that:
• A VM is a set of files that are encapsulated into a folder and placed on a datastore.
• VMs can be provisioned using the vSphere Client and VMware Host Client.
• VMware Tools increases the overall performance of the VM's guest operating system.
• The virtual hardware version, or VM compatibility level, determines the operating
system functions that a VM supports.
• And that Containers are the ideal technology for microservices because the goals of
containers align with the goals and benefits of the microservices architecture.
Slide 20
This is the end of Module 3 and the Lesson 3 Lecture.
The associated labs and assignments for this module are:
• Lab 4: Configuring and Deploying a Virtual Machine
• Lab 5: Adding and Managing Virtual Hardware
• And the Module 3: Virtual Machines Quiz
If you have any questions, please contact your Instructor. We will see you in the next Module,
and thanks for watching!
Slide 1
Welcome Back! We will now begin Lesson 1: Centralized Management with vCenter Server!
Slide 2
After Completing this lesson, you should be able to meet the following objectives:
• Describe the vCenter Server architecture
• Recognize how ESXi hosts communicate with vCenter Server
• And Identify vCenter Server services
Slide 3
vCenter Server acts as a central administration point for ESXi hosts and virtual machines that
are connected in a network:
• It directs the actions of VMs and hosts
• and it runs on a Linux-based appliance
With vCenter Server, you can pool and manage the resources of multiple hosts.
You can deploy vCenter Server Appliance on an ESXi host in your infrastructure. vCenter
Server Appliance is a preconfigured Linux-based virtual machine that is optimized for running
vCenter Server and the vCenter Server components.
vCenter Server Appliance provides advanced features, such as vSphere DRS, vSphere HA,
vSphere Fault Tolerance, vSphere vMotion, and vSphere Storage vMotion.
Slide 4
vCenter Server is a service that runs in vCenter Server Appliance. vCenter Server acts as a
central administrator for ESXi hosts that are connected in a network. The vCenter Server
Appliance package contains the following software:
• Photon
• PostgreSQL database
• And vCenter Server services
During deployment, you can select the vCenter Server Appliance size for your vSphere
environment and the storage size for your database requirements.
Slide 5
vCenter Server services include:
• vCenter Server
• vSphere Client
• vCenter Single Sign-On
• License service
• vCenter Lookup Service
• VMware Certificate Authority
• Content Library
• And vSphere ESXi Dump Collector
When you deploy vCenter Server Appliance, all these services are included.
Although installation of vCenter Server services is not optional, administrators can choose
whether to use their functionalities.
Slide 6
The vCenter Server architecture relies on the following components:
• The vSphere Client: You use this client to connect to vCenter Server so that you can
manage your ESXi hosts centrally. When an ESXi host is managed by vCenter Server,
you should always use vCenter Server and the vSphere Client to manage that host.
• The vCenter Server database: The vCenter Server database is the most important
component. The database stores inventory items, security roles, resource pools,
performance data, and other critical information for vCenter Server.
• And finally managed hosts: You can use vCenter Server to manage ESXi hosts and the
VMs that run on them.
Slide 7
vCenter Single Sign-On provides authentication across multiple vSphere components through a
secure token mechanism:
• In step one, the user logs in to the vSphere Client.
• In step two, vCenter Single Sign-On authenticates credentials against a directory service
(for example, Active Directory).
• In step three, a SAML token is sent back to the user's browser.
• And in step four, the SAML token is sent to vCenter Server, and the user is granted
access.
Slide 8
You cannot create an Enhanced Linked Mode group after you deploy vCenter Server
Appliance. An Enhanced Linked Mode group can be created only during the deployment of
vCenter Server Appliance.
Enhanced Linked Mode provides the following features:
• You can log in to all linked vCenter Server instances simultaneously with a single user
name and password.
• You can view and search the inventories of all linked vCenter Server instances in the
vSphere Client.
• And Roles, permissions, licenses, tags, and policies are replicated across linked vCenter
Server instances.
To join vCenter Server instances in Enhanced Linked Mode, connect the vCenter Server
instances to the same vCenter Single Sign-On domain.
Enhanced Linked Mode requires the vCenter Server Standard licensing level. This mode is not
supported with vCenter Server Foundation or vCenter Server for Essentials.
Slide 9
vCenter Server provides direct access to the ESXi host through a vCenter Server agent called
virtual provisioning X agent (or vpxa). The vpxa process is automatically installed on the host
and started when the host is added to the vCenter Server inventory. The vCenter Server service
(or vpxd) communicates with the ESXi host daemon (or hostd) through the vCenter Server
agent (or vpxa).
Clients that communicate directly with the host, and bypass vCenter Server, converse with
hostd. The hostd process runs directly on the ESXi host and manages most of the operations on
the ESXi host. The hostd process is aware of all VMs that are registered on the ESXi host, the
storage volumes visible to the ESXi host, and the status of all VMs. Most commands or
operations come from vCenter Server through vpxa.
Examples include creating, migrating, and powering on virtual machines. Acting as an
intermediary between the vpxd process, which runs on vCenter Server, and the hostd process,
vpxa relays the tasks to perform on the host.
When you are logged in to the vCenter Server system through the vSphere Client, vCenter
Server passes commands to the ESXi host through the vpxa.
The vCenter Server database is also updated. If you use VMware Host Client to communicate
directly with an ESXi host, communications go directly to the hostd process and the vCenter
Server database is not updated.
Notice the ports used in the slide. Remember from Module 2 Lesson 3 that the vSphere Client
automatically redirects internally to port 9443. The ESXi host, and by extension hostd and
vpxa, uses TCP/UDP port 902 for the vCenter Server agent. All other systems use port 443 as
shown.
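To illustrate the two communication paths in code, here is a hedged pyVmomi sketch; host names and credentials are placeholders. The first session goes through vCenter Server (vpxd to vpxa to hostd), so the vCenter Server database is updated; the second talks to hostd on the host directly, so it is not.

    from pyVim.connect import SmartConnect, Disconnect
    import ssl

    ctx = ssl._create_unverified_context()  # lab only
    # Managed path: client -> vCenter Server (port 443) -> vpxa/hostd on the host.
    vc = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    # Direct path: client -> hostd on the ESXi host (port 443), bypassing vCenter Server,
    # so the vCenter Server database is not updated.
    esx = SmartConnect(host="esxi01.lab.local", user="root",
                       pwd="VMware1!", sslContext=ctx)
    Disconnect(vc)
    Disconnect(esx)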
Slide 10
Here you can see the Configuration Maximums for vCenter Server Appliance.
For more information on vCenter Server Appliance Scalability go to: configmax.vmware.com.
Slide 11
You should now be able to meet the following objectives:
• Describe the vCenter Server architecture
• Recognize how ESXi hosts communicate with vCenter Server
• And Identify vCenter Server services
This is the end of the Lesson 1 Lecture. If you have any questions, please contact your
Instructor. We hope to see you in the next Lesson and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 2: Deploying vCenter Server Appliance!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Deploy vCenter Server Appliance into an infrastructure
• And Configure vCenter Server settings
Slide 3
Before deploying vCenter Server Appliance, you must complete several tasks:
• You must verify that all vCenter Server Appliance system requirements are met.
• You must get the fully qualified domain name (or FQDN) or the static IP of the host
machine on which you install vCenter Server Appliance.
• And you must ensure that clocks on all VMs in the vSphere network are synchronized.
For more information, see VMware ESXi Installation and Setup at this address:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-B2F01BF5-078A-4C7E-B505-5DFFED0B8C38.html.
Slide 4
The GUI installer performs validations and prechecks during the deployment phase to ensure
that no mistakes are made and that a compatible environment is created.
The GUI installer has several features:
• With the GUI installer, you can perform an interactive deployment of vCenter Server
Appliance.
• The GUI installer is a native application for Windows, Linux, and macOS.
• The installer has no dependency on browsers or plug-ins.
• And it performs validations and prechecks during the deployment.
Slide 5
The vCenter Server Appliance installation is a two-stage process:
• Stage 1 is deployment of the OVF.
• And Stage 2 is configuration.
The deployment can be fully automated by using JSON templates with the CLI installer on
Windows, Linux, or macOS.
As you can see in the slide, the vCenter Server installer presents several options:
• The Install option installs a new vCenter Server Appliance.
• The Upgrade option upgrades an existing vCenter Server Appliance instance, or
upgrades and converges an existing vCenter Server Appliance instance with external
Platform Services Controller.
• The Migrate option migrates from an existing Windows vCenter Server instance, or
migrates and converges an existing Windows vCenter Server instance with external
Platform Services Controller.
• The Restore option restores from a previous vCenter Server Appliance backup.
Slide 6
Stage 1 begins with the UI phase:
• You accept the EULA.
• You connect to the target ESXi host or vCenter Server system.
• You define the vCenter Server Appliance name and root password.
• You select the compute size, storage size, and datastore location.
• And you define networking settings.
Stage 1 continues with the deployment phase:
• The OVF is deployed to the ESXi host.
• And Disks and networking are configured.
Slide 7
In stage 2, you configure whether to use the ESXi host or NTP servers as the time
synchronization source.
You can also enable SSH access. SSH access is disabled by default.
You can also create a vCenter Single Sign-On domain or join an existing SSO domain.
And you can join the Customer Experience Improvement Program (or CEIP) if you so desire.
Slide 8
After you deploy vCenter Server Appliance, use the vSphere Client to log in and manage your
vCenter Server inventory at the address shown.
Slide 9
To access the vCenter Server system settings by using the vSphere Client, select the vCenter
Server system in the navigation pane, click the Configure tab, and expand Settings.
Slide 10
The vCenter Server Appliance Management Interface (or VAMI) is an HTML-based client
designed to configure and monitor vCenter Server Appliance.
Tasks include:
• Monitoring resource use by the appliance
• Backing up the appliance
• Monitoring vCenter Server services
• And Adding additional network adapters
The vCenter Server Appliance Management Interface connects directly to port 5480.
Use this address: https://FQDN_or_IP_address:5480.
Slide 11
With vCenter Server Appliance 7.0 multihoming, you can configure multiple NICs to manage
network traffic.
For example, vCenter Server High Availability requires a second NIC for its private network.
A maximum of four NICs are supported for multihoming. All four multihoming-supported NIC
configurations are preserved during upgrade, backup, and restore processes.
Slide 12
You should now be able to meet the following objectives:
• Deploy vCenter Server Appliance into an infrastructure
• And Configure vCenter Server settings
This is the end of the Lesson 2 Lecture. If you have any questions, please contact your
Instructor. We will see you next time, and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 3: vSphere Licensing!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• View licensed features for vCenter Server or an ESXi host.
• And Add license keys to vCenter Server.
Slide 3
Licensing vSphere components is a two-step process (illustrated in the sketch after this list):
• Step one is to add a license to the vCenter License Service.
• And step two is to assign the license to the ESXi hosts, vCenter Server Appliance
instances, and other vSphere components.
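A hedged pyVmomi sketch of the two steps follows; the license key is a placeholder, si is an existing connection like the one in the earlier sketches, and host_ref is assumed to be a vim.HostSystem that was already looked up.

    # si is an existing SmartConnect session; host_ref is a vim.HostSystem object.
    lm = si.RetrieveContent().licenseManager
    lm.AddLicense(licenseKey="AAAAA-BBBBB-CCCCC-DDDDD-EEEEE")   # step 1: add to the License Service
    lam = lm.licenseAssignmentManager
    lam.UpdateAssignedLicense(entity=host_ref._moId,            # step 2: assign to an asset
                              licenseKey="AAAAA-BBBBB-CCCCC-DDDDD-EEEEE")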
Slide 4
The License Service manages the license assignments for ESXi hosts, vCenter Server systems,
and clusters with vSAN enabled.
The License Service runs on vCenter Server Appliance and performs the following functions:
• It provides centralized license management
• It provides an inventory of vSphere licenses
• And it manages the license assignments for products that integrate with vSphere, such
as Site Recovery Manager
You can monitor the health and status of the License Service by using the vCenter Appliance
Management Interface.
Slide 5
In the vSphere environment, license reporting and management are centralized. All product and
feature licenses are encapsulated in 25-character license keys that you can manage and monitor
from vCenter Server.
You can view license information by product, license key, or asset:
• A Product is a license to use a vSphere software component or feature, for example,
Evaluation Mode or vSphere Enterprise Plus.
• A License Key is the serial number that corresponds to a product.
• An Asset is a machine on which a product is installed. For an asset to run certain
software legally, the asset must be licensed.
You must assign a license to vCenter Server before its 60-day evaluation period expires.
You select Menu > Administration > Licenses to open the Licenses pane.
Slide 6
The slide shows how to assign a license to a vSphere component. You can do so by selecting
the asset you want to assign a license to, then selecting “Assign License”, selecting the license
you want to assign in the “Assign License” window, and finally clicking OK.
Slide 7
Before purchasing and activating licenses for ESXi and vCenter Server, you can install the
software and run it in evaluation mode. Evaluation mode is intended for demonstrating the
software or evaluating its features. During the evaluation period, the software is operational.
The evaluation period is 60 days from the time of installation. During this period, the software
notifies you of the time remaining until expiration. The 60-day evaluation period cannot be
paused or restarted. After the evaluation period expires, you can no longer perform some
operations in vCenter Server and ESXi. For example, you cannot power on or reset your virtual
machines. In addition, all hosts are disconnected from the vCenter Server system. To continue
to have full use of ESXi and vCenter Server operations, you must acquire license keys.
Slide 8
You should now be able to meet the following objectives:
• View licensed features for vCenter Server or an ESXi host
• And Add license keys to vCenter Server
This is the end of the Lesson 3 Lecture. If you have any questions, please contact your
Instructor. We will see you in Lesson 4, and thank you for watching!
Slide 1
Hello and welcome back! We will now begin Lesson 4: Managing the vCenter Server
Inventory!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Use the vSphere Client to manage the vCenter Server inventory
• Create and organize vCenter Server inventory objects
• Add data center and organizational objects to vCenter Server
• Add hosts to vCenter Server
• And Recognize how to create custom inventory tags for inventory objects
Slide 3
From the vSphere Client Shortcuts page, you can manage your vCenter Server system
inventory, monitor your infrastructure environment, and complete system administration tasks.
To get to the shortcuts page, select Menu and then Shortcuts. The Shortcuts page has a
navigation pane on the left and Inventories, Monitoring, and Administration panes on the right.
Slide 4
You can use the navigation pane on the left to browse and select objects in the vCenter Server
inventory.
Slide 5
The Hosts and Clusters inventory view shows all host and cluster objects in a data center. You
can further organize the hosts and clusters into folders.
The VMs and Templates inventory view shows all VM and template objects in a data center.
You can also organize the VMs and templates into folders.
Slide 6
As with the other inventory views, you can organize your datastore and network objects into
folders.
Slide 7
As you learned in Lab 1: Navigating the vSphere Web Client, selecting an Object in the
Navigation Pane will show its information in the Monitoring and Administration Panes.
Slide 8
A virtual data center is a logical organization of all the inventory objects required to complete a
fully functional environment for operating VMs:
• You can create multiple data centers to organize sets of environments.
• Each data center has its own hosts, VMs, templates, datastores, and networks.
You might create a data center object for each data center’s geographical location. Or, you
might create a data center object for each organizational unit in your enterprise.
You might create some data centers for high-performance environments and other data centers
for less demanding VMs.
Slide 9
When organizing inventory objects into folders, you plan the setup of your virtual environment
according to your requirements.
A large vSphere implementation might contain several virtual data centers with a complex
arrangement of hosts, clusters, resource pools, and networks. It might include multiple vCenter
Server systems. Smaller implementations might require a single virtual data center with a less
complex topology.
Regardless of the scale of your virtual environment, consider how the VMs that it supports are
used and administered.
Populating and organizing your inventory involves the following tasks:
• Creating data centers
• Creating clusters to consolidate the resources of multiple hosts and VMs
• Adding hosts to the clusters or to the data centers
• Organizing inventory objects in folders
• Setting up networking by using vSphere standard switches or vSphere distributed
switches
• And Configuring storage systems and creating datastore inventory objects to provide
logical containers for storage devices in your inventory.
Slide 10
The slide shows where you can add new data centers, hosts and clusters within those data
centers, and folders throughout your inventory.
You can use folders to group objects of the same type for easier management as shown in the
image to the right.
Slide 11
You can add ESXi hosts to vCenter Server in the vSphere Client by right-clicking the object
you want to add the host to, clicking “Add Hosts…”, and then entering the IP address or
FQDN, the Username, and the Password of the host you want to add.
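The same task can be scripted. A hedged pyVmomi fragment follows; the data center object is assumed to have been looked up already, and the host name and credentials are placeholders.

    from pyVmomi import vim

    # datacenter is an existing vim.Datacenter object from the inventory.
    spec = vim.host.ConnectSpec(hostName="esxi02.lab.local",
                                userName="root", password="VMware1!",
                                force=False)   # do not take the host from another vCenter
    datacenter.hostFolder.AddStandaloneHost_Task(spec=spec, addConnected=True)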
Slide 12
You can use tags to attach metadata to objects in the vCenter Server inventory. Tags help make
these objects more sortable. You can associate a set of objects of the same type by searching for
objects by a given tag. You can use tags to group and manage VMs, clusters, and datastores.
For example, you can tag VMs that run production workloads, or you can tag VMs based on
their guest operating system.
Slide 13
You should now be able to meet the following objectives:
• Use the vSphere Client to manage the vCenter Server inventory
• Create and organize vCenter Server inventory objects
• Add data center and organizational objects to vCenter Server
• Add hosts to vCenter Server
• And Recognize how to create custom inventory tags for inventory objects
This is the end of the Lesson 4 Lecture. If you have any questions, please contact your
Instructor. We will see you next time, and thank you for watching!
Slide 1
Welcome back! Let’s get started on Lesson 5: vCenter Server Roles and Permissions!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Define the term permission in the context of vCenter Server
• Describe the rules for applying permissions
• Create a custom role
• And Create a permission
Slide 3
The authorization to perform tasks in vCenter Server is governed by an access control system.
Through this system, the vCenter Server administrator can specify in detail which users or
groups can perform which tasks on which objects.
The following concepts are important:
• A Privilege is an action that can be performed
• An Object is the target of the action
• A User or Group indicates who can perform the action
• A Role is a set of privileges
• And a Permission gives one user or group a role (a set of privileges) for the selected
object
A permission is set on an object in the vCenter Server object hierarchy. Each permission
associates the object with a group or user and the group or user access roles. For example, you
can select a VM object, add one permission that gives the Read-only role to group 1, and add a
second permission that gives the Administrator role to user 2. By assigning a different role to a
group of users on different objects, you control the tasks that those users can perform in your
vSphere environment. For example, to allow a group to configure memory for the host, select
that host and add a permission that grants a role to that group that includes the
Host.Configuration.Memory configuration privilege.
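To make that last example concrete, here is a hedged pyVmomi fragment that grants a role to a group on a host object; the group name is a placeholder, and the role ID and host object are assumed to have been looked up already.

    from pyVmomi import vim

    # si is an existing session; host is a vim.HostSystem; role_id was looked up
    # by name in si.RetrieveContent().authorizationManager.roleList.
    am = si.RetrieveContent().authorizationManager
    perm = vim.AuthorizationManager.Permission(
        principal="VSPHERE.LOCAL\\ops-team",  # placeholder group name
        group=True, roleId=role_id, propagate=True)
    am.SetEntityPermissions(entity=host, permission=[perm])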
Slide 4
A role is a set of one or more privileges. For example, the Virtual Machine Power User sample
role consists of several privileges in categories such as Datastore and Global. A role is assigned
to a user or group and determines the level of access of that user or group.
You cannot change the privileges associated with the system roles:
• The Administrator role: Users with this role for an object may view and perform all
actions on the object.
• The Read-only role: Users with this role for an object may view the state of the object
and details about the object.
• The No Access role: Users with this role for an object may not view or change the
object in any way.
• And the No Cryptography Administrator role: Users with this role for an object have
the same privileges as users with the Administrator role, except for privileges in the
Cryptographic operations category.
All roles are independent of each other. Hierarchy or inheritance between roles does not apply.
Slide 5
Objects are entities on which actions are performed. Objects include data centers, folders,
clusters, hosts, datastores, networks, and virtual machines.
All objects have a Permissions tab. The Permissions tab shows which user or group and role are
associated with the selected object.
Slide 6
You can assign permissions to objects at different levels of the hierarchy. For example, you can
assign permissions to a host object or to a folder object that includes all host objects. You can
also assign permissions to a global root object to apply the permissions to all objects in all
solutions.
For information about hierarchical inheritance of permissions and global permissions, see
vSphere Security at this address: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-52188148-C579-4F6A-8335-CFBCE0DD2167.html.
Slide 7
You can view all the objects to which a role is assigned and all the users or groups who are
granted the role.
To view information about a role, click Usage in the Roles pane and select a role from the
Roles list. The information provided to the right shows each object to which the role is assigned
and the users and groups who were granted the role.
Slide 8
In addition to specifying whether permissions propagate downward, you can override
permissions set at a higher level by explicitly setting different permissions for a lower-level
object. On the slide, user Greg is given Read-only access in the Training data center. This role
is propagated to all child objects except one, the Prod03-2 VM. For this VM, Greg is an
administrator.
Slide 9
On the slide, Group1 is assigned the VM_Power_On role, a custom role that contains only one
privilege: the ability to power on a VM. Group2 is assigned the Take_Snapshots role, another
custom role that contains the privileges to create and remove snapshots. Both roles propagate to
the child objects.
Because Greg belongs to both Group1 and Group2, he gets both VM_Power_On and
Take_Snapshots privileges for all objects in the Training data center.
Slide 10
This slide poses the question: If Group1 has the Administrator role and Group2 has the No
Access role, what permissions does Greg have?
Take a second to come up with an answer.
Slide 11
Alright, the answer is that Greg has Administrator privileges, since Greg is assigned the union
of the privileges assigned to Group1 and Group2.
If you think about it, Group1 has practically all privileges, and Group2 has no privileges. When
a user is assigned to two or more groups, the access control system adds the privileges of those
groups together for that user.
The sum of Group1 (all privileges) plus Group2 (no privileges, or zero) is simply Group1’s
privileges, since anything plus zero is itself.
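To make the union rule concrete, here is a tiny Python illustration; the privilege names are made up for the example.

    group1 = {"VirtualMachine.Interact.PowerOn", "Host.Config.Memory"}  # Administrator-like set
    group2 = set()                                 # No Access contributes no privileges
    greg = group1 | group2                         # a user's privileges are the union
    assert greg == group1                          # anything plus the empty set is itself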
Slide 12
You can override permissions set for a higher-level object by explicitly setting different
permissions for a lower-level object.
On the slide, Group1 is assigned the Administrator role at the Training data center and Group2
is assigned the Read-only role on the VM object, Prod03-1. The permission granted to Group1
is propagated to child objects.
Because Greg is a member of both Group1 and Group2, he gets administrator privileges on the
entire Training data center (the higher-level object), except for the VM called Prod03-1 (the
lower-level object). For this VM, he gets read-only access.
Slide 13
On the slide, three permissions are assigned to the Training data center:
• Group1 is assigned the VM_Power_On role.
• Group2 is assigned the Take_Snapshots role.
Slide 14
The Virtual Beans VM Provisioning role is one of many examples of roles that can be created.
Define a role using the smallest number of privileges possible to maximize security and control
over your environment. Give the roles names that explicitly indicate what each role allows, to
make its purpose clear.
Slide 15
Often, you apply a permission to a vCenter Server inventory object such as an ESXi host or a
VM. When you apply a permission, you specify that a user or group has a set of privileges,
called a role, on the object.
Global permissions give a user or group privileges to view or manage all objects in each of the
inventory hierarchies in your deployment. The example on the slide shows that the global root
object has permissions over all vCenter Server objects, including content libraries, vCenter
Server instances, and tags. Global permissions allow access across vCenter Server instances.
vCenter Server permissions, however, are effective only on objects in a particular vCenter
Server instance.
Slide 16
You should now be able to meet the following objectives:
• Define the term permission in the context of vCenter Server
• Describe the rules for applying permissions
• Create a custom role
• And Create a permission
This is the end of the Lesson 5 Lecture. If you have any questions, please contact your
Instructor. We will see you in the next Lecture, and thanks for watching!
Slide 1
Welcome back! Let’s begin Lesson 6: Backing Up and Restoring vCenter Server Appliance!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Create a vCenter Server Appliance backup schedule
• And Restore vCenter Server Appliance from backup
Slide 3
We see here that Virtual Beans has operations policies similar to those of many companies.
As a Virtual Beans administrator, you must align with these policies by performing the
following tasks:
• Backing up vCenter Server data monthly.
• Making vCenter Server highly available by keeping it 99.99 percent available (or a
downtime per year of 52.56 minutes)
• And Monitoring vCenter Server performance to avoid potential problems in the
infrastructure.
Slide 4
The vCenter Server Appliance Management Interface supports backing up key parts of the
appliance. You can protect vCenter Server data and minimize the time required to restore data
center operations.
The backup process collects key files into a tar bundle and compresses the bundle to reduce the
network load. To minimize the storage impact, the transmission is streamed without caching in
the appliance.
To reduce the total time required to complete the backup operation, the backup process handles
the different components in parallel. You can encrypt the compressed file before transmission
to the backup storage location. When you choose encryption, you must supply a password that
can be used to decrypt the file during restoration.
The backup operation always includes the vCenter Server database and system configuration
files, so that a restore operation has all the data to recreate an operational appliance. Optionally,
you can specify that a backup operation should include Statistics, Events, and Tasks from the
current state of the data center. Current alarms are always included in a backup.
Slide 5
You can use different methods to back up and restore vCenter Server Appliance:
• A File-based backup and restore:
o Uses the vCenter Server Appliance Management Interface to create a file-based
backup.
o Restores the backup through the GUI installer of the appliance.
o And supports scheduling of the file-based backup and restore.
• Or an Image-based backup and restore:
o Which uses vSphere Storage APIs - Data Protection with a third-party backup
product to perform centralized, efficient, off-host, LAN-free backups.
Slide 6
You use the vCenter Server Appliance Management Interface to perform a file-based backup of
the vCenter Server core configuration, inventory, and historical data of your choice. The
backed-up data is streamed over the selected protocol to a remote system. The backup is not
stored on vCenter Server Appliance.
When specifying the backup location, use the following syntax: protocol:/folder/subfolder.
Slide 7
You can perform a file-based restore only for a vCenter Server Appliance instance that you
previously backed up by using the vCenter Server Appliance Management Interface. You can
perform the restore operation by using the GUI installer of vCenter Server Appliance. The
process consists of deploying a new vCenter Server Appliance instance and copying the data
from the file-based backup to the new appliance.
You can also perform a restore operation by deploying a new vCenter Server Appliance
instance and using the vCenter Server Appliance Management Interface to copy the data from
the file-based backup to the new appliance.
Slide 8
You can set up a file-based backup schedule to perform periodic backups.
The schedule can be set up with information about the backup location, recurrence, and
retention for the backups, and you can set up only one schedule at a time. The backup scheduler
supports the options shown on the slide.
Slide 9
You can view the existing defined backup schedule from the vCenter Server Appliance
Management Interface.
The backup schedule can be edited, disabled, or deleted.
Slide 10
You should now be able to meet the following objectives:
• Create a vCenter Server Appliance backup schedule
• And Restore vCenter Server Appliance from backup
This is the end of the Lesson 6 Lecture. If you have any questions, please contact your
Instructor. We will see you next time, and thanks for watching!
Slide 1
Hello and welcome back! Let’s get started with Lesson 7: Monitoring vCenter Server
Appliance!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• View vCenter Server logs and events
• Manage vCenter Server services
• Monitor vCenter Server Appliance for service and disk space usage
• And Use vSphere alarms for resource exhaustion and service failures
Slide 3
The vCenter Server events and audit trails allow selectable retention periods in increments of 30
days:
• User-action information includes the user’s account and specific event details.
• All actions are reported, including file ID, file path, source of operation, operation
name, and date and time of operation.
• Events and alarms are displayed to alert the user to changes in the vCenter Server
service health or when a service fails.
Slide 4
Changes to the logging settings take effect immediately. You do not have to restart the vCenter
Server system.
You can set log levels to control the quantity and type of information logged.
Examples of when to set log levels include:
• When troubleshooting complex issues, set the log level to verbose or trivia.
Troubleshoot, and then set it back to info.
• And to control the amount of information being stored in the log files.
Slide 5
To configure logging levels, follow these steps:
In the vSphere Client, select the vCenter Server instance in the navigation pane.
Then click the Configure tab.
Next, under Settings, select General.
Then click EDIT.
Under Edit vCenter general settings, select Logging settings in the left pane.
And finally, select an option from the Log level drop-down menu.
Slide 6
vCenter Server and ESXi can stream their log information to a remote Syslog server:
• You can enable this feature in the vCenter Server Appliance Management Interface.
• With this feature, you can further analyze vCenter Server Appliance log files with log
analysis products, such as vRealize Log Insight.
Slide 7
vCenter Server checks the status of the database every 15 minutes:
• By default, database health warnings trigger an alarm when the space used reaches 80
percent.
• The alarm changes from warning to error when the space used reaches 95 percent.
• vCenter Server services shut down so that you can configure more disk space or remove
unwanted content.
You can also monitor database space utilization using the vCenter Server Appliance
Management Interface.
Slide 8
The CPU and Memory views provide a historical view of CPU and memory use.
Using the Disks view, you can monitor the available disk space.
Slide 9
You can use the vCenter Server Appliance Management Interface to monitor the health and
state of the vCenter Server Appliance services.
You can restart, start, or stop services from this interface.
Slide 10
VMware provides monthly security patches for vCenter Server Appliance:
• Critical vulnerability patches are delivered on a monthly release cycle.
• Important and low vulnerabilities are delivered with the next available vCenter Server
patch or update.
You can configure the vCenter Server Appliance to perform automatic checks for available
patches in the configured repository URL at a regular interval.
If a vCenter Server patch or update occurs in the same time period as the monthly security
patch, the monthly security patch is rolled into the vCenter Server patch or update.
Slide 11
You should now be able to meet the following objectives:
• View vCenter Server logs and events
• Manage vCenter Server services
• Monitor vCenter Server Appliance for service and disk space usage
• And Use vSphere alarms for resource exhaustion and service failures
This is the end of the Lesson 7 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 8: vCenter Server High Availability!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Recognize the importance of vCenter Server High Availability
• Explain how vCenter Server High Availability works
• And Identify vCenter Server High Availability requirements
Slide 3
vSphere is a virtualization platform that forms the foundation for building and managing an
organization's virtual, public, and private cloud infrastructures. vCenter Server Appliance sits at
the heart of vSphere and provides services to manage various components of a virtual
infrastructure, such as ESXi hosts, virtual machines, and storage and networking resources. As
large virtual infrastructures are built using vSphere, vCenter Server becomes an important
element in ensuring the business continuity of an organization. vCenter Server must protect
itself from a set of hardware and software failures in an environment and must recover
transparently from such failures.
Slide 4
With vCenter Server High Availability, you can recover quickly from a vCenter Server failure.
Using automated failover, vCenter Server failover occurs with minimal downtime.
vCenter Server High Availability protects vCenter Server Appliance against both hardware and
software failures.
vCenter Server High Availability forms a cluster of nodes:
• The Active node, which runs the active vCenter Server Appliance instance
• The Passive node, which automatically takes over the role of the Active node if a failure
occurs
• And the Witness node, which provides a quorum to protect against a split-brain situation
vCenter Server High Availability is built in to vCenter Server Appliance and is included with
the standard license.
Slide 5
The animation demonstrates what happens if an active node fails. To play the animation, you
can go to this address: https://vmware.bravais.com/s/PlUBZn2zCO7HE5qN2fm4.
The active node runs the active instance of vCenter Server Appliance.
The node uses an IP address on the Management network for the vSphere Client to connect to.
If the active node fails (because of a hardware, software, or network failure), the passive node
takes over the role of the active node. The IP address to which the vSphere Client was
connected is switched from the failed node to the new active node. The new active node starts
serving client requests. Meanwhile, the user must log back in to the vSphere Client for
continued access to vCenter Server.
Because only two nodes are up and running, the vCenter Server High Availability cluster is
considered to be running in a degraded state and subsequent failover cannot occur. A
subsequent failure in a degraded cluster means vCenter Server services are no longer available.
A passive node is required to return the cluster to a healthy state.
Slide 6
If the passive node fails, the active node continues to operate as normal. Because no disruption
in service occurs, users can continue to access the active node using the vSphere Client.
Because the passive node is down, the active node is no longer protected. The cluster is
considered to be running in a degraded state because only two nodes are up and running. A
subsequent failure in a degraded cluster means vCenter Server services are no longer available.
A passive node is required to return the cluster to a healthy state.
Slide 7
The witness node is used to maintain quorum.
If the witness node fails, the active node continues to operate without disruption in service.
Because only two nodes are up and running, the cluster is considered to be running in a
degraded state and failover cannot occur. A subsequent failure in a degraded cluster means
vCenter Server services are no longer available. The witness node is required to return the
cluster to a healthy state.
Slide 8
vCenter Server High Availability provides many benefits:
• vCenter Server Appliance is made more resilient.
• It provides protection against hardware, host, and application failures.
Slide 9
For more information about the vCenter Server High Availability requirements, see vSphere
Availability at this address: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.avail.doc/GUID-63F459B7-8884-4818-8872-C9753B2E0215.html.
Slide 10
You should now be able to meet the following objectives:
• Recognize the importance of vCenter Server High Availability
• Explain how vCenter Server High Availability works
• And Identify vCenter Server High Availability requirements
Slide 11
To tie this back into our Virtual Beans data center, you should plan to maintain vCenter Server
and keep it up and running by:
Backing up vCenter Server data monthly using VAMI and an NFS datastore.
Making vCenter Server highly available by configuring vCenter Server HA and achieving
99.99 percent availability.
And monitoring vCenter Server regularly using the vSphere Client and VAMI.
Slide 12
Some key points of Module 4 are:
• vCenter Server Appliance uses the Photon operating system and the PostgreSQL
database.
• You use the vSphere Client to connect to vCenter Server instances and manage vCenter
Server inventory objects.
• A permission, defined in vCenter Server, gives one user or group a role (set of
privileges) for a selected object.
• You can use the vCenter Server Appliance Management Interface to monitor appliance
resource use and perform a file-based backup of the appliance.
• And vCenter Server High Availability is built in to vCenter Server Appliance and
protects the appliance from both hardware and software failures.
Slide 13
This is the end of the Lesson 8 Lecture and the end of Module 4. The labs and assignments
associated with this module are as follows:
• Lab 3: Working with vCenter Server. You should note that this is an exceptionally long
and complex lab, since you must install a working vCenter Server Appliance, and you
should make your lab reservation for at least 3 hours.
• Lab 6: Managing vSphere Licenses
• Lab 7: Creating and Managing the vCenter Server Inventory
• Lab 8: Configure Active Directory by Joining a Domain and Adding an Identity Source
• And the Module 4 Quiz: vCenter Server
If you have any questions, please contact your Instructor. We will see you in the next Module,
and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 1: Introduction to vSphere Standard Switches!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Identify virtual switch connection types
• Configure and view standard switch configurations
• And Distinguish between the features of standard and distributed switches
Slide 3
Virtual switches connect VMs to the physical network.
They provide connectivity between VMs on the same ESXi host or on different ESXi hosts.
They also support VMkernel services, such as vSphere vMotion migration, iSCSI, NFS, and
access to the management network.
Slide 4
The ESXi management network port is a VMkernel port that connects the host to network and remote services, including vpxd on vCenter Server and VMware Host Client.
Each ESXi management network port and each VMkernel port must be configured with its own
IP address, netmask, and gateway.
To help configure virtual switches, you can create port groups. A port group is a template that
stores configuration information to create virtual switch ports on a virtual switch. VM port
groups connect VMs to one another with common networking properties.
VM port groups and VMkernel ports connect to the outside world through the physical Ethernet
adapters that are connected to the virtual switch uplink ports.
Slide 5
When you design your networking environment, you can team all your networks on a single
virtual switch. Alternatively, you can opt for multiple virtual switches, each with a separate
network. The decision partly depends on the layout of your physical networks.
For example, you might not have enough network adapters to create a separate virtual switch
for each network. Instead, you might place your network adapters in a single virtual switch and
isolate the networks by using VLANs.
Because physical NICs are assigned at the virtual switch level, all ports and port groups that are
defined for a particular switch share the same hardware.
Slide 6
VLANs provide for logical groupings of switch ports. All virtual machines or ports in a VLAN
communicate as if they are on the same physical LAN segment. A VLAN is a software-
configured broadcast domain. Using a VLAN provides the following benefits:
• Creation of logical networks that are not based on the physical topology,
• Improved performance by confining broadcast traffic to a subset of ports on a switch,
• And Cost savings by partitioning the network without the overhead of deploying new
routers
VLANs can be configured at the port group level. The ESXi host provides VLAN support
through virtual switch tagging, which you enable by giving a port group a VLAN ID. Assigning a VLAN ID is optional. The VMkernel takes care of all tagging and untagging as the packets pass through the virtual switch.
The port on a physical switch to which an ESXi host is connected must be defined as a static
trunk port. A trunk port is a port on a physical Ethernet switch that is configured to send and
receive packets tagged with a VLAN ID. No VLAN configuration is required in the VM. In
fact, the VM does not know that it is connected to a VLAN.
For more information about how VLANs are implemented, see VMware knowledge base article
1003806 at kb.vmware.com/kb/1003806.
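To illustrate virtual switch tagging, here is a minimal Python sketch; the names and structures are assumptions for illustration, not vSphere internals. The VMkernel applies the port group's VLAN ID to frames on egress to the trunked physical port and strips it on ingress, so the VM never sees the tag:

    # Conceptual sketch of virtual switch tagging (not vSphere internals).
    def egress(frame, port_group_vlan):
        if port_group_vlan:                       # port group has a VLAN ID
            return {"vlan": port_group_vlan, "payload": frame}
        return frame                              # untagged traffic

    def ingress(frame, port_group_vlan):
        if isinstance(frame, dict) and frame.get("vlan") == port_group_vlan:
            return frame["payload"]               # tag stripped before delivery to the VM
        return None                               # different VLAN: not delivered

    tagged = egress("arp-request", 100)
    print(ingress(tagged, 100))                   # arp-request
    print(ingress(tagged, 200))                   # None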
Slide 7
A virtual network supports standard and distributed switches. Both switch types are elastic:
Ports are created and removed automatically.
A standard switch is a virtual switch that is configured for a single host.
A distributed switch is a virtual switch that is configured for an entire data center.
• Up to 2,000 hosts can be attached to the same distributed switch.
• The configuration is consistent across all attached hosts.
• And Hosts must either have an Enterprise Plus license or belong to a vSAN cluster.
Slide 8
You can add new standard switches to an ESXi host or configure existing ones using the
vSphere Client or VMware Host Client. You will learn this in detail in the associated lab.
Slide 9
The slide shows the standard switch vSwitch0 on the sa-esxi-01.vclass.local ESXi host. By
default, the ESXi installation creates a virtual machine port group named VM Network and a
VMkernel port named Management Network. You can create additional port groups such as the
Production port group, which you can use for the production virtual machine network.
For performance and security, you should remove the VM Network virtual machine port group
and keep VM networks and management networks separated.
Slide 10
You can change the connection speed and duplex of a physical adapter to transfer data in
compliance with the traffic rate. If the physical adapter supports SR-IOV, you can enable it and
configure the number of virtual functions to use for virtual machine networking.
Although the speed and duplex settings are configurable, the best practice is to leave the
settings at autonegotiate.
Slide 11
vCenter Server owns the configuration of the distributed switch. The configuration is consistent
across all hosts that use the distributed switch.
Slide 12
As you can see here, standard and distributed switches have several features in common.
Slide 13
Distributed switches include several features that are not part of standard switches.
During a vSphere vMotion migration, a distributed switch tracks the virtual networking state
(for example, counters and port statistics) as the virtual machine moves from host to host. The
tracking provides a consistent view of a virtual network interface, regardless of the virtual
machine location or vSphere vMotion migration history. Tracking simplifies network
monitoring and troubleshooting activities where vSphere vMotion is used to migrate virtual
machines between hosts.
Slide 14
You should now be able to meet the following objectives:
• Identify virtual switch connection types
• Configure and view standard switch configurations
• And Distinguish between the features of standard and distributed switches
This is the end of the Lesson 1 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 2: Configuring Standard Switch Policies!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Explain how to set the security policies for a standard switch port group
• Explain how to set the traffic shaping policies for a standard switch port group
• And Explain how to set the NIC teaming and failover policies for a standard switch port
group
Slide 3
Networking security policy provides protection against MAC address impersonation and
unwanted port scanning.
Traffic shaping is useful when you want to limit the amount of traffic to a VM or a group of
VMs.
Use the teaming and failover policy to determine the following information:
• How the network traffic of VMs and VMkernel adapters that are connected to the
switch is distributed between physical adapters
• And How the traffic should be rerouted if an adapter fails. These policies can be set at the standard switch level or at the port group level.
Slide 4
The network security policy contains the following exceptions:
• Promiscuous mode: Promiscuous mode allows a virtual switch or port group to forward all traffic regardless of destination. The default is Reject.
• MAC address changes: The default is Reject. If this option is set to Reject and the guest
attempts to change the MAC address assigned to the virtual NIC, it stops receiving
frames.
• And Forged transmits: A frame’s source address field might be altered by the guest and
contain a MAC address other than the assigned virtual NIC MAC address. You can set
the Forged Transmits parameter to accept or reject such frames. The default is Reject.
In vSphere 7, these security settings are set to Reject by default.
In general, these policies give you the option of disallowing certain behaviors that might
compromise security. For example, a hacker might use a promiscuous mode device to capture
network traffic for unscrupulous activities. Or, someone might impersonate a node and gain
unauthorized access by spoofing its MAC address.
Set Promiscuous mode to Accept to use an application in a VM that analyzes or sniffs packets,
such as a network-based intrusion detection system.
Keep the MAC address changes and Forged transmits set to Reject to help protect against
attacks launched by a rogue guest operating system.
Set MAC address changes and Forged transmits to Accept if your applications change the
mapped MAC address, as do some guest operating system-based firewalls.
Slide 5
A virtual machine’s network bandwidth can be controlled by enabling the network traffic
shaper. The network traffic shaper, when used on a standard switch, shapes only outbound
network traffic. To control inbound traffic, use a load-balancing system or turn on rate-limiting
features on your physical router.
Slide 6
The ESXi host shapes only outbound traffic by establishing parameters for the following traffic
characteristics:
• Average bandwidth (in Kbps): Establishes the number of kilobits per second to allow across a port, averaged over time. The average bandwidth is the allowed average load.
• Peak bandwidth (in Kbps): The maximum number of kilobits per second to allow across a port when it is sending a burst of traffic. This number caps the bandwidth that a port uses whenever the port is using the burst bonus that is configured with the Burst size parameter.
• And Burst size (in KB): The maximum number of kilobytes to allow in a burst. If this parameter is set, a port might gain a burst bonus if it does not use all its allocated bandwidth. Whenever the port needs more bandwidth than specified in the Average bandwidth field, the port might be allowed to temporarily transmit data at a faster speed if a burst bonus is available. This parameter caps the number of kilobytes that can accumulate in the burst bonus and so transfer at a faster speed.
Network traffic shaping is off by default. Although you can establish a traffic-shaping policy at
either the virtual switch level or the port group level, settings at the port group level override
settings at the virtual switch level.
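To make the interplay of these three parameters concrete, here is a minimal Python sketch of the arithmetic; the model and names are assumptions for illustration, not the ESXi implementation:

    # Illustrative traffic-shaping arithmetic (not the ESXi implementation).
    def allowed_kbits(avg_kbps, peak_kbps, burst_kb, interval_s, banked_kb):
        base = avg_kbps * interval_s                      # average allowance for the interval
        bonus = min(banked_kb, burst_kb) * 8              # burst bonus capped by Burst size (KB to Kb)
        return min(base + bonus, peak_kbps * interval_s)  # Peak bandwidth caps the burst rate

    # 10,000 Kbps average, 20,000 Kbps peak, 1,000 KB burst size, and
    # 500 KB of banked, unused allowance for a one-second interval:
    print(allowed_kbits(10000, 20000, 1000, 1, 500))      # 14000 Kbits allowed this second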
Slide 7
NIC teaming increases the network bandwidth of the switch and provides redundancy. To
determine how the traffic is rerouted when an adapter fails, you include physical NICs in a
failover order.
To determine how the virtual switch distributes the network traffic between the physical NICs
in a team, you select load-balancing algorithms depending on the needs and capabilities of your
environment:
• Load-balancing policy: This policy determines how network traffic is distributed between the network adapters in a NIC team. Virtual switches load balance only the outgoing traffic. Incoming traffic is controlled by the load-balancing policy on the physical switch.
• The failback policy: By default, a failback policy is enabled on a NIC team. If a failed physical NIC comes back online, the virtual switch sets the NIC back to active by replacing the standby NIC that took over its slot. If the physical NIC that stands first in the failover order experiences intermittent failures, the failback policy might lead to frequent changes in the NIC that is used. The physical switch sees frequent changes in MAC addresses, and the physical switch port might not accept traffic immediately when an adapter comes online. To minimize such delays, consider disabling failback or adjusting the related settings on the physical switch.
• And the Notify switches policy: With this policy, you can determine how the ESXi host
communicates failover events. When a physical NIC connects to the virtual switch or
when traffic is rerouted to a different physical NIC in the team, the virtual switch sends
notifications over the network to update the lookup tables on physical switches.
Notifying the physical switch offers the lowest latency when a failover or a migration
with vSphere vMotion occurs.
Default NIC teaming and failover policies are set for the entire standard switch. These default
settings can be overridden at the port group level. The policies show what is inherited from the
settings at the switch level.
Slide 8
To play the animation, go to this address:
https://vmware.bravais.com/s/7jEkuYyY0f7OxeWbmmvZ.
The load-balancing method that uses the originating virtual port ID is simple and fast and does
not require the VMkernel to examine the frame for the necessary information. The NIC is
determined by the ID of the virtual port to which the VM is connected. With this method, no
single-NIC VM gets more bandwidth than can be provided by a single physical adapter.
This method has advantages:
• Traffic is evenly distributed if the number of virtual NICs is greater than the number of
physical NICs in the team.
• Resource consumption is low because, in most cases, the virtual switch calculates
uplinks for the VM only once.
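A minimal Python sketch of this method follows; the modulo mapping is an illustrative assumption, not the exact VMkernel algorithm:

    # Route based on originating virtual port ID (illustrative only).
    def uplink_for_port(port_id, uplinks):
        return uplinks[port_id % len(uplinks)]    # computed once per port, so overhead is low

    uplinks = ["vmnic0", "vmnic1"]
    for port_id in range(4):                      # four VM ports spread across two uplinks
        print(port_id, uplink_for_port(port_id, uplinks))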
Slide 9
To play the animation, go to this address:
https://vmware.bravais.com/s/MmjsUVkaURaNJzMnsao2.
The load-balancing method based on source MAC hash has low overhead and is compatible
with all switches, but it might not spread traffic evenly across all the physical NICs. In addition,
no single-NIC virtual machine gets more bandwidth than a single physical adapter can provide.
This method has advantages:
• VMs use the same uplink because the MAC address is static. Powering a VM on or off
does not change the uplink that the VM uses.
• And No changes on the physical switch are required.
This method has disadvantages as well:
• The bandwidth that is available to a VM is limited to the speed of the uplink that is
associated with the relevant port ID, unless the VM uses multiple source MAC
addresses.
• Resource consumption is higher than with a route based on the originating virtual port
because the virtual switch calculates an uplink for every packet.
• And The virtual switch is not aware of the load of the uplinks, so uplinks might become
overloaded.
Slide 10
To play the animation, go to this address:
https://vmware.bravais.com/s/55sfU1JyzGzuBGWETPu9.
The IP-based method requires 802.3ad link aggregation support or EtherChannel on the switch.
The Link Aggregation Control Protocol is a method to control the bundling of several physical
ports to form a single logical channel. LACP is part of the IEEE 802.3ad specification.
EtherChannel is a port trunking technology that is used primarily on Cisco switches. With this
technology, you can group several physical Ethernet links to create one logical Ethernet link for
providing fault tolerance and high-speed links between switches, routers, and servers.
With this method, a single-NIC virtual machine might use the bandwidth of multiple physical
adapters.
The IP-based load-balancing method only affects outbound traffic. For example, a VM might
choose a particular NIC to communicate with a particular destination VM. The return traffic
might not arrive on the same NIC as the outbound traffic. The return traffic might arrive on
another NIC in the same NIC team.
This method has advantages:
• The load is more evenly distributed compared to the route based on the originating
virtual port and the route based on source MAC hash because the virtual switch
calculates the uplink for every packet.
• And VMs that communicate with multiple IP addresses have a potentially higher
throughput.
This method has disadvantages:
• Resource consumption is the highest compared to the other load-balancing algorithms.
• The virtual switch is not aware of the actual load of the uplinks.
• Changes on the physical network are required.
• And The method is complex to troubleshoot.
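The following minimal Python sketch contrasts the source MAC hash and IP-based methods just described; the CRC hashing is an illustrative stand-in, not the VMkernel algorithm:

    import zlib

    # Source MAC hash: stable per virtual NIC (illustrative only).
    def uplink_by_mac(src_mac, uplinks):
        return uplinks[zlib.crc32(src_mac.encode()) % len(uplinks)]

    # IP hash: computed per source/destination pair, so one VM can use
    # several uplinks when talking to several destinations.
    def uplink_by_ip_pair(src_ip, dst_ip, uplinks):
        return uplinks[zlib.crc32((src_ip + dst_ip).encode()) % len(uplinks)]

    uplinks = ["vmnic0", "vmnic1"]
    print(uplink_by_mac("00:50:56:aa:bb:01", uplinks))       # same uplink every time
    for dst in ("10.0.0.5", "10.0.0.6", "10.0.0.7"):
        print(dst, uplink_by_ip_pair("10.0.0.1", dst, uplinks))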
Slide 11
Monitoring the link status that is provided by the network adapter detects failures such as cable
pulls and physical switch power failures. This monitoring does not detect configuration errors,
such as a physical switch port being blocked by the Spanning Tree Protocol or misconfigured
VLAN membership. This method cannot detect upstream, nondirectly connected physical
switch or cable failures.
Beaconing introduces a 62-byte packet load approximately every 1 second per physical NIC.
When beaconing is activated, the VMkernel sends out and listens for probe packets on all NICs
that are configured as part of the team. This technique can detect failures that link-status
monitoring alone cannot. Consult your switch manufacturer to verify the support of beaconing
in your environment. For information on beacon probing, see VMware knowledge base article
1005577 at kb.vmware.com/kb/1005577.
A physical switch can be notified by the VMkernel whenever a virtual NIC is connected to a
virtual switch. A physical switch can also be notified whenever a failover event causes a virtual
NIC’s traffic to be routed over a different physical NIC. The notification is sent over the
network to update the lookup tables on physical switches. In most cases, this notification
process is beneficial because, without it, VMs experience greater latency after failovers and
vSphere vMotion operations.
Do not set this option when the VMs connected to the port group are running unicast-mode
Microsoft Network Load Balancing (or NLB). NLB in multicast mode is unaffected. For more
information about the NLB issue, see VMware knowledge base article 1556 at
kb.vmware.com/kb/1556.
When using explicit failover order, always use the highest order uplink from the list of active
adapters that pass failover-detection criteria. The failback option determines how a physical
adapter is returned to active duty after recovering from a failure:
• If Failback is set to Yes, the failed adapter is returned to active duty immediately on
recovery, displacing the standby adapter that took its place at the time of failure.
• If Failback is set to No, a failed adapter is left inactive even after recovery, until another
currently active adapter fails, requiring its replacement.
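The failback behavior can be modeled with a short Python sketch; the data layout and function are assumptions for illustration:

    # Illustrative failover-order selection with the Failback option.
    def pick_uplink(active_order, healthy, current, failback):
        if current and healthy.get(current) and not failback:
            return current                        # Failback=No: keep the standby in place
        for nic in active_order:                  # Failback=Yes: highest-order healthy adapter
            if healthy.get(nic):
                return nic
        return None

    healthy = {"vmnic0": True, "vmnic1": True}    # vmnic0 has just recovered
    print(pick_uplink(["vmnic0", "vmnic1"], healthy, "vmnic1", failback=True))   # vmnic0
    print(pick_uplink(["vmnic0", "vmnic1"], healthy, "vmnic1", failback=False))  # vmnic1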
Slide 12
Your virtual networking environment relies on the physical network infrastructure. As a
vSphere administrator, you should discuss your vSphere networking needs with your network
administration team.
The following issues are topics for discussion:
• Number of physical switches
• Network bandwidth required
Slide 13
You should now be able to meet the following objectives:
• Explain how to set the security policies for a standard switch port group
• Explain how to set the traffic shaping policies for a standard switch port group
• And Explain how to set the NIC teaming and failover policies for a standard switch port
group
Slide 14
As a Virtual Beans administrator, you have a few decisions to make about your network
infrastructure. As you plan your network, you consider these key takeaways about vSphere
networking:
• You must create port groups for the VLANs that you want to use in your vSphere
environment.
• You can use NIC teaming in the virtual switch to avoid a single point of failure.
• You can separate infrastructure service traffic from your application traffic by putting
each traffic type on its own VLAN. Segmenting traffic can improve performance and
enhance security by limiting network access to a specific traffic type.
• You should research the benefits of using distributed switches in your environment.
Distributed switches have additional features over standard switches.
Slide 15
Some key points from Module 5 are:
• Virtual switches can have the following connection types: VM port group, VMkernel
port, and physical uplinks.
• A standard switch is a virtual switch configuration for a single host.
• Network policies set at the standard switch level can be overridden at the port group
level.
• And A distributed switch provides centralized management and monitoring for the
networking configuration of all ESXi hosts that are associated with the switch.
Slide 16
This is the end of Module 5 and the Lesson 2 Lecture. The labs and assignments associated with this module are as follows:
• Lab 9: Using Standard Switches
• And the Module 5 Quiz: Configuring and Managing Virtual Networks
If you have any questions, please contact your Instructor. We will see you in the next Module
and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 1: Storage Concepts
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Recognize vSphere storage technologies
• And Identify types of datastores.
Slide 3
A datastore is a generic term for a container that holds files and objects. Datastores are logical
containers, analogous to file systems, that hide the specifics of each storage device and provide
a uniform model for storing virtual machine files. A VM is stored as a set of files in its own
directory or as a group of objects in a datastore. You can display all datastores that are available
to your hosts and analyze their properties.
Slide 4
Depending on the type of storage that you use, datastores can be formatted with VMFS or NFS.
In the vSphere environment, ESXi hosts support several storage technologies such as:
• Direct-attached storage: Internal or external storage disks or arrays attached to the host
through a direct connection instead of a network connection.
• Fibre Channel (or FC): which is a high-speed transport protocol used for SANs. Fibre
Channel encapsulates SCSI commands, which are transmitted between Fibre Channel
nodes. In general, a Fibre Channel node is a server, a storage system, or a tape drive. A
Fibre Channel switch interconnects multiple nodes, forming the fabric in a Fibre
Channel network.
• FCoE: Fibre Channel traffic is encapsulated into Fibre Channel over Ethernet (or FCoE) frames. These FCoE frames are converged with other types of traffic on the Ethernet network.
• iSCSI: which is a SCSI transport protocol, providing access to storage devices and
cabling over standard TCP/IP networks. iSCSI maps SCSI block-oriented storage over
TCP/IP. Initiators, such as an iSCSI host bus adapter (or HBA) in an ESXi host, send
SCSI commands to targets, located in iSCSI storage systems.
• NAS: which is storage shared over standard TCP/IP networks at the file system level.
NAS storage is used to hold NFS datastores. The NFS protocol does not support SCSI
commands.
• And iSCSI, network-attached storage (or NAS), and FCoE can run over high-speed
networks providing increased storage performance levels and ensuring sufficient
bandwidth. With sufficient bandwidth, multiple types of high-bandwidth protocol traffic
can coexist on the same network.
For more information about physical NIC support and maximum ports supported, see VMware
Configuration Maximums at configmax.vmware.com.
Slide 5
Note that Direct-attached storage (or DAS) supports vSphere vMotion when combined with
vSphere Storage vMotion.
Direct-attached storage, as opposed to SAN storage, is where many administrators install ESXi.
Direct-attached storage is also ideal for small environments because of the cost savings
associated with purchasing and managing a SAN. The drawback is that you lose many of the
features that make virtualization a worthwhile investment, for example, balancing workloads across ESXi hosts. Direct-attached storage can also be used to store noncritical data such
as:
• CD/DVD ISO images
• Decommissioned VMs
• And VM templates
In comparison, storage LUNs must be pooled and shared so that all ESXi hosts can access
them. Shared storage provides the following vSphere features:
• vSphere vMotion
• vSphere HA
• and vSphere DRS
Using shared SAN storage also provides robust features in vSphere such as:
• Central repositories for VM files and templates
• Clustering of VMs across ESXi hosts
• And Allocation of large amounts (terabytes) of storage to your ESXi hosts
ESXi supports different methods of booting from the SAN, which avoids the maintenance of additional direct-attached storage and accommodates diskless hardware configurations, such as blade systems. If you set up your host to boot from a SAN, your host’s boot image is stored on
one or more LUNs in the SAN storage system. When the host starts, it boots from the LUN on
the SAN rather than from its direct-attached disk.
For ESXi hosts, you can boot from software iSCSI, a supported independent hardware iSCSI adapter, or a supported dependent hardware iSCSI adapter. The network adapter must support the iSCSI Boot Firmware Table (or iBFT) format, which is a method of communicating parameters about the iSCSI boot device to an operating system.
Slide 6
VMFS is a clustered file system where multiple ESXi hosts can read and write to the same
storage device simultaneously. The clustered file system provides unique, virtualization-based
services such as:
• Migration of running VMs from one ESXi host to another without downtime
• Automatic restarting of a failed VM on a separate ESXi host
• And Clustering of VMs across various physical servers
Using VMFS, IT organizations can simplify VM provisioning by efficiently storing the entire
VM state in a central location. Multiple ESXi hosts can access shared VM storage concurrently.
The size of a VMFS datastore can be increased dynamically when VMs residing on the VMFS
datastore are powered on and running. A VMFS datastore efficiently stores both large and small
files belonging to a VM. A VMFS datastore supports virtual disk files with a maximum size of 62 TB and uses subblock addressing to make efficient use of storage for small files.
VMFS provides block-level distributed locking to ensure that the same VM is not powered on
by multiple servers at the same time. If an ESXi host fails, the on-disk lock for each VM is
released and VMs can be restarted on other ESXi hosts.
On the slide, each ESXi host has two VMs running on it. The lines connecting the VMs to the
VM disks (or VMDKs) are logical representations of the association and allocation of the larger
VMFS datastore. The VMFS datastore includes one or more LUNs. The VMs see the assigned
storage volume only as a SCSI target from within the guest operating system. The VM contents
are only files on the VMFS volume.
VMFS can be deployed on three kinds of SCSI-based storage devices:
• Direct-attached storage
• Fibre Channel storage
• And iSCSI storage
A virtual disk stored on a VMFS datastore always appears to the VM as a mounted SCSI
device. The virtual disk hides the physical storage layer from the VM's operating system.
For the operating system in the VM, VMFS preserves the internal file system semantics. As a
result, the operating system running in the VM sees a native file system, not VMFS. These
semantics ensure correct behavior and data integrity for applications running on the VMs.
Slide 7
NAS is a specialized storage device that connects to a network and can provide file access
services to ESXi hosts.
NFS datastores are treated like VMFS datastores because they can hold VM files, templates,
and ISO images. In addition, like a VMFS datastore, an NFS volume allows the vSphere
vMotion migration of VMs whose files reside on an NFS datastore. The NFS client built in to
ESXi uses NFS protocol versions 3 and 4.1 to communicate with the NAS or NFS servers.
ESXi hosts do not use the Network Lock Manager protocol, which is a standard protocol that is
used to support the file locking of NFS-mounted files. VMware has its own locking protocol.
NFS 3 locks are implemented by creating lock files on the NFS server. NFS 4.1 uses server-
side file locking.
Because NFS 3 and NFS 4.1 clients do not use the same locking protocol, you cannot use
different NFS versions to mount the same datastore on multiple hosts. Accessing the same
virtual disks from two incompatible clients might result in incorrect behavior and cause data
corruption.
Slide 8
When vSAN is enabled on a cluster, a single vSAN datastore is created. This datastore uses the
storage components of each host in the cluster.
vSAN can be configured as hybrid or all-flash storage.
In a hybrid storage architecture, vSAN pools server-attached HDDs and SSDs to create a
distributed shared datastore. This datastore abstracts the storage hardware to provide a
software-defined storage tier for VMs.
Flash is used as a read cache/write buffer to accelerate performance, and magnetic disks
provide capacity and persistent data storage.
Alternatively, vSAN can be deployed as an all-flash storage architecture in which flash devices
are used as a write cache. SSDs provide capacity, data persistence, and consistent, fast response
times. In the all-flash architecture, the tiering of SSDs results in a cost-effective
implementation: a write-intensive, enterprise-grade SSD cache tier and a read-intensive, lower-
cost SSD capacity tier.
Slide 9
vSphere Virtual Volumes virtualizes SAN and NAS devices by abstracting physical hardware
resources into logical pools of capacity.
vSphere Virtual Volumes provides the following benefits:
• Lower storage cost,
• Reduced storage management overhead,
• Greater scalability,
• and better response to data access and analytical requirements.
Slide 10
Raw device mapping (or RDM) is a file stored in a VMFS volume that acts as a proxy for a raw
physical device.
Instead of storing VM data in a virtual disk file that is stored on a VMFS datastore, you can
store the guest operating system data directly on a raw LUN. Storing the data is useful if you
run applications in your VMs that must know the physical characteristics of the storage device.
By mapping a raw LUN, you can use existing SAN commands to manage storage for the disk.
Use RDM when a VM must interact with a real disk on the SAN. This condition occurs when
you make disk array snapshots or have a large amount of data that you do not want to move
onto a virtual disk as a part of a physical-to-virtual conversion.
Slide 11
For information to help you plan for your storage needs, see vSphere Storage at this address:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-
8AE88758-20C1-4873-99C7-181EF9ACFA70.html.
Another good source of information is the vSphere Storage page at storagehub.vmware.com.
Slide 12
You should now be able to meet the following objectives:
• Recognize vSphere storage technologies,
• And Identify types of datastores
This is the end of the Lesson 1 Lecture. If you have any questions, please contact your
Instructor. We will see you next time, and thanks for watching.
Slide 1
Welcome back! Let’s get started with Lesson 2: Fibre Channel Storage!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe uses of Fibre Channel with ESXi
• Identify Fibre Channel components and addressing
• And Explain how multipathing with Fibre Channel works
Slide 3
To connect to the Fibre Channel SAN, your host should be equipped with Fibre Channel host
bus adapters (or HBAs).
Unless you use Fibre Channel direct connect storage, you need Fibre Channel switches to route
storage traffic. If your host contains FCoE adapters, you can connect to your shared Fibre
Channel devices by using an Ethernet network.
In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches
and storage arrays, using a Fibre Channel adapter. LUNs from a storage array become available
to the host. You can access the LUNs and create datastores for your storage needs. These
datastores use the VMFS format.
Alternatively, you can access a storage array that supports vSphere Virtual Volumes and create
vSphere Virtual Volumes datastores on the array’s storage containers.
Slide 4
Each SAN server might host numerous applications that require dedicated storage for application processing.
The following components are involved:
• SAN switches: SAN switches connect various elements of the SAN. SAN switches
might connect hosts to storage arrays. Using SAN switches, you can set up path
redundancy to address any path failures from host server to switch, or from storage
array to switch.
• Next we have Fabric: The SAN fabric is the network portion of the SAN. When one or
more SAN switches are connected, a fabric is created. The Fibre Channel (or FC)
protocol is used to communicate over the entire network. A SAN can consist of multiple
interconnected fabrics. Even a simple SAN often consists of two fabrics for redundancy.
• And Connections (such as HBAs and storage processors): Host servers and storage
systems are connected to the SAN fabric through ports in the fabric:
o A host connects to a fabric port through an HBA.
o And storage devices connect to the fabric ports through their storage processors.
Slide 5
A port is the connection from a device into the SAN. Each node in the SAN, such as a host, a storage device, or a fabric component (a router or switch), has one or more ports that connect it to the SAN. Ports can be identified in the following ways:
• With World Wide Port Name (or WWPN): A globally unique identifier for a port that
allows certain applications to access the port. The Fibre Channel switches discover the
WWPN of a device or host and assign a port address to the device.
• And with Port_ID: Within the SAN, each port has a unique port ID that serves as the Fibre
Channel address for that port. The Fibre Channel switches assign the port ID when the
device logs in to the fabric. The port ID is valid only while the device is logged on.
You can use zoning and LUN masking to segregate SAN activity and restrict access to storage
devices.
You can protect access to storage in your vSphere environment by using zoning and LUN
masking with your SAN resources. For example, you might manage zones defined for testing
independently within the SAN so that they do not interfere with activity in the production
zones. Similarly, you might set up different zones for different departments. When you set up
zones, consider host groups that are set up on the SAN device.
Zoning and masking capabilities for each SAN switch and disk array, and the tools for
managing LUN masking, are vendor-specific. See your SAN vendor’s documentation and
vSphere Storage at this address: https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-8AE88758-20C1-4873-99C7-
181EF9ACFA70.html.
Slide 6
A Fibre Channel path describes a route:
• From a specific HBA port in the host
• Through the switches in the fabric
• and into a specific storage port on the storage array
By default, ESXi hosts use only one path from a host to a given LUN at any one time. If the
path actively being used by the ESXi host fails, the server selects another available path.
The process of detecting a failed path and switching to another is called path failover. A path
fails if any of the components along the path (HBA, cable, switch port, or storage processor)
fail.
Distinguishing between active-active and active-passive disk arrays can be useful:
• An active-active disk array allows access to the LUNs simultaneously through the
available storage processors without significant performance degradation. All the paths
are active at all times (unless a path fails).
• In an active-passive disk array, one storage processor is actively servicing a given LUN.
The other storage processor acts as a backup for the LUN and might be actively
servicing other LUN I/O.
I/O can be sent only to an active processor. If the primary storage processor fails, one of the
secondary storage processors becomes active, either automatically or through administrative
intervention.
Slide 7
The Fibre Channel traffic is encapsulated into FCoE frames. These FCoE frames are converged
with other types of traffic on the Ethernet network.
When both Ethernet and Fibre Channel traffic are carried on the same Ethernet link, use of the
physical infrastructure increases. FCoE also reduces the total number of network ports and
cabling.
Slide 8
Step 1 of configuring your software FCoE is to connect the VMkernel to the physical FCoE NICs that are installed on your host.
Note that:
• The VLAN ID and the priority class are discovered during FCoE initialization. The
priority class is not configured in vSphere.
• And that ESXi supports a maximum of four network adapter ports for software FCoE.
Slide 9
In step 2 you add the software FCoE adapter by selecting the host, clicking the Configure tab,
selecting Storage Adapters, and clicking Add Software Adapter.
Slide 10
You should now be able to meet the following objectives:
• Describe uses of Fibre Channel with ESXi,
• Identify Fibre Channel components and addressing,
• And Explain how multipathing with Fibre Channel works
This is the end of the Lesson 2 Lecture. If you have any questions, please contact your
Instructor. We will see you in Lesson 3 and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 3: iSCSI Storage!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Identify uses of IP storage with ESXi
• Describe iSCSI components and addressing
• Configure iSCSI initiators
• And Recognize storage device naming conventions
Slide 3
An iSCSI SAN consists of an iSCSI storage system, which contains one or more LUNs and one
or more storage processors. Communication between the host and the storage array occurs over
a TCP/IP network. The ESXi host is configured with an iSCSI initiator. An initiator can be
hardware-based, where the initiator is an iSCSI HBA. Or the initiator can be software-based,
known as the iSCSI software initiator.
An initiator transmits SCSI commands over the IP network. A target receives SCSI commands
from the IP network. Your iSCSI network can include multiple initiators and targets. iSCSI is
SAN-oriented for the following reasons:
• The initiator finds one or more targets.
• A target presents LUNs to the initiator.
• And The initiator sends SCSI commands to a target.
An initiator resides in the ESXi host. Targets reside in the storage arrays that are supported by
the ESXi host. To restrict access to targets from hosts, iSCSI arrays can use various
mechanisms, including IP address, subnets, and authentication requirements.
Slide 4
The main addressable, discoverable entity is an iSCSI node. An iSCSI node can be an initiator
or a target. An iSCSI node requires a name so that storage can be managed regardless of
address.
The iSCSI name can use one of the following formats: The iSCSI qualified name (or IQN) or
the extended unique identifier (or EUI).
The IQN can be up to 255 characters long. Several naming conventions are used:
• A Prefix of iqn
• A Date code specifying the year and month in which the organization registered the
domain or subdomain name that is used as the naming authority string
• An Organizational naming authority string, which consists of a valid, reversed domain
or subdomain name
• (Optionally) A Colon (:), followed by a string of the assigning organization’s choosing,
which must make each assigned iSCSI name unique
EUI naming conventions are as follows:
• the Prefix is eui.
• And a 16-character name follows the prefix.
The name includes 24 bits for a company name that is assigned by the IEEE and 40 bits for a
unique ID, such as a serial number.
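The IQN conventions can be checked with a short Python sketch; the regular expression is an illustrative approximation of the rules above, not a complete iSCSI name parser:

    import re

    # Approximate IQN check: prefix, yyyy-mm date code, reversed domain,
    # and an optional colon-delimited unique string (illustrative only).
    IQN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:.+)?$")

    def is_valid_iqn(name):
        return len(name) <= 255 and bool(IQN.match(name))

    print(is_valid_iqn("iqn.1998-01.com.vmware:sa-esxi-01"))  # True
    print(is_valid_iqn("eui.0123456789abcdef"))               # False: EUI format, not IQN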
Slide 5
On ESXi hosts, SCSI storage devices use various identifiers. Each identifier serves a specific
purpose. For example, the VMkernel requires an identifier, generated by the storage device,
which is guaranteed to be unique to each LUN. If the storage device cannot provide a unique
identifier, the VMkernel must generate a unique identifier to represent each LUN or disk.
The following SCSI storage device identifiers are available:
• The Runtime name: Which is the name of the first path to the device. The runtime name
is a user-friendly name that is created by the host after each reboot. It is not a reliable
identifier for the disk device because it is not persistent. The runtime name might
change if you add HBAs to the ESXi host. However, you can use this name when you
use command-line utilities to interact with storage that an ESXi host recognizes.
• And the iSCSI name: Which is a worldwide unique name for identifying the node.
iSCSI uses the IQN and EUI. IQN uses the format iqn.yyyy-mm.naming-
authority:unique name. Storage device names appear in various panels in the vSphere
Client.
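As an example of working with the runtime name, which follows the pattern vmhbaN:C(channel):T(target):L(LUN), here is a minimal Python sketch; the parsing code is illustrative only:

    # Parse a runtime name such as vmhba1:C0:T0:L4 (illustrative only).
    def parse_runtime_name(name):
        adapter, channel, target, lun = name.split(":")
        return {"adapter": adapter,
                "channel": int(channel[1:]),
                "target": int(target[1:]),
                "lun": int(lun[1:])}

    print(parse_runtime_name("vmhba1:C0:T0:L4"))
    # {'adapter': 'vmhba1', 'channel': 0, 'target': 0, 'lun': 4}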
Slide 6
The iSCSI initiators transport SCSI requests and responses, encapsulated in the iSCSI protocol,
between the host and the iSCSI target. Your host supports two types of initiators: software
iSCSI and hardware iSCSI.
A software iSCSI initiator is VMware code built in to the VMkernel. Using the initiator, your
host can connect to the iSCSI storage device through standard network adapters. The software
iSCSI initiator handles iSCSI processing while communicating with the network adapter. With
the software iSCSI initiator, you can use iSCSI technology without purchasing specialized
hardware.
A hardware iSCSI initiator is a specialized third-party adapter capable of accessing iSCSI
storage over TCP/IP. Hardware iSCSI initiators are divided into two categories: dependent
hardware iSCSI and independent hardware iSCSI.
A dependent hardware iSCSI initiator, also known as an iSCSI host bus adapter, is a standard
network adapter that includes the iSCSI offload function. To use this type of adapter, you must
configure networking for the iSCSI traffic and bind the adapter to an appropriate VMkernel
iSCSI port.
An independent hardware iSCSI adapter handles all iSCSI and network processing and
management for your ESXi host. In this case, a VMkernel iSCSI port is not required. For
configuration information, see vSphere Storage at this address:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-
8AE88758-20C1-4873-99C7-181EF9ACFA70.html.
Slide 7
Networking configuration for software iSCSI involves creating a VMkernel port on a virtual
switch to handle your iSCSI traffic. Depending on the number of physical adapters that you
want to use for the iSCSI traffic, the networking setup can be different:
• If you have one physical network adapter, you need a VMkernel port on a virtual
switch.
• Or If you have two or more physical network adapters for iSCSI, you can use these
adapters for host-based multipathing.
For performance and security, isolate your iSCSI network from other networks by physically separating the networks. If physically separating the networks is impossible, logically separate the networks from one another on a single virtual switch by configuring a separate VLAN for each network.
Slide 8
You must activate your software iSCSI adapter so that your host can use it to access iSCSI
storage. You can activate only one software iSCSI adapter.
NOTE:
If you boot from iSCSI using the software iSCSI adapter, the adapter is enabled, and the
network configuration is created at the first boot. If you disable the adapter, it is reenabled each
time you boot the host.
Slide 9
The ESXi host supports the following iSCSI target-discovery methods:
• Static discovery: The initiator does not have to perform discovery. The initiator knows
in advance all the targets that it will contact. It uses their IP addresses and domain
names to communicate with them.
• And Dynamic discovery or SendTargets discovery: Each time the initiator contacts a
specified iSCSI server, it sends the SendTargets request to the server. The server
responds by supplying a list of available targets to the initiator.
The names and IP addresses of these targets appear as static targets in the vSphere Client. You
can remove a static target that is added by dynamic discovery. If you remove the target, the
target might be returned to the list during the next rescan operation. The target might also be
returned to the list if the HBA is reset or the host is rebooted.
Slide 10
You can implement CHAP to provide authentication between iSCSI initiators and targets.
ESXi supports the following CHAP authentication methods:
• Unidirectional or one-way CHAP: The target authenticates the initiator, but the initiator
does not authenticate the target. You must specify the CHAP secret so that your
initiators can access the target.
• And Bidirectional or mutual CHAP: With an extra level of security, the initiator can
authenticate the target. You must specify different target and initiator secrets.
CHAP uses a three-way handshake algorithm to verify the identity of your host and, if
applicable, of the iSCSI target when the host and target establish a connection. The verification
is based on a predefined private value, or CHAP secret, that the initiator and target share.
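Here is a minimal Python sketch of that handshake, following the hash construction in RFC 1994 (identifier, then secret, then challenge); the values are made up for illustration:

    import hashlib, os

    # CHAP response per RFC 1994: MD5 over identifier + secret + challenge.
    def chap_response(identifier, secret, challenge):
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    secret = b"shared-chap-secret"                  # known to initiator and target
    ident, challenge = 1, os.urandom(16)            # 1) target sends ID and challenge
    response = chap_response(ident, secret, challenge)        # 2) initiator replies
    ok = response == chap_response(ident, secret, challenge)  # 3) target verifies
    print("authenticated:", ok)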
ESXi implements CHAP as defined in RFC 1994. ESXi supports CHAP authentication at the
adapter level. All targets receive the same CHAP secret from the iSCSI initiator. For both
software iSCSI and dependent hardware iSCSI initiators, ESXi also supports per-target CHAP
authentication.
Before configuring CHAP, check whether CHAP is enabled at the iSCSI storage system and
check the CHAP authentication method that the system supports. If CHAP is enabled, you must
enable it for your initiators, verifying that the CHAP authentication credentials match the
credentials on the iSCSI storage.
Using CHAP in your iSCSI SAN implementation is recommended, but consult with your
storage vendor to ensure that best practices are followed.
You can protect your data in additional ways. For example, you might protect your iSCSI SAN
by giving it a dedicated standard switch. You might also configure the iSCSI SAN on its own
VLAN to improve performance and security. Some inline network devices might be
implemented to provide encryption and further data protection.
Slide 11
When setting up your ESXi host for multipathing and failover, you can use multiple hardware
iSCSI adapters or multiple NICs. The choice depends on the type of iSCSI initiators on your
host.
With software iSCSI and dependent hardware iSCSI, you can use multiple NICs that provide
failover for iSCSI connections between your host and iSCSI storage systems.
With independent hardware iSCSI, the host typically has two or more available hardware iSCSI
adapters, from which the storage system can be reached by using one or more switches.
Alternatively, the setup might include one adapter and two storage processors so that the
adapter can use a different path to reach the storage system.
After iSCSI multipathing is set up, each port on the ESXi system has its own IP address, but the
ports share the same iSCSI initiator IQN. When iSCSI multipathing is configured, the
VMkernel routing table is not consulted for identifying the outbound NIC to use. Instead, iSCSI
multipathing is managed using vSphere multipathing modules. Because of the latency that can
be incurred, routing iSCSI traffic is not recommended.
Slide 12
With software iSCSI and dependent hardware iSCSI, multipathing plug-ins do not have direct
access to physical NICs on your host. For this reason, you must first connect each physical NIC
to a separate VMkernel port. Then you use a port-binding technique to associate all VMkernel
ports with the iSCSI initiator. For dependent hardware iSCSI, you must correctly install the
physical network card, which should appear on the host's Configure tab in the Virtual Switches
view.
Slide 13
You should now be able to meet the following objectives:
• Identify uses of IP storage with ESXi
• Describe iSCSI components and addressing
• Configure iSCSI initiators
• And Recognize storage device naming conventions
This is the end of the Lesson 3 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 4: VMFS Datastores!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Create a VMFS datastore
• Increase the size of a VMFS datastore
• And Delete a VMFS datastore
Slide 3
You can create VMFS datastores on any SCSI-based storage devices that the host discovers,
including Fibre Channel, iSCSI, and local storage devices.
Slide 4
The Datastores pane lists all datastores currently configured for all managed ESXi hosts.
The example shows the contents of the VMFS datastore named Class-Datastore. The contents
of the datastore are folders that contain the files for virtual machines or templates.
Slide 5
A VMFS datastore primarily serves as a repository for VM files.
This type of datastore is optimized for storing and accessing large files, such as virtual disks
and memory images of suspended VMs.
A VMFS datastore can have a maximum volume size of 64 TB.
Slide 6
Using thin-provisioned virtual disks for your VMs is a way to make the most of your datastore
capacity. But if your datastore is not sized properly, it can become overcommitted. A datastore
becomes overcommitted when the full capacity of its thin-provisioned virtual disks is greater
than the datastore’s capacity.
When a datastore runs out of space, the vSphere Client prompts you to provide more space on the underlying VMFS datastore, and all VM I/O is paused.
Monitor your datastore capacity by setting alarms that report how much of a datastore’s thin-provisioned disk space is allocated and how much disk space a VM is using.
Manage your datastore capacity by dynamically increasing the size of your datastore when
necessary. You can also use vSphere Storage vMotion to mitigate space use issues.
For example, with vSphere Storage vMotion, you can migrate a VM off a datastore. The
migration can be done by changing from virtual disks of thick format to thin format at the target
datastore.
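The overcommitment arithmetic is straightforward, as this minimal Python sketch shows (the numbers are made up for illustration):

    # A datastore is overcommitted when the total provisioned size of its
    # thin-provisioned virtual disks exceeds the datastore's capacity.
    def overcommit_ratio(capacity_gb, provisioned_gb):
        return sum(provisioned_gb) / capacity_gb

    ratio = overcommit_ratio(500, [200, 200, 300])  # 700 GB promised on 500 GB
    print(f"{ratio:.2f}x provisioned")              # 1.40x provisioned: overcommitted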
Slide 7
Increasing a VMFS datastore’s size gives it more space and can possibly improve performance.
In general, before changing your storage allocation, perform a rescan to ensure that all hosts see the most current storage, and record the unique identifier of the volume that you want to expand.
An example of the unique identifier of a volume is the NAA ID. You require this information to
identify the VMFS datastore that must be increased.
You can dynamically increase the capacity of a VMFS datastore if the datastore has insufficient
disk space. You discover whether insufficient disk space is an issue when you create a VM or
you try to add more disk space to a VM.
To increase capacity, you can use one of the following methods:
• Add an extent to the VMFS datastore: An extent is a partition on a LUN. You can add
an extent to any VMFS datastore. The datastore can stretch over multiple extents, up to
32.
• Or expand the VMFS datastore: You can expand the size of the VMFS datastore by
expanding its underlying extent first.
Slide 8
Before taking a datastore out of service, place the datastore in maintenance mode.
By selecting the "Let me migrate storage for all virtual machines and continue entering
maintenance mode after migration." check box, all VMs and templates on the datastore are
automatically migrated to the datastore of your choice. The datastore enters maintenance mode
after all VMs and templates are moved off the datastore.
Datastore maintenance mode is a function of the vSphere Storage DRS feature, but you can use
maintenance mode without enabling vSphere Storage DRS. For more information on vSphere
Storage DRS, see vSphere Resource Management at this address:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-
98BD5A8A-260A-494F-BAAE-74781F5C4B87.html.
Slide 9
Unmounting a VMFS datastore preserves the files on the datastore but makes the datastore
inaccessible to the ESXi host.
Do not perform any configuration operations that might result in I/O to the datastore while the
unmounting is in progress.
You can delete any type of VMFS datastore, including copies that you mounted without
resignaturing. Although you can delete the datastore without unmounting, you should unmount
the datastore first. Deleting a VMFS datastore destroys the pointers to the files on the datastore,
so the files disappear from all hosts that have access to the datastore.
Before you delete or unmount a VMFS datastore, power off all VMs whose disks reside on the
datastore. If you do not power off the VMs and you try to continue, an error message tells you
that the resource is busy. Before you unmount a VMFS datastore, use the vSphere Client to
verify the following conditions:
• No virtual machines reside on the datastore.
• The datastore is not part of a datastore cluster.
• The datastore is not managed by vSphere Storage DRS.
• vSphere Storage I/O Control is disabled.
• And that the datastore is not used for vSphere HA heartbeat.
To keep your data, back up the contents of your VMFS datastore before you delete the
datastore.
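A minimal Python sketch of a precondition check follows; the field names are assumptions for illustration, not a vSphere API:

    # Verify the unmount preconditions listed above (illustrative only).
    def safe_to_unmount(ds):
        return all([not ds["has_vms"],
                    not ds["in_datastore_cluster"],
                    not ds["managed_by_storage_drs"],
                    not ds["storage_io_control_enabled"],
                    not ds["used_for_ha_heartbeat"]])

    ds = {"has_vms": False, "in_datastore_cluster": False,
          "managed_by_storage_drs": False, "storage_io_control_enabled": False,
          "used_for_ha_heartbeat": True}
    print(safe_to_unmount(ds))  # False: still used for vSphere HA heartbeating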
Slide 10
The Pluggable Storage Architecture is a VMkernel layer responsible for managing multiple
storage paths and providing load balancing. An ESXi host can be attached to storage arrays
with either active-active or active-passive storage processor configurations.
VMware offers native load-balancing and failover mechanisms. VMware path selection policies
include the following examples:
• Round Robin,
• Most Recently Used (or MRU),
• and Fixed.
Third-party vendors can design their own load-balancing techniques and failover mechanisms
for particular storage array types to add support for new arrays. Third-party vendors do not need
to provide internal information or intellectual property about the array to VMware.
Slide 11
Multiple paths from an ESXi host to a datastore are possible.
For multipathing with Fibre Channel or iSCSI, the following path selection policies are
supported:
• First is Fixed, which is where the host always uses the preferred path to the disk when
that path is available. If the host cannot access the disk through the preferred path, it
tries the alternative paths. This policy is the default policy for active-active storage
devices.
• Next is Most Recently Used, which is where the host selects the first working path
discovered at system boot time. When the path becomes unavailable, the host selects an
alternative path. The host does not revert to the original path when that path becomes
available. The Most Recently Used policy does not use the preferred path setting. This
policy is the default policy for active-passive storage devices and is required for those
devices.
• And lastly, we have the Round Robin path selection policy, which is where the host uses
a path selection algorithm that rotates through all available paths. In addition to path
failover, the Round Robin multipathing policy supports load balancing across the paths.
Before using this policy, check with storage vendors to find out whether a Round Robin
configuration is supported on their storage.
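To summarize the behavior of the three policies, here is a minimal Python sketch; the logic is simplified for illustration and is not the Pluggable Storage Architecture implementation:

    import itertools

    # Fixed: always revert to the preferred path when it is available.
    def fixed(paths, preferred):
        if paths[preferred]:
            return preferred
        return next((p for p, up in paths.items() if up), None)

    # Most Recently Used: keep the current path; never revert on recovery.
    def most_recently_used(paths, current):
        if paths[current]:
            return current
        return next((p for p, up in paths.items() if up), None)

    paths = {"vmhba1:C0:T0:L0": True, "vmhba2:C0:T0:L0": True}
    print(fixed(paths, "vmhba1:C0:T0:L0"))        # preferred path
    rr = itertools.cycle(paths)                   # Round Robin rotates through all paths
    print(next(rr), next(rr), next(rr))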
Slide 12
You should now be able to meet the following objectives:
• Create a VMFS datastore
• Increase the size of a VMFS datastore
• And Delete a VMFS datastore
This is the end of the Lesson 4 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 5: NFS Datastores!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Identify NFS components
• Recognize the differences between NFS 3 and NFS 4.1
• And Configure and manage NFS datastores
Slide 3
An NFS file system is on a NAS device that is called the NFS server. The NFS server contains
one or more directories that are shared with the ESXi host over a TCP/IP network. An ESXi
host accesses the NFS server through a VMkernel port that is defined on a virtual switch.
Slide 4
Compatibility issues between the two NFS versions prevent access to datastores using both
protocols at the same time from different hosts. If a datastore is configured as NFS 4.1, all hosts
that access that datastore must mount the share as NFS 4.1. Data corruption can occur if hosts
access a datastore with the wrong NFS version.
Slide 5
NFS 4.1 provides the following enhancements:
• Native multipathing and session trunking: NFS 4.1 provides multipathing for servers
that support session trunking. When trunking is available, you can use multiple IP
addresses to access a single NFS volume. Client ID trunking is not supported.
• Kerberos authentication: NFS 4.1 introduces Kerberos authentication in addition to the
traditional AUTH_SYS method used by NFS 3.
• Improved built-in file locking.
• Enhanced error recovery using server-side tracking of open files and delegations.
• And Many general efficiency improvements including session leases and less protocol
overhead.
The NFS 4.1 client offers the following new features:
• Stateful locks with share reservation using a mandatory locking semantic
• Protocol integration, meaning side-band (auxiliary) protocol is no longer required to
lock and mount
• Trunking (or true NFS multipathing), where multiple paths (or sessions) to the NAS
array can be created and load-distributed across those sessions,
• And Enhanced error recovery to mitigate server failure and loss of connectivity.
Slide 6
For each ESXi host that accesses an NFS datastore over the network, a VMkernel port must be
configured on a virtual switch. The name of this port can be anything that you want.
For performance and security reasons, isolate your NFS networks from the other networks, such
as your iSCSI network and your virtual machine networks.
Slide 7
You must take several configuration steps to prepare each ESXi host to use Kerberos
authentication.
Kerberos authentication requires that all nodes involved (the Active Directory server, the NFS
server, and the ESXi hosts) be synchronized so that little to no time drift exists. Kerberos
authentication fails if any significant drift exists between the nodes.
To prepare your ESXi host to use Kerberos authentication, configure the NTP client settings to
reference a common NTP server (or the domain controller, if applicable).
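As a quick sanity check before configuring Kerberos, you can compare the clock on each host
from the ESXi shell. This is only a sketch, and the exact esxcli namespaces available can vary
by ESXi build:
# Show the host's current system time (compare across hosts and the AD server)
esxcli system time get
# Show the hardware clock as well
esxcli hardware clock get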
When planning to use NFS Kerberos, consider the following points:
• NFS 3 and 4.1 use different authentication credentials, resulting in incompatible UID
and GID on files.
• Using different Active Directory users on different hosts that access the same NFS share
can cause the vSphere vMotion migration to fail.
• NFS Kerberos configuration can be automated by using host profiles to reduce
configuration conflicts.
• And the Time must be synchronized between all participating components.
Slide 8
After performing the initial configuration steps, you can configure the datastore to use Kerberos
authentication.
The screenshot shows a choice of Kerberos authentication only (krb5) or authentication with
data integrity (krb5i). The difference is whether only the header or the header and the body of
each NFS operation is signed using a secure checksum.
For more information about how to configure the ESXi hosts for Kerberos authentication, see
vSphere Storage at this address: https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-8AE88758-20C1-4873-99C7-
181EF9ACFA70.html.
Slide 9
Unmounting an NFS datastore causes the files on the datastore to become inaccessible to the
ESXi host.
Before unmounting an NFS datastore, you must stop all VMs whose disks reside on the
datastore.
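For reference, unmounting can also be done from the ESXi shell; the datastore name below is a
placeholder:
# List the NFS datastores currently mounted on this host
esxcli storage nfs list
# Unmount an NFS datastore after stopping or migrating its VMs (placeholder name)
esxcli storage nfs remove --volume-name=NFS-DS01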
Slide 10
Examples of a single point of failure in the NAS architecture include the NIC in an ESXi host
and the cable between the NIC and the switch. To avoid single points of failure and to create a
highly available NAS architecture, configure the ESXi host with redundant NICs and redundant
physical switches.
The best approach is to install multiple NICs on an ESXi host and configure them in NIC
teams. NIC teams should be configured on separate external switches, with each NIC pair
configured as a team on the respective external switch.
In addition, you might apply a load-balancing algorithm, based on the link aggregation protocol
type supported on the external switch, such as 802.3ad or EtherChannel.
An even higher level of performance and availability can be achieved with cross-stack,
EtherChannel-capable switches. With certain network switches, you can team ports across two
or more separate physical switches that are managed as one logical switch.
NIC teaming across virtual switches provides additional resilience and some performance
optimization. Having more paths available to the ESXi host can improve performance by
enabling distributed load sharing.
Only one active path is available for the connection between the ESXi host and a single storage
target (such as a LUN or mount point). Although alternative connections might be available for
failover, the bandwidth for a single datastore and the underlying storage is limited to what a
single connection can provide.
To use more available bandwidth, an ESXi host requires multiple connections from the ESXi
host to the storage targets. You might need to configure multiple datastores, each using separate
connections between the ESXi host and the storage.
The slide shows the recommended configuration for NFS multipathing.
Slide 11
NFS 4.1 provides multipathing for servers that support session trunking. When trunking is
available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking
is not supported.
Slide 12
You should now be able to meet the following objectives:
• Identify NFS components
• Recognize the differences between NFS 3 and NFS 4.1
• Configure and manage NFS datastores.
This is the end of the Lesson 5 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 6: vSAN Datastores!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Explain the purpose of a vSAN datastore,
• Describe the architecture and requirements of vSAN configuration,
• And Explain the purpose of vSAN storage policies,
Slide 3
vSAN datastores help administrators use software-defined storage in the following ways:
• With a per-VM storage policy architecture: Because a single datastore supports multiple
policies, each VM can have different storage characteristics.
• With vSphere and vCenter Server integration: vSAN capability is built in and requires no
appliance. You enable vSAN on a cluster in much the same way that you enable vSphere HA
or vSphere DRS.
• With scale-out storage, which supports up to 64 ESXi hosts in a cluster and scales out as
you add new nodes to the cluster.
• And with built-in resiliency: The default vSAN storage policy establishes RAID 1
redundancy for all VMs.
Slide 4
vSAN uses the concept of disk groups to pool together cache devices and capacity devices as
single management constructs. A disk group is a pool of one cache device and one to seven
capacity devices.
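As a quick illustration, you can confirm a host's vSAN membership and see the devices claimed
for its disk groups from the ESXi shell. This is only a sketch; the output varies with your
configuration:
# Show whether this host is part of a vSAN cluster
esxcli vsan cluster get
# List the cache and capacity devices that vSAN has claimed on this host
esxcli vsan storage list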
Slide 5
vSAN requires several hardware components that hosts do not normally have, such as:
• One Serial Attached SCSI (or SAS), SATA solid-state drive (or SSD), or PCIe flash
device and one to seven magnetic drives for each hybrid disk group.
• One SAS, SATA SSD, or PCIe flash device and one to seven flash disks with flash
capacity enabled for all-flash disk groups.
• A dedicated 1 Gbps network (10 Gbps recommended) for hybrid disk groups.
• A dedicated 10 Gbps network for all-flash disk groups, because 1 Gbps network speeds
result in detrimental congestion for an all-flash architecture and are unsupported.
• And the vSAN network must be configured for IPv4 or IPv6 and support unicast.
In addition, each host should have a minimum of 32 GB of memory to accommodate the
maximum of five disk groups and seven capacity devices per disk group.
Slide 6
The Summary tab of the vSAN datastore shows the general vSAN configuration information.
Slide 7
A vSAN cluster stores and manages data as flexible data containers called objects. When you
provision a VM on a vSAN datastore, a set of objects is created:
• A VM home namespace which stores the virtual machine metadata (or configuration
files)
• A VMDK, or a Virtual machine disk
• A VM swap, or a Virtual machine swap file, which is created when the VM is powered
on,
• VM memory which is a Virtual machine’s memory state when a VM is suspended or
when a snapshot is taken of a VM and its memory state is preserved
• And a Snapshot delta which is Created when a virtual machine snapshot is taken
Slide 8
VM storage policies are a set of rules that you configure for VMs. Each storage policy reflects a
set of capabilities that meet the availability, performance, and storage requirements of the
application or service-level agreement for that VM.
You should create storage policies before deploying the VMs that require these storage policies.
You can apply and update storage policies after deployment.
A vSphere administrator who is responsible for the deployment of VMs can select policies that
are created based on storage capabilities.
Based on the policy that is selected for the VM, these capabilities are pushed to the vSAN
datastore. The VM's objects are created across ESXi hosts and disk groups to satisfy the
policy.
Slide 9
The consumption of vSAN storage is based on the VM’s storage policy.
The VM’s hard disk view provides the following information:
• A display of the VM storage policy,
• And the location of disk files on a vSAN datastore.
Slide 10
You should now be able to meet the following objectives:
• Explain the purpose of a vSAN datastore,
• Describe the architecture and requirements of vSAN configuration,
• And Explain the purpose of vSAN storage policies,
Slide 11
As a Virtual Beans administrator, you are planning how to use NAS and iSCSI storage with
vSphere:
• For NAS storage, you can create one or more NFS datastores and share them across
ESXi hosts:
o To use the datastores to hold templates, VMs, and vCenter Server Appliance
backups.
• For iSCSI storage, you can create one or more iSCSI datastores and share them across
ESXi hosts:
o To use the datastores to hold templates and VMs.
Slide 12
As a Virtual Beans administrator, you think that vSAN storage is the best option for the
company's new storage requirements.
Take a minute to pause the video and try to name as many benefits of using vSAN storage as
you can. We will give you our answer in the next slide.
Slide 13
So, what are the benefits to Virtual Beans of using vSAN storage?
The benefits we came up with include, but are not limited to:
• The fact that you can use the vSphere Client to manage the vSAN configuration.
Meaning no separate user interface is necessary.
• That vSphere administrators do not need special storage hardware training.
• That you can use vSAN storage policies to define specific levels of service for a VM.
• And that you can expand the vSAN capacity by adding one or more hosts to the vSAN
cluster (also known as scale out).
Slide 14
Some key points from Module 6 are:
• ESXi hosts support various storage technologies, such as direct-attached storage, Fibre
Channel, FCoE, iSCSI, and NAS.
• You use VMFS and NFS datastores to hold VM files.
• Shared storage is integral to vSphere features such as vSphere vMotion, vSphere HA,
and vSphere DRS.
• And vSAN aggregates direct-attached server disks in a cluster to create shared storage
designed for VMs.
Slide 15
This is the end of Module 6 and the Lesson 6 Lecture.
The Labs and Assignments associated with this Module are as follows:
• Lab 10: Accessing iSCSI Storage
• Lab 11: Managing VMFS Datastores
• Lab 12: Accessing NFS Storage
• Lab 13: Using a vSAN Datastore
• And the Module 6 Quiz: Configuring and Managing Virtual Storage.
If you have any questions, please contact your Instructor. We will see you in the next Module
and thanks for watching!
Slide 1
Welcome Back! Let’s get started with Lesson 1: Creating Templates and Clones!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Create a template of a virtual machine,
• Deploy a virtual machine from a template,
• Clone a virtual machine,
• And Create customization specifications for guest operating systems.
Slide 3
A template is a master copy of a virtual machine. You use templates to create and provision
new VMs.
A template typically includes:
• A guest operating system,
• One or more applications,
• A specific VM configuration,
• And VMware Tools.
Creating templates makes the provisioning of virtual machines much faster and less error-prone
than provisioning physical machines or creating each VM by using the New Virtual Machine
wizard. Templates coexist with VMs in the inventory. You can organize collections of VMs and
templates into arbitrary folders and apply permissions to VMs and templates. You can change a
VM into a template without having to make a full copy of the VM files or create a separate
object. You can deploy a VM from a template. The deployed VM is added to the folder that you
selected when creating the template.
Slide 4
You can create templates using different methods. One method is to clone the VM to a
template. The VM can be powered on or off.
The Clone to Template option offers you a choice of format for storing the VM's virtual disks:
• Same format as source,
• Thin-provisioned format,
• Thick-provisioned lazy-zeroed format,
• And Thick-provisioned eager-zeroed format.
Slide 5
You can create a template by converting a VM to a template. In this case, the VM must be
powered off.
The Convert to Template option does not offer a choice of format and leaves the VM’s disk file
intact.
Slide 6
You can create a template from an existing template, or clone a template.
Slide 7
You update a template to include new patches, make system changes, and install new
applications.
To update a template you:
• Convert the template to a VM.
• Place the VM on an isolated network to prevent user access.
• Make appropriate changes to the VM.
• And Convert the VM to a template.
To update your template to include new patches or software, you do not need to create a new
template. Instead, you convert the template to a VM. You can then power on the VM.
For added security, you might want to prevent users from accessing the VM while you update
it. To prevent access, either disconnect the VM from the network or place it on an isolated
network.
Log in to the VM’s guest operating system and apply the patch or install the software. When
you finish, power off the VM and convert it to a template again.
Slide 8
To deploy a VM, you must provide information such as the VM name, inventory location, host,
datastore, and guest operating system customization data.
When you place ISO files in a content library, the ISO files are available only to VMs that are
registered on an ESXi host that can access the datastore where the content library is located.
These ISO files are not available to VMs on hosts that cannot see the datastore on which the
content library is located.
Slide 9
To clone a VM, you must be connected to vCenter Server. You cannot clone VMs if you use
VMware Host Client to manage a host directly.
When you clone a VM that is powered on, services and applications are not automatically
quiesced when the VM is cloned.
When deciding whether to clone a VM or deploy a VM from a template,
consider the following points:
• VM templates use storage space, so you must plan your storage space requirements
accordingly.
• Deploying a VM from a template is quicker than cloning a running VM, especially
when you must deploy many VMs at a time.
• And when you deploy many VMs from a template, all the VMs start with the same base
image. Cloning many VMs from a running VM might not create identical VMs,
depending on the activity happening within the VM when the VM is cloned.
Slide 10
You customize the guest operating system to make VMs, created from the same template or
clone, unique.
By customizing a guest operating system, you can change information, including the following
details:
• The Computer name,
• Network settings,
• License settings,
• And the Windows Security Identifier.
Customizing the guest operating system prevents conflicts that might occur when you deploy a
VM and a clone with identical guest OS settings simultaneously.
Slide 11
You can create a customization specification to prepare the guest operating system.
Specifications are stored in the vCenter Server database, and Windows and Linux guests are
supported.
To manage customization specifications, select Policies and Profiles from the Menu.
On the VM Customization Specifications pane, you can create specifications or manage
existing ones.
Slide 12
When cloning a VM or deploying a VM from a template, you can use a customization
specification to prepare the guest operating system. You can define the customization settings
by using an existing customization specification during cloning or deployment. You create the
specification ahead of time. During cloning or deployment, you can select the customization
specification to apply to the new VM.
VMware Tools must be installed on the guest operating system that you want to customize.
The guest operating system must be installed on a disk attached to SCSI node 0:0 in the VM
configuration. For more about guest operating system customization, see vSphere Virtual
Machine Administration at this address: https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-55238059-912E-411F-A0E9-
A7A536972A91.html.
Slide 13
You can use Instant Clone Technology to create a powered-on VM from the running state of
another powered-on VM.
The processor state, virtual device state, memory state, and disk state of the destination (or
child) VM are identical to the state of the source (or parent) VM.
Snapshot-based disk sharing is used to provide storage efficiency and to improve the speed of
the cloning process.
Through instant cloning, the source VM does not lose its state because of the cloning process.
You can move to just-in-time provisioning, given the speed and state-persisting nature of this
operation. During an instant clone operation, the source VM is stunned for a short time, less
than 1 second.
While the source VM is stunned, a new writable delta disk is generated for each virtual disk,
and a checkpoint is taken and transferred to the destination VM.
The destination VM powers on by using the source’s checkpoint. After the destination VM is
fully powered on, the source VM resumes running.
Instant clone VMs are fully independent vCenter Server inventory objects. You can manage
instant clone VMs like regular VMs, without any restrictions.
Slide 14
Instant cloning is convenient for large-scale application deployments because it ensures
memory efficiency, and you can create many VMs on a single host.
To avoid network conflicts, you can customize the virtual hardware of the destination VM
during the instant cloning operation. For example, you can customize the MAC addresses of the
virtual NICs or the serial and parallel port configurations of the destination VM.
Starting with vSphere 7, you can customize the guest operating system for Linux VMs only.
You can customize networking settings such as IP address, DNS server, and the gateway. You
can change these settings without having to power off or restart the VM.
Slide 15
You should now be able to meet the following objectives:
• Create a template of a virtual machine,
• Deploy a virtual machine from a template,
• Clone a virtual machine,
• And Create customization specifications for guest operating systems.
This is the end of the Lesson 1 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 2: Working with Content Libraries!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Identify the benefits of a content library
• Recognize types of content libraries
• And Deploy a virtual machine from a content library.
Slide 3
Content libraries are repositories of OVF templates and other file types that can be shared and
synchronized across vCenter Server systems globally.
Organizations might have multiple vCenter Server instances in data centers around the globe.
On these vCenter Server instances, organizations might have a collection of templates, ISO
images, and so on. The challenge is that all these items are independent of one another, with
different versions of these files and templates on various vCenter Server instances.
The content library is the solution to this challenge. IT can store OVF templates, ISO images, or
any other file types in a central location. The templates, images, and files can be published, and
other content libraries can subscribe to and download content. The content library keeps content
up to date by periodically synchronizing with the publisher, ensuring that the latest version is
available.
Slide 4
Sharing content and ensuring that the content is kept up to date are major tasks.
For example, for a main vCenter Server instance, you create a central content library to store
the master copies of OVF templates, ISO images, and other file types. When you publish this
content library, other libraries, which might be located anywhere in the world, can subscribe
and download an exact copy of the data.
When an OVF template is added, modified, or deleted from the published catalog, the
subscriber synchronizes with the publisher, and the libraries are updated with the latest content.
Starting with vSphere 7, you can update a template while simultaneously deploying VMs from
the template. In addition, the content library keeps two copies of the VM template, the previous
and current versions. You can roll back the template to reverse changes made to the template.
Slide 5
You can create a local library as the source for content that you want to save or share. You
create the local library on a single vCenter Server instance. You can then add or remove items
to and from the local library.
You can publish a local library, and this content library service endpoint can be accessed by
other vCenter Server instances in your virtual environment. When you publish a library, you
can configure the authentication method, which a subscribed library must use to authenticate to
it.
You can create a subscribed library and populate its content by synchronizing it to a published
library. A subscribed library contains copies of the published library files or only the metadata
of the library items.
The published library can be on the same vCenter Server instance as the subscribed library, or
the subscribed library can reference a published library on a different vCenter Server instance.
You cannot add library items to a subscribed library. You can add items only to a local or
published library.
After synchronization, both libraries contain the same items, or the subscribed library contains
the metadata for the items.
Slide 6
Library items include VM templates, vApp templates, or other VMware objects that can be
contained in a content library.
VMs and vApps have several files, such as log files, disk files, memory files, and snapshot files
that are part of a single library item. You can create library items in a specific local library or
remove items from a local library. You can also upload files to an item in a local library so that
the libraries subscribed to it can download the files to their NFS or SMB server, or datastore.
Slide 7
The templates in the content library can be used to deploy VMs and vApps.
Each VM template, vApp template, or other type of file in a library is a library item.
You can also mount an ISO file directly from a content library.
Slide 8
You should now be able to meet the following objectives:
• Identify the benefits of a content library
• Recognize types of content libraries
• And Deploy a virtual machine from a content library
This is the end of the Lesson 2 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 3: Modifying Virtual Machines!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe virtual machine settings and options,
• Add a hot-pluggable device,
• And Dynamically increase the size of a virtual disk.
Slide 3
You can modify a VM’s configuration by editing the VM's settings such as:
• Adding virtual hardware,
• Removing virtual hardware,
• Setting VM options,
• And Controlling a VM’s CPU and memory resources.
You might have to modify a VM’s configuration, for example, to add a network adapter or a
virtual disk. You can make all VM changes while the VM is powered off. Some VM hardware
changes can be made while the VM is powered on.
vSphere 7.0 makes the following virtual devices available:
• A Watchdog timer: which is a Virtual device used to detect and recover from operating
system problems. If a failure occurs, the watchdog timer attempts to reset or power off
the VM. This feature is based on Microsoft specifications: Watchdog Resource Table
(WDRT) and Watchdog Action Table (WDAT). The watchdog timer is useful with high
availability solutions such as Red Hat High Availability and the MS SQL failover
cluster. This device is also useful on VMware Cloud and in hosted environments for
implementing custom failover logic to reset or power off VMs.
• A Precision Clock: which is a Virtual device that presents the ESXi host's system time
to the guest OS. Precision Clock helps the guest operating system achieve clock
accuracy in the 1 millisecond range. The guest operating system uses Precision Clock
time as reference time. Precision Clock is not directly involved in guest OS time
synchronization. Precision Clock is useful when precise timekeeping is a requirement
for the application, such as for financial services applications. Precision Clock is also
useful when precise time stamps are required on events that track financial transactions.
• And a Virtual SGX: which is a Virtual device that exposes Intel's SGX technology to
VMs. Intel’s SGX technology prevents unauthorized programs or processes from
accessing certain regions in memory. Intel SGX meets the needs of the Trusted
Computing Industry. Virtual SGX is useful for applications that must conceal
proprietary algorithms and encryption keys from unauthorized users. For example,
cloud service providers cannot inspect a client’s code and data in a virtual SGX-
protected environment.
Slide 4
Adding devices to a physical server or removing devices from a physical server requires that
you physically interact with the server in the data center. When you use VMs, resources can be
added dynamically without a disruption in service. You must shut down a VM to remove
hardware, but you can reconfigure the VM without entering the data center.
You can add CPU and memory while the VM is powered on. These features are called CPU
Hot Add and Memory Hot Plug, and they are supported only on guest operating systems that
support hot-pluggable functionality. These features are disabled by default. To use these hot-
plug features, the following requirements must be satisfied:
• You must install VMware Tools.
• The VM must use hardware version 11 or later.
• The guest operating system in the VM must support CPU and memory hot-plug
features.
• And the hot-plug features must be enabled in the CPU or Memory settings on the
Virtual Hardware tab.
If virtual NUMA is configured with virtual CPU hot-plug settings, the VM is started without
virtual NUMA. Instead, the VM uses UMA (Uniform Memory Access).
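When you enable these features in the vSphere Client, the settings are recorded in the VM's
configuration file. As a rough illustration (the datastore path and VM name are hypothetical),
you can confirm the options from the ESXi shell:
# Inspect the VM's .vmx file for the hot-plug options (hypothetical path)
grep -i hotadd /vmfs/volumes/datastore1/MyVM/MyVM.vmx
# Expected entries when both features are enabled:
# vcpu.hotadd = "TRUE"
# mem.hotadd = "TRUE"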
Slide 5
You can increase the size of a virtual disk that belongs to a powered-on VM.
When you increase the size of a virtual disk, the VM must not have snapshots attached.
After you increase the size of a virtual disk, you might need to increase the size of the file
system on this disk. Use the appropriate tool in the guest OS to enable the file system to use the
newly allocated disk space.
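As a sketch of the workflow, a virtual disk can also be grown from the ESXi shell with
vmkfstools (with the VM powered off; use the vSphere Client to hot-extend a running VM).
The path and new size are placeholders, and the guest-side commands assume a Linux guest
with the relevant tools installed:
# Grow the virtual disk to 60 GB (hypothetical path)
vmkfstools --extendvirtualdisk 60G /vmfs/volumes/datastore1/MyVM/MyVM.vmdk
# Then grow the partition and file system inside the guest, for example on Linux:
# growpart /dev/sda 1
# resize2fs /dev/sda1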
Slide 6
Thin-provisioned virtual disks can be converted to a thick, eager-zeroed format.
To inflate a thin-provisioned disk:
• Ensure that the VM is powered off.
• Then Right-click the VM’s file with the .vmdk extension and select Inflate.
Or you can use vSphere Storage vMotion and select a thick-provisioned disk as the destination.
When you inflate a thin-provisioned disk, the inflated virtual disk occupies the entire datastore
space originally provisioned to it. Inflating a thin-provisioned disk converts a thin disk to a
virtual disk in thick-provisioned format.
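For reference, the datastore browser's Inflate action corresponds to the vmkfstools inflate
operation in the ESXi shell. This is a minimal sketch with a hypothetical path, and the VM must
be powered off:
# Inflate a thin-provisioned disk to thick-provisioned format (hypothetical path)
vmkfstools --inflatedisk /vmfs/volumes/datastore1/MyVM/MyVM.vmdk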
Slide 7
You can use the VM Options tab to modify properties such as the display name for the VM and
the type of guest operating system that is installed.
Under General Options, you can view the location and name of the configuration file (with the
.vmx extension) and the location of the VM’s directory.
You can select the text for the configuration file and the working location to copy and paste
them into a document. However, only the display name and the guest operating system type can
be modified. When a VM is created, its filenames and directory name are based on the display
name, but changing the display name later does not rename the VM's files or directory.
Slide 8
You can use the VMware Tools controls to customize the power buttons on the VM.
When you use the VMware Tools controls to customize the power buttons on the VM, the VM
must be powered off.
You can select the Check and upgrade VMware Tools before each power on check box to check
for a newer version of VMware Tools. If a newer version is found, VMware Tools is upgraded
when the VM is power cycled.
When you select the Synchronize guest time with host check box, the guest operating system’s
clock synchronizes with the host. For information about timekeeping best practices for the
guest operating systems that you use, see VMware knowledge base articles 1318 at
kb.vmware.com/kb/1318 and 1006427 at kb.vmware.com/kb/1006427.
Slide 9
When you build a VM and select a guest operating system, BIOS or EFI is selected
automatically, depending on the firmware supported by the operating system. Mac OS X Server
guest operating systems support only Extensible Firmware Interface (or EFI). If the operating
system supports BIOS and EFI, you can change the boot option as needed. However, you must
change the option before installing the guest OS. UEFI Secure Boot is a security standard that
helps ensure that a machine boots using only software that is trusted by its manufacturer. In an
OS that supports UEFI Secure Boot, each piece of boot software is signed, including the
bootloader, the operating system kernel, and operating system drivers. If you enable Secure
Boot for a VM, you can load only signed drivers into that VM.
With the Boot Delay value, you can set a delay between the time when a VM is turned on and
the guest OS starts to boot. A delayed boot can help stagger VM startups when several VMs are
powered on.
You can change the BIOS or EFI settings. For example, you might want to force a VM to start
from a CD/DVD: you can configure the VM so that the next time it powers on, it goes straight
into the firmware setup. A forced entry into the firmware setup is much easier than powering on
the VM, opening a console, and quickly trying to press the F2 key.
With the Failed Boot Recovery setting, you can configure the VM to retry booting after 10
seconds (the default) if the VM fails to find a boot device.
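These boot options are stored in the VM's configuration file. As a rough illustration (the path is
hypothetical), the entries look something like the following; the boot delay is expressed in
milliseconds:
# Inspect the VM's .vmx file for the boot options (hypothetical path)
grep -i bios /vmfs/volumes/datastore1/MyVM/MyVM.vmx
# Typical entries produced by the Boot Options settings:
# bios.forceSetupOnce = "TRUE"
# bios.bootDelay = "10000"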
Slide 10
You can remove a VM in the following ways:
• By Removing it from the inventory so that:
o The VM is unregistered from the ESXi host and vCenter Server.
o The VM’s files remain on the disk.
o And the VM can later be registered (added) back to the inventory.
• Or deleting it from disk so that:
o All VM files are permanently deleted from the datastore.
o And the VM is unregistered from the ESXi host and vCenter Server.
When a VM is removed from the inventory, its files remain at the same storage location, and
the VM can be re-registered in the datastore browser.
Slide 11
You should now be able to meet the following objectives:
• Describe virtual machine settings and options,
• Add a hot-pluggable device,
• And Dynamically increase the size of a virtual disk.
This is the end of the Lesson 3 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 4: Migrating VMs with vSphere vMotion!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Recognize the types of VM migrations that you can perform within a vCenter Server
instance and across vCenter Server instances
• Explain how vSphere vMotion works
• Verify vSphere vMotion requirements
• And Migrate virtual machines using vSphere vMotion.
Slide 3
Migration means moving a VM from one host, datastore, or vCenter Server instance to another
host, datastore, or vCenter Server instance. Depending on the power state of the VM that you
migrate, migration can be cold or hot:
• A cold migration involves moving a powered-off or suspended VM to a new host.
• While a hot migration involves moving a powered-on VM to a new host.
Depending on the VM resource type, you can perform different types of migrations.
A deciding factor for using a particular migration technique is the purpose of performing the
migration. For example, you might need to stop a host for maintenance but keep the VMs
running. You use vSphere vMotion to migrate the VMs instead of performing a cold or
suspended VM migration. If you must move a VM’s files to another datastore to better balance
the disk load or transition to another storage array, you use vSphere Storage vMotion.
Some migration techniques, such as vSphere vMotion migration, have special hardware
requirements that must be met to function properly. Other techniques, such as a cold migration,
do not have special hardware requirements to function properly.
You can perform the different types of migration on either powered-off (cold) or powered-on
(hot) VMs.
Slide 4
Using vSphere vMotion, you can migrate running VMs from one ESXi host to another ESXi
host with no disruption or downtime. With vSphere vMotion, vSphere DRS can migrate
running VMs from one host to another to ensure that the VMs have the resources that they
require.
With vSphere vMotion, the entire state of the VM is moved from one host to another, but the
data storage remains in the same datastore. The state information includes the current memory
content and all the information that defines and identifies the VM. The memory content
includes transaction data and whatever bits of the operating system and applications are in
memory. The definition and identification information stored in the state includes all the data
that maps to the VM hardware elements, such as the BIOS, devices, CPU, and MAC addresses
for the Ethernet cards.
Slide 5
To enable vSphere vMotion, you must configure a VMkernel port with the vSphere vMotion
service enabled on the source and destination host.
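As a minimal sketch of that configuration from the ESXi shell (the adapter, port group, and
addresses are hypothetical), you can either place a VMkernel adapter on the dedicated vMotion
TCP/IP stack or tag an adapter on the default stack for vMotion:
# Create a VMkernel adapter on the vMotion TCP/IP stack (hypothetical names)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-PG --netstack=vmotion
# Assign it a static IPv4 address
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.10.11 --netmask=255.255.255.0 --type=static
# Alternatively, tag an existing adapter on the default stack for vSphere vMotion
vim-cmd hostsvc/vmotion/vnic_set vmk1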
Slide 6
To play the animation, go to this address:
https://vmware.bravais.com/s/FbzaDb6owpSMKyKc940F.
A vSphere vMotion migration consists of the following steps:
1. A shadow VM is created on the destination host.
2. The VM’s memory state is copied over the vSphere vMotion network from the source host to
the target host. Users continue to access the VM and, potentially, update pages in memory. A
list of modified pages in memory is kept in a memory bitmap on the source host.
3. After the first pass of memory state copy completes, another pass of memory copy is
performed to copy any pages that changed during the last iteration. This iterative memory
copying continues until no changed pages remain.
4. After most of the VM’s memory is copied from the source host to the target host, the VM is
quiesced. No additional activity occurs on the VM. In the quiesce period, vSphere vMotion
transfers the VM device state and memory bitmap to the destination host.
5. Immediately after the VM is quiesced on the source host, the VM is initialized and starts
running on the target host. A Gratuitous Address Resolution Protocol (or GARP) request
notifies the subnet that VM A’s MAC address is now on a new switch port.
6. Users access the VM on the target host instead of the source host.
7. The memory pages that the VM was using on the source host are marked as free.
Slide 7
For migration with vSphere vMotion, a VM must meet these requirements:
• If it uses an RDM disk, the RDM file and the LUN to which it maps must be accessible
by the destination host.
• And It must not have a connection to a virtual device, such as a CD/DVD or floppy
drive, with a host-local image mounted.
In vSphere 7, you can use vSphere vMotion to migrate a VM with a device attached through a
remote console.
Remote devices include physical devices or disk images on the client machine running the
remote console.
For the complete list of vSphere vMotion migration requirements, see vCenter Server and Host
Management at this address: https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-3B5AF2B1-C534-4426-B97A-
D14019A8010F.html.
Slide 8
Source and destination hosts must have the following characteristics:
• Accessibility to all the VM’s storage,
• A VMkernel port with vSphere vMotion enabled,
• And Matching management network IP address families (IPv4 or IPv6) between the
source and destination hosts.
You cannot migrate a VM from a host that is registered to vCenter Server with an IPv4 address
to a host that is registered with an IPv6 address.
Copying a swap file to a new location can result in slower migrations. If the destination host
cannot access the specified swap file location, it stores the swap file with the VM configuration
file.
Slide 9
Source and destination hosts must also have these characteristics:
• At least a 1 Gigabit Ethernet network:
o Each active vSphere vMotion process requires a minimum throughput of 250
Mbit/second on the vSphere vMotion network.
o Concurrent migrations are limited to four on a 1 Gbps network.
o Concurrent migrations are limited to eight on a 10 Gbps (or faster) network.
o For better performance, dedicate at least two port groups to the
vSphere vMotion traffic.
• And Compatible CPUs:
o The CPU feature sets of both the source host and the destination host must be
compatible.
o Some features can be hidden by using Enhanced vMotion Compatibility or
compatibility masks.
Using 1 GbE network adapters for the vSphere vMotion network might result in migration
failure if you migrate VMs with large vGPU profiles.
Slide 10
When you select the host and cluster, a validation check is performed to verify that most
vSphere vMotion requirements are met.
If validation succeeds, you can continue in the wizard. If validation does not succeed, a list of
vSphere vMotion errors and warnings displays in the Compatibility pane.
With warnings, you can still perform a vSphere vMotion migration. But with errors, you cannot
continue. You must exit the wizard and fix all errors before retrying the migration.
If a failure occurs during the vSphere vMotion migration, the VM is not migrated and continues
to run on the source host.
Slide 11
When migrating encrypted VMs, you always use encrypted vSphere vMotion.
Encrypted vSphere vMotion secures confidentiality, integrity, and authenticity of data that is
transferred with vSphere vMotion. Encrypted vSphere vMotion supports all variants of vSphere
vMotion, including migration across vCenter Server systems. Encrypted vSphere Storage
vMotion is not supported.
You cannot turn off encrypted vSphere vMotion for encrypted VMs.
Slide 12
With vSphere vMotion, you can migrate VMs between linked vCenter Server systems.
Migration of VMs across vCenter Server instances is helpful in the following cases:
• Balancing workloads across clusters and vCenter Server instances that are in the same
site or in another geographical area.
• Moving VMs between environments that have different purposes, for example, from a
development environment to production environment.
• And Moving VMs to meet different Service Level Agreements (or SLAs) for storage
space, performance, and so on.
Slide 13
Cross vCenter migrations have the following requirements:
• ESXi hosts and vCenter Server systems must be at vSphere 6.0 or later.
• vCenter Server instances must be in Enhanced Linked Mode.
• And Hosts must be time-synchronized.
You can perform cross vCenter migrations between vCenter Server instances of different
versions. For information on the supported versions, see VMware knowledge base article
2106952 at kb.vmware.com/kb/2106952.
Slide 14
vCenter Server performs several network compatibility checks to prevent the following
configuration problems:
• MAC address incompatibility on the destination host
• vSphere vMotion migration from a distributed switch to a standard switch
• And vSphere vMotion migration between distributed switches of different versions.
Slide 15
The VMkernel networking layer provides connectivity to hosts and handles the standard system
traffic of vSphere vMotion, IP storage, vSphere Fault Tolerance, vSAN, and others.
Consider the following key points about TCP/IP stacks at the VMkernel level:
• Default TCP/IP stack: Provides networking support for the management traffic between
vCenter Server and ESXi hosts and for system traffic such as vSphere vMotion, IP
storage, and vSphere Fault Tolerance.
• vSphere vMotion TCP/IP stack: Supports the traffic for hot migrations of VMs.
• Provisioning TCP/IP stack: Supports the traffic for VM cold migration, cloning, and
snapshot creation. You can use the provisioning TCP/IP stack to handle NFC traffic
during long-distance vSphere vMotion migration. VMkernel adapters configured with
the provisioning TCP/IP stack handle the traffic from cloning the virtual disks of the
migrated VMs in long-distance vSphere vMotion. By using the provisioning TCP/IP
stack, you can isolate the traffic from the cloning operations on a separate gateway.
After you configure a VMkernel adapter with the provisioning TCP/IP stack, all
adapters on the default TCP/IP stack are disabled for the provisioning traffic.
• And Custom TCP/IP stacks: You can create a custom TCP/IP stack on a host to forward
networking traffic through a custom application. Open an SSH connection to the host
and run the following vSphere CLI command:
esxcli network ip netstack add -N="stack_name"
Take appropriate security measures to prevent unauthorized access to the management and
system traffic in your vSphere environment. For example, isolate the vSphere vMotion traffic in
a separate network that includes only the ESXi hosts that participate in the migration. Isolate
the management traffic in a network that only network and security administrators can access.
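Building on the netstack command above, here is a brief sketch of putting a custom stack to
use; the stack, adapter, and port group names are placeholders:
# List the TCP/IP stacks that exist on the host
esxcli network ip netstack list
# Attach a new VMkernel adapter to the custom stack (placeholder names)
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=Prov-PG --netstack=stack_name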
Slide 16
vSphere vMotion TCP/IP stacks support the traffic for hot migrations of VMs. Use the vSphere
vMotion TCP/IP stack to provide better isolation for the vSphere vMotion traffic. After you
create a VMkernel adapter on the vSphere vMotion TCP/IP stack, you can use only this stack
for vSphere vMotion migration on this host.
The VMkernel adapters on the default TCP/IP stack are disabled for the vSphere vMotion
service after you create a VMkernel adapter on the vSphere vMotion TCP/IP stack. If a hot
migration uses the default TCP/IP stack while you configure VMkernel adapters with vMotion
TCP/IP stack, the migration completes successfully. However, these VMkernel adapters on the
default TCP/IP stack are disabled for future vSphere vMotion sessions.
Slide 17
Long-distance vSphere vMotion migration is an extension of cross vCenter migration.
Use cases for long-distance vSphere vMotion migration include:
• Permanent migrations
• Disaster avoidance
• Site Recovery Manager and disaster avoidance testing
• Multisite load balancing
• And Follow-the-sun scenario support.
In the follow-the-sun scenario, a global support team might support a certain set of VMs. As
one support team ends their workday, another support team in a different time zone takes over
support duty. The VMs being supported can be moved from one geographical location to
another so that the support team on duty can access those VMs locally instead of long distance.
Slide 18
Long-distance vSphere vMotion migrations must connect over layer 3 connections:
• The Virtual machine network is an L2 connection, and the same VM IP address is
available at the destination.
• The vSphere vMotion network is an L3 connection. It must be secured (unless you use
encrypted vSphere vMotion, available in vSphere 6.5 and later), requires 250 Mbps of
bandwidth per vSphere vMotion operation, and supports a round-trip time between hosts of
up to 150 milliseconds.
Slide 19
You should now be able to meet the following objectives:
• Recognize the types of VM migrations that you can perform within a vCenter Server
instance and across vCenter Server instances
• Explain how vSphere vMotion works
• Verify vSphere vMotion requirements
• And Migrate virtual machines using vSphere vMotion.
This is the end of the Lesson 4 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 5: Enhanced vMotion Compatibility!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe Enhanced vMotion Compatibility
• Configure EVC mode on a vSphere cluster
• And Explain how per-VM EVC mode works with vSphere vMotion.
Slide 3
CPU compatibility between source and target hosts is a vSphere vMotion requirement that must
be met.
Depending on the CPU characteristic, an exact match between the source and target host might
or might not be required. For example, if hyperthreading is enabled on the source host and
disabled on the destination host, the vSphere vMotion migration continues because the
VMkernel handles this difference in characteristics.
But, if the source host processor supports SSE4.1 instructions and the destination host processor
does not support them, the hosts are considered incompatible and the vSphere vMotion
migration fails. SSE4.1 instructions are application-level instructions that bypass the
virtualization layer and might cause application instability if mismatched after a migration with
vSphere vMotion.
Slide 4
Enhanced vMotion Compatibility is a cluster feature that prevents vSphere vMotion migrations
from failing because of incompatible CPUs.
This feature works at the cluster level, using CPU baselines to configure all processors in the
cluster that are enabled for Enhanced vMotion Compatibility.
Enhanced vMotion Compatibility ensures that all hosts in a cluster present the same CPU
feature set to VMs, even if the CPUs on the hosts differ.
Enhanced vMotion Compatibility facilitates safe vSphere vMotion migration across a range of
CPU generations. With Enhanced vMotion Compatibility, you can use vSphere vMotion to
migrate VMs among CPUs that otherwise are considered incompatible.
Enhanced vMotion Compatibility allows vCenter Server to enforce vSphere vMotion
compatibility among all hosts in a cluster by forcing hosts to expose a common set of CPU
features (baseline) to VMs. A baseline is a set of CPU features that are supported by every host
in the cluster. When you configure Enhanced vMotion Compatibility, you set all host
processors in the cluster to present the features of a baseline processor. After the features are
enabled for a cluster, hosts that are added to the cluster are automatically configured to the CPU
baseline.
Hosts that cannot be configured to the baseline are not permitted to join the cluster. VMs in the
cluster always see an identical CPU feature set, no matter which host they happen to run on.
Because this process is automatic, Enhanced vMotion Compatibility is easy to use and requires
no specialized knowledge of CPU features and masks.
Slide 5
Before you create an Enhanced vMotion Compatibility cluster, ensure that the hosts that you
intend to add to the cluster meet the requirements.
Enhanced vMotion Compatibility automatically configures hosts whose CPUs have Intel
FlexMigration and AMD-V Extended Migration technologies to be compatible with vSphere
vMotion with hosts that use older CPUs.
For Enhanced vMotion Compatibility to function properly, the applications on the VMs must
be written to use the CPUID machine instruction for discovering CPU features as
recommended by the CPU vendors. vSphere cannot support Enhanced vMotion Compatibility
with applications that do not follow the CPU vendor recommendations for discovering CPU
features.
To determine which EVC modes are compatible with your CPU, search the VMware
Compatibility Guide at vmware.com/resources/compatibility. Search for the server model or
CPU family, and click the entry in the CPU Series column to display the compatible EVC
modes.
Slide 6
You enable EVC mode on an existing cluster to ensure vSphere vMotion CPU compatibility
between the hosts in the cluster.
You can use one of the following methods to create an Enhanced vMotion Compatibility
cluster:
• Create an empty cluster with EVC mode enabled and move hosts into the cluster.
• Or Enable EVC mode on an existing cluster.
For information about Enhanced vMotion Compatibility processor support, see VMware
knowledge base article 1003212 at kb.vmware.com/kb/1003212.
Slide 7
Several EVC mode approaches are available to ensure CPU compatibility:
• If all the hosts in a cluster are compatible with a newer EVC mode, you can change the
EVC mode of an existing Enhanced vMotion Compatibility cluster.
• You can enable EVC mode for a cluster that does not have EVC mode enabled.
You can raise or lower the EVC mode, but the VMs must be in the correct power state to do so.
Slide 8
With per-VM EVC mode, the EVC mode becomes an attribute of the VM rather than the
specific processor generation it happens to be booted on in the cluster. This feature supports
seamless migration between two data centers that have different processors. Further, the feature
is persisted per VM and does not lose the EVC mode during migrations across clusters or
during power cycles.
In this diagram, EVC mode is not enabled on the cluster. The cluster consists of differing CPU
models with different feature sets. The VMs with per-VM EVC mode can run on any ESXi host
that can satisfy the defined EVC mode.
Slide 9
You should now be able to meet the following objectives:
• Describe Enhanced vMotion Compatibility
• Configure EVC mode on a vSphere cluster
• And Explain how per-VM EVC mode works with vSphere vMotion
This is the end of the Lesson 5 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 6: Migrating VMs with vSphere Storage
vMotion!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Explain how vSphere Storage vMotion works
• Recognize guidelines for using vSphere Storage vMotion
• Migrate virtual machines using vSphere Storage vMotion
• And Migrate both the compute resource and storage of a virtual machine.
Slide 3
vSphere Storage vMotion provides flexibility to optimize disks for performance or transform
disk types, which you can use to reclaim space.
You can place the VM and all its disks in a single location, or you can select separate locations
for the VM configuration file and each virtual disk. During a migration with vSphere Storage
vMotion, the VM does not change the host that it runs on.
With vSphere Storage vMotion, you can rename a VM's files on the destination datastore. The
migration renames all virtual disk, configuration, snapshot, and .nvram files.
Slide 4
To play the animation, go to this address
https://vmware.bravais.com/s/FnHZwq043PJ8dV3ZRV7p.
The vSphere Storage vMotion migration process includes the following steps:
1. Initiate storage migration.
2. Use the VMkernel data mover or vSphere Storage APIs - Array Integration to copy data.
3. Start a new VM process.
4. Mirror I/O calls to file blocks that are already copied to the virtual disk on the destination
datastore.
5. Transition to the destination VM process to begin accessing the virtual disk copy.
The storage migration process does a single pass of the disk, copying all the blocks to the
destination disk. If blocks are changed after they are copied, the blocks are synchronized from
the source to the destination through the mirror driver, with no need for recursive passes.
This approach guarantees complete transactional integrity and is fast enough to be unnoticeable
to the end user. The mirror driver uses the VMkernel data mover to copy blocks of data from
the source disk to the destination disk. The mirror driver synchronously mirrors writes to both
disks during the vSphere Storage vMotion operation.
Finally, vSphere Storage vMotion operations are performed either internally on a single ESXi
host or offloaded to the storage array. Operations performed internally on the ESXi host use a
data mover built into the VMkernel. Operations are offloaded to the storage array if the array
supports vSphere Storage APIs - Array Integration, also called hardware acceleration.
Slide 5
vSphere Storage vMotion offloads its operations to the storage array if the array supports
VMware vSphere Storage APIs - Array Integration, also called hardware acceleration.
Use the vSphere Client to determine whether your storage array supports hardware acceleration.
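One way to check this from the ESXi shell is with esxcli; this is a quick sketch, and the output
columns vary by release:
# Show the hardware acceleration (VAAI) status of each storage device
esxcli storage core device vaai status get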
Slide 6
A VM and its host must meet certain resource and configuration requirements for the virtual
machine disks (VMDKs) to be migrated with vSphere Storage vMotion. One of the
requirements is that the host on which the VM runs must have access both to the source
datastore and to the target datastore.
During a migration with vSphere Storage vMotion, you can change the disk provisioning type.
Migration with vSphere Storage vMotion changes VM files on the destination datastore to
match the inventory name of the VM. The migration renames all virtual disk, configuration,
snapshot, and .nvram files. If the new names exceed the maximum filename length, the
migration does not succeed.
Slide 7
When you change both compute resource and storage during migration, a VM changes its host,
datastores, networks, and vCenter Server instances simultaneously:
• This technique combines vSphere vMotion and vSphere Storage vMotion into a single
operation.
• You can migrate VMs across clusters, data centers, and vCenter Server instances.
You can migrate VMs beyond storage accessibility boundaries and between hosts, within and
across clusters, data centers, and vCenter Server instances.
This type of migration is useful for performing cross-cluster migrations, when the target cluster
VMs might not have access to the source cluster’s storage. Processes on the VM continue to run
during the migration with vSphere vMotion.
Slide 8
Compute resource and storage migration is useful for virtual infrastructure administration tasks.
Slide 9
You should now be able to meet the following objectives:
• Explain how vSphere Storage vMotion works
• Recognize guidelines for using vSphere Storage vMotion
• Migrate virtual machines using vSphere Storage vMotion
• And Migrate both the compute resource and storage of a virtual machine.
This is the end of the Lesson 6 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 7: Creating Virtual Machine Snapshots!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Take a snapshot of a virtual machine
• Manage multiple snapshots
• Delete virtual machine snapshots
• And Consolidate snapshots.
Slide 3
With snapshots, you can preserve the state of the VM so that you can repeatedly return to the
same state.
For example, if problems occur during the patching or upgrading process, you can stop the
process and revert to the previous state. VM snapshots are not recommended as a VM backup
strategy. Snapshots are useful when you want to revert repeatedly to the same state but do not
want to create multiple VMs. Examples include patching or upgrading the guest operating
system in a VM. The relationship between snapshots is like the relationship between a parent
and a child. Snapshots are organized in a snapshot tree. In a snapshot tree, each snapshot has
one parent and one or more children, except for the last snapshot, which has no children.
Slide 4
A snapshot captures the entire state of the VM at the time that you take the snapshot, including
the following states:
• Memory state: Which is the content of the VM’s memory. The memory state is captured
only if the VM is powered on and if you select the Snapshot the virtual machine’s
memory check box (selected by default).
• Settings state: Which is the VM settings.
• And the Disk state: Which is the state of all the VM’s virtual disks.
At the time that you take the snapshot, you can also quiesce the guest operating system. This
action quiesces the file system of the guest operating system. This option is available only if
you do not capture the memory state as part of the snapshot.
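For illustration, a snapshot can also be taken from the ESXi shell with vim-cmd. This is a
minimal sketch; the VM ID 42 is a placeholder that you would look up first, and the final two
arguments select memory capture and quiescing:
# Find the VM's inventory ID
vim-cmd vmsvc/getallvms
# Take a snapshot with memory included (1) and without quiescing (0)
vim-cmd vmsvc/snapshot.create 42 "pre-patch" "Before applying patches" 1 0
# List the VM's snapshot tree
vim-cmd vmsvc/snapshot.get 42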
Slide 5
Delta disks use different sparse formats depending on the type of datastore.
• VMFSsparse: VMFS5 uses the VMFSsparse format for virtual disks smaller than 2 TB.
VMFSsparse is implemented on top of VMFS. The VMFSsparse layer processes I/O
operations issued to a snapshot VM. Technically, VMFSsparse is a redo log that starts
empty immediately after a VM snapshot is taken. The redo log can expand to the size of its
base VMDK if the entire VMDK is rewritten with new data after the snapshot is taken.
This redo log is a file in the VMFS datastore. On snapshot creation, the base VMDK
attached to the VM is changed to the newly created sparse VMDK.
• And SEsparse: SEsparse is the default format for all delta disks on the VMFS6 datastores.
On VMFS5, SEsparse is used for virtual disks of the size 2 TB and larger. SEsparse is a
format that is like VMFSsparse with some enhancements. This format is space efficient
and supports the space-reclamation technique. With space reclamation, blocks that the
guest OS deletes are marked. The system sends commands to the SEsparse layer in the
hypervisor to unmap those blocks. The unmapping helps to reclaim space allocated by
SEsparse after the guest operating system deletes the data.
Slide 6
A VM can have one or more snapshots. For each snapshot, the following files are created:
• Snapshot delta file: This file contains the changes to the virtual disk’s data since the
snapshot was taken. When you take a snapshot of a VM, the state of each virtual disk is
preserved. The VM stops writing to its -flat.vmdk file. Writes are redirected to a
numbered -delta.vmdk file (or a numbered -sesparse.vmdk file); the number represents the
next snapshot in the sequence.
• Disk descriptor file: -00000#.vmdk. This file is a small text file that contains
information about the snapshot.
• Configuration state file: Or -.vmsn. This file holds the configuration state of the VM at
the point that the snapshot was taken, including virtual hardware, power state, and
hardware version. A new .vmsn file is created for every snapshot that is created on a VM
and is deleted when the snapshot is deleted. The size of this file varies, based on the
options selected when the snapshot is created; for example, including the memory state of
the VM in the snapshot increases the size of the .vmsn file.
• Memory state file: Or -.vmem. This file is created if the option to include memory state
was selected during the creation of the snapshot. It contains the entire contents of the
VM’s memory at the time that the snapshot was taken.
• And the .vmsd file: This is the snapshot list file and is created at the time that the VM is
created. It maintains snapshot information for a VM so that the vSphere Client can
display a snapshot list. This information includes the name of each snapshot’s .vmsn file
and the name of each virtual disk file.
You can exclude one or more of the VMDKs from a snapshot by designating a virtual disk in
the VM as an independent disk. Placing a virtual disk in independent mode is typically done
when the virtual disk is created. If the virtual disk was created without enabling independent
mode, you must power off the VM to enable it.
Other files might also exist, depending on the VM hardware version. For example, each
snapshot of a VM that is powered on has an associated -.vmem file, which contains the guest
operating system’s main memory, saved as part of the snapshot.
Slide 7
The following examples show the snapshot and virtual disk files that are created when a VM
has no snapshots, one snapshot, and two snapshots. There are no snapshots shown in the slide.
The “You are here” marker shows that we are on the root VM.
Slide 8
Here we see that a snapshot has been taken while the VM was powered on and with the
memory state, so that whenever you revert to this snapshot, the VM powers on.
We can also see that snapshots can be named individually. The “You are here” marker shows
that we are currently on the snapshot.
Slide 9
Here we see that a second snapshot was taken without the memory state, so it does not record
whether the VM was on or off. If you revert to this snapshot while the VM is powered on, the
VM powers off.
Slide 10
You can perform the following actions from the Manage Snapshots window:
• Edit the snapshot: This allows you to edit the snapshot name and description.
• Delete the snapshot: This removes the snapshot from the Snapshot Manager and
consolidates the snapshot files: the snapshot’s delta disk is committed to the parent
snapshot disk or merged with the VM base disk.
• Delete all snapshots: This commits all the intermediate snapshots before the current-
state icon (You are here) to the VM and removes all snapshots for that VM.
• And revert to a snapshot: This restores, or reverts to, a particular snapshot. The snapshot
that you restore becomes the current snapshot.
When you revert to a snapshot, you return the VM’s memory, settings, and disk state to the
state that they were in at the time that you took the snapshot. If you want the VM to be
suspended, powered on, or powered off when you start it, ensure that the VM is in the correct
state when you take the snapshot.
Deleting a snapshot (using DELETE or DELETE ALL) consolidates the changes between
snapshots and previous disk states. Deleting a snapshot also writes to the parent disk all data
from the delta disk that contains the information about the deleted snapshot. When you delete
the base parent snapshot, all changes merge with the base VMDK.
Slide 11
To play the animation, go to this address:
https://vmware.bravais.com/s/WhbcXR4sSwk2Vl7MeaXD.
If you delete a snapshot one or more levels above the You are here level, the snapshot state is
deleted.
In this example, the snap01 data is committed into the parent (base disk), and the foundation for
snap02 is retained.
Slide 12
To play the animation, go to this address:
https://vmware.bravais.com/s/l0JYYQzMTv7pvxBqNcQp.
If you delete the latest snapshot, the changes are committed to its parent.
The snap02 data is committed into snap01 data, and the snap02 -delta.vmdk file is deleted.
Slide 13
To play the animation, go to this address:
https://vmware.bravais.com/s/NiQxPT3iycemQ8WYXKom.
If you delete a snapshot one or more levels below the You are here level, subsequent snapshots
are deleted, and you can no longer return to those states. The snap02 data is deleted.
Slide 14
To play the animation, go to this address:
https://vmware.bravais.com/s/L3ilQHlrywEhIgr5p7RP.
The delete-all-snapshots mechanism uses storage space efficiently. The size of the base disk
does not increase. Snap01 is committed to the base disk before snap02 is committed.
All snapshots before the You are here point are committed all the way up to the base disk. All
snapshots after You are here are discarded. Like a single snapshot deletion, changed blocks in
the snapshot overwrite their counterparts in the base disk.
Slide 15
Snapshot consolidation is a way to clean unneeded delta disk files from a datastore. If no
snapshots are registered for a VM, but delta disk files exist, snapshot consolidation commits the
chain of the delta disk files and removes them.
If consolidation is not performed, the delta disk files might expand until they consume all the
remaining space on the VM’s datastore, or until a delta disk reaches its configured maximum
size. A delta disk cannot be larger than the size configured for the base disk.
Slide 16
With snapshot consolidation, vCenter Server displays a warning when the descriptor and the
snapshot files do not match. After the warning is displayed, you can use the vSphere Client to
commit the snapshots.
Slide 17
After the snapshot consolidation warning appears, you can use the vSphere Client to
consolidate the snapshots.
All snapshot delta disks are committed to the base disks.
For a list of best practices for using snapshots in a vSphere environment, see VMware
knowledge base article 1025279 at kb.vmware.com/kb/1025279.
Slide 18
You should now be able to meet the following objectives:
• Take a snapshot of a virtual machine
• Manage multiple snapshots
• Delete virtual machine snapshots
• And Consolidate snapshots.
This is the end of the Lesson 7 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 8: vSphere Replication and Backup!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Identify the components in the vSphere Replication architecture
• Deploy and configure vSphere Replication
• Recover replicated virtual machines
• Explain the backup and restore solution for VMs
• And Describe the benefits of vSphere Storage APIs - Data Protection.
Slide 3
vSphere Replication is an alternative to array-based replication. vSphere Replication protects
VMs from partial or complete site failures by replicating the VMs between the following sites:
• From a source site to a target site
• Within a single site from one cluster to another
• Or from multiple source sites to a shared remote target site.
vSphere Replication provides several benefits as compared to array-based replication:
• Data protection at lower cost per VM
• A replication solution that supports flexibility in storage vendor selection at the source
and target sites
• And Overall lower cost per replication.
Slide 4
The vSphere Replication appliance includes the following components:
• A vSphere Replication server that provides the core of the vSphere Replication
infrastructure
• An embedded database that stores replication configuration and management
information
• A vSphere Replication management server that performs the following functions:
o Configures the vSphere Replication server
o Enables, manages, and monitors replications
o And authenticates users and checks their permissions to perform vSphere
Replication operations.
• And A plug-in to the vSphere Client that provides a user interface for vSphere
Replication.
You can use vSphere Replication immediately after you deploy the appliance. The vSphere
Replication appliance provides the virtual appliance management interface (or VAMI) that is
used to reconfigure the appliance after deployment. For example, you can use the VAMI to
change the appliance security settings, change the network settings, or configure an external
database. You can deploy additional vSphere Replication servers by using a separate OVF
package.
Slide 5
You can replicate a VM between two sites. vSphere Replication is installed on both source and
target sites. Only one vSphere Replication appliance is deployed on each vCenter Server. The
vSphere Replication (or VR) appliance contains an embedded vSphere Replication server that
manages the replication process. To meet the load-balancing needs of your environment, you
might need to deploy additional vSphere Replication servers at each site.
When you configure a VM for replication, the vSphere Replication agent sends changed blocks
in the VM disks from the source site to the target site. The changed blocks are applied to the
copy of the VM. This process occurs independently of the storage layer. vSphere Replication
performs an initial full synchronization of the source VM and its replica copy. You can use
replication seeds to reduce the network traffic that is generated by data transfer during the
initial full synchronization.
Slide 6
You can deploy vSphere Replication with either an IPv4 or IPv6 address. Mixing IP addresses,
for example having a single appliance with an IPv4 and an IPv6 address, is not supported.
After you deploy the vSphere Replication appliance, you use the VAMI to register the endpoint
and the certificate of the vSphere Replication management server with the vCenter Lookup
Service. You also use the VAMI to register the vSphere Replication solution user with the
vCenter Single Sign-On administration server.
For more details on deploying the vSphere Replication appliance, see VMware vSphere
Replication Documentation at docs.vmware.com/en/vSphere-Replication/index.html.
Slide 7
vSphere Replication can protect individual VMs and their virtual disks by replicating them to
another location.
Slide 8
To configure vSphere Replication for a VM in the vSphere Client, right-click the VM in the
inventory and select All vSphere Replication Actions and then Configure.
The value that you set for the recovery point objective (or RPO) affects replication scheduling.
When you configure replication, you set an RPO to determine the time between replications.
For example, an RPO of 1 hour aims to ensure that a VM loses no more than 1 hour of data
during the recovery. For smaller RPOs, less data is lost in a recovery, but more network
bandwidth is consumed to keep the replica up to date.
For a discussion about how the RPO affects replication scheduling, see vSphere Replication
Administration at this address: https://docs.vmware.com/en/vSphere-
Replication/8.3/com.vmware.vsphere.replication-admin.doc/GUID-35C0A355-C57B-430B-
876E-9D2E6BE4DDBA.html.
Slide 9
With vSphere Replication, you can recover VMs that were successfully replicated at the target
site.
You can recover one VM at a time on the Incoming Replications tab.
To perform the recovery, you use the Recover virtual machine wizard in the vSphere Client at
the target site.
You are asked to select either to recover the VM with all the latest data or to recover the VM
with the most recent data available on the target site:
• If you select Recover with recent changes to avoid data loss, vSphere Replication
performs a full synchronization of the VM from the source site to the target site before
recovering the VM. This option requires that the data of the source VM be accessible.
You can select this option only if the VM is powered off.
• If you select Recover with latest available data, vSphere Replication recovers the VM
by using the data from the most recent replication on the target site, without performing
synchronization. Selecting this option results in the loss of any data that changed since
the most recent replication. Select this option if the source VM is inaccessible or if its
disks are corrupted.
vSphere Replication validates the input that you provide and recovers the VM. If successful, the
VM status changes to Recovered. The VM appears in the inventory of the target site.
Slide 10
vSphere Storage APIs – Data Protection is VMware’s data protection framework, which was
introduced in vSphere 4.0. A backup product that uses this API can back up VMs from a central
backup system (physical or virtual). The backup does not require backup agents or any backup
processing to be done inside the guest operating system.
Backup processing is offloaded from the ESXi host. In addition, vSphere snapshot capabilities
are used to support backups across the SAN without requiring downtime for VMs. As a result,
backups can be performed nondisruptively at any time of the day without requiring extended
backup windows.
For frequently asked questions about vSphere Storage APIs - Data Protection, see VMware
knowledge base article 1021175 at kb.vmware.com/s/article/1021175.
Slide 11
One of the biggest bottlenecks that limits backup performance is the backup server that is
handling all the backup coordination tasks. One of these backup tasks is copying data from
point A to point B. Other backup tasks require significant CPU processing: for example,
determining what data to back up and what not to back up, deduplicating data, and
compressing data that is written to the target.
A server with insufficient CPU resources can greatly reduce backup performance. So you
should provide enough resources for your backup server. A physical server or VM with an
ample amount of memory and CPU capacity is necessary for the best backup performance
possible. The motivation to use LAN-free backups is to reduce the stress on the physical
resources of the ESXi host when VMs are backed up. LAN-free backups reduce the stress by
offloading backup processing from the ESXi host to a backup proxy server.
You can configure your environment for LAN-free backups to the backup server, also called
the backup proxy server. For LAN-free backups, the backup server must be able to access the
storage managed by the ESXi hosts on which the VMs for backup are running. If you use NAS
or direct-attached storage, ensure that the backup proxy server accesses the volumes with a
network-based transport. If you run a direct SAN backup, zone the SAN and configure the disk
subsystem host mappings. The host mappings must be configured so that all ESXi hosts and the
backup proxy server access the same disk volumes.
Slide 12
Changed-block tracking (or CBT) is a VMkernel feature that tracks the storage blocks of VMs
as they change over time. The VMkernel tracks block changes on VMs, enhancing the backup
process for applications that are developed to exploit vSphere Storage APIs - Data Protection.
By using CBT during restores, backup solutions can offer fast and efficient recoveries of
VMs to their original location. During a restore process, the backup solution uses CBT to
determine which blocks changed since the last backup. The use of CBT reduces data transfer
within the vSphere environment during a recovery operation and, more important, reduces the
recovery time.
Slide 13
You should now be able to meet the following objectives:
• Identify the components in the vSphere Replication architecture
• Deploy and configure vSphere Replication
• Recover replicated virtual machines
• Explain the backup and restore solution for VMs
• And Describe the benefits of vSphere Storage APIs - Data Protection.
Slide 14
As a Virtual Beans administrator, you work with your team to consider which vSphere features
to use for key VM management processes. Take a moment to think of one or more suggestions
for each process. We will provide some answers in the next slides.
Slide 15
For the provisioning and deploying of VMs you should use VM templates and manage all
templates with the content library.
When maintaining VMs (patching and upgrading operating systems and applications), you
should take a snapshot of the VM before applying any patches or updates.
Slide 16
When backing up VMs you should use a vSphere Storage APIs - Data Protection solution.
For disaster recovery and business continuity you should use vSphere Replication and then use
the various types of vSphere vMotion migrations to move VMs between hosts, between
vCenter Server instances, and even between data centers.
Slide 17
Some key points of Module 7 are:
• vCenter Server provides features for provisioning virtual machines, such as templates,
cloning, and content libraries.
• By deploying VMs from a template, you can create many VMs easily and quickly.
• You can dynamically manage a VM's configuration by adding hot-pluggable devices
and increasing the size of a VM's virtual disk.
• Hot migrations use vSphere vMotion, vSphere Storage vMotion, or both.
• You can use VM snapshots to preserve the state of the VM so that you can return
repeatedly to the same state.
• You can use vSphere Replication to protect VMs as part of a disaster recovery strategy.
• And Backup products that use vSphere Storage APIs - Data Protection can be used to
back up VM data.
Slide 18
This is the end of Module 7 and the Lesson 8 Lecture. The Labs and Assignments associated
with this Module are as follows:
• Lab 14: Using VM Templates: Creating Templates and Deploying VMs
• Lab 15: Using Content Libraries
• Lab 16: Modifying Virtual Machines
• Lab 17: vSphere vMotion Migrations
• Lab 18: Working with and Managing Snapshots
• And the Module 7 Quiz: Virtual Machine Management
If you have any questions, please contact your Instructor. We will see you in the next Module
and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 1: Virtual CPU and Memory Concepts!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe CPU and memory concepts in relation to a virtualized environment
• Recognize techniques for addressing memory resource overcommitment
• Identify additional technologies that improve memory usage
• Describe how VMware Virtual SMP works
• And Explain how the VMkernel uses hyperthreading.
Slide 3
vSphere has the following layers of memory:
• Guest OS virtual memory is presented to applications by the operating system.
• Guest OS physical memory is presented to the virtual machine by the VMkernel.
• And Host machine memory that is managed by the VMkernel provides a contiguous,
addressable memory space that is used by the VM.
When running a virtual machine, the VMkernel creates a contiguous addressable memory space
for the VM. This memory space has the same properties as the virtual memory address space
presented to applications by the guest operating system. This memory space allows the
VMkernel to run multiple VMs simultaneously while protecting the memory of each VM from
being accessed by others. From the perspective of an application running in the VM, the
VMkernel adds an extra level of address translation that maps the guest physical address to the
host physical address.
Slide 4
The total configured memory sizes of all VMs might exceed the amount of available physical
memory on the host. However, this condition does not necessarily mean that memory is
overcommitted. Memory is overcommitted when the combined working memory size of all
VMs exceeds the ESXi host’s physical memory size.
Because of the memory management techniques used by the ESXi host, your VMs can use
more virtual RAM than the available physical RAM on the host. For example, you can have a
host with 32 GB of memory and run four VMs with 10 GB of memory each. In that case, the
memory is overcommitted. If all four VMs are idle, the combined consumed memory is below
32 GB. However, if all VMs are actively consuming memory, then their memory footprint
might exceed 32 GB and the ESXi host becomes overcommitted. An ESXi host can run out of
memory if VMs consume all reservable memory in an overcommitted-memory environment.
Although the powered-on VMs are not affected, a new VM might fail to power on because of
lack of memory.
Overcommitment makes sense because, typically, some VMs are lightly loaded whereas others
are more heavily loaded, and relative activity levels vary over time.
A VM’s swapped memory is gathered into a swap file with the .vswp extension. In addition,
the host uses a per-VM vmx-*.vswp swap file to gather and track memory overhead. Memory
from these files is swapped out to disk when host machine memory is overcommitted.
Slide 5
The VMkernel uses various techniques to dynamically reduce the amount of physical RAM that
is required for each VM. Each technique is described in the order that the VMkernel uses it:
• Page sharing: ESXi can use a proprietary technique to transparently share memory
pages between VMs, eliminating redundant copies of memory pages. Although pages
are shared by default within VMs, as of vSphere 6.0, pages are no longer shared by
default among VMs.
• Ballooning: If the host memory begins to get low and the VM's memory use approaches
its memory target, ESXi uses ballooning to reduce that VM's memory demands. Using
the VMware-supplied vmmemctl module installed in the guest operating system as part
of VMware Tools, ESXi can cause the guest operating system to relinquish the memory
pages it considers least valuable. Ballooning provides performance closely matching
that of a native system under similar memory constraints. To use ballooning, the guest
operating system must be configured with sufficient swap space.
• Memory compression: If the VM's memory use approaches the level at which host-level
swapping is required, ESXi uses memory compression to reduce the number of memory
pages that it must swap out. Because the decompression latency is much smaller than
the swap-in latency, compressing memory pages has significantly less impact on
performance than swapping out those pages.
• Swap to host cache: Host swap cache is an optional memory reclamation technique that
uses local flash storage to cache a virtual machine’s memory pages. When memory
pressure is severe and the hypervisor must swap memory pages to disk, it swaps to the
host cache rather than to the virtual swap (.vswp) file. By using local flash storage, the
virtual machine avoids the latency associated with a storage network. Each host must
have its own host swap cache configured.
• And regular host-level swapping: When a host runs out of space on the host cache, or
when no host cache is configured, a virtual machine’s cached memory is migrated to its
regular .vswp file on the datastore.
Slide 6
You can configure a VM with up to 256 virtual CPUs (or vCPUs). The VMkernel includes a
CPU scheduler that dynamically schedules vCPUs on the physical CPUs of the host system.
The VMkernel scheduler considers socket-core-thread topology when making scheduling
decisions. Intel and AMD processors combine multiple processor cores into a single integrated
circuit, called a socket in this discussion.
A socket is a single package with one or more processor cores. Each core has one or more
logical CPUs (LCPUs in the diagram), or threads, and each logical CPU can schedule one
thread of execution. On the slide, the first system is a single-core, dual-socket system with two
cores and, therefore, two logical CPUs.
When a vCPU of a single-vCPU or multi-vCPU VM must be scheduled, the VMkernel maps
the vCPU to an available logical processor.
In addition to the physical host configuration, the number of vCPUs configured for a VM also
depends on the guest operating system, the applications, and the specific use case for the VM
itself.
Slide 7
If hyperthreading is enabled, ESXi can schedule two threads at the same time on each processor
core (the physical CPU). Hyperthreading provides more scheduler throughput. That is,
hyperthreading provides more logical CPUs on which vCPUs can be scheduled.
The drawback of hyperthreading is that it does not double the power of a core. So, if both
threads of execution need the same on-chip resources at the same time, one thread has to wait.
Still, on systems that use hyperthreading technology, performance is improved.
An ESXi host that is enabled for hyperthreading should behave almost exactly like a standard
system. Logical processors on the same core have adjacent CPU numbers. Logical processors 0
and 1 are on the first core, logical processors 2 and 3 are on the second core, and so on. Consult
the host system hardware documentation to verify whether the BIOS includes support for
hyperthreading. Then, enable hyperthreading in the system BIOS. Some manufacturers call this
option Logical Processor and others call it Enable Hyperthreading.
Use the vSphere Client to ensure that hyperthreading for your host is turned on. To access the
hyperthreading option, go to the host’s Summary tab and select CPUs under Hardware.
Slide 8
The CPU scheduler can use each logical processor independently to execute VMs, providing
capabilities that are similar to traditional symmetric multiprocessing (or SMP) systems. The
VMkernel intelligently manages processor time to guarantee that the load is spread smoothly
across processor cores in the system. Every 2 milliseconds to 40 milliseconds (depending on the
socket-core-thread topology), the VMkernel seeks to migrate vCPUs from one logical processor
to another to keep the load balanced.
The VMkernel does its best to schedule VMs with multiple vCPUs on two different cores rather
than on two logical processors on the same core. But, if necessary, the VMkernel can map two
vCPUs from the same VM to threads on the same core.
If a logical processor has no work, it is put into a halted state. This action frees its execution
resources, and the VM running on the other logical processor on the same core can use the full
execution resources of the core. Because the VMkernel scheduler accounts for this halt time, a
VM running with the full resources of a core is charged more than a VM running on a half core.
This approach to processor management ensures that the server does not violate the ESXi
resource allocation rules.
Slide 9
You should now be able to meet the following objectives:
• Describe CPU and memory concepts in relation to a virtualized environment
• Recognize techniques for addressing memory resource overcommitment
• Identify additional technologies that improve memory usage
• Describe how VMware Virtual SMP works
• And Explain how the VMkernel uses hyperthreading.
This is the end of the Lesson 1 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 2: Resource Controls!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Assign share values for CPU and memory resources
• Describe how virtual machines compete for resources
• And Define CPU and memory reservations and limits.
Slide 3
Beyond the CPU and memory configured for a VM, you can apply resource allocation settings
to a VM to control the amount of resources granted:
• A reservation specifies the guaranteed minimum allocation for a VM.
• A limit specifies an upper bound for CPU or memory that can be allocated to a VM.
• A share is a value that specifies the relative priority or importance of a VM's access to a
given resource.
Because VMs simultaneously use the resources of an ESXi host, resource contention can occur.
To manage resources efficiently, vSphere provides mechanisms to allow less, more, or an equal
amount of access to a defined resource. vSphere also prevents a VM from consuming large
amounts of a resource. vSphere grants a guaranteed amount of a resource to a VM whose
performance is not adequate or that requires a certain amount of a resource to run properly.
When host memory or CPU is overcommitted, a VM’s allocation target is somewhere between
its specified reservation and specified limit, depending on the VM’s shares and the system load.
vSphere uses a share-based allocation algorithm to achieve efficient resource use for all VMs
and to guarantee a given resource to the VMs that need it most.
Slide 4
RAM reservations:
• Memory reserved for a VM is guaranteed never to be swapped or ballooned.
• If an ESXi host does not have enough unreserved RAM to support a VM with a
reservation, the VM does not power on.
• Reservations are measured in MB, GB, or TB.
• The default is 0 MB.
• Adding a vSphere DirectPath I/O device to a VM sets memory reservation to the
memory size of the VM.
When configuring a memory reservation for a VM, you can set the reservation equal to the
VM's configured memory size to reserve all of the VM's memory. For example, if a VM is configured with
4 GB of memory, you can set a memory reservation of 4 GB for the VM. You might configure
such a memory reservation for a critical VM that must maintain a high level of performance.
Alternatively, you can select the Reserve All Guest Memory (All locked) check box. Selecting
this check box ensures that all of the VM's memory gets reserved even if you change the total
amount of memory for the VM. The memory reservation is immediately readjusted when the
VM's memory configuration changes.
Slide 5
CPU reservations:
• CPU that is reserved for a VM is guaranteed to be immediately scheduled on physical
cores. The VM is never placed in a CPU ready state.
• If an ESXi host does not have enough unreserved CPU to support a VM with a
reservation, the VM does not power on.
• Reservations are measured in MHz or GHz.
• The default is 0 MHz.
Slide 6
RAM limits:
• VMs never consume more physical RAM than is specified by the memory allocation
limit.
• VMs might use the VM swap mechanism (.vswp) if the guest OS attempts to consume
more RAM than is specified by the limit.
CPU limits:
• VMs never consume more physical CPU than is specified by the CPU allocation limit.
• CPU threads are placed in a ready state if the guest OS attempts to schedule threads
faster than the limit allows. Usually, specifying a limit is not necessary, but specifying
limits has the following benefits and drawbacks:
o Benefits: Assigning a limit is useful if you start with a few VMs and want to
manage user expectations. The performance deteriorates as you add more VMs.
You can simulate having fewer resources available by specifying a limit.
o Drawbacks: You might waste idle resources if you specify a limit. The system
does not allow VMs to use more resources than the limit, even when the system
is underused and idle resources are available. Specify the limit only if you have
good reasons for doing so.
Slide 7
Shares define the relative importance of a VM:
• If a VM has twice as many shares of a resource as another VM, the VM is entitled to
consume twice as much of that resource when these two VMs compete for resources.
• Share values apply only if an ESXi host experiences contention for a resource. High,
normal, and low settings represent share values with a 4:2:1 ratio, respectively. A
custom value of shares assigns a specific number of shares (which expresses a
proportional weight) to each VM.
Slide 8
VMs are resource consumers. The default resource settings that you assign during VM creation
work well for most VMs.
The proportional share mechanism applies to CPU, memory, storage I/O, and network I/O
allocation. The mechanism operates only when VMs contend for the same resource.
Slide 9
You can add shares to a VM while it is running, and the VM gets more access to that resource
(assuming competition for the resource). When you add a VM, it gets shares too. The VM’s
share amount factors into the total number of shares, but existing VMs are guaranteed not to be
starved for the resource.
Slide 10
Shares guarantee that a VM is given a certain amount of a resource (CPU, RAM, storage I/O, or
network I/O). For example, consider the third row of VMs on the slide:
• VM D is powered on with 1,000 shares.
• Before VM D was powered on, a total of 5,000 shares were available, but VM D’s
addition increases the total shares to 6,000.
• The result is that the other VMs' shares decline in value. But each VM’s share value still
represents a minimum guarantee. VM A is still guaranteed one-sixth of the resource
because it owns one-sixth of the shares.
Slide 11
When you delete or power off a VM, fewer total shares remain, so the surviving VMs get more
access.
Slide 12
You can edit a VM's settings to configure CPU and memory resource allocations.
Slide 13
You can view reservations, limits, and shares settings for all VMs in a cluster.
Slide 14
You should now be able to meet the following objectives:
• Assign share values for CPU and memory resources
• Describe how virtual machines compete for resources
• And Define CPU and memory reservations and limits.
This is the end of the Lesson 2 Lecture. If you have any questions, please contact your
Instructor. We will see you in Lesson 3 and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 3: Resource Monitoring Tools!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe the performance-tuning methodology
• Identify resource-monitoring tools
• And Use vCenter Server performance charts to view performance.
Slide 3
The best practice for performance tuning is to take a logical step-by-step approach:
• For a complete view of the performance situation of a VM, use monitoring tools in the
guest operating system and in vCenter Server.
• Identify the resource that the VM relies on the most. This resource is most likely to
affect the VM’s performance if the VM is constrained by it.
• Give a VM more resources or decrease the resources of other VMs.
• After making more of the limiting resource available to the VM, take another
benchmark and record changes.
Be cautious when making changes to production systems because a change might negatively
affect the performance of the VMs.
Slide 4
Tools that run in the guest operating system are available from sources external to VMware,
and VMware provides additional tools for monitoring vSphere from outside the guest OS. A
partial list of these resource-monitoring tools is shown on the slide.
Slide 5
To monitor performance in the guest operating system, use tools that you are familiar with,
such as Windows Task Manager.
Windows Task Manager helps you measure CPU and memory use in the guest operating
system.
The measurements that you take with tools in the guest operating system reflect resource usage
of the guest operating system, not necessarily of the VM itself.
Slide 6
VMware Tools includes a library of functions called the Perfmon DLL. With Perfmon, you can
access key host statistics in a guest VM. Using the Perfmon performance objects (VM
Processor and VM Memory), you can view actual CPU and memory usage and observed CPU
and memory usage of the guest operating system.
For example, you can use the VM Processor object to view the % Processor Time counter,
which monitors the VM’s current virtual processor load. Likewise, you can use the Processor
object and view the % Processor Time counter (not shown), which monitors the total use of the
processor by all running processes.
Slide 7
The esxtop utility is the primary real-time performance monitoring tool for vSphere:
• It can be run from the host’s local vSphere ESXi Shell as esxtop,
• It can be run remotely from vSphere CLI as resxtop,
• And it works like the top performance utility in Linux operating systems. In this
example, you enter lowercase c and uppercase V to view CPU metrics for VMs.
You can run the esxtop utility by using vSphere ESXi Shell to communicate with the
management interface of the ESXi host. You must have root user privileges.
Slide 8
Data on a wide range of metrics is collected at frequent intervals, processed, and archived in the
vCenter Server database. You can access statistical information through command-line
monitoring utilities or by viewing performance charts in the vSphere Client.
Slide 9
You can access overview and advanced performance charts in the vSphere Client.
Overview performance charts show the performance statistics that VMware considers most
useful for monitoring performance and diagnosing problems.
Depending on the object that you select in the inventory, the performance charts provide a
quick visual representation of how your host or VM is performing.
Slide 10
In the vSphere Client, you can customize the appearance of advanced performance charts.
Advanced charts have the following features:
• More information than overview charts: Point to a data point in a chart to display details
about that specific data point.
• Customizable charts: You can change chart settings. Save custom settings to create your
own charts.
• And you can save data to an image file or a spreadsheet.
To customize advanced performance charts, select Advanced under Performance. Click the
Chart Options link in the Advanced Performance pane.
Slide 11
Real-time information is information that is generated for the past hour at 20-second intervals.
Historical information is generated for the past day, week, month, or year, at varying
specificities.
By default, vCenter Server has four archiving intervals: day, week, month, and year. Each
interval specifies a length of time that statistics are archived in the vCenter Server database.
You can configure which intervals are used and for what period of time.
You can also configure the number of data counters that are used during a collection interval by
setting the collection level.
Together, the collection interval and the collection level determine how much statistical data is
collected and stored in your vCenter Server database.
For example, using the table, past-day statistics show one data point every 5 minutes, for a total
of 288 samples. Past-year statistics show 1 data point per day, for 365 samples.
Real-time statistics are not stored in the database. They are stored in a flat file on ESXi hosts
and in memory on vCenter Server instances. ESXi hosts collect real-time statistics only for the
host or the VMs that are available on the host. Real-time statistics are collected directly on an
ESXi host every 20 seconds.
If you query for real-time statistics, vCenter Server queries each host directly for the data.
vCenter Server does not process the data at this point. vCenter Server only passes the data to
the vSphere Client.
On ESXi hosts, the statistics are kept for 30 minutes, by which time 90 data points have been collected.
The data points are aggregated, processed, and returned to vCenter Server. vCenter Server then
archives the data in the database as a data point for the day collection interval.
To ensure that performance is not impaired when collecting and writing the data to the
database, cyclical queries are used to collect data counter statistics. The queries occur for a
specified collection interval. At the end of each interval, the data calculation occurs.
Slide 12
Depending on the metric type and object, performance metrics are displayed in different types
of charts, such as bar charts and pie charts. Bar charts display storage metrics for datastores in a
selected data center. Each datastore is represented as a bar in the chart. Each bar displays
metrics based on the file type: virtual disks, other VM files, snapshots, swap files, and other
files.
Pie charts display storage metrics for a single object, based on the file types or VMs. For
example, a pie chart for a datastore can display the amount of storage space occupied by the
VMs that take up the largest space.
Slide 13
In a line chart, the data for each performance counter is plotted on a separate line in the chart.
For example, a CPU chart for a host can contain a line for each of the host's CPUs. Each line
plots the CPU's usage over time.
Slide 14
Stacked charts display metrics for the child objects that have the highest statistical values. All
other objects are aggregated, and the sum value is displayed with the term Other. For example,
a host’s stacked CPU usage chart displays CPU usage metrics for the five VMs on the host that
are consuming the most CPU resources. The Other amount contains the total CPU usage of the
remaining VMs. The metrics for the host itself are displayed in separate line charts. By default,
the 10 child objects with the highest data counter values appear.
Slide 15
Per-VM stacked graphs are available only for hosts.
Slide 16
In the vSphere Client, you can save data from the advanced performance charts to a file in
various graphics formats or in Microsoft Excel format. When you save a chart, you select the
file type and save the chart to the location of your choice.
Slide 17
Performance charts graphically display CPU, memory, disk, network, and storage metrics for
devices and entities managed by vCenter Server.
In vCenter Server, you can determine how much or how little information about a specific
device type is displayed. You can control the amount of information a chart displays by
selecting one or more objects and counters.
An object refers to an instance for which a statistic is collected. For example, you might collect
statistics for an individual CPU, all CPUs, a host, or a specific network device.
A counter represents the actual statistic that you are collecting. An example is the amount of
CPU used or the number of network packets per second for a given device.
Slide 18
The statistics type refers to the measurement that is used during the statistics interval and is
related to the unit of measurement.
The statistics type is one of the following:
• Rate: The value over the current statistics interval
• Delta: The change from the previous statistics interval
• And Absolute: The absolute value, independent of the statistics interval.
For example, CPU usage is a rate, CPU ready time is a delta, and memory active is an absolute
value.
Slide 19
Data is displayed at different specificities according to the historical interval. Past-hour
statistics are shown at a 20-second specificity, and past-day statistics are shown at a 5-minute
specificity. The averaging that is done to convert from one time interval to another is called
rollup.
Different rollup types are available. The rollup type determines the type of statistical values
returned for the counter:
• Average: The data collected during the interval is aggregated and averaged.
• Minimum: The minimum value is rolled up.
• And Maximum: The maximum value is rolled up.
The minimum and maximum values are collected and displayed only in collection level 4.
Minimum and maximum rollup types are used to capture peaks in data during the interval. For
real-time data, the value is the current minimum or current maximum. For historical data, the
value is the average minimum or average maximum.
For example, the following information for the CPU usage chart shows that the average is
collected at collection level 1 and that the minimum and maximum values are collected at
collection level 4:
• Counter: usage
• Unit: percentage (%)
• Rollup Type: Average (Minimum/Maximum)
• Collection Level: 1 (4)
Other rollup types include summation and latest:
• Summation: The collected data is summed. The measurement displayed in the
performance chart represents the sum of data collected during the interval.
• Latest: The data that is collected during the interval is a set value. The value displayed
in the performance chart represents the current value. For example, if you look at the
CPU Used counter in a CPU performance chart, the rollup type is summation. So, for a
given 5-minute interval, the sum of all the 20-second samples in that interval is
represented.
Slide 20
You should now be able to meet the following objectives:
• Describe the performance-tuning methodology
• Identify resource-monitoring tools
• And Use vCenter Server performance charts to view performance.
This is the end of the Lesson 3 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 4: Monitoring Resource Use!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Monitor the key factors that can affect a virtual machine's performance
• And Use performance charts to view and improve performance.
Slide 3
vCenter Server monitoring tools and guest OS monitoring tools provide different points of
view.
The key to interpreting performance data is to observe the range of data from the perspective of
the guest operating system, the VM, and the host.
The CPU usage statistics in Task Manager, for example, do not give you the complete picture.
View CPU usage for the VM and the host on which the VM is located.
Use the performance charts in the vSphere Client to view this data.
Slide 4
If CPU use is high, check the VM's CPU usage statistics. Use either the overview charts or the
advanced charts to view CPU usage. The slide displays an advanced chart tracking a VM’s
CPU usage.
If a VM’s CPU use remains high over a period of time, the VM is constrained by CPU. Other
VMs on the host might have enough CPU resources to satisfy their needs.
If more than one VM is constrained by CPU, the key indicator is CPU ready time. Ready time
refers to the interval when a VM is ready to execute instructions but cannot because it cannot
get scheduled onto a CPU. Several factors affect the amount of ready time:
• Overall CPU use: You are more likely to see ready time when use is high because the
CPU is more likely to be busy when another VM becomes ready to run.
• Number of resource consumers (in this case, guest operating systems): When a host is
running a larger number of VMs, the scheduler is more likely to queue a VM behind
VMs that are already running or queued.
A good ready time value varies from workload to workload. To find a good ready time value
for your workload, collect ready time data over time for each VM. When you have this ready
time data for each VM, estimate how much of the observed response time is ready time. If the
shortfalls in meeting response-time targets for the applications appear largely because of the
ready time, take steps to address the excessive ready time.
Slide 5
To determine whether a VM is being constrained by CPU resources, view CPU usage in the
guest operating system using, for example, Task Manager.
If more than one VM is constrained by CPU, the key indicator is CPU readiness. CPU readiness
is the percent of time that the VM cannot run because it is contending for access to the physical
CPUs.
You are more likely to see readiness values when use is high because the CPU is more likely to
be busy when another VM becomes ready to run. You are also more likely to see readiness
values when a host is running many VMs. In this case, the scheduler is more likely to queue a
VM behind VMs that are already running or queued.
A good readiness value varies from workload to workload.
Slide 6
Compare a VM's consumed and granted memory values to determine whether the VM is
memory-constrained.
Slide 7
If a VM consumes its entire memory allocation, the VM might be memory-constrained, and
you should consider increasing the VM’s memory size.
Slide 8
You might see VMs with high ballooning activity and VMs being swapped in and out by the
VMkernel. This serious situation indicates that the host memory is overcommitted and that
more physical memory is needed.
Slide 9
Disk performance problems are commonly caused by saturating the underlying physical storage
hardware. You can use the vCenter Server advanced performance charts to measure storage
performance at different levels. These charts provide insight into a VM's performance. You can
monitor everything from the VM's datastore to a specific storage path.
If you select a host object, you can view throughput and latency for a datastore, a storage
adapter, or a storage path. The storage adapter charts are available only for Fibre Channel
storage. The storage path charts are available for Fibre Channel and iSCSI storage, not for NFS.
If you select a VM object, you can view throughput and latency for the VM’s datastore or
specific virtual disk.
To monitor throughput, view the Read rate and Write rate counters. To monitor latency, view
the Read latency and Write latency counters.
Slide 10
To determine whether your vSphere environment is experiencing disk problems, monitor the
disk latency data counters. Use the advanced performance charts to view these statistics. In
particular, monitor the following counters:
• Kernel command latency: This data counter measures the average amount of time, in
milliseconds, that the VMkernel spends processing each SCSI command. For best
performance, the value should be 0 through 1 millisecond. If the value is greater than 4
milliseconds, the VMs on the ESXi host are trying to send more throughput to the
storage system than the configuration supports.
• And physical device command latency: This data counter measures the average amount
of time, in milliseconds, for the physical device to complete a SCSI command.
Slide 11
Like disk performance problems, network performance problems are commonly caused by
saturating a network link between client and server. Use a tool such as Iometer, or a large file
transfer, to measure the effective bandwidth.
Network performance depends on application workload and network configuration. Dropped
network packets indicate a bottleneck in the network. To determine whether packets are being
dropped, use the advanced performance charts to examine the droppedTx and droppedRx
network counter values of a VM.
In general, the larger the network packets, the higher the network throughput. When the packet
size is large, fewer packets are transferred, which reduces the amount of CPU that is required to
process the data. In some instances, large packets can result in high network latency.
When network packets are small, more packets are transferred, but throughput is lower
because more CPU is required to process the data.
Slide 12
You should now be able to meet the following objectives:
• Monitor the key factors that can affect a virtual machine's performance,
• And Use performance charts to view and improve performance.
This is the end of the Lesson 4 Lecture. If you have any questions, please contact your
instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 5: Using Alarms!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Use predefined alarms in vCenter Server
• View and acknowledge alarms
• And Create custom alarms.
Slide 3
An alarm is a notification that is sent in response to an event or condition that occurs with an
object in the inventory.
You can acknowledge an alarm to let other users know that you take ownership of the issue.
For example, a VM has an alarm set to monitor CPU use. The alarm is configured to send an
email to an administrator when the alarm is triggered. The VM CPU use spikes, triggering the
alarm, which sends an email to the administrator. The administrator acknowledges the triggered
alarm to let other administrators know the problem is being addressed.
After you acknowledge an alarm, the alarm actions are discontinued, but the alarm does not get
cleared or reset when acknowledged. You reset the alarm manually in the vSphere Client to
return the alarm to a normal state.
Slide 4
You can access many predefined alarms for various inventory objects, such as hosts, virtual
machines, datastores, networks, and so on.
Slide 5
You can edit predefined alarms, or you can make a copy of an existing alarm and modify the
settings as needed.
To make a copy of an alarm, select the alarm and click ADD.
Slide 6
If the predefined alarms do not address the event, state, or condition that you want to monitor,
define custom alarm definitions instead of modifying predefined alarms.
Slide 7
On the Name and Targets page, you can name the alarm, give it a description, and select the
type of inventory object that this alarm monitors.
You can create custom alarms for the following target types:
• Virtual machines
• Hosts, clusters, and data centers
• Datastores and datastore clusters
• Distributed switches and distributed port groups
• And vCenter Server.
Slide 8
An alarm rule must contain at least one trigger.
A trigger can monitor the current condition or state of an object, for example
• if a VM’s current snapshot is more than 2 GB,
• if a host is using 90 percent of its total memory,
• or if a datastore is disconnected from all hosts.
A trigger can monitor events that occur in response to operations occurring on a managed
object, for example
• if the health of a host’s hardware changes,
• if a license expires in the data center,
• or if a host leaves the distributed switch.
You configure the alarm trigger to show as a warning or critical event when the specified
criteria are met:
• You can monitor the current condition or state of virtual machines, hosts, and
datastores. Conditions or states include power states, connection states, and performance
metrics such as CPU and disk use.
• You can monitor events that occur in response to operations occurring with a managed
object in the inventory or vCenter Server itself. For example, an event is recorded each
time a VM (which is a managed object) is cloned, created, deleted, deployed, and
migrated.
Slide 9
You select and configure the events, states, or conditions that trigger the alarm.
You must create a separate alarm definition for each trigger. The OR operator is not supported
in the vSphere Client. However, you can combine more than one condition trigger with the
AND operator.
Slide 10
You configure the notification method to use when the alarm is triggered. The methods are
sending an email, sending an SNMP trap, or running a script.
Slide 11
You can select and configure the events, states, or conditions to reset the alarm to normal.
Sometimes, as in this example, you can access only one option to reset the alarm.
Slide 12
On the Review page, the new alarm definition is enabled by default.
Slide 13
If you use email or SNMP traps as the notification method, you must configure vCenter Server
to support these notification methods. To configure email, specify the mail server FQDN or IP
address and the email address of the sender account.
You can configure up to four receivers of SNMP traps. They must be configured in numerical
order, and each SNMP trap requires a corresponding host name, port, and community.
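These notification settings map to vCenter Server advanced settings, which can also be set
programmatically. A minimal pyvmomi sketch, assuming the connection from the earlier sketch and
hypothetical server names and values (verify the option keys and value types against your
vCenter Server version):

    from pyVmomi import vim

    # 'si' from the earlier connection sketch.
    si.content.setting.UpdateOptions(changedValue=[
        vim.option.OptionValue(key='mail.smtp.server', value='smtp.example.com'),
        vim.option.OptionValue(key='mail.smtp.port', value=25),
        vim.option.OptionValue(key='mail.sender', value='vcenter@example.com'),
        # First of up to four SNMP trap receivers.
        vim.option.OptionValue(key='snmp.receiver.1.name', value='nms.example.com'),
        vim.option.OptionValue(key='snmp.receiver.1.port', value=162),
        vim.option.OptionValue(key='snmp.receiver.1.community', value='public'),
    ])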
Slide 14
You should now be able to meet the following objectives:
• Use predefined alarms in vCenter Server
• View and acknowledge alarms
• And Create custom alarms.
Slide 15
Which tools can Virtual Beans use to meet its goals for managing and monitoring the vSphere
environment? You can take a moment to match each Virtual Beans requirement with the
appropriate vSphere feature. We will provide our answers in the next slide.
Slide 16
To increase compute resources for business-critical workloads, particularly during peak
months, you would use shares, limits, and reservations;
To provide proactive recommendations to help avoid problems before they occur you would
use VMware Skyline;
To create monthly reports, for management, that contain graphs of VM resource usage you
would use vCenter Server performance charts;
To be notified when ESXi hosts experience high CPU and memory usage you would use
Alarms.
Slide 17
Some key points from Lesson 8 are:
• An ESXi host uses memory overcommit techniques to allow the overcommitment of
memory while possibly avoiding the need to page memory out to disk.
• The VMkernel balances processor time to guarantee that the load is spread smoothly
across processor cores in the system.
• You can apply reservations, limits, and shares against a VM to control the amount of
CPU and memory resources granted.
• The key to interpreting performance data is to observe the range of data from the
perspective of the guest operating system, the virtual machine, and the host.
• And you use alarms to monitor the vCenter Server inventory objects and send
notifications when selected events or conditions occur.
Slide 18
This is the end of Module 8 and the Lesson 5 Lecture. The Labs and Assignments associated
with this Module are as follows:
• Lab 19: Controlling VM Resources,
• Lab 20: Monitoring Virtual Machine Performance,
• Lab 21: Using Alarms,
• And the Module 8 Quiz: Resource Management and Monitoring.
If you have any questions, please contact your Instructor. We will see you in the next Module
and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 1: vSphere Clusters Overview!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe the benefits of vSphere clusters
• Create a vSphere cluster
• And View information about a vSphere cluster.
Slide 3
A cluster is used in vSphere to share physical resources between a group of ESXi hosts.
vCenter Server manages cluster resources as a single pool of resources.
You can create one or more clusters based on the purpose each cluster must fulfill,
for example:
• Management
• Production
• or Compute
A cluster can contain up to 64 ESXi hosts.
Slide 4
You can enable the following services in a vSphere cluster:
• vSphere HA: for high availability
• vSphere DRS: for VM placement and load balancing
• and vSAN: for shared storage.
You can also manage host updates using images. With vSphere Lifecycle Manager, you can
update all hosts in the cluster collectively, using a specified ESXi image.
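As an illustration of creating such a cluster programmatically, here is a minimal pyvmomi
sketch that creates a cluster with vSphere DRS and vSphere HA enabled, assuming the connection
from the earlier sketch and a hypothetical data center name:

    from pyVmomi import vim

    content = si.RetrieveContent()       # 'si' from the earlier connection sketch
    datacenter = content.searchIndex.FindByInventoryPath('Datacenter')  # hypothetical

    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(enabled=True,
                                            defaultVmBehavior='fullyAutomated'),
        dasConfig=vim.cluster.DasConfigInfo(enabled=True))
    cluster = datacenter.hostFolder.CreateClusterEx(name='Production', spec=spec)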
Slide 5
The Cluster Quickstart workflow guides you through the deployment process for clusters. It
covers every aspect of the initial configuration, such as host, network, and vSphere settings.
With Cluster Quickstart, you can also add hosts to a cluster as part of the ongoing expansion
of clusters.
Cluster Quickstart reduces the time it takes to configure a cluster.
The workflow includes the following tasks:
• Setting up services such as vSphere HA and vSAN
• Verifying hardware and software compatibility
• Deploying vSphere Distributed Switches
• Configuring network settings for vSphere vMotion and vSAN
• Creating a vSAN stretched cluster or vSAN fault domains
• And Ensuring consistent NTP configuration across the cluster.
The Cluster quickstart page provides workflow cards for configuring your new cluster:
• Cluster basics: which lists the services that you have already enabled and provides an
option for editing the cluster's name.
• Add hosts: which adds ESXi hosts to the cluster. These hosts must already be present in
the inventory. After hosts are added, the workflow shows the total number of hosts that
are present in the cluster and provides health check validation for those hosts. At the
start, this workflow is empty.
• And Configure cluster: Informs you about what can be automatically configured,
provides details on configuration mismatch, and reports cluster health results through
the vSAN health service even after the cluster is configured.
For more information about creating clusters, see vCenter Server and Host Management at this
address: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-3B5AF2B1-C534-4426-B97A-D14019A8010F.html.
Slide 6
Alternatively, you can use the Configure tab to manually configure a cluster's settings.
Slide 7
To add a host to a cluster, drag the host onto the cluster object in the inventory.
Slide 8
For a quick view of your cluster configuration, the Summary tab provides general information
about a cluster's resources and its consumers.
Slide 9
You can view a report of total cluster CPU, memory, memory overhead, storage capacity, the
capacity reserved by VMs, and how much capacity remains available. vCenter Server uses
vSphere HA admission control to ensure that sufficient resources are available in a cluster to
provide failover protection and to ensure that VM resource reservations are respected.
Slide 10
You should now be able to meet the following objectives:
• Describe the benefits of vSphere clusters
• Create a vSphere cluster
• And View information about a vSphere cluster.
This is the end of the Lesson 1 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 2: vSphere DRS!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe the functions of a vSphere DRS cluster
• Explain how vSphere DRS determines VM placement on hosts in the cluster
• Enable vSphere DRS in a cluster
• And Monitor a vSphere DRS cluster.
Slide 3
vSphere DRS is a cluster feature that helps improve resource allocation across all hosts in a
cluster. It aggregates computing capacity across a collection of servers into logical resource
pools.
vSphere DRS is used in the following situations:
• Initial placement of a VM when it is powered on,
• Load balancing,
• And Migrating VMs when an ESXi host is placed in maintenance mode.
When you power on a VM in the cluster for the first time, vSphere DRS either places the VM
on a particular host or makes a recommendation. DRS attempts to improve resource use across
the cluster by performing automatic migrations of VMs (with vSphere vMotion) or by
providing a recommendation for VM migrations.
Before an ESXi host enters maintenance mode, VMs running on the host must be migrated to
another host (either manually or automatically by DRS) or shut down.
Slide 4
vSphere DRS is VM focused:
• While the VM is powered on, vSphere DRS operates on an individual VM basis by
ensuring that each VM's resource requirements are met.
• vSphere DRS calculates a score for each VM and gives recommendations (or migrates
VMs) for meeting each VM's resource requirements.
The DRS algorithm recommends where individual VMs should be moved for maximum
efficiency. If the cluster is in fully automated mode, DRS executes the recommendations and
migrates VMs to their optimal host based on the underlying calculations performed every
minute.
Slide 5
The VM DRS score is a metric that tracks a VM’s execution efficiency on a given host.
Execution efficiency is the frequency with which the VM is reported as having its resource
requirements met:
• Values closer to 0% indicate severe resource contention.
• While Values closer to 100% indicate mild to no resource contention.
A VM DRS score is computed from an individual VM's CPU, memory, and network metrics.
DRS uses these metrics to gauge the goodness or wellness of the VM.
In vSphere 7, the DRS algorithm runs every minute. The Cluster DRS Score is the last result of
DRS running and is filed into one of five buckets. These buckets are simply 20 percent ranges:
0-20, 20-40, 40-60, 60-80, and 80-100 percent over the sample period.
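To make the bucketing concrete, here is a small Python sketch that files a score into the five
ranges described above:

    def drs_score_bucket(score: float) -> str:
        """Return the 20-percent bucket for a DRS score between 0 and 100."""
        buckets = ['0-20%', '20-40%', '40-60%', '60-80%', '80-100%']
        # A score of exactly 100 falls into the top bucket.
        return buckets[min(int(score // 20), 4)]

    print(drs_score_bucket(87.5))   # 80-100%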
Slide 6
The cluster's Monitor tab lists the VM DRS Score and more detailed metrics for all the VMs in
the cluster.
The VM DRS Score page shows the following values for VMs that are powered on:
• DRS Score
• Active CPU
• Used CPU
• CPU Readiness
• Granted Memory
• Swapped Memory
• And Ballooned Memory.
Slide 7
The advanced performance chart for a cluster object provides the DRS Score counter.
Slide 8
The DRS Score counter displays the DRS scores for VMs in the cluster over the selected time
period.
Slide 9
When you click VIEW DRS SETTINGS, the main vSphere DRS parameters and their current
values are shown.
vSphere DRS settings include:
• Automation level
• And Migration threshold
To view the vSphere DRS pane, go to the cluster's Summary tab.
Slide 10
You can configure the automation level for the initial placement of VMs and for dynamic
balancing while VMs are running.
The automation level determines whether vSphere DRS makes migration recommendations or
automatically places VMs on hosts. vSphere DRS makes placement decisions when a VM
powers on and when VMs must be rebalanced across hosts in the cluster.
The following automation levels are available:
• Manual: When you power on a VM, vSphere DRS displays a list of recommended hosts
on which to place the VM. When the cluster becomes imbalanced, vSphere DRS
displays recommendations for VM migration.
• Partially automated: When you power on a VM, vSphere DRS places it on the best-
suited host. When the cluster becomes imbalanced, vSphere DRS displays
recommendations for manual VM migration.
• And Fully automated: When you power on a VM, vSphere DRS places it on the best-
suited host. When the cluster becomes imbalanced, vSphere DRS migrates VMs from
overused hosts to underused hosts to ensure balanced use of cluster resources.
Slide 11
The migration threshold determines how aggressively vSphere DRS selects VMs for migration.
The following migration threshold settings are available:
• Level 1 (Conservative): This applies only priority 1 recommendations. vCenter Server
applies only recommendations that must be taken to satisfy cluster constraints, such as
affinity rules and host maintenance.
• Level 2: This applies priority 1 and priority 2 recommendations. vCenter Server applies
recommendations that promise a significant improvement to the cluster’s load balance.
• Level 3 (Default): This applies priority 1, priority 2, and priority 3 recommendations.
vCenter Server applies recommendations that promise at least good improvement to the
cluster’s load balance.
• Level 4: This applies priority 1, priority 2, priority 3, and priority 4 recommendations.
vCenter Server applies recommendations that promise even a moderate improvement to
the cluster’s load balance.
• And Level 5 (Aggressive): This applies all recommendations. vCenter Server applies
recommendations that promise even a slight improvement to the cluster’s load balance.
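For illustration, the automation level and migration threshold can also be set through the API.
A minimal pyvmomi sketch, assuming the connection from the earlier sketch and a hypothetical
cluster path; note that the numbering of the API's vmotionRate field may not match the client's
migration threshold slider, so verify the orientation in the vSphere API reference before using
values other than the default:

    from pyVmomi import vim

    content = si.RetrieveContent()       # 'si' from the earlier connection sketch
    cluster = content.searchIndex.FindByInventoryPath('Datacenter/host/Production')

    drs = vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior='partiallyAutomated',  # manual | partiallyAutomated | fullyAutomated
        vmotionRate=3)                           # 3 is the default middle setting
    task = cluster.ReconfigureComputeResource_Task(
        spec=vim.cluster.ConfigSpecEx(drsConfig=drs), modify=True)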
Slide 12
vSphere DRS and vRealize Operations Manager combine data to predict future demand and
determine when and where high resource utilization occurs.
To make predictive decisions, the vSphere DRS data collector retrieves the following data:
• Resource usage statistics from ESXi hosts
• and predicted usage statistics from the vRealize Operations Manager server.
Predicted usage statistics always take precedence over current usage statistics.
Slide 13
By default, swap files for a VM are on a datastore in the folder containing the other VM files.
For all VMs in the cluster, you can place VM swap files on an alternative datastore.
If vSphere DRS is enabled, you should place the VM swap file in the VM's directory.
A VM's files can be on a VMFS datastore, an NFS datastore, a vSAN datastore, or a vSphere
Virtual Volumes datastore. On a vSAN datastore or a vSphere Virtual Volumes datastore, the
swap file is created as a separate vSAN or vSphere Virtual Volumes object.
A swap file is created by the ESXi host when a VM is powered on. If this file cannot be
created, the VM cannot power on. Instead of accepting the default, you can also use the
following options:
• Use per-VM configuration options to change the datastore to another shared storage
location.
• Or use host-local swap, which allows you to specify a datastore stored locally on the
host so that swapping occurs at a per-host level. However, it can lead to a slight degradation in
performance for vSphere vMotion because pages swapped to a local swap file on the
source host must be transferred across the network to the destination host. Currently,
vSAN and vSphere Virtual Volumes datastores cannot be specified for host-local swap.
Slide 14
After a vSphere DRS cluster is created, you can edit its properties to create rules that specify
affinity.
The following types of rules can be created:
• Affinity rules: vSphere DRS keeps certain VMs together on the same host (for example,
for performance reasons).
• And Anti-affinity rules: vSphere DRS ensures that certain VMs are not together (for
example, for availability reasons).
If two rules conflict, you are prevented from enabling both.
When you add or edit a rule, and the cluster is immediately in violation of that rule, the cluster
continues to operate and tries to correct the violation.
For vSphere DRS clusters that have a default automation level of manual or partially
automated, migration recommendations are based on both rule fulfillment and load balancing.
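As a sketch of defining a rule through the API, the following pyvmomi fragment adds a VM
anti-affinity rule that keeps two VMs on different hosts, assuming the connection and cluster
lookup from the earlier sketches and hypothetical VM paths:

    from pyVmomi import vim

    content = si.RetrieveContent()       # 'si' and 'cluster' as in the earlier sketches
    vm1 = content.searchIndex.FindByInventoryPath('Datacenter/vm/Web01')
    vm2 = content.searchIndex.FindByInventoryPath('Datacenter/vm/Web02')

    rule = vim.cluster.AntiAffinityRuleSpec(name='separate-web-tier',
                                            enabled=True, vm=[vm1, vm2])
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation='add', info=rule)])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)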
Slide 15
VM groups and host groups are used in defining VM-Host affinity rules. The VM-Host affinity
rule specifies whether VMs can or cannot be run on a host.
The types of groups are:
• The VM group: One or more VMs,
• And the Host group: One or more ESXi hosts.
Slide 16
A VM-Host affinity or anti-affinity rule specifies whether the members of a selected VM group
can run on the members of a specific host group.
Unlike an affinity rule for VMs, which specifies affinity (or anti-affinity) between individual
VMs, a VM-Host affinity rule specifies an affinity relationship between a group of VMs and a
group of hosts.
Because VM-Host affinity rules are cluster-based, the VMs and hosts that are included in a rule
must all reside in the same cluster. If a VM is removed from the cluster, it loses its
membership in all VM groups, even if it is later returned to the cluster.
Slide 17
Preferential rules can be violated to allow the proper functioning of vSphere DRS, vSphere HA,
and VMware vSphere DPM.
On the slide, Group A and Group B are VM groups. Blade Chassis A and Blade Chassis B are
host groups. The goal is to force the VMs in Group A to run on the hosts in Blade Chassis A
and to force the VMs in Group B to run on the hosts in Blade Chassis B. If the hosts fail,
vSphere HA restarts the VMs on the other hosts in the cluster. If the hosts are put into
maintenance mode or become overused, vSphere DRS moves the VMs to the other hosts in the
cluster.
Slide 18
A VM-Host affinity rule that is required, instead of preferential, can be used when the software
running in your VMs has licensing restrictions.
You can place such VMs in a VM group. Then you can create a rule that requires the VMs to
run on a host group, which contains hosts with the required licenses.
When you create a VM-Host affinity rule that is based on the licensing or hardware
requirements of the software running in your VMs, you are responsible for ensuring that the
groups are properly set up. The rule does not monitor the software running in the VMs. Nor
does it know which third-party licenses are in place on which ESXi hosts.
On the slide, Group A is a VM group. You can force Group A to run on hosts in the ISV-
Licensed group to ensure that the VMs in Group A run on hosts that have the required licenses.
But if the hosts in the ISV-Licensed group fail, vSphere HA cannot restart the VMs in Group A
on hosts that are not in the group. If the hosts in the ISV-Licensed group are put into
maintenance mode or become overused, vSphere DRS cannot move the VMs in Group A to
hosts that are not in the group.
Slide 19
By setting the automation level for individual VMs, you can fine-tune automation to suit your
needs. For example, you might have a VM that is especially critical to your business. You want
more control over its placement so you set its automation level to Manual.
If a VM’s automation level is set to disabled, vCenter Server does not migrate that VM or
provide migration recommendations for it. As a best practice, enable automation. Select the
automation level based on your environment and level of comfort.
For example, if you are new to vSphere DRS clusters, you might select Partially Automated
because you want control over the movement of VMs.
When you are comfortable with what vSphere DRS does and how it works, you might set the
automation level to Fully Automated.
You can set the automation level to Manual on VMs over which you want more control, such as
your business-critical VMs.
Slide 20
ESXi hosts that are added to a vSphere DRS cluster must meet certain requirements to use
cluster features successfully:
• To use vSphere DRS for load balancing, the hosts in your cluster must be part of a
vSphere vMotion network:
o If the hosts are not part of a vSphere vMotion network, vSphere DRS can still
make initial placement recommendations.
o vSphere DRS works best if the VMs meet vSphere vMotion requirements.
• And configure all managed hosts to use shared storage.
You can create vSphere DRS clusters, or you can enable vSphere DRS for existing vSphere HA
or vSAN clusters.
Slide 21
The CPU Utilization and Memory Utilization charts show all the hosts in the cluster and how
their CPU and memory resources are allocated to each VM.
• For CPU usage, the VM information is represented by a colored box. If you point to the
colored box, the VM’s CPU usage information appears. If the VM is receiving the
resources that it is entitled to, the box is green. Green means that 100 percent of the
VM’s entitled resources are delivered. If the box is not green (for example, entitled
resources are 80 percent or less) for an extended time, you might want to investigate
what is causing this shortfall (for example, unapplied recommendations).
• And For memory usage, the VM boxes are not color-coded because the relationship
between consumed memory and entitlement is often not easily categorized.
In the Network Utilization chart, the displayed network data reflects all traffic across physical
network interfaces on the host.
Slide 22
In the DRS Recommendations pane, you can see the current set of recommendations that are
generated for optimizing resource use in the cluster through either migrations or power
management. Only manual recommendations awaiting user confirmation appear in the list.
To refresh the recommendations, click RUN DRS NOW.
Slide 23
Maintenance mode removes a host's resources from a cluster, making those resources
unavailable for use, and is often used to service a host in a cluster.
To place a host in maintenance mode, all running VMs on the host must be migrated to another
host, shut down, or suspended. When DRS is in fully automated mode, powered-on VMs are
automatically migrated from a host that is placed in maintenance mode.
Standby mode is used by vSphere DPM to optimize power usage. When a host is placed in
standby mode, it is powered off.
A host enters or leaves maintenance mode as the result of a user request. While in maintenance
mode, the host does not allow you to deploy or power on a VM.
VMs that are running on a host entering maintenance mode must be shut down or migrated to
another host, either manually (by a user) or automatically (by vSphere DRS). The host
continues to run the Enter Maintenance Mode task until all VMs are powered down or moved
away.
When no more running VMs are on the host, the host’s icon indicates that it has entered
maintenance mode. The host’s Summary tab indicates the new state.
Place a host in maintenance mode before servicing the host, for example, when installing more
memory or removing a host from a cluster.
You can place a host in standby mode manually. However, the next time that vSphere DRS
runs, it might undo your change or recommend that you undo the changes. If you want a host to
remain powered off, place it in maintenance mode and turn it off.
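For illustration, a minimal pyvmomi sketch that places a host in maintenance mode, assuming
the connection from the earlier sketch and a hypothetical host path; with fully automated DRS,
the task completes only after the running VMs have been evacuated:

    content = si.RetrieveContent()       # 'si' from the earlier connection sketch
    host = content.searchIndex.FindByInventoryPath(
        'Datacenter/host/Production/esxi01.example.com')

    # timeout=0 means wait indefinitely for the host to be evacuated.
    task = host.EnterMaintenanceMode_Task(timeout=0, evacuatePoweredOffVms=True)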
Slide 24
To remove a host from a cluster:
1. Place the host in maintenance mode.
And 2. Drag the host to a different inventory location, for example, the data center or another
cluster.
The resources available for the cluster decrease.
When a host is put into maintenance mode, all its running VMs must be shut down, suspended,
or migrated to other hosts by using vSphere vMotion. VMs with disks on local storage must be
powered off, suspended, or migrated to another host and datastore.
When you remove the host from the cluster, the VMs that are currently associated with the host
are also removed from the cluster. If the cluster still has enough resources to satisfy the
reservations of all VMs in the cluster, the cluster adjusts resource allocation to reflect the
reduced amount of resources.
Slide 25
Dynamic DirectPath I/O improves the vSphere DirectPath I/O functionality by adding a layer of
abstraction between a VM and the physical PCI device.
A pool of PCI devices that are available in the cluster can be assigned to the VM.
Slide 26
For New PCI device, click Dynamic DirectPath IO. Clicking SELECT HARDWARE displays
a list of devices that can be attached to the VM. You can select one or more devices from the
list. In the image, the VM can use either an Intel NIC with the RED hardware label or a
vmxnet3 NIC with the RED hardware label.
Slide 27
You should now be able to meet the following objectives:
• Describe the functions of a vSphere DRS cluster
• Explain how vSphere DRS determines VM placement on hosts in the cluster
• Enable vSphere DRS in a cluster
• And Monitor a vSphere DRS cluster.
This is the end of the Lesson 2 Lecture. If you have any questions, please contact your Instructor.
We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 3: Introduction to vSphere HA!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Identify options for configuring a highly available vSphere environment,
• And Describe how vSphere HA responds when an ESXi host, a virtual machine, or an
application fails.
Slide 3
Whether planned or unplanned, downtime brings with it considerable costs. However, solutions
to ensure higher levels of availability are traditionally costly, hard to implement, and difficult to
manage. VMware software makes it simpler and less expensive to provide higher levels of
availability for important applications. With vSphere, organizations can easily increase the
baseline level of availability provided for all applications and provide higher levels of
availability more easily and cost effectively. With vSphere, you can:
• Provide higher availability independent of hardware, operating system, and applications.
• Reduce planned downtime for common maintenance operations.
• Provide automatic recovery in cases of failure.
vSphere HA provides a base level of protection for your VMs by restarting VMs if a host fails.
vSphere Fault Tolerance provides a higher level of availability, allowing users to protect any
VM from a host failure with no loss of data, transactions, or connections. vSphere Fault
Tolerance provides continuous availability by ensuring that the states of the primary and
secondary VMs are identical at any point in the instruction execution of the VM.
vSphere vMotion and vSphere Storage vMotion keep VMs available during a planned outage,
for example, when hosts or storage must be taken offline for maintenance. System recovery
from unexpected storage failures is simple, quick, and reliable with the encapsulation property
of VMs. You can use vSphere Storage vMotion to support planned storage outages, such as
upgrading storage arrays to newer firmware or technology and performing VMFS upgrades.
With vSphere Replication, a vSphere platform can protect VMs natively by copying their disk
files to another location where they are ready to be recovered.
VM encapsulation is used by third-party backup applications that support file and image-level
backups using vSphere Storage APIs - Data Protection. Backup solutions play prominent roles
in recovering from deleted files or disks and corrupt or infected guest operating systems or file
systems.
With Site Recovery Manager, you can quickly restore your organization’s IT infrastructure,
shortening the time that you experience a business outage. Site Recovery Manager automates
setup, failover, and testing of disaster recovery plans. Site Recovery Manager requires that you
install vCenter Server at the protected site and at the recovery site. Site Recovery Manager also
requires either host-based replication through vSphere Replication or preconfigured array-based
replication between the protected site and the recovery site.
Slide 4
vSphere HA provides rapid recovery from outages and cost-effective high availability for
applications running in VMs. vSphere HA protects application availability in several ways.
It protects against:
• ESXi host failure: By restarting the VMs on other hosts within the cluster,
• VM failure: By restarting the VM when a VMware Tools heartbeat is not received
within a set time,
• Application failure: By restarting the VM when an application heartbeat is not received
within a set time,
• Datastore accessibility failure: By restarting the affected VMs on other hosts that still
can access the datastores,
• And network isolation: By restarting VMs if their host becomes isolated on the
management or vSAN network. This protection is provided even if the network
becomes partitioned.
Unlike other clustering solutions, vSphere HA protects all workloads by using the infrastructure
itself. After you configure vSphere HA, no actions are required to protect new VMs. All
workloads are automatically protected by vSphere HA.
Slide 5
To play the animation, go to this address:
https://vmware.bravais.com/s/kvK76lswrsbmJq8kRueo.
vSphere HA can also determine whether an ESXi host is isolated or has failed. If an ESXi host
fails, vSphere HA attempts to restart any VMs that were running on the failed host by using one
of the remaining hosts in the cluster.
In every cluster, the time to recover depends on how long it takes your guest operating systems
and applications to restart when the VM is failed over.
Slide 6
To play the animation, go to this address https://vmware.bravais.com/s/ikio4LtOkS6fPivIJpR6.
If VM monitoring is enabled, the vSphere HA agent on each individual host monitors VMware
Tools in each VM running on the host. When a VM stops sending heartbeats, the guest
operating system is reset. The VM stays on the same host.
Slide 7
To play the animation, go to this address
https://vmware.bravais.com/s/OgIz03mC2MiGVVPKCxdh.
The agent on each host can optionally monitor heartbeats of applications running in each VM.
When an application fails, the VM on which the application was running is restarted on the
same host. Application monitoring requires a third-party application monitoring agent designed
to work with VM application monitoring.
Slide 8
If VM Component Protection (or VMCP) is enabled, vSphere HA can detect datastore
accessibility failures and provide automated recovery for affected VMs.
You can determine the response that vSphere HA makes to such a failure, ranging from the
creation of event alarms to VM restarts on other hosts:
• All paths down (or APD):
o Is Recoverable,
o Represents a transient or unknown accessibility loss,
o And the Response can be either Issue events, Power off and restart VMs -
Conservative restart policy, or Power off and restart VMs - Aggressive restart
policy.
• And Permanent device loss (or PDL):
o Which is Unrecoverable loss of accessibility,
o Occurs when a storage device reports that the datastore is no longer accessible
by the host,
o And the Response can be either Issue events or Power off and restart VMs.
With the Power off and restart VMs - Conservative restart policy, vSphere HA does not attempt to
restart the affected VMs unless vSphere HA determines that another host can restart the VMs.
The host experiencing the all paths down (or APD) communicates with the vSphere HA master
host to determine whether the cluster has sufficient capacity to power on the affected VMs. If
the master host determines that sufficient capacity is available, the host experiencing the APD
stops the VMs so that the VMs can be restarted on a healthy host. If the host experiencing the
APD cannot communicate with the master host, no action is taken.
With the Power off and restart VMs - Aggressive restart policy, vSphere HA stops the affected VMs
even if it cannot determine that another host can restart the VMs. The host experiencing the
APD attempts to communicate with the master host to determine whether the cluster has
sufficient capacity to power on the affected VMs. If the master host is not reachable, sufficient
capacity to restart the VMs is unknown. In this scenario, the host takes the risk and stops the
VMs so they can be restarted on the remaining healthy hosts. However, if sufficient capacity is
not available, vSphere HA might not be able to recover all the affected VMs. This result is
common in a network partition scenario where a host cannot communicate with the master host
to get a definitive response to the likelihood of a successful recovery.
Slide 9
vSphere HA restarts VMs if their host becomes isolated on the management or vSAN network.
Host network isolation occurs when a host is still running, but it can no longer observe traffic
from vSphere HA agents on the management network:
• The host tries to ping the isolation addresses. An isolation address is an IP address or
FQDN that can be manually specified (the default is the host's default gateway).
• If pinging fails, the host declares that it is isolated from the network.
• This protection is provided even if the network becomes partitioned.
If you ensure that the network infrastructure is sufficiently redundant and that at least one
network path is always available, host network isolation is less likely to occur.
Slide 10
Redundant heartbeat networks ensure reliable failure detection and minimize the chance of
host-isolation scenarios.
In a vSphere HA cluster, heartbeats have the following characteristics:
• They are sent between the master host and the subordinate hosts.
• They are used to determine whether a master host or a subordinate host has failed.
• And They are sent over a heartbeat network.
Redundant heartbeat networking is the best approach for your vSphere HA cluster. When a
master host’s connection fails, a second connection is still available to send heartbeats to other
hosts. If you do not provide redundancy, your failover setup has a single point of failure.
Slide 11
A heartbeat network is implemented in the following ways:
• By using a VMkernel port that is marked for management
• And By using a VMkernel port that is marked for vSAN traffic when vSAN is in use
You can use NIC teaming to create a redundant heartbeat network on ESXi hosts.
In this example, vmnic0 and vmnic1 form a NIC team in the Management network. The vmk0
VMkernel port is marked for management.
Slide 12
You can create redundancy by configuring more heartbeat networks.
On each ESXi host, create a second VMkernel port on a separate virtual switch with its own
physical adapter.
Redundant management networking supports the reliable detection of failures and helps prevent
isolation or partition conditions from occurring, because heartbeats can be sent over multiple
networks.
In most implementations, NIC teaming provides sufficient heartbeat redundancy, but as an
alternative you can create a second management network connection attached to a separate
virtual switch. The original management network connection is used for network and
management purposes. When the second management network connection is created, the
master host sends heartbeats over both management network connections. If one path fails, the
master host still sends and receives heartbeats over the other path.
Slide 13
You should now be able to meet the following objectives:
• Identify options for configuring a highly available vSphere environment,
• And Describe how vSphere HA responds when an ESXi host, a virtual machine, or an
application fails.
This is the end of the Lesson 3 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 4: vSphere HA Architecture!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Identify the heartbeat mechanisms used by vSphere HA
• Describe failure scenarios
• And Recognize vSphere HA design considerations.
Slide 3
The vSphere HA cluster is managed by a master host. All other hosts are called subordinate
hosts. Fault Domain Manager (or FDM) services on subordinate hosts all communicate with
FDM on the master host. Hosts cannot participate in a vSphere HA cluster if they are in
maintenance mode, in standby mode, or disconnected from vCenter Server.
To determine which host is the master host, an election process takes place. The host that can
access the greatest number of datastores is elected the master host. If more than one host sees
the same number of datastores, the election process determines the master host by using the
host Managed Object ID (or MOID) assigned by vCenter Server.
The election process for a new master host completes in approximately 15 seconds and occurs
under these circumstances:
• vSphere HA is enabled,
• The master host fails,
• The master host is placed in maintenance mode or standby mode,
• vSphere HA is reconfigured,
• And the subordinate hosts cannot communicate with the master host because of a
network problem.
During the election process, the candidate vSphere HA agents communicate with each other
over the management network, or the vSAN network in a vSAN cluster, by using User
Datagram Protocol (or UDP). All network connections are point-to-point. After the master host
is determined, the master host and subordinate hosts communicate using secure TCP. When
vSphere HA is started, vCenter Server contacts the master host and sends a list of hosts with
membership in the cluster with the cluster configuration. That information is saved to local
storage on the master host and then pushed out to the subordinate hosts in the cluster. If
additional hosts are added to the cluster during normal operation, the master host sends an
update to all hosts in the cluster.
The master host provides an interface for vCenter Server to query the state of and report on the
health of the fault domain and VM availability. vCenter Server tells the vSphere HA agent
which VMs to protect with their VM-to-host compatibility list. The agent learns about state
changes through hostd and vCenter Server learns through vpxa. The master host monitors the
health of the subordinate hosts and takes responsibility for VMs that were running on a failed
subordinate host. A subordinate host monitors the health of VMs running locally and sends
state changes to the master host. A subordinate host also monitors the health of the master host.
vSphere HA is configured, managed, and monitored through vCenter Server. The vpxd process,
which runs on the vCenter Server system, maintains the cluster configuration data. The vpxd
process reports cluster configuration changes to the master host. The master host advertises a
new copy of the cluster configuration information and each subordinate host fetches an updated
copy. Each subordinate host writes the updated configuration information to local storage. A
list of protected VMs is stored on each datastore. The VM list is updated after each user-
initiated power-on (protected) and power off (unprotected) operation. The VM list is updated
after vCenter Server observes these operations.
A VM becomes protected when an operation results in a power on. Reverting a VM to a
snapshot with memory state causes the VM to power on and become protected. Similarly, a
user action that causes the VM to power off, for example, reverting to a snapshot without
memory state or a standby operation performed in the guest, causes the VM to become
unprotected.
Slide 4
Heartbeats are sent to each subordinate host from the master host over all configured
management networks. However, subordinate hosts use only one management network to
communicate with the master host. If the management network used to communicate with the
master host fails, the subordinate host switches to another management interface to
communicate with the master host.
If the subordinate host does not respond within the predefined timeout period, the master host
declares the subordinate host as agent unreachable. When a subordinate host is not responding,
the master host attempts to determine the cause of the subordinate host’s inability to respond.
The master host must determine whether the subordinate host crashed, is not responding
because of a network failure, or the vSphere HA agent is in an unreachable state.
Slide 5
When the master host cannot communicate with a subordinate host over the management
network, the master host uses datastore heartbeating to determine the cause:
• Subordinate host failure
• Network partition
• Or Network isolation
Using datastore heartbeating, the master host determines whether a host has failed or a network
isolation has occurred. If datastore heartbeating from the host stops, the host is considered
failed. In this case, the failed host’s VMs are started on another host in the vSphere HA cluster.
Slide 6
vSphere HA can also determine whether an ESXi host is isolated or has failed. Isolation refers
to when an ESXi host cannot see traffic coming from the other hosts in the cluster and cannot
ping its configured isolation address. If an ESXi host fails, vSphere HA attempts to restart the
VMs that were running on the failed host on one of the remaining hosts in the cluster. If the
ESXi host is isolated because it cannot ping its configured isolation address and sees no
management network traffic, the host executes the Host Isolation Response.
Slide 7
The master host must determine whether the subordinate host is isolated or has failed, for
example, because of a misconfigured firewall rule or component failure. The type of failure
dictates how vSphere HA responds.
When the master host cannot communicate with a subordinate host over the heartbeat network,
the master host uses datastore heartbeating to determine whether the subordinate host failed, is
in a network partition, or is network-isolated. If the subordinate host stops datastore
heartbeating, the subordinate host is considered to have failed, and its virtual machines are
restarted elsewhere.
For VMFS, a heartbeat region on the datastore is read to find out whether the host is still
heartbeating to it. For NFS datastores, vSphere HA reads the host-<number>-hb file, which is
locked by the ESXi host accessing the datastore. The host periodically updates the lock file,
which guarantees that the VMkernel is heartbeating to the datastore.
The lock file time stamp is used by the master host to determine whether the subordinate host is
isolated or has failed.
In both storage examples, the vCenter Server instance selects a small subset of datastores for
hosts to heartbeat to. The datastores that are accessed by the greatest number of hosts are
selected as candidates. Only two datastores are selected (by default) to keep the associated
overhead and processing to a minimum.
Slide 8
When the master host is placed in maintenance mode or fails, the subordinate hosts detect that
the master host is no longer issuing heartbeats.
To determine which host is the master host, an election process takes place. The host that can
access the greatest number of datastores is elected the master host. If more than one host sees
the same number of datastores, the election process determines the master host by using the
host Managed Object ID (or MOID) assigned by vCenter Server. If the master host fails, is shut
down, or is removed from the cluster, a new election is held.
Slide 9
The slide illustrates one of several scenarios that might result in host isolation. If a host loses
connectivity to both the primary heartbeat network and the alternate heartbeat network, the host
no longer receives network heartbeats from the other hosts in the vSphere HA cluster.
Furthermore, the slide depicts that this same host can no longer ping its isolation address.
If a host becomes isolated, the master host must determine if that host is still alive, and merely
isolated, by checking for datastore heartbeats. Datastore heartbeats are used by vSphere HA
only when a host becomes isolated or partitioned.
Slide 10
Storage connectivity problems might arise because of:
• Network or switch failure
• Array misconfiguration
• Or a Power outage.
Storage connectivity problems affect VM availability:
• VMs on affected hosts are difficult to manage
• And Applications with attached disks fail.
Slide 11
When a datastore accessibility failure occurs, the affected host can no longer access the storage
path for a specific datastore. You can determine the response that vSphere HA gives to such a
failure, ranging from the creation of event alarms to VM restarts on other hosts.
Slide 12
When designing your vSphere HA cluster, consider these guidelines:
• Implement redundant heartbeat networks and redundant isolation addresses:
o Redundancy minimizes host isolation events.
• Physically separate VM networks from the heartbeat networks.
• And Implement datastores so that they are separated from the management network by
using one or both of the following approaches:
o Use Fibre Channel over fiber optic for your datastores.
o If you use IP storage, physically separate your IP storage network from the
management network.
If a datastore is based on Fibre Channel, a network failure does not disrupt datastore access.
When using datastores based on IP storage (for example, NFS, iSCSI, or Fibre Channel over
Ethernet), you must physically separate the IP storage network and the management network
(or the heartbeat network). If physical separation is not possible, you can logically separate the
networks.
Slide 13
You should now be able to meet the following objectives:
• Identify the heartbeat mechanisms used by vSphere HA
• Describe failure scenarios
• And Recognize vSphere HA design considerations.
This is the end of the Lesson 4 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 5: Configuring vSphere HA!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Recognize the requirements for creating and using a vSphere HA cluster,
• And Configure a vSphere HA cluster.
Slide 3
To create a vSphere HA cluster, you must meet several requirements:
• All hosts must be configured with static IP addresses. If you are using DHCP, you must
ensure that the address for each host persists across reboots.
• All hosts must have at least one management network in common.
• For VM monitoring to work, VMware Tools must be installed in every VM.
• Only vSphere HA clusters that contain ESXi 6.x and later hosts can be used to enable
VMCP.
• And You must not exceed the maximum number of hosts that are allowed in a cluster.
To determine the maximum number of hosts per cluster, see VMware Configuration
Maximums at configmax.vmware.com.
Slide 4
In the vSphere Client, you can configure the following vSphere HA settings:
• Availability failure conditions and responses: To Provide settings for host failure
responses, host isolation, VM monitoring, and VMCP.
• Admission control: To Enable or disable admission control for the vSphere HA cluster
and select a policy for how it is enforced.
• Heartbeat datastores: To Specify preferences for the datastores that vSphere HA uses for
datastore heartbeating.
• And Advanced options: To Customize vSphere HA behavior by setting advanced
options.
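For illustration, these settings map to the cluster's dasConfig in the API. The following is a
minimal pyvmomi sketch that enables vSphere HA with host monitoring, VM monitoring, and
percentage-based admission control, assuming the connection and cluster lookup from the
earlier sketches:

    from pyVmomi import vim

    # 'si' and 'cluster' as obtained in the earlier sketches.
    das = vim.cluster.DasConfigInfo(
        enabled=True,
        hostMonitoring='enabled',
        vmMonitoring='vmMonitoringOnly',     # or 'vmAndAppMonitoring'
        admissionControlEnabled=True,
        admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
            cpuFailoverResourcesPercent=25, memoryFailoverResourcesPercent=25))
    cluster.ReconfigureComputeResource_Task(
        spec=vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)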
Slide 5
Using the Failures and Responses pane, you can configure how your cluster should function
when problems are encountered. You can specify the vSphere HA cluster’s response for host
failures and isolation. You can also configure VMCP actions when permanent device loss and
all paths down situations occur and enable VM monitoring.
If a datastore encounters an All Paths Down (or APD) condition, the device state is unknown
and might only be temporarily available. You can select the following options for a response to
a datastore APD:
• Disabled: No action is taken for the affected VMs.
• Issue events: No action is taken against the affected VMs; however, the administrator is
notified when an APD event has occurred.
• Power off and restart VMs - Conservative restart policy: vSphere HA does not attempt
to restart the affected VMs unless vSphere HA determines that another host can restart
the VMs. The host experiencing the APD communicates with the master host to
determine whether sufficient capacity exists in the cluster to power on the affected VMs.
If the master host determines sufficient capacity exists, the host experiencing the APD
stops the VMs so that the VMs can be restarted on a healthy host. If the host
experiencing the APD cannot communicate with the master host, no action is taken.
• Power off and restart VMs - Aggressive restart policy: vSphere HA stops the affected
VMs even if it cannot determine that another host can restart the VMs. The host
experiencing the APD attempts to communicate with the master host to determine if
sufficient capacity exists in the cluster to power on the affected VMs. If the master host
is not reachable, sufficient capacity for restarting the VMs is unknown. In this scenario,
the host takes the risk and stops the VMs so that they can be restarted on the remaining
healthy hosts. However, if sufficient capacity is not available, vSphere HA might not be
able to recover all the affected VMs. This result is common in a network partition
scenario where a host cannot communicate with the master host to get a definitive
response to the likelihood of a successful recovery.
For more information about VM Component Protection, see blogs.vmware.com/vsphere.
Slide 6
You use VM Monitoring settings to control the monitoring of VMs.
By default, VM Monitoring is set to Disabled.
The VM monitoring service determines that the VM has failed if one of the following events
occurs:
• VMware Tools heartbeats are not received.
• Or The guest operating system has not issued an I/O for the last 2 minutes (by default).
If the VM has failed, the VM monitoring service resets the VM to restore services.
You can configure the level of monitoring sensitivity. Highly sensitive monitoring results in a
more rapid conclusion that a failure has occurred. Although unlikely, highly sensitive
monitoring might lead to falsely identifying failures when the VM or application is still
working but heartbeats have not been received because of factors like resource constraints.
Low-sensitivity monitoring results in longer interruptions in service between actual failures and
VMs being reset. Select an option that is an effective compromise for your needs.
You can select VM and Application Monitoring to enable application monitoring.
Slide 7
Datastore heartbeating takes checking the health of a host to another level: vSphere HA checks
more than the management network to determine a host's health. You can configure a list of
datastores to monitor for a particular host, or you can allow vSphere HA to decide. You can
also combine both methods.
Slide 8
vCenter Server uses admission control to ensure both that sufficient resources are available in a
cluster to provide failover protection and that VM resource reservations are respected.
After you create a cluster, you can use admission control to specify whether VMs can be started
if they violate availability constraints. The cluster reserves resources to allow failover for all
running VMs for a specified number of host failures.
The admission control settings include:
• Disabled: (which is not recommended) This option disables admission control, allowing
the VMs violating availability constraints to power on.
• Slot Policy: A slot is a logical representation of memory and CPU resources. With the
slot policy option, vSphere HA calculates the slot size, determines how many slots each
host in the cluster can hold, and therefore determines the current failover capacity of the
cluster.
• Cluster resource percentage: (Default) This value specifies a percentage of the cluster's
CPU and memory resources to be reserved as spare capacity to support failovers.
• And Dedicated failover hosts: This option selects hosts to use for failover actions. If a
default failover host does not have enough resources, failovers can still occur to other
hosts in the cluster.
Slide 9
This is an example of calculating total failover capacity using cluster resource percentages:
The total cluster capacity is 18 GHz for CPU and 24 GB for memory.
Total VM reservations are 7 GHz for CPU and 6 GB for memory.
The current failover CPU capacity is 61%:
• Since (18 GHz - 7 GHz) / 18 GHz equals 61%.
And the current failover memory capacity is 75%:
• Since (24 GB - 6 GB) / 24 GB equals 75%.
Cluster resource percentage is the default admission control policy. Recalculations occur
automatically as the cluster's resources change, for example, when a host is added to or
removed from the cluster.
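The arithmetic above, as a small Python check:

    def failover_capacity_percent(total_capacity: float, total_reservations: float) -> float:
        """Current failover capacity = (total - reserved) / total, as a percentage."""
        return (total_capacity - total_reservations) / total_capacity * 100

    print(round(failover_capacity_percent(18, 7)))   # CPU: 61
    print(round(failover_capacity_percent(24, 6)))   # Memory: 75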
Slide 10
A slot is calculated by combining the largest memory reservation and the largest CPU
reservation of any running VM in the cluster. vSphere HA performs admission control by
calculating the following values:
• Slot size:
o In this example, the slot size is 2 GHz CPU and 2 GB memory.
• And the Number of slots that each host in the cluster can hold:
o Which, in this example, is three.
o And The cluster has a total of nine slots (3 + 3 + 3).
Slide 11
vSphere HA also calculates the current failover capacity. In this example, the failover capacity
is one host:
• If the first host fails, six slots remain in the cluster, which is sufficient for all five of the
powered-on VMs.
• If the first and second hosts fail, only three slots remain, which is insufficient for all five
of the VMs.
• And If the current failover capacity is less than the configured failover capacity,
vSphere HA does not allow any more VMs to power on.
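To make the slot arithmetic concrete, here is a small Python sketch of the example above,
using hypothetical per-host capacities of 6 GHz CPU and 6 GB memory:

    # Slot size: largest CPU reservation and largest memory reservation in the cluster.
    slot_cpu_ghz, slot_mem_gb = 2.0, 2.0
    hosts = [(6.0, 6.0), (6.0, 6.0), (6.0, 6.0)]   # (CPU GHz, memory GB) per host
    powered_on_vms = 5

    slots_per_host = [min(int(cpu // slot_cpu_ghz), int(mem // slot_mem_gb))
                      for cpu, mem in hosts]        # [3, 3, 3], nine slots in total

    # Current failover capacity: how many of the largest hosts can fail while
    # enough slots remain for all powered-on VMs.
    failover_capacity, slots_left = 0, sum(slots_per_host)
    for host_slots in sorted(slots_per_host, reverse=True):
        slots_left -= host_slots
        if slots_left >= powered_on_vms:
            failover_capacity += 1
        else:
            break
    print(failover_capacity)   # 1: losing one host leaves 6 slots for 5 VMs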
Slide 12
Admission control can also be configured to offer warnings when the actual use exceeds the
failover capacity percentage. The resource reduction calculation takes into account a VM's
reserved memory and memory overhead.
By setting the Performance degradation VMs tolerate threshold, you can specify when a
configuration issue should generate a warning or notice. For example:
• The default value is 100 percent, which produces no warnings.
• If you reduce the threshold to 0 percent, a warning is generated when cluster use
exceeds the available capacity.
• If you reduce the threshold to 20 percent, the performance reduction that can be
tolerated is calculated as: performance reduction = current use × 20 percent.
When the current use minus the performance reduction exceeds the available capacity, a
configuration notice is issued. The Performance degradation VMs tolerate threshold is not
available unless vSphere DRS is enabled.
Slide 13
The VM restart priority determines the order in which vSphere HA restarts VMs on a running
host. VMs are put in the Medium restart priority by default, unless the restart priority is
explicitly set using VM overrides.
Some exceptions are:
• Agent VMs always start first, and the restart priority is nonconfigurable.
• vSphere Fault Tolerance secondary VMs fail over before regular VMs.
Primary VMs follow the normal restart priority.
Optionally, you can configure a delay when a certain restart condition is met.
Slide 14
You can set advanced options that affect the behavior of your vSphere HA cluster.
For more details, see vSphere Availability at this address:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.avail.doc/GUID-63F459B7-8884-4818-8872-C9753B2E0215.html.
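As one example, advanced options can be supplied through the cluster's dasConfig. The sketch
below sets an explicit isolation address (das.isolationaddress0 and
das.usedefaultisolationaddress are documented vSphere HA advanced options), assuming the
connection and cluster lookup from the earlier sketches and a hypothetical gateway address:

    from pyVmomi import vim

    # 'si' and 'cluster' as obtained in the earlier sketches.
    das = vim.cluster.DasConfigInfo(option=[
        vim.option.OptionValue(key='das.isolationaddress0', value='10.0.0.1'),
        vim.option.OptionValue(key='das.usedefaultisolationaddress', value='false'),
    ])
    cluster.ReconfigureComputeResource_Task(
        spec=vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)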
Slide 15
You can customize the restart priority for individual VMs in a cluster to override the default
level set for the entire cluster.
Slide 16
After a host failure, VMs are assigned to other hosts with unreserved capacity, with the highest
priority VMs placed first. The process continues to those VMs with lower priority until all have
been placed or no more cluster capacity is available to meet the reservations or memory
overhead of the VMs. A host then restarts the VMs assigned to it in priority order.
If insufficient resources exist, vSphere HA waits for more unreserved capacity to become
available, for example, because of a host coming back online, and then retries the placement of
these VMs. To reduce the chance of this situation occurring, configure vSphere HA admission
control to reserve more resources for failures. With admission control, you can control the
amount of cluster capacity that is reserved by VMs, which is unavailable to meet the
reservations and memory overhead of other VMs if a failure occurs.
Slide 17
VMs can depend only on other VMs of the same or higher priority. Only direct dependencies
are supported. VM-to-VM dependency is a hard rule. Creating cyclical dependencies causes
VM restart to fail.
Slide 18
To play the animation, go to this address
https://vmware.bravais.com/s/JDg7NJ3DjVli7r6FiMQ0.
In vSphere 6.5 and later, vSphere HA restarts VMs only from a failed host. Configure affinity
rules to keep VMs on the same host if necessary.
Slide 19
The following network maintenance suggestions can help you avoid the false detection of host
failure and network isolation because of dropped vSphere HA heartbeats:
• Changing your network hardware or networking settings can interrupt the heartbeats
used by vSphere HA to detect host failures, and might result in unwanted attempts to
fail over VMs. When changing the management or vSAN networks of the hosts in the
vSphere HA-enabled cluster, suspend host monitoring and place the host in maintenance
mode.
• Disabling host monitoring is required only when modifying virtual networking
components and properties that involve the VMkernel ports configured for the
Management or vSAN traffic, which are used by the vSphere HA networking heartbeat
service.
• After you change the networking configuration on ESXi hosts, for example, adding port
groups, removing virtual switches, or suspending host monitoring, you must reconfigure
vSphere HA on all hosts in the cluster. This reconfiguration causes the network
information to be reinspected. Then, you must reenable host monitoring.
Slide 20
Your cluster or its hosts can experience configuration issues and other errors that adversely
affect proper vSphere HA operation. You can monitor these errors on the Configuration Issues
page.
Slide 21
vSphere HA is closely integrated with vSphere DRS:
• When a failover occurs, vSphere HA checks whether resources are available on that host
for failover.
• If resources are not available, vSphere HA asks vSphere DRS to accommodate the
VMs where possible.
vSphere HA might not be able to fail over VMs for the following reasons:
• vSphere HA admission control is disabled, and resources are insufficient in the
remaining hosts to power on all the failed VMs.
• Or Sufficient aggregated resources exist, but they are fragmented across hosts. In such
cases, vSphere HA uses vSphere DRS to try to adjust the cluster by migrating VMs to
defragment the resources.
When vSphere HA performs failover and restarts VMs on different hosts, its first priority is the
immediate availability of all VMs. After the VMs are restarted, the hosts in which they were
powered on are usually heavily loaded, and other hosts are comparatively lightly loaded.
vSphere DRS helps to balance the load across hosts in the cluster.
Slide 22
You should now be able to meet the following objectives:
• Recognize the requirements for creating and using a vSphere HA cluster,
• And Configure a vSphere HA cluster.
This is the end of the Lesson 5 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 6: Introduction to vSphere Fault Tolerance!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe the features and benefits of using vSphere Fault Tolerance
• Describe how vSphere Fault Tolerance works
• Describe how vSphere Fault Tolerance works with vSphere HA and vSphere DRS
• And Enable vSphere Fault Tolerance using the vSphere Client.
Slide 3
You can use vSphere Fault Tolerance for most mission-critical VMs. vSphere Fault Tolerance
is built on the ESXi host platform.
The protected VM is called the primary VM. The duplicate VM is called the secondary VM.
The secondary VM is created and runs on a different host to the primary VM. The secondary
VM’s execution is identical to that of the primary VM. The secondary VM can take over at any
point without interruption and provide fault-tolerant protection.
The primary VM and the secondary VM continuously monitor the status of each other to ensure
that fault tolerance is maintained. A transparent failover occurs if the host running the primary
VM fails, in which case the secondary VM is immediately activated to replace the primary VM.
A new secondary VM is created and started, and fault tolerance redundancy is reestablished
automatically. If the host running the secondary VM fails, the secondary VM is also
immediately replaced. In either case, users experience no interruption in service and no loss of
data.
Slide 4
vSphere Fault Tolerance protects mission-critical, high-performance applications regardless of
the operating system used.
vSphere Fault Tolerance:
• Supports VMs configured with up to 8 vCPUs and 128 GB memory
• Supports up to four fault-tolerant VMs per host with no more than eight vCPUs between
them
• Supports vSphere vMotion migration for primary and secondary VMs
• Creates a secondary copy of all VM files and disks
• Provides fast checkpoint copying to keep primary and secondary VMs synchronized
• Supports multiple VM disk formats: thin provision, thick provision lazy-zeroed, and
thick provision eager-zeroed
• Can be used with vSphere DRS only when Enhanced vMotion Compatibility is enabled
• And it Supports interoperability with vSAN.
You can use vSphere Fault Tolerance with vSphere DRS only when the Enhanced vMotion
Compatibility feature is enabled.
When you enable EVC mode on a cluster, vSphere DRS makes the initial placement
recommendations for fault-tolerant VMs, and you can assign a vSphere DRS automation level
to primary VMs. The secondary VM always assumes the same setting as its associated primary
VM. When vSphere Fault Tolerance is used for VMs in a cluster that has EVC mode disabled,
the fault-tolerant VMs are given the disabled vSphere DRS automation level. In such a cluster,
each primary VM is powered on only on its registered host, and its secondary VM is
automatically placed.
Slide 5
vSphere HA and vSphere DRS are vSphere Fault Tolerance aware:
• vSphere HA Is required for vSphere Fault Tolerance and Restarts failed VMs
• vSphere DRS Selects which hosts run the primary and secondary VM, when a VM is
powered on and does not automatically migrate fault-tolerant VMs
A fault-tolerant VM and its secondary copy are not allowed to run on the same host. This
restriction ensures that a host failure cannot result in the loss of both VMs.
Slide 6
vSphere Fault Tolerance provides failover redundancy by creating two full VM copies. The
VM files can be placed on the same datastore. However, VMware place these files on separate
datastores to provide recovery from datastore failures.
Slide 7
To play the animation, go to this address:
https://vmware.bravais.com/s/a8GAXMVDFHxWLstdhM1G.
Changes on the primary VM are not processed on the secondary VM. The memory is updated
on the secondary VM.
Slide 8
To play the animation, go to this address:
https://vmware.bravais.com/s/XM1mrfVGU5vPd6HVfBfv.
Using vSphere Fault Tolerance, a second VM is created on the secondary host. The memory of
the source VM is then copied to the secondary host.
Slide 9
To play the animation, go to this address:
https://vmware.bravais.com/s/KafKVBJNsBpY7hn5bmGs.
vSphere Fault Tolerance uses an algorithm that provides fast, continuous copying
(checkpointing) of the primary host VM. The primary VM is copied (checkpointed)
periodically, and the copies are sent to a secondary host. If the primary host fails, the VM
continues on the secondary host at the point of its last network send.
The goal is to take checkpoints of VMs at least every 10 milliseconds.
The primary VM is continuously copied (or checkpointed), and these copies (checkpoints) are
sent to a secondary host.
The initial complete copy (or checkpoint) is created using a modified form of vSphere vMotion
migration to the secondary host. The primary VM holds each outgoing network packet until the
following copy (checkpoint) has been sent to the secondary host.
In vSphere Fault Tolerance, checkpoint data makes up the last changed pages of memory. The
source VM is paused to access this memory. This pause is typically under one second.
Slide 10
To play the animation, go to this address:
https://vmware.bravais.com/s/2c9Y6hQ4X4uFWWzSTeFk.
The shared.vmft file, which is found on a shared datastore, is the vSphere Fault Tolerance
metadata file. This file contains the primary and secondary instance UUIDs and the primary and
secondary vmx paths.
vSphere Fault Tolerance avoids split-brain situations, which can lead to two active copies of a
virtual machine after recovery from a failure. The .ftgeneration file ensures that only one VM
instance is designated as the primary VM.
Slide 11
After you take all the required steps for enabling vSphere Fault Tolerance for your cluster, you
can use the feature by turning it on for individual VMs. Before vSphere Fault Tolerance can be
turned on, validation checks are performed on a VM. After these checks are passed, and you
turn on vSphere Fault Tolerance for a VM, new options are added to the Fault Tolerance
section of the VM's context menu. These options include turning off or disabling vSphere Fault
Tolerance, migrating the secondary VM, testing failover, and testing restart of the secondary
VM. When vSphere Fault Tolerance is turned on, vCenter Server resets the VM’s memory limit
to the default (unlimited memory) and sets the memory reservation to the memory size of the
VM. While vSphere Fault Tolerance is turned on, you cannot change the memory reservation,
size, limit, number of virtual CPUs, or shares. You also cannot add or remove disks for the VM.
When vSphere Fault Tolerance is turned off, any parameters that were changed are not reverted
to their original values.
Slide 12
You should now be able to meet the following objectives:
• Describe the features and benefits of using vSphere Fault Tolerance
• Describe how vSphere Fault Tolerance works
• Describe how vSphere Fault Tolerance works with vSphere HA and vSphere DRS
• And Enable vSphere Fault Tolerance using the vSphere Client.
Slide 13
As a Virtual Beans administrator, you want to place ESXi hosts in a vSphere cluster for a
scalable and highly available infrastructure. Take a moment to try and match the goal to the
feature that helps achieve the goal.
We will provide our answer in the following slide.
Slide 14
To Add ESXi hosts to the data center and let vSphere balance the load across the hosts you use
vSphere DRS
To Make business-critical applications 99.99 percent available (downtime per year of 52.56
minutes) you use vSphere HA or vSphere Fault Tolerance
To Identify VMs that are experiencing serious resource contention you use VM scores
To Improve the performance of certain VMs by ensuring that they always run together on the
same host you use VM-Host affinity.
Slide 15
Some key points from Module 9 are:
• When you create a cluster, you can enable vSphere DRS, vSphere HA, vSAN, and the
ability to manage image updates on all hosts collectively.
• vSphere DRS clusters provide automated resource management to ensure that a VM's
resource requirements are satisfied.
• vSphere DRS works best when the VMs meet vSphere vMotion migration requirements.
• vSphere HA restarts VMs on the remaining hosts in the cluster.
• You implement redundant heartbeat networks either with NIC teaming or by creating
additional heartbeat networks.
• And vSphere Fault Tolerance provides zero downtime for applications that must always
be available.
Slide 16
This is the end of Module 9 and the Lesson 6 Lecture. The Labs and Assignments associated
with this Module are as follows:
• Lab 22: Implementing vSphere DRS Clusters
• Lab 23: Using vSphere HA
• And the Module 9 Quiz: vSphere Clusters
If you have any questions, please contact your Instructor. We will see you in the next Module
and thanks for watching!
Slide 1
Welcome back! Let’s get started with Lesson 1: vCenter Server Update Planner!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe features of the vCenter Server Update Planner
• Run vCenter Server upgrade prechecks and interoperability reports
• And Export prechecks and interoperability report results.
Slide 3
In vSphere 7, you can use the Update Planner feature for planning updates to vCenter Server
and other VMware products that are registered with it.
The Update Planner can perform the following tasks:
• Retrieve information about VMware products registered with vCenter Server.
• List available vCenter Server updates and upgrades.
• Create interoperability reports.
• And perform a precheck to verify that your system meets the minimum software and
hardware requirements for a successful upgrade of vCenter Server.
Slide 4
The Update Planner feature is available for vCenter Server 7.0 or later. When generating
reports, if the Customer Experience Improvement Program (or CEIP) is not yet accepted, a
prompt describing CEIP appears. Reports are not generated if you do not join CEIP.
Slide 5
When new vCenter Server updates are released, the vSphere Client shows a notification in the
Summary tab. Clicking the notification directs you to the Updates tab.
The Updates tab has an Update Planner page. This page shows a list of vCenter Server versions
that you can select.
Details include release date, version, build, and other information about each vCenter Server
version available.
The Type column tells you if the release item is an update, an upgrade, or a patch.
If multiple versions appear, the recommended version is preselected. After selecting a vCenter
Server version from the list, you can generate product interoperability reports and preupdate
reports.
Slide 6
In the vSphere Client, the Interoperability page appears on the Monitor tab of the vCenter
Server. This page displays VMware products currently registered with vCenter Server.
Columns show the name, current version, compatible version, and release notes of each
detected product.
If you do not see your registered VMware products, you can manually modify the list and add
the appropriate names and versions.
Slide 7
You can export report results in CSV format and use the report as a guide to prepare for an
update.
Both product interoperability and precheck reports can be exported.
Slide 8
To manage the life cycle of vCenter Server, use the vCenter Server Management Interface (or
VAMI) to update and patch, and use the vCenter Server installer to upgrade.
Slide 9
You should now be able to meet the following objectives:
• Describe features of the vCenter Server Update Planner
• Run vCenter Server upgrade prechecks and interoperability reports
• And Export prechecks and interoperability report results.
This is the end of the Lesson 1 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 2: Overview of vSphere Lifecycle Manager!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Recognize features of vSphere Lifecycle Manager
• Distinguish between managing hosts using baselines and managing hosts using images
• And Change the patch download source.
Slide 3
vSphere Lifecycle Manager centralizes automated patch and version management for clusters,
ESXi, drivers and firmware, VM hardware, and VMware Tools.
vSphere Lifecycle Manager features include:
• Upgrading and patching ESXi hosts
• Installing and updating third-party software on ESXi hosts
• Standardizing images across hosts in a cluster
• Installing and updating ESXi drivers and firmware
• And Managing VMware Tools and VM hardware upgrades.
Slide 4
vSphere Lifecycle Manager supports two methods for updating and upgrading ESXi hosts.
Only one method is supported at a time.
If you switch from managing using baselines to managing using images, you cannot switch
back.
You can see several comparisons in the slide.
Slide 5
In the vSphere Lifecycle Manager home view, you configure and administer the vSphere
Lifecycle Manager instance that runs on your vCenter Server system.
From the drop-down menu at the top of the Lifecycle Manager pane, you can select the vCenter
Server system that you want to manage. To access the vSphere Lifecycle Manager home view
in the vSphere Client, select Menu > Lifecycle Manager.
You do not require special privileges to access the vSphere Lifecycle Manager home view.
In the Lifecycle Manager pane, you can access the following tabs: Image Depot, Updates,
Imported ISOs, Baselines, and Settings.
Slide 6
By default, vSphere Lifecycle Manager is configured to download patch metadata automatically
from the VMware repository.
Select Settings > Patch Setup to change the patch download source or add a URL to configure a
custom download source.
Slide 7
When performing remediation operations on a cluster that is enabled with vSphere DRS,
vSphere Lifecycle Manager automatically integrates with vSphere DRS:
• When vSphere Lifecycle Manager places hosts into maintenance mode, vSphere DRS
evacuates each host before the host is patched.
• When vSphere Lifecycle Manager attempts to place a host into maintenance mode,
certain prechecks are performed to ensure that the ESXi host can enter maintenance
mode.
• And The vSphere Client reports any configuration issues that might prevent an ESXi
host from entering maintenance mode.
Slide 8
You should now be able to meet the following objectives:
• Recognize features of vSphere Lifecycle Manager
• Distinguish between managing hosts using baselines and managing hosts using images
• And Change the patch download source.
This is the end of the Lesson 2 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 3: Working with Baselines!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Identify types of baselines and baseline groups
• Recognize how to create baselines
• And Describe how to update hosts using baselines.
Slide 3
A baseline includes one or more patches, extensions, or upgrades. vSphere Lifecycle Manager
includes the following dynamic baselines by default:
• Critical Host Patches
• Non-Critical Host Patches
• And Host Security Patches.
A baseline group includes multiple baselines.
Baseline groups can contain one upgrade baseline and one or more patch and extension
baselines.
Slide 4
Using the New Baseline wizard, you can create baselines to meet the needs of your deployment:
• The Fixed patch baseline is a set of patches that do not change as patch availability
changes.
• The Dynamic patch baseline is a set of patches that meet certain criteria.
• And the host extension baseline contains additional software for ESXi hosts. This
additional software might be VMware or third-party software.
When you create a patch or extension baseline, you can filter the patches and extensions
available in the vSphere Lifecycle Manager repository to find specific patches and extensions to
include in the baseline.
Slide 5
To create a baseline, select Lifecycle Manager from the Menu drop-down menu. Click NEW >
Baseline.
Slide 6
Provide the name, a description, the content of the baseline, and the ESXi version that this
baseline applies to.
Slide 7
To create a dynamic baseline, set the criteria for adding patches to the baseline and select the
check box for automatic updating of the baseline.
A dynamic baseline is a set of patches that meet certain criteria. The content of a dynamic
baseline changes as the available patches change. You can manually exclude or add specific
patches to the baseline.
Slide 8
To create a fixed baseline, select the patches that you want to include in the baseline.
You must also disable the automatic updates by deselecting the check box on the Select Patches
Automatically page.
A fixed baseline is a set of patches that does not change as patch availability changes.
Slide 9
Managing the life cycle of a standalone host or cluster of hosts is a five-step process:
Step 1. Select your host or cluster and select the Updates tab. The Baselines window is the
default view.
2. Attach one or more baselines.
3. Check compliance of your host or cluster with the attached baselines.
4. Perform a precheck before remediating.
And 5. Remediate the host or cluster.
Optionally, stage your patches to copy them to hosts for remediation later.
Slide 10
The Remediation Pre-check in vSphere Lifecycle Manager helps to verify that your remediation
is successful.
vSphere Lifecycle Manager notifies you about any actions that it takes before the remediation
and recommends actions for your attention.
Slide 11
During the remediating process, the upgrades, updates, and patches from the compliance check
are applied to your hosts:
• You can perform the remediation immediately or schedule it for a later date.
• Host remediation runs in different ways, depending on the types of baselines that you
attach and whether the host is in a cluster.
• For ESXi hosts in a cluster, the process is sequential by default.
• And the remediation of hosts in a cluster temporarily disables cluster features such as
vSphere HA admission control.
Slide 12
You should now be able to meet the following objectives:
• Identify types of baselines and baseline groups
• Recognize how to create baselines
• And Describe how to update hosts using baselines.
This is the end of the Lesson 3 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! We will now begin Lesson 4: Working with Images!
Slide 2
After completing this lesson, you should be able to meet the following objectives:
• Describe ESXi images
• Import ESXi updates into the vSphere Client
• Enable vSphere Lifecycle Manager in a cluster
• Define a cluster image using vSphere Lifecycle Manager
• Validate ESXi host compliance against a cluster image
• Update ESXi hosts using vSphere Lifecycle Manager
• And Apply a recommended image to the hosts in a cluster.
Slide 3
Managing clusters with images helps to standardize the software running on your ESXi hosts.
An ESXi image consists of several elements:
• ESXi base image: which is an update that provides software fixes and enhancements
• Components: A logical grouping of one or more VIBs (or vSphere Installation Bundles)
that encapsulates a functionality in ESXi
• Vendor add-ons: They are sets of components that OEMs bundle together with an ESXi
base image
• And Firmware and Drivers Add-On: which are firmware and driver bundles that you
can define for your cluster image.
To maintain consistency, you apply a single ESXi image to all hosts in a cluster.
The ESXi base image is a complete ESXi installation package and is enough to start an ESXi
host. Only VMware creates and releases ESXi base images. The ESXi base image is a grouping
of components. You must select at least the base image or vSphere version when creating a
cluster image. Starting with vSphere 7, the component is the smallest unit that is used by
vSphere Lifecycle Manager to install VMware and third-party software on ESXi hosts.
Components are the basic packaging for VIBs and metadata. The metadata provides the name
and version of the component.
On installation, a component provides you with a visible feature. For example, vSphere HA is
provided as a component. Components are optional elements to add to a cluster image.
Vendor add-ons are custom OEM images. Each add-on is a collection of components
customized for a family of servers. OEMs can add, update, or remove components from a base
image to create an add-on. Selecting an add-on is optional.
The firmware and drivers add-on is a vendor-provided add-on. It contains the components that
encapsulate firmware and driver update packages for a specific server type. To add a firmware
and drivers add-on to your image, you must first install the Hardware Support Manager plug-in
for the respective family of servers.
Slide 4
The landing page for the vSphere Lifecycle Manager home view is the Image Depot tab.
In the Image Depot tab, you can view details about downloaded ESXi elements:
• ESXi versions
• Vendor add-ons
• And Components.
When you select a downloaded file, the details appear to the right:
• When you select an ESXi version, the details include the version name, build number,
category, and description, and the list of components that make up the base image.
• When you select a vendor add-on, the details include the add-on name, version, vendor
name, release date, category, and the list of added or removed components.
• When you select a component, the details include the component name, version,
publisher, release date, category, severity, and contents (the VIBs).
Slide 5
To use ESXi updates from a configured online depot, select Sync Updates from the Actions
drop-down menu in the Lifecycle Manager pane.
You can also use ESXi updates from an offline bundle:
• From the Actions drop-down menu, select Import Updates.
• Enter a URL or browse for a ZIP file that contains an ESXi image.
Slide 6
After all ESXi hosts in a cluster are upgraded to vSphere 7, you can convert their lifecycle
management from baselines to images.
You set up a single image and apply it to all hosts in a cluster.
This step ensures cluster-wide host image homogeneity.
Slide 7
When creating a cluster, you can create a corresponding cluster image:
1. Create a cluster.
2. Select the Manage image setup and updates on all hosts collectively check box.
3. Define the ESXi version for your cluster image.
And 4. (which is optional) Select vendor add-ons for the host.
Only add-ons that are compatible with the selected vSphere version appear in the drop-down
menu.
After your cluster is created, add ESXi hosts to it. The Create New Cluster wizard introduces a
switch for enabling vSphere Lifecycle Manager and selecting elements for the desired cluster
image.
You can further customize the image in the cluster update settings.
Slide 8
After you define a valid image, you can perform a compliance check to compare that image
with the image that runs on the ESXi hosts in your cluster.
You can check the image compliance at the level of various vCenter Server objects:
• At the host level for a specific ESXi host,
• At the cluster level for all ESXi hosts in the cluster,
• At the data center level for all clusters and hosts in the data center,
• And At the vCenter Server level for all data centers, clusters, and ESXi hosts in the
vCenter Server inventory.
The status of a host can be unknown, compliant, out of compliance, or not compatible with the
image.
• A host status is unknown before you check compliance.
• A compliant host is one that has the same ESXi image defined for the cluster and with
no standalone VIBs or differing components.
• If the host is out of compliance, a message about the impact of remediation appears. In
the example, the host must be rebooted as part of the remediation. Another impact that
might be reported is the requirement that the host enters maintenance mode.
• A host is not compatible if it runs an image version that is later than the desired cluster
image version, or if the host does not meet the installation requirements for the vSphere
build.
Slide 9
To ensure that the cluster's health is good and that no problems occur during the remediation
process of your ESXi hosts, you can perform a remediation precheck.
The procedure for a remediation precheck is as follows:
• In the vSphere Client, click Hosts and Clusters and select a cluster that is managed by
an image.
Slide 10
The hardware compatibility check verifies the underlying hardware of the ESXi host in the
cluster against the vSAN Hardware Compatibility List (or HCL).
Hardware compatibility is checked only for vSAN storage controllers and not with the full
VMware Compatibility Guide.
Slide 11
When you convert a cluster to use vSphere Lifecycle Manager, ESXi hosts are scanned.
During this scan, any VIB that is not part of an identified component is identified as standalone,
and a warning appears.
Before updating ESXi hosts, you can import or ignore standalone VIBs:
• Import a component that contains the VIB and add it to the cluster image.
• Or ignore the warning and let the update process remove the VIB from the host.
A warning about a standalone VIB does not block the process of converting the cluster to use
vSphere Lifecycle Manager. If you continue to update ESXi, the VIB is uninstalled from the
host as part of the process.
You cannot include standalone VIBs in a cluster image.
Slide 12
When you remediate a cluster that you manage with an image, vSphere Lifecycle Manager
applies the following elements to the ESXi hosts:
• ESXi image version,
• Optional: vendor addons,
• Optional: firmware and driver addons,
• And optional: user specified components.
Remediation makes the selected hosts compliant with the desired image.
You can remediate a single ESXi host or an entire cluster, or simply pre-check hosts without
updating them.
The Review Remediation Impact dialog box shows the impact summary, applicable
remediation settings, End User License Agreement, and impact on specific hosts.
vSphere Lifecycle Manager performs a precheck on every remediation call. When the precheck
is complete, vSphere Lifecycle Manager applies the latest saved cluster image to the hosts.
During each step of a remediation process, vSphere Lifecycle Manager determines the
readiness of the host to enter or exit maintenance mode or be rebooted.
You can also click RUN PRE-CHECK to precheck hosts without updating them.
Slide 13
The Review Remediation Impact dialog box includes the following information:
• Impact summary
• Applicable remediation settings
• End User License Agreement
Slide 14
You check for image recommendations on demand and per cluster. You can check for
recommendations for different clusters at the same time. When recommendation checks run
concurrently with other checks, with compatibility scans, and with remediation operations, the
checks are queued to run one at a time.
If you have never checked recommendations for the cluster, the View recommended images
option is dimmed.
After you select Check for recommended images, the results for that cluster are generated.
The Checking for recommended images task is visible to all user sessions and cannot be
canceled.
When the check completes, you can select View recommended images.
Slide 15
When you view recommended images, vSphere shows the following types of images:
• the CURRENT IMAGE: The image specification that is being used to manage the
cluster.
• LATEST IN CURRENT SERIES: If available, a later version within the same release
series appears. For example, if the cluster is running vSphere 7.0 and vSphere 7.1 is
released, an image based on vSphere 7.1 appears.
• LATEST AND GREATEST: If available, a later version in a later major release. For
example, if the cluster is running vSphere 7.0 or 7.1and vSphere 8.0 is released, an
image based on vSphere 8.0 appears.
vSphere might show one or more recommendations:
• If the latest release within the current series is the same as the latest major version
released, only one recommendation appears.
• If the two releases are different, two recommendations appear.
• If the current image is the same as the latest release, no recommendations appear.
Slide 16
You can use a recommended image as a starting point to customize the cluster image. When
you select a recommended image, the Edit Image workflow appears.
You can perform these actions:
• Add or remove image components.
• Validate and save the image.
• Scan the cluster for compatibility.
• Remediate the cluster.
Slide 17
After you start managing a cluster with an image, you can edit the image by changing, adding,
or removing components, such as the ESXi image version, vendor add-ons, firmware and driver
add-ons, and other components.
Before saving the image specification, you can validate it:
• Ensures completeness of the image
• Verifies that the image has no missing component dependencies
• And that it Confirms that components do not conflict with one another.
Slide 18
You should now be able to meet the following objectives:
• Describe ESXi images,
• Import ESXi updates into the vSphere Client,
• Enable vSphere Lifecycle Manager in a cluster,
• Define a cluster image using vSphere Lifecycle Manager,
• Validate ESXi host compliance against a cluster image,
• Update ESXi hosts using vSphere Lifecycle Manager,
• And apply a recommended image to the hosts in a cluster.
This is the end of the Lesson 4 Lecture. If you have any questions, please contact your
Instructor. We will see you next time and thanks for watching!
Slide 1
Welcome back! Let’s get started with the final lecture, Lesson 5: Managing the Life Cycle of
VMware Tools and VM Hardware!
Slide 2
After completing this lesson, you should be able to meet the following objective:
• Use vSphere Lifecycle Manager to upgrade VMware Tools and VM hardware.
Slide 3
With each release of ESXi, VMware provides a new release of VMware Tools.
New releases include:
• Bug fixes
• Security patches
• New driver support for ESXi enhancements
• And Performance enhancements for virtual devices.
Keeping VMware Tools up to date is an important part of ongoing data center maintenance.
Slide 4
From a host or cluster's Updates tab, select VMware Tools to manage the life cycle of VMware
Tools.
Step 1: Check the status of VMware Tools running in your VMs. A VM has one of the
following VMware Tools status values:
• Upgrade Available: You can upgrade VMware Tools to match the current version
available for your ESXi hosts.
• Guest Managed: Your VM is running the Linux OpenVMTools package. Use native
Linux package management tools to upgrade VMware Tools.
• Not Installed: Consider installing VMware Tools in this VM.
• Unknown: vSphere Lifecycle Manager has not yet checked the status of VMware Tools.
Ensure that the VM is powered on before clicking the CHECK STATUS link.
• And Up to Date: The version of VMware Tools running in the VM matches the latest
available version for the ESXi host.
Slide 5
Select the VMs that use VMware Tools whose version you want to upgrade to a newer version.
Step 2: Click UPGRADE TO MATCH HOST.
1. Select the VMs to upgrade.
2. Schedule the upgrade. Plan the upgrade during your maintenance window.
And 3. Select rollback options.
Slide 6
With each subsequent release of ESXi, VMware provides a new release of VM hardware.
As ESXi improves its hardware support, VMware often carries that support into its VMs.
New releases include:
• Greater configuration maximums,
• And New types of hardware (for example, vGPU, vNVMe, vSGX, vTPM, and so on).
Consider upgrading VM hardware only when new features are required.
Slide 7
Select VM Hardware to upgrade your VMs' hardware.
Step 1: Check the status of the VM hardware running in your VMs. A VM has one of the
following status values:
• Upgrade Available: You can choose to upgrade VM hardware to match the current
version available for your ESXi hosts.
• And Up to Date: The version of VM hardware running in the VM matches the latest
available version for the ESXi host.
Slide 8
Select the VMs whose hardware version you want to upgrade to the latest version available on
the ESXi host on which they run.
Step 2: Click UPGRADE TO MATCH HOST.
1. Select the VMs to upgrade.
2. Schedule the upgrade. Plan the upgrade during your maintenance window.
and 3. Select rollback options.
Slide 9
You should now be able to meet the following objective:
• Use vSphere Lifecycle Manager to upgrade VMware Tools and VM hardware.
Slide 10
By developing vSphere knowledge and skills and helping to create a modern data center at
Virtual Beans, you help the company meet its expanding business demands.
Your manager recognizes your competence and assigns you as the lead vSphere administrator.
Slide 11
Some key points from Module 10 are:
• With the Update Planner feature, you can perform prechecks to verify that your vCenter
Server system meets the minimum requirements for a successful upgrade.
• vSphere Lifecycle Manager centralizes automated patch and version management for
clusters, ESXi, drivers and firmware, VM hardware, and VMware Tools.
• In vSphere Lifecycle Manager, you can manage ESXi hosts by using baselines, or you
can manage a cluster of ESXi hosts by using images.
• Keeping VMware Tools up to date is an important part of ongoing data center
maintenance.
• And Consider upgrading VM hardware only when new features are required.
Slide 12
This is the end of Module 10 and the Lesson 5 Lecture. The Labs and Assignments associated
with this Module are as follows:
• Lab 24: Using vSphere Lifecycle Manager,
• And the Module 10 Quiz: vSphere Lifecycle Management.
This is the end of the Course Lectures. If you have any questions, please contact your
Instructor. Thank you for attending Stanly Community Colleges IT Academy for VMware
vSphere v7.0: Install, Configure, and Manage and we hope that you will look to us for your
future educational needs!