

VMware ESXi 8.0 Update 3c Release Notes


Last Updated: May 22, 2025

This document contains the following sections:


Introduction
What's New
Earlier Releases of ESXi 8.0
Patches Contained in This Release
Resolved Issues
Known Issues from Previous Releases

Introduction

VMware ESXi 8.0 Update 3c | 12 DEC 2024 | Build 24414501


Check for additions and updates to these release notes.

What's New
This release resolves an issue with vSphere vMotion tasks that fail with an error NamespaceMgr could not lock the db file.

Earlier Releases of ESXi 8.0


New features, resolved, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 8.0 are:
VMware ESXi 8.0 Update 3b Release Notes
VMware ESXi 8.0 Update 3 Release Notes
VMware ESXi 8.0 Update 2c Release Notes
VMware ESXi 8.0 Update 2b Release Notes
VMware ESXi 8.0 Update 2 Release Notes
VMware ESXi 8.0 Update 1d Release Notes
VMware ESXi 8.0 Update 1c Release Notes
VMware ESXi 8.0 Update 1a Release Notes
VMware ESXi 8.0 Update 1 Release Notes
VMware ESXi 8.0d Release Notes
VMware ESXi 8.0c Release Notes
VMware ESXi 8.0b Release Notes
VMware ESXi 8.0a Release Notes
For internationalization, compatibility, and open source components, see the VMware vSphere 8.0 Release Notes.

Patches Contained in This Release

VMware ESXi 8.0 Update 3c


Build Details
VMware vSphere Hypervisor (ESXi) Offline Bundle

Download Filename: VMware-ESXi-8.0U3c-24414501-depot.zip

Build: 24414501

Download Size: 629.0 MB

SHA256 checksum: 58d0632d3e51adf26ffacce57873e72ff41151d92a8378234f2daa476f2cfba5

Host Reboot Required: Yes

Virtual Machine Migration or Shutdown Required: Yes
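
Before installation, you can verify the integrity of the downloaded offline bundle by comparing its SHA256 checksum against the value above. A minimal sketch on a Linux workstation (the sha256sum utility and the local download location are assumptions, not part of this release):

sha256sum VMware-ESXi-8.0U3c-24414501-depot.zip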

Components

Component                          Bulletin                            Category   Severity

ESXi Component - core ESXi VIBs    ESXi_8.0.3-0.55.24414501            Bugfix     Critical

ESXi Install/Upgrade Component     esx-update_8.0.3-0.55.24414501      Bugfix     Critical

ESXi Install/Upgrade Component     esxio-update_8.0.3-0.55.24414501    Bugfix     Critical

Rollup Bulletins
These rollup bulletins contain the latest VIBs with all the fixes after the initial release of ESXi 8.0.

Bulletin ID Category Severity Detail

ESXi80U3c-24414501 Bugfix Critical Bugfix image

Image Profiles
VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to receive the new bug fixes.

Image Profile Name

ESXi-8.0U3c-24414501-standard

ESXi-8.0U3c-24414501-no-tools


ESXi Images

Name and Version Release Date Category Detail

ESXi8.0U3c - 24414501 12 DEC 2024 Bugfix Bugfix image

Patch Download and Installation


Log in to the Broadcom Support Portal to download this patch.
For download instructions, see Download Broadcom products and software.
For details on updates and upgrades by using vSphere Lifecycle Manager, see About vSphere Lifecycle Manager and vSphere
Lifecycle Manager Baselines and Images. You can also update ESXi hosts without the use of vSphere Lifecycle Manager by using an
image profile. To do this, you must manually download the patch offline bundle ZIP file.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
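
For example, a minimal ESXCLI sketch of an image-profile update with this release's offline bundle, assuming the bundle has been uploaded to a datastore named datastore1 and the host is in maintenance mode:

~# esxcli software profile update --depot=/vmfs/volumes/datastore1/VMware-ESXi-8.0U3c-24414501-depot.zip --profile=ESXi-8.0U3c-24414501-standard
~# reboot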

Resolved Issues

ESXi_8.0.3-0.55.24414501

Patch Category Bugfix

Patch Severity Critical

Host Reboot Required Yes

Virtual Machine Migration or Shutdown Required Yes

Affected Hardware N/A

Affected Software N/A


Affected VIBs
VMware_bootbank_esxio-base_8.0.3-0.55.24414501
VMware_bootbank_vcls-pod-crx_8.0.3-0.55.24414501
VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_gc-esxio_8.0.3-0.55.24414501
VMware_bootbank_drivervm-gpu-base_8.0.3-0.55.24414501
VMware_bootbank_trx_8.0.3-0.55.24414501
VMware_bootbank_clusterstore_8.0.3-0.55.24414501
VMware_bootbank_esx-xserver_8.0.3-0.55.24414501
VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers-esxio_8.0.3-0.55.24414501
VMware_bootbank_infravisor_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers_8.0.3-0.55.24414501
VMware_bootbank_bmcal-esxio_8.0.3-0.55.24414501
VMware_bootbank_vdfs_8.0.3-0.55.24414501
VMware_bootbank_esx-base_8.0.3-0.55.24414501
VMware_bootbank_vsan_8.0.3-0.55.24414501
VMware_bootbank_vds-vsip_8.0.3-0.55.24414501
VMware_bootbank_gc_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner-esxio_8.0.3-0.55.24414501
VMware_bootbank_bmcal_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner_8.0.3-0.55.24414501
VMware_bootbank_crx_8.0.3-0.55.24414501
VMware_bootbank_vsanhealth_8.0.3-0.55.24414501
VMware_bootbank_cpu-microcode_8.0.3-0.55.24414501
VMware_bootbank_esxio_8.0.3-0.55.24414501

PRs Fixed 3459675

CVE numbers N/A

The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include
the rollup bulletin in the baseline to avoid failure during host patching.
This patch updates the esxio-base, vcls-pod-crx, esxio-dvfilter-generic-fastpath, gc-esxio, drivervm-gpu-base, trx, clusterstore, esx-xserver, esx-dvfilter-generic-fastpath, native-misc-drivers-esxio, infravisor, native-misc-drivers, bmcal-esxio, vdfs, esx-base, vsan, vds-vsip, gc, esxio-combiner-esxio, bmcal, esxio-combiner, crx, vsanhealth, cpu-microcode, and esxio VIBs.
This patch resolves the following issue:
vSphere vMotion tasks fail with an error NamespaceMgr could not lock the db file
In rare cases, when virtual machines are configured with namespaces, an issue with the namespace database might cause migration of
such VMs with vSphere vMotion to fail. In the vSphere Client, you see errors such as:
Failed to receive migration. The source detected that the destination failed to resume.
An error occurred restoring the virtual machine state during migration. NamespaceMgr could not lock the db
file.
The destination ESXi host vmware.log shows the following messages:
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.checkpoint.migration.failedReceive] Failed to receive migration.
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.namespaceMgr.noLock] NamespaceMgr could not lock the db file.
The issue is more likely to occur on VMs with hardware version earlier than 19, and impacts only VMs provisioned on shared datastores
other than NFS, such as VMFS, vSAN, and vSphere Virtual Volumes.
This issue was reported as known in KB 369767 and is resolved in this release.
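To confirm whether a failed migration matches this issue, you can search the destination VM's vmware.log for the messages above. A minimal sketch, with the datastore and VM directory names as placeholders you must adjust:

~# grep -E "failedReceive|namespaceMgr.noLock" /vmfs/volumes/<datastore>/<vm_name>/vmware.log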


esx-update_8.0.3-0.55.24414501

Patch Category Bugfix

Patch Severity Critical

Host Reboot Required Yes

Virtual Machine Migration or Shutdown Required Yes

Affected Hardware N/A

Affected Software N/A

Affected VIBs Included


VMware_bootbank_esx-update_8.0.3-0.55.24414501
VMware_bootbank_loadesx_8.0.3-0.55.24414501

PRs Fixed N/A

CVE numbers N/A

Due to their dependency on the esx-base VIB, the following VIBs are updated with build number and patch version changes, but
deliver no fixes: loadesx and esx-update.

esxio-update_8.0.3-0.55.24414501

Patch Category Bugfix

Patch Severity Critical

Host Reboot Required Yes

Virtual Machine Migration or Shutdown Required Yes

Affected Hardware N/A

Affected Software N/A

Affected VIBs Included


VMware_bootbank_esxio-update_8.0.3-0.55.24414501
VMware_bootbank_loadesxio_8.0.3-0.55.24414501

PRs Fixed N/A

CVE numbers N/A

Due to their dependency on the esx-base VIB, the following VIBs are updated with build number and patch version changes, but
deliver no fixes: loadesxio and esxio-update.

ESXi-8.0U3c-24414501-standard

Profile Name ESXi-8.0U3c-24414501-standard


Build For build information, see Patches Contained in This Release.

Vendor VMware by Broadcom, Inc.

Release Date December 12, 2024

Acceptance Level Partner Supported

Affected Hardware N/A

Affected Software N/A

Affected VIBs
VMware_bootbank_esxio-base_8.0.3-0.55.24414501
VMware_bootbank_vcls-pod-crx_8.0.3-0.55.24414501
VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_gc-esxio_8.0.3-0.55.24414501
VMware_bootbank_drivervm-gpu-base_8.0.3-0.55.24414501
VMware_bootbank_trx_8.0.3-0.55.24414501
VMware_bootbank_clusterstore_8.0.3-0.55.24414501
VMware_bootbank_esx-xserver_8.0.3-0.55.24414501
VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers-esxio_8.0.3-0.55.24414501
VMware_bootbank_infravisor_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers_8.0.3-0.55.24414501
VMware_bootbank_bmcal-esxio_8.0.3-0.55.24414501
VMware_bootbank_vdfs_8.0.3-0.55.24414501
VMware_bootbank_esx-base_8.0.3-0.55.24414501
VMware_bootbank_vsan_8.0.3-0.55.24414501
VMware_bootbank_vds-vsip_8.0.3-0.55.24414501
VMware_bootbank_gc_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner-esxio_8.0.3-0.55.24414501
VMware_bootbank_bmcal_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner_8.0.3-0.55.24414501
VMware_bootbank_crx_8.0.3-0.55.24414501
VMware_bootbank_vsanhealth_8.0.3-0.55.24414501
VMware_bootbank_cpu-microcode_8.0.3-0.55.24414501
VMware_bootbank_esxio_8.0.3-0.55.24414501
VMware_bootbank_esx-update_8.0.3-0.55.24414501
VMware_bootbank_loadesx_8.0.3-0.55.24414501
VMware_bootbank_esxio-update_8.0.3-0.55.24414501
VMware_bootbank_loadesxio_8.0.3-0.55.24414501

PRs Fixed 3459675

Related CVE numbers N/A

This patch resolves the following issue:


vSphere vMotion tasks fail with an error NamespaceMgr could not lock the db file


In rare cases, when virtual machines are configured with namespaces, an issue with the namespace database might cause migration of
such VMs with vSphere vMotion to fail. In the vSphere Client, you see errors such as:
Failed to receive migration. The source detected that the destination failed to resume.
An error occurred restoring the virtual machine state during migration. NamespaceMgr could not lock the db
file.
The destination ESXi host vmware.log shows the following messages:
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.checkpoint.migration.failedReceive] Failed to receive migration.
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.namespaceMgr.noLock] NamespaceMgr could not lock the db file.
The issue is more likely to occur on VMs with hardware version earlier than 19, and impacts only VMs provisioned on shared datastores
other than NFS, such as VMFS, vSAN, and vSphere Virtual Volumes.
This issue was reported as known in KB 369767 and is resolved in this release.

ESXi-8.0U3c-24414501-no-tools

Profile Name ESXi-8.0U3c-24414501-no-tools

Build For build information, see Patches Contained in This Release.

Vendor VMware by Broadcom, Inc.

Release Date December 12, 2024

Acceptance Level Partner Supported

Affected Hardware N/A

Affected Software N/A


Affected VIBs
VMware_bootbank_esxio-base_8.0.3-0.55.24414501
VMware_bootbank_vcls-pod-crx_8.0.3-0.55.24414501
VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_gc-esxio_8.0.3-0.55.24414501
VMware_bootbank_drivervm-gpu-base_8.0.3-0.55.24414501
VMware_bootbank_trx_8.0.3-0.55.24414501
VMware_bootbank_clusterstore_8.0.3-0.55.24414501
VMware_bootbank_esx-xserver_8.0.3-0.55.24414501
VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers-esxio_8.0.3-0.55.24414501
VMware_bootbank_infravisor_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers_8.0.3-0.55.24414501
VMware_bootbank_bmcal-esxio_8.0.3-0.55.24414501
VMware_bootbank_vdfs_8.0.3-0.55.24414501
VMware_bootbank_esx-base_8.0.3-0.55.24414501
VMware_bootbank_vsan_8.0.3-0.55.24414501
VMware_bootbank_vds-vsip_8.0.3-0.55.24414501
VMware_bootbank_gc_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner-esxio_8.0.3-0.55.24414501
VMware_bootbank_bmcal_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner_8.0.3-0.55.24414501
VMware_bootbank_crx_8.0.3-0.55.24414501
VMware_bootbank_vsanhealth_8.0.3-0.55.24414501
VMware_bootbank_cpu-microcode_8.0.3-0.55.24414501
VMware_bootbank_esxio_8.0.3-0.55.24414501
VMware_bootbank_esx-update_8.0.3-0.55.24414501
VMware_bootbank_loadesx_8.0.3-0.55.24414501
VMware_bootbank_esxio-update_8.0.3-0.55.24414501
VMware_bootbank_loadesxio_8.0.3-0.55.24414501

PRs Fixed 3459675

Related CVE numbers N/A

This patch resolves the following issue:


vSphere vMotion tasks fail with an error NamespaceMgr could not lock the db file
In rare cases, when virtual machines are configured with namespaces, an issue with the namespace database might cause migration of
such VMs with vSphere vMotion to fail. In the vSphere Client, you see errors such as:
Failed to receive migration. The source detected that the destination failed to resume.
An error occurred restoring the virtual machine state during migration. NamespaceMgr could not lock the db
file.
The destination ESXi host vmware.log shows the following messages:
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.checkpoint.migration.failedReceive] Failed to receive migration.
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.namespaceMgr.noLock] NamespaceMgr could not lock the db file.
The issue is more likely to occur on VMs with hardware version earlier than 19, and impacts only VMs provisioned on shared datastores
other than NFS, such as VMFS, vSAN, and vSphere Virtual Volumes.
This issue was reported as known in KB 369767 and is resolved in this release.


ESXi-8.0U3c-24414501

Name ESXi

Version ESXi-8.0U3c-24414501

Release Date December 12, 2024

Category Bugfix

Affected Components
ESXi Component - core ESXi VIBs
ESXi Install/Upgrade Component

PRs Fixed 3459675

Related CVE numbers N/A

This patch resolves the issues listed in ESXi_8.0.3-0.55.24414501.

Known Issues from Previous Releases

Installation, Upgrade, and Migration Issues


If you update your vCenter to 8.0 Update 1, but ESXi hosts remain on an earlier version, vSphere Virtual Volumes datastores
on such hosts might become inaccessible
Self-signed VASA provider certificates are no longer supported in vSphere 8.0 and the configuration option
Config.HostAgent.ssl.keyStore.allowSelfSigned is set to false by default. If you update a vCenter instance to 8.0 Update 1 that
introduces vSphere APIs for Storage Awareness (VASA) version 5.0, and ESXi hosts remain on an earlier vSphere and VASA version,
hosts that use self-signed certificates might not be able to access vSphere Virtual Volumes datastores or cannot refresh the CA
certificate.
Workaround: Update hosts to ESXi 8.0 Update 1. If you do not update to ESXi 8.0 Update 1, see VMware knowledge base article
91387.
You cannot update to ESXi 8.0 Update 2b by using esxcli software vib commands
Starting with ESXi 8.0 Update 2, upgrade or update of ESXi by using the commands esxcli software vib update or esxcli
software vib install is not supported. If you use esxcli software vib update or esxcli software vib install to update
your ESXi 8.0 Update 2 hosts to 8.0 Update 2b or later, the task fails. In the logs, you see an error such as:
ESXi version change is not allowed using esxcli software vib commands.
Please use a supported method to upgrade ESXi.
vib = VMware_bootbank_esx-base_8.0.2-0.20.22481015 Please refer to the log file for more details.
Workaround: If you are upgrading or updating ESXi from a depot zip bundle downloaded from the VMware website, VMware supports
only the update command esxcli software profile update --depot=<depot_location> --profile=<profile_name>. For
more information, see Upgrade or Update a Host with Image Profiles.
The Cancel option in an interactive ESXi installation might not work as expected
Due to an update of the Python library, activating the Cancel option by pressing the ESC key during an interactive ESXi installation might not work as expected. The issue occurs only in interactive installations, not in scripted or upgrade scenarios.
Workaround: Press the ESC key twice and then press any other key to activate the Cancel option.
If you apply a host profile using a software FCoE configuration to an ESXi 8.0 host, the operation fails with a validation error
Starting from vSphere 7.0, software FCoE is deprecated, and in vSphere 8.0 software FCoE profiles are not supported. If you try to
apply a host profile from an earlier version to an ESXi 8.0 host, for example to edit the host customization, the operation fails. In the
vSphere Client, you see an error such as Host Customizations validation error.
Workaround: Disable the Software FCoE Configuration subprofile in the host profile.
You cannot use ESXi hosts of version 8.0 as a reference host for existing host profiles of earlier ESXi versions
Validation of existing host profiles for ESXi versions 7.x, 6.7.x and 6.5.x fails when only an 8.0 reference host is available in the
inventory.


Workaround: Make sure you have a reference host of the respective version in the inventory. For example, use an ESXi 7.0 Update 2
reference host to update or edit an ESXi 7.0 Update 2 host profile.
VMNICs might be down after an upgrade to ESXi 8.0
If the peer physical switch of a VMNIC does not support Media Auto Detect, or Media Auto Detect is disabled, and the VMNIC link is set
down and then up, the link remains down after upgrade to or installation of ESXi 8.0.
Workaround: Use either of these two options:
1. Enable the option media-auto-detect in the BIOS settings by navigating to System Setup Main Menu, usually by pressing F2 or
opening a virtual console, and then Device Settings > <specific broadcom NIC> > Device Configuration Menu > Media Auto
Detect. Reboot the host.
2. Alternatively, use an ESXCLI command similar to: esxcli network nic set -S <your speed> -D full -n <your nic>. With
this option, you also set a fixed speed to the link, and it does not require a reboot.
After upgrade to ESXi 8.0, you might lose some nmlx5_core driver module settings due to obsolete parameters
Some module parameters for the nmlx5_core driver, such as device_rss, drss and rss, are deprecated in ESXi 8.0 and any custom
values, different from the default values, are not kept after an upgrade to ESXi 8.0.
Workaround: Replace the values of the device_rss, drss and rss parameters as follows:
device_rss: Use the DRSS parameter.
drss: Use the DRSS parameter.
rss: Use the RSS parameter.
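After the upgrade, you can reapply your custom values through the new parameter names by using ESXCLI. A minimal sketch, where the value 4 is only an illustrative placeholder for your previous custom setting, and a host reboot is assumed for the change to take effect:

~# esxcli system module parameters set -m nmlx5_core -p "DRSS=4 RSS=4"
~# esxcli system module parameters list -m nmlx5_core
~# reboot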
Second stage of vCenter Server restore procedure freezes at 90%
When you use the vCenter Server GUI installer or the vCenter Server Appliance Management Interface (VAMI) to restore a vCenter
from a file-based backup, the restore workflow might freeze at 90% with an error 401 Unable to authenticate user, even though
the task completes successfully in the backend. The issue occurs if the deployed machine has a different time than the NTP server,
which requires a time sync. As a result of the time sync, clock skew might fail the running session of the GUI or VAMI.
Workaround: If you use the GUI installer, you can get the restore status by using the restore.job.get command from the
appliancesh shell. If you use VAMI, refresh your browser.

Miscellaneous Issues
RDMA over Converged Ethernet (RoCE) traffic might fail in Enhanced Networking Stack (ENS) and VLAN environment, and a
Broadcom RDMA network interface controller (RNIC)
The VMware solution for high bandwidth, ENS, does not support MAC VLAN filters. However, an RDMA application that runs on a Broadcom RNIC in an ENS + VLAN environment requires a MAC VLAN filter. As a result, you might see some RoCE traffic disconnected. The issue is likely to occur in an NVMe over RDMA + ENS + VLAN environment, or in an ENS + VLAN + RDMA app environment, when an ESXi host reboots or an uplink goes up and down.
Workaround: None
The irdman driver might fail when you use Unreliable Datagram (UD) transport mode ULP for RDMA over Converged Ethernet
(RoCE) traffic
If for some reason you choose to use the UD transport mode upper layer protocol (ULP) for RoCE traffic, the irdman driver might fail.
This issue is unlikely to occur, as the irdman driver only supports iSCSI Extensions for RDMA (iSER), which uses ULPs in Reliable
Connection (RC) mode.
Workaround: Use ULPs with RC transport mode.
You might see compliance errors during upgrade to ESXi 8.0 Update 2b on servers with active Trusted Platform Module (TPM)
encryption and vSphere Quick Boot
If you use the vSphere Lifecycle Manager to upgrade your clusters to ESXi 8.0 Update 2b, in the vSphere Client you might see
compliance errors for hosts with active TPM encryption and vSphere Quick Boot.
Workaround: Ignore the compliance errors and proceed with the upgrade.
If IPv6 is deactivated, you might see 'Jumpstart plugin restore-networking activation failed' error during ESXi host boot
In the ESXi console, during the boot up sequence of a host, you might see the error banner Jumpstart plugin restore-networking
activation failed. The banner displays only when IPv6 is deactivated and does not indicate an actual error.
Workaround: Activate IPv6 on the ESXi host or ignore the message.
Reset or restore of the ESXi system configuration in a vSphere system with DPUs might cause an invalid state of the DPUs
If you reset or restore the ESXi system configuration in a vSphere system with DPUs, for example, by selecting Reset System Configuration in the direct console, the operation might cause an invalid state of the DPUs. In the DCUI, you might see errors such as Failed to reset system configuration. Note that this operation cannot be performed when a managed DPU is present. A backend call to the -f force reboot option is not supported for ESXi installations with a DPU. Although ESXi 8.0 supports the -f force reboot option, if you use reboot -f on an ESXi configuration with a DPU, the forceful reboot might cause an invalid state.
Workaround: Reset System Configuration in the direct console interface is temporarily disabled. Avoid resetting the ESXi system
configuration in a vSphere system with DPUs.
In a vCenter Server system with DPUs, if IPv6 is disabled, you cannot manage DPUs


Although the vSphere Client allows the operation, if you disable IPv6 on an ESXi host with DPUs, you cannot use the DPUs, because
the internal communication between the host and the devices depends on IPv6. The issue affects only ESXi hosts with DPUs.
Workaround: Make sure IPv6 is enabled on ESXi hosts with DPUs.
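If IPv6 has been turned off on such a host, a minimal ESXCLI sketch to re-enable it, assuming a host reboot for the setting to take effect:

~# esxcli network ip set --ipv6-enabled=true
~# reboot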
TCP connections intermittently drop on an ESXi host with Enhanced Networking Stack
If the sender VM is on an ESXi host with Enhanced Networking Stack, TCP checksum interoperability issues when the value of the TCP
checksum in a packet is calculated as 0xFFFF might cause the end system to drop or delay the TCP packet.
Workaround: Disable TCP checksum offloading on the sender VM on ESXi hosts with Enhanced Networking Stack. In Linux, you can
use the command sudo ethtool -K <interface> tx off.
You might see a 10-minute delay in rebooting an ESXi host on an HPE server with a pre-installed Pensando DPU
In rare cases, when the DPU fails, HPE servers with a pre-installed Pensando DPU might take more than 10 minutes to reboot: the ESXi host might fail with a purple diagnostic screen, and the default wait time for the DPU is 10 minutes.
Workaround: None.
If you have a USB interface enabled in a remote management application that you use to install ESXi 8.0, you see an
additional standard switch vSwitchBMC with uplink vusb0
Starting with vSphere 8.0, in both Integrated Dell Remote Access Controller (iDRAC) and HP Integrated Lights Out (ILO), when you have a USB interface enabled, vUSB or vNIC respectively, an additional standard switch vSwitchBMC with uplink vusb0 gets created
VMware Cloud Foundation Bring-Up process to fail.
Workaround: Before vSphere 8.0 installation, disable the USB interface in the remote management application that you use by following
vendor documentation.
After vSphere 8.0 installation, use the ESXCLI command esxcfg-advcfg -s 0 /Net/BMCNetworkEnable to prevent the creation of a
virtual switch vSwitchBMC and associated portgroups on the next reboot of host.
See this script as an example:
~# esxcfg-advcfg -s 0 /Net/BMCNetworkEnable
The value of BMCNetworkEnable is 0 and the service is disabled.
~# reboot
After the host reboots, no virtual switch, port group, or VMkernel NIC related to the remote management application network is created on the host.
If hardware offload mode is disabled on an NVIDIA BlueField DPU, virtual machines with a configured SR-IOV virtual function
cannot power on
NVIDIA BlueField DPUs must have hardware offload mode enabled to allow virtual machines with a configured SR-IOV virtual function to power on and operate.
Workaround: Always use the default enabled hardware offload mode for NVIDIA BlueField DPUs when you have VMs with a configured SR-IOV virtual function connected to a virtual switch.
In the Virtual Appliance Management Interface (VAMI), you see a warning message during the pre-upgrade stage
With the move of vSphere plug-ins to a remote plug-in architecture, vSphere 8.0 deprecates support for local plug-ins. If your 8.0 vSphere environment has local plug-ins, some breaking changes for such plug-ins might cause the pre-upgrade check by using VAMI to fail.
In the Pre-Update Check Results screen, you see an error such as:
Warning message: The compatibility of plug-in package(s) %s with the new vCenter Server version cannot be
validated. They may not function properly after vCenter Server upgrade.
Resolution: Please contact the plug-in vendor and make sure the package is compatible with the new vCenter
Server version.
Workaround: Refer to the VMware Compatibility Guide and VMware Product Interoperability Matrix or contact the plug-in vendors for
recommendations to make sure local plug-ins in your environment are compatible with vCenter Server 8.0 before you continue with the
upgrade. For more information, see the blog Deprecating the Local Plugins :- The Next Step in vSphere Client Extensibility Evolution
and VMware knowledge base article 87880.

Networking Issues
You cannot set the Maximum Transmission Unit (MTU) on a VMware vSphere Distributed Switch to a value larger than 9174 on
a Pensando DPU
If you have the vSphere Distributed Services Engine feature with a Pensando DPU enabled on your ESXi 8.0 system, you cannot set
the Maximum Transmission Unit (MTU) on a vSphere Distributed Switch to a value larger than 9174.
Workaround: None.
Connection-intensive RDMA workload might lead to loss of traffic on Intel Ethernet E810 Series devices with inbox driver
irdman-1.4.0.1
The inbox irdman driver version 1.4.0.1 does not officially support vSAN over RDMA. Tests running 10,000 RDMA connections, typical for vSAN environments, might occasionally lose all traffic on Intel Ethernet E810 Series devices with NVM version 4.2 and irdman driver version 1.4.0.1.
Workaround: None.

Transfer speed in IPv6 environments with active TCP segmentation offload is slow
In environments with active IPv6 TCP segmentation offload (TSO), transfer speed for Windows virtual machines with an e1000e virtual
NIC might be slow. The issue does not affect IPv4 environments.
Workaround: Deactivate TSO or use a vmxnet3 adapter instead of e1000e.
Capture of network packets by using the PacketCapture tool on ESXi does not work
Due to tightening of the rhttpproxy security policy, you can no longer use the PacketCapture tool as described in Collecting network
packets using the lightweight PacketCapture on ESXi.
Workaround: Use the pktcap-uw tool. For more information, see Capture and Trace Network Packets by Using the pktcap-uw Utility.
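For reference, a minimal pktcap-uw sketch that captures packets on an uplink to a file; the uplink name vmnic0 and the output path are placeholders:

~# pktcap-uw --uplink vmnic0 -o /tmp/vmnic0-capture.pcap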
You see link flapping on NICs that use the ntg3 driver of version 4.1.3 and later
When two NICs that use the ntg3 driver of versions 4.1.3 and later are connected directly, not to a physical switch port, link flapping
might occur. The issue does not occur on ntg3 drivers of versions earlier than 4.1.3 or on the tg3 driver. This issue is not related to the occasional Energy Efficient Ethernet (EEE) link flapping on such NICs. The fix for the EEE issue is to use an ntg3 driver of version 4.1.7 or later, or to disable EEE on physical switch ports.
Workaround: Upgrade the ntg3 driver to version 4.1.8 and set the new module parameter noPhyStateSet to 1. The noPhyStateSet parameter defaults to 0 and is not required in most environments, unless they face this issue.
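A minimal ESXCLI sketch for setting the parameter after the driver upgrade; reloading the driver or rebooting the host is assumed for the change to take effect:

~# esxcli system module parameters set -m ntg3 -p "noPhyStateSet=1"
~# esxcli system module parameters list -m ntg3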
When you migrate a VM from an ESXi host with a DPU device operating in SmartNIC (ECPF) Mode to an ESXi host with a DPU
device operating in traditional NIC Mode, overlay traffic might drop
When you use vSphere vMotion to migrate a VM attached to an overlay-backed segment from an ESXi host with a vSphere Distributed
Switch operating in offloading mode (where traffic forwarding logic is offloaded to the DPU) to an ESXi host with a VDS operating in a
non-offloading mode (where DPUs are used as a traditional NIC), the overlay traffic might drop after the migration.
Workaround: Deactivate and activate the virtual NIC on the destination ESXi host.
You cannot use Mellanox ConnectX-5 and ConnectX-6 cards with Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS)
mode in vSphere 8.0
Due to hardware limitations, ConnectX-5 and ConnectX-6 adapter cards do not support Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS) mode in vSphere 8.0.
Workaround: Use Mellanox ConnectX-6 Lx and ConnectX-6 Dx or later cards that support ENS Model 1 Level 2 and Model 2A.
Pensando DPUs do not support Link Layer Discovery Protocol (LLDP) on physical switch ports of ESXi hosts
When you enable LLDP on an ESXi host with a DPU, the host cannot receive LLDP packets.
Workaround: None.

Storage Issues
VASA API version does not automatically refresh after upgrade to vCenter Server 8.0
vCenter Server 8.0 supports VASA API version 4.0. However, after you upgrade your vCenter Server system to version 8.0, the VASA
API version might not automatically change to 4.0. You see the issue in two cases:
1. If a VASA provider that supports VASA API version 4.0 is registered with a previous version of VMware vCenter, the VASA API
version remains unchanged after you upgrade to VMware vCenter 8.0. For example, if you upgrade a VMware vCenter system of
version 7.x with a registered VASA provider that supports both VASA API versions 3.5 and 4.0, the VASA API version does not
automatically change to 4.0, even though the VASA provider supports VASA API version 4.0. After the upgrade, when you navigate
to vCenter Server > Configure > Storage Providers and expand the General tab of the registered VASA provider, you still see
VASA API version 3.5.
2. If you register a VASA provider that supports VASA API version 3.5 with a VMware vCenter 8.0 system and upgrade the VASA API
version to 4.0, even after the upgrade, you still see VASA API version 3.5.
Workaround: Unregister and re-register the VASA provider on the VMware vCenter 8.0 system.
vSphere vMotion operations of virtual machines residing on Pure-backed vSphere Virtual Volumes storage might time out
vSphere vMotion operations for VMs residing on vSphere Virtual Volumes datastores depend on the vSphere API for Storage
Awareness (VASA) provider and the timing of VASA operations to complete. In rare cases, and under specific conditions when the
VASA provider is under heavy load, response time from a Pure VASA provider might cause ESXi to exceed the timeout limit of 120 sec
for each phase of vSphere vMotion tasks. In environments with multiple stretched storage containers you might see further delays in the
Pure VASA provider response. As a result, running vSphere vMotion tasks time out and cannot complete.
Workaround: Reduce parallel workflows, especially on Pure storage on vSphere Virtual Volumes datastores exposed from the same
VASA provider, and retry the vSphere vMotion task.
You cannot create snapshots of virtual machines due to an error in the Content Based Read Cache (CBRC) that a digest
operation has failed
A rare race condition when assigning a content ID during the update of the CBRC digest file might cause a discrepancy between the
content ID in the data disk and the digest disk. As a result, you cannot create virtual machine snapshots. You see an error such as An
error occurred while saving the snapshot: A digest operation has failed in the backtrace. The snapshot creation task
completes upon retry.
Workaround: Retry the snapshot creation task.


vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File
Copy (NFC) manager
Migrations to a vSAN datastore by using vSphere Storage vMotion of virtual machines that have at least one snapshot and more than
one virtual disk with different storage policy might fail. The issue occurs due to an unauthenticated session of the NFC manager
because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.
Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only migration of the two remaining disks.
In a vSphere Virtual Volumes stretched storage cluster environment, some VMs might fail to power on after recovering from a
cluster-wide APD
In high scale Virtual Volumes stretched storage cluster environments, after recovering from a cluster-wide APD, due to the high load
during the recovery some VMs might fail to power on even though the datastores and protocol endpoints are online and accessible.
Workaround: Migrate the affected VMs to a different ESXi host and power on the VMs.
You see "Object or item referred not found" error for tasks on a First Class Disk (FCD)
Due to a rare storage issue, during the creation of a snapshot of an attached FCD, the disk might be deleted from the Managed Virtual
Disk Catalog. If you do not reconcile the Managed Virtual Disk Catalog, all subsequent operations on such an FCD fail with the Object or item referred not found error.
Workaround: See Reconciling Discrepancies in the Managed Virtual Disk Catalog.

vCenter Server and vSphere Client Issues


If you load the vSphere virtual infrastructure to more than 90%, ESXi hosts might intermittently disconnect from vCenter
Server
On rare occasions, if the vSphere virtual infrastructure continuously uses more than 90% of its hardware capacity, some ESXi hosts might intermittently disconnect from vCenter Server. Connection typically restores within a few seconds.
Workaround: If the connection to vCenter Server does not restore within a few seconds, reconnect the ESXi hosts manually by using the vSphere Client.
ESXi hosts might become unresponsive, and you see a vpxa dump file due to a rare condition of insufficient file descriptors
for the request queue on vpxa
In rare cases, when requests to the vpxa service take a long time, for example while waiting for access to a slow datastore, the request queue on vpxa might exceed the limit of file descriptors. As a result, ESXi hosts might briefly become unresponsive, and you see a vpxa-zdump.00* file in the /var/core directory. The vpxa logs contain the line Too many open files.
Workaround: None. The vpxa service automatically restarts and corrects the issue.
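To check whether a host hit this condition, you can look for the dump file and the log line. A minimal sketch, assuming the standard vpxa log location /var/log/vpxa.log:

~# ls /var/core/vpxa-zdump.00*
~# grep "Too many open files" /var/log/vpxa.log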
If you use custom update repository with untrusted certificates, vCenter Server upgrade or update by using vCenter Lifecycle
Manager workflows to vSphere 8.0 might fail
If you use a custom update repository with self-signed certificates that the VMware Certificate Authority (VMCA) does not trust, vCenter
Lifecycle Manager fails to download files from such a repository. As a result, vCenter Server upgrade or update operations by using
vCenter Lifecycle Manager workflows fail with the error Failed to load the repository manifest data for the configured
upgrade.
Workaround: Use CLI, the GUI installer, or the Virtual Appliance Management Interface (VAMI) to perform the upgrade. For more
information, see VMware knowledge base article 89493.

Virtual Machine Management Issues

vSphere Lifecycle Manager Issues


You see error messages when you try to stage vSphere Lifecycle Manager images on ESXi hosts of version earlier than 8.0
ESXi 8.0 introduces the option to explicitly stage desired state images, which is the process of downloading depot components from the
vSphere Lifecycle Manager depot to the ESXi hosts without applying the software and firmware updates immediately. However, staging
of images is only supported on ESXi 8.0 or later hosts. Attempting to stage a vSphere Lifecycle Manager image on ESXi hosts of version earlier than 8.0 results in messages that the staging of such hosts fails, and the hosts are skipped. This is expected behavior and does not indicate any failed functionality, as all ESXi 8.0 or later hosts are staged with the specified desired image.
Workaround: None. After you confirm that the affected ESXi hosts are of version earlier than 8.0, ignore the errors.
A remediation task by using vSphere Lifecycle Manager might intermittently fail on ESXi hosts with DPUs
When you start a vSphere Lifecycle Manager remediation on an ESXi host with DPUs, the host upgrades and reboots as expected, but
after the reboot, before completing the remediation task, you might see an error such as:
A general system error occurred: After host … remediation completed, compliance check reported host as 'non-
compliant'. The image on the host does not match the image set for the cluster. Retry the cluster remediation
operation.
This is a rare issue, caused by an intermittent timeout of the post-remediation scan on the DPU.


Workaround: Reboot the ESXi host and re-run the vSphere Lifecycle Manager compliance check operation, which includes the post-
remediation scan.

VMware Host Client Issues


VMware Host Client might display incorrect descriptions for severity event states
When you look in the VMware Host Client to see the descriptions of the severity event states of an ESXi host, they might differ from the
descriptions you see by using Intelligent Platform Management Interface (IPMI) or Lenovo XClarity Controller (XCC). For example, in
the VMware Host Client, the description of the severity event state for the PSU Sensors might be Transition to Non-critical from
OK, while in the XCC and IPMI, the description is Transition to OK.
Workaround: Verify the descriptions for severity event states by using the ESXCLI command esxcli hardware ipmi sdr list and
Lenovo XCC.

Security Features Issues


If you use an RSA key size smaller than 2048 bits, RSA signature generation fails
Starting from vSphere 8.0, ESXi uses the OpenSSL 3.0 FIPS provider. As part of the FIPS 186-4 requirement, the RSA key size must
be at least 2048 bits for any signature generation, and signature generation with SHA1 is not supported.
Workaround: Use an RSA key size of at least 2048 bits.
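To check the key size of a certificate before relying on it for signature generation, you can inspect it with openssl. A minimal sketch, assuming the default ESXi host certificate path /etc/vmware/ssl/rui.crt:

~# openssl x509 -noout -text -in /etc/vmware/ssl/rui.crt | grep "Public-Key"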
Even though you deactivate Lockdown Mode on an ESXi host, the lockdown is still reported as active after a host reboot
Even though you deactivate Lockdown Mode on an ESXi host, you might still see it as active after a reboot of the host.
Workaround: Add users dcui and vpxuser to the list of lockdown mode exception users and deactivate Lockdown Mode after the
reboot. For more information, see Specify Lockdown Mode Exception Users and Specify Lockdown Mode Exception Users in the
VMware Host Client.
