VMware ESXi 8.0 Update 3c Release Notes
Introduction
What's New
This release resolves an issue with vSphere vMotion tasks that fail with an error NamespaceMgr could not lock the db file.
Build: 24414501
Components
Rollup Bulletins
These rollup bulletins contain the latest VIBs with all the fixes after the initial release of ESXi 8.0.
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.
ESXi-8.0U3c-24414501-standard
ESXi-8.0U3c-24414501-no-tools
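If you patch standalone hosts with ESXCLI instead of vSphere Lifecycle Manager, the following is a minimal sketch of applying the standard image profile from an offline depot; the depot path and file name are assumptions and depend on where you store the offline bundle:
~# esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-8.0U3c-24414501-depot.zip -p ESXi-8.0U3c-24414501-standard
Place the host in maintenance mode before you run the command and reboot the host after the update completes.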
ESXi Images
Resolved Issues
ESXi_8.0.3-0.55.24414501
Affected VIBs
VMware_bootbank_esxio-base_8.0.3-0.55.24414501
VMware_bootbank_vcls-pod-crx_8.0.3-0.55.24414501
VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_gc-esxio_8.0.3-0.55.24414501
VMware_bootbank_drivervm-gpu-base_8.0.3-0.55.24414501
VMware_bootbank_trx_8.0.3-0.55.24414501
VMware_bootbank_clusterstore_8.0.3-0.55.24414501
VMware_bootbank_esx-xserver_8.0.3-0.55.24414501
VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers-esxio_8.0.3-0.55.24414501
VMware_bootbank_infravisor_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers_8.0.3-0.55.24414501
VMware_bootbank_bmcal-esxio_8.0.3-0.55.24414501
VMware_bootbank_vdfs_8.0.3-0.55.24414501
VMware_bootbank_esx-base_8.0.3-0.55.24414501
VMware_bootbank_vsan_8.0.3-0.55.24414501
VMware_bootbank_vds-vsip_8.0.3-0.55.24414501
VMware_bootbank_gc_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner-esxio_8.0.3-0.55.24414501
VMware_bootbank_bmcal_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner_8.0.3-0.55.24414501
VMware_bootbank_crx_8.0.3-0.55.24414501
VMware_bootbank_vsanhealth_8.0.3-0.55.24414501
VMware_bootbank_cpu-microcode_8.0.3-0.55.24414501
VMware_bootbank_esxio_8.0.3-0.55.24414501
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include
the rollup bulletin in the baseline to avoid failure during host patching.
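After patching, you can confirm the result from the ESXi Shell; this is an optional verification sketch, not a required step:
~# esxcli software vib list | grep esx-base
~# vmware -vl
The esx-base VIB should show version 8.0.3-0.55.24414501 and the host should report build 24414501 after a successful update.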
This patch updates the esxio-base, vcls-pod-crx, esxio-dvfilter-generic-fastpath, gc-esxio, drivervm-gpu-base, trx, clusterstore, esx-xserver, esx-dvfilter-generic-fastpath, native-misc-drivers-esxio, infravisor, native-misc-drivers, bmcal-esxio, vdfs, esx-base, vsan, vds-vsip, gc, esxio-combiner-esxio, bmcal, esxio-combiner, crx, vsanhealth, cpu-microcode, and esxio VIBs.
This patch resolves the following issue:
vSphere vMotion tasks fail with an error NamespaceMgr could not lock the db file
In rare cases, when virtual machines are configured with namespaces, an issue with the namespace database might cause migration of
such VMs with vSphere vMotion to fail. In the vSphere Client, you see errors such as:
Failed to receive migration. The source detected that the destination failed to resume.
An error occurred restoring the virtual machine state during migration. NamespaceMgr could not lock the db
file.
The destination ESXi host vmware.log shows the following messages:
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.checkpoint.migration.failedReceive] Failed to receive migration.
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.namespaceMgr.noLock] NamespaceMgr could not lock the db file.
The issue is more likely to occur on VMs with hardware version earlier than 19, and impacts only VMs provisioned on shared datastores
other than NFS, such as VMFS, vSAN, and vSphere Virtual Volumes.
This issue was reported as known in KB 369767 and is resolved in this release.
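If you need to confirm that a past migration failure matches this issue, you can search the destination VM log for the lock message; the datastore and VM directory names below are placeholders:
~# grep -i namespaceMgr /vmfs/volumes/<datastore>/<vm-name>/vmware.log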
esx-update_8.0.3-0.55.24414501
Due to their dependency on the esx-base VIB, the following VIBs are updated with build number and patch version changes, but
deliver no fixes: loadesx and esx-update.
esxio-update_8.0.3-0.55.24414501
Due to their dependency on the esx-base VIB, the following VIBs are updated with build number and patch version changes, but
deliver no fixes: loadesxio and esxio-update.
ESXi-8.0U3c-24414501-standard
Affected VIBs
VMware_bootbank_esxio-base_8.0.3-0.55.24414501
VMware_bootbank_vcls-pod-crx_8.0.3-0.55.24414501
VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_gc-esxio_8.0.3-0.55.24414501
VMware_bootbank_drivervm-gpu-base_8.0.3-0.55.24414501
VMware_bootbank_trx_8.0.3-0.55.24414501
VMware_bootbank_clusterstore_8.0.3-0.55.24414501
VMware_bootbank_esx-xserver_8.0.3-0.55.24414501
VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers-esxio_8.0.3-0.55.24414501
VMware_bootbank_infravisor_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers_8.0.3-0.55.24414501
VMware_bootbank_bmcal-esxio_8.0.3-0.55.24414501
VMware_bootbank_vdfs_8.0.3-0.55.24414501
VMware_bootbank_esx-base_8.0.3-0.55.24414501
VMware_bootbank_vsan_8.0.3-0.55.24414501
VMware_bootbank_vds-vsip_8.0.3-0.55.24414501
VMware_bootbank_gc_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner-esxio_8.0.3-0.55.24414501
VMware_bootbank_bmcal_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner_8.0.3-0.55.24414501
VMware_bootbank_crx_8.0.3-0.55.24414501
VMware_bootbank_vsanhealth_8.0.3-0.55.24414501
VMware_bootbank_cpu-microcode_8.0.3-0.55.24414501
VMware_bootbank_esxio_8.0.3-0.55.24414501
VMware_bootbank_esx-update_8.0.3-0.55.24414501
VMware_bootbank_loadesx_8.0.3-0.55.24414501
VMware_bootbank_esxio-update_8.0.3-0.55.24414501
VMware_bootbank_loadesxio_8.0.3-0.55.24414501
vSphere vMotion tasks fail with an error NamespaceMgr could not lock the db file
In rare cases, when virtual machines are configured with namespaces, an issue with the namespace database might cause migration of such VMs with vSphere vMotion to fail. In the vSphere Client, you see errors such as:
Failed to receive migration. The source detected that the destination failed to resume.
An error occurred restoring the virtual machine state during migration. NamespaceMgr could not lock the db
file.
The destination ESXi host vmware.log shows the following messages:
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.checkpoint.migration.failedReceive] Failed to receive migration.
[YYYY-MM-DDTHH:MM:SS] In(05) vmx - [msg.namespaceMgr.noLock] NamespaceMgr could not lock the db file.
The issue is more likely to occur on VMs with hardware version earlier than 19, and impacts only VMs provisioned on shared datastores
other than NFS, such as VMFS, vSAN, and vSphere Virtual Volumes.
This issue was reported as known in KB 369767 and is resolved in this release.
ESXi-8.0U3c-24414501-no-tools
Affected VIBs
VMware_bootbank_esxio-base_8.0.3-0.55.24414501
VMware_bootbank_vcls-pod-crx_8.0.3-0.55.24414501
VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_gc-esxio_8.0.3-0.55.24414501
VMware_bootbank_drivervm-gpu-base_8.0.3-0.55.24414501
VMware_bootbank_trx_8.0.3-0.55.24414501
VMware_bootbank_clusterstore_8.0.3-0.55.24414501
VMware_bootbank_esx-xserver_8.0.3-0.55.24414501
VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers-esxio_8.0.3-0.55.24414501
VMware_bootbank_infravisor_8.0.3-0.55.24414501
VMware_bootbank_native-misc-drivers_8.0.3-0.55.24414501
VMware_bootbank_bmcal-esxio_8.0.3-0.55.24414501
VMware_bootbank_vdfs_8.0.3-0.55.24414501
VMware_bootbank_esx-base_8.0.3-0.55.24414501
VMware_bootbank_vsan_8.0.3-0.55.24414501
VMware_bootbank_vds-vsip_8.0.3-0.55.24414501
VMware_bootbank_gc_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner-esxio_8.0.3-0.55.24414501
VMware_bootbank_bmcal_8.0.3-0.55.24414501
VMware_bootbank_esxio-combiner_8.0.3-0.55.24414501
VMware_bootbank_crx_8.0.3-0.55.24414501
VMware_bootbank_vsanhealth_8.0.3-0.55.24414501
VMware_bootbank_cpu-microcode_8.0.3-0.55.24414501
VMware_bootbank_esxio_8.0.3-0.55.24414501
VMware_bootbank_esx-update_8.0.3-0.55.24414501
VMware_bootbank_loadesx_8.0.3-0.55.24414501
VMware_bootbank_esxio-update_8.0.3-0.55.24414501
VMware_bootbank_loadesxio_8.0.3-0.55.24414501
ESXi-8.0U3c-24414501
Name: ESXi
Version: ESXi-8.0U3c-24414501
Category: Bugfix
Affected Components:
ESXi Component - core ESXi VIBs
ESXi Install/Upgrade Component
Workaround: Make sure you have a reference host of the respective version in the inventory. For example, use an ESXi 7.0 Update 2
reference host to update or edit an ESXi 7.0 Update 2 host profile.
VMNICs might be down after an upgrade to ESXi 8.0
If the peer physical switch of a VMNIC does not support Media Auto Detect, or Media Auto Detect is disabled, and the VMNIC link is set
down and then up, the link remains down after upgrade to or installation of ESXi 8.0.
Workaround: Use either of these two options:
1. Enable the option media-auto-detect in the BIOS settings by navigating to System Setup Main Menu, usually by pressing F2 or opening a virtual console, and then Device Settings > <specific Broadcom NIC> > Device Configuration Menu > Media Auto Detect. Reboot the host.
2. Alternatively, use an ESXCLI command similar to: esxcli network nic set -S <your speed> -D full -n <your nic>. With
this option, you also set a fixed speed to the link, and it does not require a reboot.
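For example, to set a hypothetical uplink vmnic0 to 10 Gbps full duplex (the NIC name and speed are illustrative values, not recommendations):
~# esxcli network nic set -S 10000 -D full -n vmnic0
You can confirm the link state and speed with esxcli network nic list.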
After upgrade to ESXi 8.0, you might lose some nmlx5_core driver module settings due to obsolete parameters
Some module parameters for the nmlx5_core driver, such as device_rss, drss and rss, are deprecated in ESXi 8.0 and any custom
values, different from the default values, are not kept after an upgrade to ESXi 8.0.
Workaround: Replace the values of the device_rss, drss, and rss parameters as follows (see the example command after this list):
device_rss: Use the DRSS parameter.
drss: Use the DRSS parameter.
rss: Use the RSS parameter.
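As a sketch, assuming you previously customized drss and rss and now want equivalent values for the uppercase parameters (the value 4 is a placeholder, not a recommendation):
~# esxcli system module parameters set -m nmlx5_core -p "DRSS=4 RSS=4"
~# esxcli system module parameters list -m nmlx5_core
Reboot the host for the module parameters to take effect.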
Second stage of vCenter Server restore procedure freezes at 90%
When you use the vCenter Server GUI installer or the vCenter Server Appliance Management Interface (VAMI) to restore a vCenter
from a file-based backup, the restore workflow might freeze at 90% with an error 401 Unable to authenticate user, even though
the task completes successfully in the backend. The issue occurs if the clock of the deployed machine differs from that of the NTP server and a time sync is required. The resulting clock skew might invalidate the running session of the GUI or VAMI.
Workaround: If you use the GUI installer, you can get the restore status by using the restore.job.get command from the
appliancesh shell. If you use VAMI, refresh your browser.
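A minimal sketch of checking the restore status from the appliance shell, assuming SSH access as root and the default appliancesh shell:
ssh root@<vcenter-fqdn>
Command> restore.job.get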
Miscellaneous Issues
RDMA over Converged Ethernet (RoCE) traffic might fail in an Enhanced Networking Stack (ENS) and VLAN environment with a Broadcom RDMA network interface controller (RNIC)
The VMware solution for high bandwidth, ENS, does not support MAC VLAN filters. However, an RDMA application that runs on a Broadcom RNIC in an ENS + VLAN environment requires a MAC VLAN filter. As a result, you might see some RoCE traffic disconnected. The issue is likely to occur in an NVMe over RDMA + ENS + VLAN environment, or in an ENS + VLAN + RDMA app environment, when an ESXi host reboots or an uplink goes up and down.
Workaround: None
The irdman driver might fail when you use Unreliable Datagram (UD) transport mode ULP for RDMA over Converged Ethernet
(RoCE) traffic
If for some reason you choose to use the UD transport mode upper layer protocol (ULP) for RoCE traffic, the irdman driver might fail.
This issue is unlikely to occur, as the irdman driver only supports iSCSI Extensions for RDMA (iSER), which uses ULPs in Reliable
Connection (RC) mode.
Workaround: Use ULPs with RC transport mode.
You might see compliance errors during upgrade to ESXi 8.0 Update 2b on servers with active Trusted Platform Module (TPM)
encryption and vSphere Quick Boot
If you use the vSphere Lifecycle Manager to upgrade your clusters to ESXi 8.0 Update 2b, in the vSphere Client you might see
compliance errors for hosts with active TPM encryption and vSphere Quick Boot.
Workaround: Ignore the compliance errors and proceed with the upgrade.
If IPv6 is deactivated, you might see 'Jumpstart plugin restore-networking activation failed' error during ESXi host boot
In the ESXi console, during the boot up sequence of a host, you might see the error banner Jumpstart plugin restore-networking
activation failed. The banner displays only when IPv6 is deactivated and does not indicate an actual error.
Workaround: Activate IPv6 on the ESXi host or ignore the message.
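If you prefer the command line over the vSphere Client, this sketch re-enables IPv6 from the ESXi Shell; a reboot is required for the change to take effect:
~# esxcli network ip set --ipv6-enabled=true
~# reboot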
Reset or restore of the ESXi system configuration in a vSphere system with DPUs might leave the DPUs in an invalid state
If you reset or restore the ESXi system configuration in a vSphere system with DPUs, for example, by selecting Reset System Configuration in the direct console, the operation might leave the DPUs in an invalid state. In the DCUI, you might see errors such as
Failed to reset system configuration. Note that this operation cannot be performed when a managed DPU is
present. A backend call to the -f force reboot option is not supported for ESXi installations with a DPU. Although ESXi 8.0 supports the
-f force reboot option, if you use reboot -f on an ESXi configuration with a DPU, the forceful reboot might cause an invalid state.
Workaround: Reset System Configuration in the direct console interface is temporarily disabled. Avoid resetting the ESXi system
configuration in a vSphere system with DPUs.
In a vCenter Server system with DPUs, if IPv6 is disabled, you cannot manage DPUs
Although the vSphere Client allows the operation, if you disable IPv6 on an ESXi host with DPUs, you cannot use the DPUs, because
the internal communication between the host and the devices depends on IPv6. The issue affects only ESXi hosts with DPUs.
Workaround: Make sure IPv6 is enabled on ESXi hosts with DPUs.
TCP connections intermittently drop on an ESXi host with Enhanced Networking Stack
If the sender VM is on an ESXi host with Enhanced Networking Stack, TCP checksum interoperability issues when the value of the TCP
checksum in a packet is calculated as 0xFFFF might cause the end system to drop or delay the TCP packet.
Workaround: Disable TCP checksum offloading on the sender VM on ESXi hosts with Enhanced Networking Stack. In Linux, you can
use the command sudo ethtool -K <interface> tx off.
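To confirm the offload state afterwards, assuming a Linux guest where eth0 is a hypothetical interface name:
sudo ethtool -k eth0 | grep tx-checksumming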
You might see a 10-minute delay in rebooting an ESXi host on HPE servers with a pre-installed Pensando DPU
In rare cases, HPE servers with a pre-installed Pensando DPU might take more than 10 minutes to reboot if the DPU fails. As a result, ESXi hosts might fail with a purple diagnostic screen, and the default wait time for the DPU is 10 minutes.
Workaround: None.
If you have a USB interface enabled in a remote management application that you use to install ESXi 8.0, you see an additional standard switch vSwitchBMC with uplink vusb0
Starting with vSphere 8.0, in both Integrated Dell Remote Access Controller (iDRAC) and HP Integrated Lights Out (ILO), when you have a USB interface enabled, vUSB or vNIC respectively, an additional standard switch vSwitchBMC with uplink vusb0 gets created
on the ESXi host. This is expected, in view of the introduction of data processing units (DPUs) on some servers but might cause the
VMware Cloud Foundation Bring-Up process to fail.
Workaround: Before vSphere 8.0 installation, disable the USB interface in the remote management application that you use by following
vendor documentation.
After vSphere 8.0 installation, use the ESXCLI command esxcfg-advcfg -s 0 /Net/BMCNetworkEnable to prevent the creation of a
virtual switch vSwitchBMC and associated portgroups on the next reboot of host.
See this script as an example:
~# esxcfg-advcfg -s 0 /Net/BMCNetworkEnable
The value of BMCNetworkEnable is 0 and the service is disabled.
~# reboot
On host reboot, no virtual switch, PortGroup and VMKNIC are created in the host related to remote management application network.
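To check the current value of the advanced option before or after the change, you can use a command such as:
~# esxcfg-advcfg -g /Net/BMCNetworkEnable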
If hardware offload mode is disabled on an NVIDIA BlueField DPU, virtual machines with a configured SR-IOV virtual function cannot power on
Hardware offload mode must be enabled on NVIDIA BlueField DPUs to allow virtual machines with a configured SR-IOV virtual function to power on and operate.
Workaround: Always use the default hardware offload mode enabled for NVIDIA BlueField DPUs when you have VMs with configured
SR-IOV virtual function connected to a virtual switch.
In the Virtual Appliance Management Interface (VAMI), you see a warning message during the pre-upgrade stage
As part of moving vSphere plug-ins to a remote plug-in architecture, vSphere 8.0 deprecates support for local plug-ins. If your 8.0 vSphere environment has local plug-ins, some breaking changes for such plug-ins might cause the pre-upgrade check by using VAMI to fail.
In the Pre-Update Check Results screen, you see an error such as:
Warning message: The compatibility of plug-in package(s) %s with the new vCenter Server version cannot be
validated. They may not function properly after vCenter Server upgrade.
Resolution: Please contact the plug-in vendor and make sure the package is compatible with the new vCenter
Server version.
Workaround: Refer to the VMware Compatibility Guide and VMware Product Interoperability Matrix or contact the plug-in vendors for
recommendations to make sure local plug-ins in your environment are compatible with vCenter Server 8.0 before you continue with the
upgrade. For more information, see the blog Deprecating the Local Plugins :- The Next Step in vSphere Client Extensibility Evolution
and VMware knowledge base article 87880.
Networking Issues
You cannot set the Maximum Transmission Unit (MTU) on a VMware vSphere Distributed Switch to a value larger than 9174 on
a Pensando DPU
If you have the vSphere Distributed Services Engine feature with a Pensando DPU enabled on your ESXi 8.0 system, you cannot set
the Maximum Transmission Unit (MTU) on a vSphere Distributed Switch to a value larger than 9174.
Workaround: None.
Connection-intensive RDMA workload might lead to loss of traffic on Intel Ethernet E810 Series devices with inbox driver
irdman-1.4.0.1
The inbox irdman driver version 1.4.0.1 does not officially support vSAN over RDMA. Tests running 10,000 RDMA connections, usual
for vSAN environments, might occasionally lose all traffic on Intel Ethernet E810 Series devices with NVM version 4.2 and irdman driver
version 1.4.0.1.
Workaround: None.
Transfer speed in IPv6 environments with active TCP segmentation offload is slow
In environments with active IPv6 TCP segmentation offload (TSO), transfer speed for Windows virtual machines with an e1000e virtual
NIC might be slow. The issue does not affect IPv4 environments.
Workaround: Deactivate TSO or use a vmxnet3 adapter instead of e1000e.
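One way to deactivate hardware TSO for IPv6 at the host level is the Net.UseHwTSO6 advanced option; this is a sketch under the assumption that a host-wide change, which affects all VMs on the host, is acceptable in your environment:
~# esxcli system settings advanced set -o /Net/UseHwTSO6 -i 0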
Capture of network packets by using the PacketCapture tool on ESXi does not work
Due to tightening of the rhttpproxy security policy, you can no longer use the PacketCapture tool as described in Collecting network
packets using the lightweight PacketCapture on ESXi.
Workaround: Use the pktcap-uw tool. For more information, see Capture and Trace Network Packets by Using the pktcap-uw Utility.
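A minimal pktcap-uw sketch that captures traffic on a hypothetical uplink vmnic0 into a file you can open in Wireshark (press Ctrl+C to stop the capture):
~# pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap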
You see link flapping on NICs that use the ntg3 driver of version 4.1.3 and later
When two NICs that use the ntg3 driver of versions 4.1.3 and later are connected directly, not to a physical switch port, link flapping
might occur. The issue does not occur on ntg3 drivers of versions earlier than 4.1.3 or the tg3 driver. This issue is not related to the
occasional Energy Efficient Ethernet (EEE) link flapping on such NICs. The fix for the EEE issue is to use a ntg3 driver of version 4.1.7
or later, or disable EEE on physical switch ports.
Workaround: Upgrade the ntg3 driver to version 4.1.8 and set the new module parameter noPhyStateSet to 1. The noPhyStateSet parameter defaults to 0 and is not required in most environments, unless they face the issue.
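A sketch of applying the workaround from the ESXi Shell after you upgrade the driver; a reboot is required for the module parameter to take effect:
~# esxcli system module parameters set -m ntg3 -p noPhyStateSet=1
~# esxcli system module parameters list -m ntg3 | grep noPhyStateSet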
When you migrate a VM from an ESXi host with a DPU device operating in SmartNIC (ECPF) Mode to an ESXi host with a DPU
device operating in traditional NIC Mode, overlay traffic might drop
When you use vSphere vMotion to migrate a VM attached to an overlay-backed segment from an ESXi host with a vSphere Distributed
Switch operating in offloading mode (where traffic forwarding logic is offloaded to the DPU) to an ESXi host with a VDS operating in a
non-offloading mode (where DPUs are used as a traditional NIC), the overlay traffic might drop after the migration.
Workaround: Deactivate and activate the virtual NIC on the destination ESXi host.
You cannot use Mellanox ConnectX-5, ConnectX-6 cards Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS)
mode in vSphere 8.0
Due to hardware limitations, ConnectX-5 and ConnectX-6 adapter cards do not support Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS) mode in vSphere 8.0.
Workaround: Use Mellanox ConnectX-6 Lx and ConnectX-6 Dx or later cards that support ENS Model 1 Level 2, and Model 2A.
Pensando DPUs do not support Link Layer Discovery Protocol (LLDP) on physical switch ports of ESXi hosts
When you enable LLDP on an ESXi host with a DPU, the host cannot receive LLDP packets.
Workaround: None.
Storage Issues
VASA API version does not automatically refresh after upgrade to vCenter Server 8.0
vCenter Server 8.0 supports VASA API version 4.0. However, after you upgrade your vCenter Server system to version 8.0, the VASA
API version might not automatically change to 4.0. You see the issue in two cases:
1. If a VASA provider that supports VASA API version 4.0 is registered with a previous version of VMware vCenter, the VASA API
version remains unchanged after you upgrade to VMware vCenter 8.0. For example, if you upgrade a VMware vCenter system of
version 7.x with a registered VASA provider that supports both VASA API versions 3.5 and 4.0, the VASA API version does not
automatically change to 4.0, even though the VASA provider supports VASA API version 4.0. After the upgrade, when you navigate
to vCenter Server > Configure > Storage Providers and expand the General tab of the registered VASA provider, you still see
VASA API version 3.5.
2. If you register a VASA provider that supports VASA API version 3.5 with a VMware vCenter 8.0 system and upgrade the VASA API
version to 4.0, even after the upgrade, you still see VASA API version 3.5.
Workaround: Unregister and re-register the VASA provider on the VMware vCenter 8.0 system.
vSphere vMotion operations of virtual machines residing on Pure-backed vSphere Virtual Volumes storage might time out
vSphere vMotion operations for VMs residing on vSphere Virtual Volumes datastores depend on the vSphere API for Storage
Awareness (VASA) provider and the timing of VASA operations to complete. In rare cases, and under specific conditions when the
VASA provider is under heavy load, response time from a Pure VASA provider might cause ESXi to exceed the timeout limit of 120 sec
for each phase of vSphere vMotion tasks. In environments with multiple stretched storage containers you might see further delays in the
Pure VASA provider response. As a result, running vSphere vMotion tasks time out and cannot complete.
Workaround: Reduce parallel workflows, especially on Pure storage on vSphere Virtual Volumes datastores exposed from the same
VASA provider, and retry the vSphere vMotion task.
You cannot create snapshots of virtual machines due to an error in the Content Based Read Cache (CBRC) that a digest
operation has failed
A rare race condition when assigning a content ID during the update of the CBRC digest file might cause a discrepancy between the
content ID in the data disk and the digest disk. As a result, you cannot create virtual machine snapshots. You see an error such as An
error occurred while saving the snapshot: A digest operation has failed in the backtrace. The snapshot creation task
completes upon retry.
Workaround: Retry the snapshot creation task.
vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File
Copy (NFC) manager
Migrations to a vSAN datastore by using vSphere Storage vMotion of virtual machines that have at least one snapshot and more than
one virtual disk with different storage policy might fail. The issue occurs due to an unauthenticated session of the NFC manager
because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.
Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only migration of the remaining disks.
In a vSphere Virtual Volumes stretched storage cluster environment, some VMs might fail to power on after recovering from a
cluster-wide APD
In high scale Virtual Volumes stretched storage cluster environments, after recovering from a cluster-wide APD, due to the high load
during the recovery some VMs might fail to power on even though the datastores and protocol endpoints are online and accessible.
Workaround: Migrate the affected VMs to a different ESXi host and power on the VMs.
You see "Object or item referred not found" error for tasks on a First Class Disk (FCD)
Due to a rare storage issue, during the creation of a snapshot of an attached FCD, the disk might be deleted from the Managed Virtual
Disk Catalog. If you do not reconcile the Managed Virtual Disk Catalog, all consecutive operations on such a FCD fail with the Object
or item referred not found error.
Workaround: See Reconciling Discrepancies in the Managed Virtual Disk Catalog.
Workaround: Reboot the ESXi host and re-run the vSphere Lifecycle Manager compliance check operation, which includes the post-remediation scan.