VMware ESXi 8.0 Update 2d Release Notes
Introduction
Important
ESXi 8.0 Update 2d delivers fixes for CVE-2025-22224, CVE-2025-22225, and CVE-2025-22226. If you do not plan to update
your environment to ESXi 8.0 Update 3d (build # 24585383), use 8.0 Update 2d to apply these security fixes to ESXi hosts at
version 8.0 Update 2c (build # 23825572) and earlier. The supported update path from 8.0 Update 2d is to ESXi 8.0 Update
3d or later.
Caution
An update from ESXi 8.0 Update 2d to 8.0 Update 3 might expose your vSphere system to security vulnerabilities, because such an update is
considered back-in-time. For more information, see the Product Interoperability Matrix.
What's New
This release resolves CVE-2025-22224, CVE-2025-22225, and CVE-2025-22226. For more information on these vulnerabilities and
their impact on Broadcom products, see VMSA-2025-0004.
Build: 24585300
sha256checksum: 225843151c6858279da8956107917db187948d5ff6af1964314c8dff2c3b6d5f
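To confirm that a downloaded depot or offline bundle matches the published checksum, you can compute its SHA-256 hash on a Linux workstation before staging it. The file name below is only an example; substitute the name of the bundle you downloaded:
sha256sum VMware-ESXi-8.0U2d-24585300-depot.zip
Compare the output with the sha256checksum value listed above.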
Components
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 8.0.
Image Profiles
VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to receive
new bug fixes.
ESXi-8.0U2d-24585300-standard
ESXi-8.0U2d-24585300-no-tools
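As a rough sketch, you can apply one of these image profiles from the ESXi shell with esxcli after placing the host in maintenance mode. The depot path below is only an example and must point to the offline depot ZIP that you actually downloaded:
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-8.0U2d-24585300-depot.zip
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-8.0U2d-24585300-depot.zip -p ESXi-8.0U2d-24585300-standard
reboot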
ESXi Image
ESXi 8.0 U2d - 24585300
Release date: March 04, 2025
Category: Security
Details: Security fixes only
For information about the individual components and bulletins, see the Resolved Issues section.
Resolved Issues
ESXi_8.0.2-0.45.24585300
Affected VIBs
VMware_bootbank_esx-base_8.0.2-0.45.24585300
VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.2-0.45.24585300
VMware_bootbank_esxio-base_8.0.2-0.45.24585300
VMware_bootbank_bmcal_8.0.2-0.45.24585300
VMware_bootbank_native-misc-drivers-esxio_8.0.2-0.45.24585300
VMware_bootbank_vsan_8.0.2-0.45.24585300
VMware_bootbank_esxio-combiner_8.0.2-0.45.24585300
VMware_bootbank_cpu-microcode_8.0.2-0.45.24585300
VMware_bootbank_vsanhealth_8.0.2-0.45.24585300
VMware_bootbank_gc_8.0.2-0.45.24585300
VMware_bootbank_esx-xserver_8.0.2-0.45.24585300
VMware_bootbank_bmcal-esxio_8.0.2-0.45.24585300
VMware_bootbank_vdfs_8.0.2-0.45.24585300
VMware_bootbank_esxio-combiner-esxio_8.0.2-0.45.24585300
VMware_bootbank_crx_8.0.2-0.45.24585300
VMware_bootbank_vds-vsip_8.0.2-0.45.24585300
VMware_bootbank_drivervm-gpu-base_8.0.2-0.45.24585300
VMware_bootbank_infravisor_8.0.2-0.45.24585300
VMware_bootbank_trx_8.0.2-0.45.24585300
VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.2-0.45.24585300
VMware_bootbank_esxio_8.0.2-0.45.24585300
VMware_bootbank_native-misc-drivers_8.0.2-0.45.24585300
VMware_bootbank_gc-esxio_8.0.2-0.45.24585300
VMware_bootbank_clusterstore_8.0.2-0.45.24585300
This patch updates the esx-base VIB. Due to their dependency on the esx-base VIB, the following VIBs are updated with build
number and patch version changes, but deliver no fixes: esx-dvfilter-generic-fastpath, esxio-base, bmcal, native-misc-
drivers-esxio, vsan, esxio-combiner, cpu-microcode, vsanhealth, gc, esx-xserver, bmcal-esxio, vdfs, esxio-
combiner-esxio, crx, vds-vsip, drivervm-gpu-base, infravisor, trx, esxio-dvfilter-generic-fastpath, esxio,
native-misc-drivers, gc-esxio, and clusterstore. This patch resolves the following issue:
This release resolves CVE-2025-22224, CVE-2025-22225, and CVE-2025-22226. For more information on these vulnerabilities and
their impact on Broadcom products, see VMSA-2025-0004.
esx-update_8.0.2-0.45.24585300
Affected VIBs
VMware_bootbank_esx-update_8.0.2-0.45.24585300
VMware_bootbank_loadesx_8.0.2-0.45.24585300
Due to their dependency on the esx-base VIB, the following VIBs are updated with build number and patch version changes, but
deliver no fixes: loadesx and esx-update.
esxio-update_8.0.2-0.45.24585300
Affected VIBs
VMware_bootbank_esxio-update_8.0.2-0.45.24585300
VMware_bootbank_loadesxio_8.0.2-0.45.24585300
Due to their dependency on the esx-base VIB, the following VIBs are updated with build number and patch version changes, but
deliver no fixes: loadesxio and esxio-update.
ESXi-8.0U2d-24585300-standard
Affected VIBs
VMware_bootbank_esx-base_8.0.2-0.45.24585300
VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.2-0.45.24585300
VMware_bootbank_esxio-base_8.0.2-0.45.24585300
VMware_bootbank_bmcal_8.0.2-0.45.24585300
VMware_bootbank_native-misc-drivers-esxio_8.0.2-0.45.24585300
VMware_bootbank_vsan_8.0.2-0.45.24585300
VMware_bootbank_esxio-combiner_8.0.2-0.45.24585300
VMware_bootbank_cpu-microcode_8.0.2-0.45.24585300
VMware_bootbank_vsanhealth_8.0.2-0.45.24585300
VMware_bootbank_gc_8.0.2-0.45.24585300
VMware_bootbank_esx-xserver_8.0.2-0.45.24585300
VMware_bootbank_bmcal-esxio_8.0.2-0.45.24585300
VMware_bootbank_vdfs_8.0.2-0.45.24585300
VMware_bootbank_esxio-combiner-esxio_8.0.2-0.45.24585300
VMware_bootbank_crx_8.0.2-0.45.24585300
VMware_bootbank_vds-vsip_8.0.2-0.45.24585300
VMware_bootbank_drivervm-gpu-base_8.0.2-0.45.24585300
VMware_bootbank_infravisor_8.0.2-0.45.24585300
VMware_bootbank_trx_8.0.2-0.45.24585300
VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.2-0.45.24585300
VMware_bootbank_esxio_8.0.2-0.45.24585300
VMware_bootbank_native-misc-drivers_8.0.2-0.45.24585300
VMware_bootbank_gc-esxio_8.0.2-0.45.24585300
VMware_bootbank_clusterstore_8.0.2-0.45.24585300
VMware_bootbank_esx-update_8.0.2-0.45.24585300
VMware_bootbank_loadesx_8.0.2-0.45.24585300
VMware_bootbank_esxio-update_8.0.2-0.45.24585300
VMware_bootbank_loadesxio_8.0.2-0.45.24585300
ESXi-8.0U2d-24585300-no-tools
Affected VIBs
VMware_bootbank_esx-base_8.0.2-0.45.24585300
VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.2-0.45.24585300
VMware_bootbank_esxio-base_8.0.2-0.45.24585300
VMware_bootbank_bmcal_8.0.2-0.45.24585300
VMware_bootbank_native-misc-drivers-esxio_8.0.2-0.45.24585300
VMware_bootbank_vsan_8.0.2-0.45.24585300
VMware_bootbank_esxio-combiner_8.0.2-0.45.24585300
VMware_bootbank_cpu-microcode_8.0.2-0.45.24585300
VMware_bootbank_vsanhealth_8.0.2-0.45.24585300
VMware_bootbank_gc_8.0.2-0.45.24585300
VMware_bootbank_esx-xserver_8.0.2-0.45.24585300
VMware_bootbank_bmcal-esxio_8.0.2-0.45.24585300
VMware_bootbank_vdfs_8.0.2-0.45.24585300
VMware_bootbank_esxio-combiner-esxio_8.0.2-0.45.24585300
VMware_bootbank_crx_8.0.2-0.45.24585300
VMware_bootbank_vds-vsip_8.0.2-0.45.24585300
VMware_bootbank_drivervm-gpu-base_8.0.2-0.45.24585300
VMware_bootbank_infravisor_8.0.2-0.45.24585300
VMware_bootbank_trx_8.0.2-0.45.24585300
VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.2-0.45.24585300
VMware_bootbank_esxio_8.0.2-0.45.24585300
VMware_bootbank_native-misc-drivers_8.0.2-0.45.24585300
VMware_bootbank_gc-esxio_8.0.2-0.45.24585300
VMware_bootbank_clusterstore_8.0.2-0.45.24585300
VMware_bootbank_esx-update_8.0.2-0.45.24585300
VMware_bootbank_loadesx_8.0.2-0.45.24585300
VMware_bootbank_esxio-update_8.0.2-0.45.24585300
VMware_bootbank_loadesxio_8.0.2-0.45.24585300
Name: ESXi
Category: Bugfix
Affected Components:
ESXi Component
ESXi Install/Upgrade Component
Known Issues
Some module parameters for the nmlx5_core driver, such as device_rss, drss, and rss, are deprecated in ESXi 8.0, and any custom
values that differ from the defaults are not retained after an upgrade to ESXi 8.0.
Workaround: Replace the values of the device_rss, drss and rss parameters as follows:
device_rss: Use the DRSS parameter.
drss: Use the DRSS parameter.
rss: Use the RSS parameter.
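As a minimal sketch, assuming the DRSS and RSS parameters accept the same numeric values you used before the upgrade (the values below are placeholders; check the driver documentation for valid settings), you can reapply them from the ESXi shell:
esxcli system module parameters list -m nmlx5_core
esxcli system module parameters set -m nmlx5_core -p "DRSS=4 RSS=4"
A reboot of the ESXi host is required for module parameter changes to take effect.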
Second stage of vCenter Server restore procedure freezes at 90%
When you use the vCenter Server GUI installer or the vCenter Server Appliance Management Interface (VAMI) to restore a vCenter
from a file-based backup, the restore workflow might freeze at 90% with an error 401 Unable to authenticate user, even though
the task completes successfully in the backend. The issue occurs if the deployed machine's time differs from that of the NTP server
and a time sync is required. As a result of the time sync, the clock skew might invalidate the running session of the GUI or VAMI.
Workaround: If you use the GUI installer, you can get the restore status by using the restore.job.get command from the
appliancesh shell. If you use VAMI, refresh your browser.
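For example, assuming SSH access to the appliance is enabled and appliancesh is the default shell (the host name below is hypothetical), you can check the restore status as follows:
ssh root@vcsa.example.com
Command> restore.job.get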
Miscellaneous Issues
RDMA over Converged Ethernet (RoCE) traffic might fail in Enhanced Networking Stack (ENS) and VLAN environment, and a
Broadcom RDMA network interface controller (RNIC)
The VMware solution for high bandwidth, ENS, does not support MAC VLAN filters. However, an RDMA application that runs on a
Broadcom RNIC in an ENS + VLAN environment requires a MAC VLAN filter. As a result, some RoCE traffic might be
disconnected. The issue is likely to occur in an NVMe over RDMA + ENS + VLAN environment, or in an ENS + VLAN + RDMA application
environment, when an ESXi host reboots or an uplink goes up and down.
Workaround: None.
If a PCI passthrough is active on a DPU during the shutdown or restart of an ESXi host, the host fails with a purple diagnostic
screen
If an active virtual machine has a PCI passthrough to a DPU at the time of shutdown or reboot of an ESXi host, the host fails with a
purple diagnostic screen. The issue is specific to systems with DPUs and occurs only with VMs that use PCI passthrough to the DPU.
Workaround: Before shutdown or reboot of an ESXi host, make sure the host is in maintenance mode, or that no VMs that use PCI
passthrough to a DPU are running. If you use auto start options for a virtual machine, the Autostart manager stops such VMs before
shutdown or reboot of a host.
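As a rough sketch, you can place the host in maintenance mode from the ESXi shell before the shutdown; the operation completes only after all running VMs on the host are powered off or migrated:
esxcli system maintenanceMode set --enable true
esxcli system maintenanceMode get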
You cannot mount an IPv6-based NFS 3 datastore with VMkernel port binding by using ESXCLI commands
When you try to mount an NFS 3 datastore with an IPv6 server address and VMkernel port binding by using an ESXCLI command, the
task fails with an error such as:
[:~] esxcli storage nfs add -I fc00:xxx:xxx:xx::xxx:vmk1 -s share1 -v volume1
Validation of vmknic failed Instance(defaultTcpipStack, xxx:xxx:xx::xxx:vmk1) Input(): Not found:
The issue is specific to NFS 3 datastores with an IPv6 server address and VMkernel port binding.
Workaround: Use the vSphere Client as an alternative to mount IPv6-based NFSv3 datastores with VMkernel port binding.
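After mounting the datastore through the vSphere Client, you can verify from the ESXi shell that the NFS 3 datastore is present, for example:
esxcli storage nfs list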
Reset or restore of the ESXi system configuration in a vSphere system with DPUs might leave the DPUs in an invalid state
If you reset or restore the ESXi system configuration in a vSphere system with DPUs, for example, by selecting Reset System
Configuration in the direct console, the operation might leave the DPUs in an invalid state. In the DCUI, you might see errors such as
Failed to reset system configuration. Note that this operation cannot be performed when a managed DPU is
present. A backend call to the -f force reboot option is not supported for ESXi installations with a DPU. Although ESXi 8.0 supports the
-f force reboot option, if you use reboot -f on an ESXi configuration with a DPU, the forceful reboot might cause an invalid state.
Workaround: Reset System Configuration in the direct console interface is temporarily disabled. Avoid resetting the ESXi system
configuration in a vSphere system with DPUs.
In a vCenter Server system with DPUs, if IPv6 is disabled, you cannot manage DPUs
Although the vSphere Client allows the operation, if you disable IPv6 on an ESXi host with DPUs, you cannot use the DPUs, because
the internal communication between the host and the devices depends on IPv6. The issue affects only ESXi hosts with DPUs.
Workaround: Make sure IPv6 is enabled on ESXi hosts with DPUs.
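A minimal sketch of checking and re-enabling IPv6 from the ESXi shell; the change takes effect only after a reboot:
esxcli network ip get
esxcli network ip set --ipv6-enabled true
reboot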
You might see a 10-minute delay in rebooting an ESXi host on an HPE server with a pre-installed Pensando DPU
In rare cases, HPE servers with a pre-installed Pensando DPU might take more than 10 minutes to reboot in case of a failure of the DPU.
As a result, ESXi hosts might fail with a purple diagnostic screen, and the default wait time is 10 minutes.
Workaround: None.
If you have a USB interface enabled in a remote management application that you use to install ESXi 8.0, you see an
additional standard switch vSwitchBMC with uplink vusb0
Starting with vSphere 8.0, in both Integrated Dell Remote Access Controller (iDRAC) and HP Integrated Lights Out (ILO), when you
have a USB interface enabled, vUSB or vNIC respectively, an additional standard switch vSwitchBMC with uplink vusb0 gets created
on the ESXi host. This is expected, in view of the introduction of data processing units (DPUs) on some servers, but it might cause the
VMware Cloud Foundation Bring-Up process to fail.
Workaround: Before vSphere 8.0 installation, disable the USB interface in the remote management application that you use, by following
the vendor documentation.
After vSphere 8.0 installation, use the command esxcfg-advcfg -s 0 /Net/BMCNetworkEnable to prevent the creation of the
virtual switch vSwitchBMC and associated portgroups on the next reboot of the host.
See this script as an example:
~# esxcfg-advcfg -s 0 /Net/BMCNetworkEnable
The value of BMCNetworkEnable is 0 and the service is disabled.
~# reboot
After the host reboots, no virtual switch, port group, or VMkernel NIC related to the remote management application network is created on the host.
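To confirm that the setting persisted, you can read it back after the reboot and list the virtual switches, for example:
esxcfg-advcfg -g /Net/BMCNetworkEnable
esxcfg-vswitch -l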
If hardware offload mode is disabled on an NVIDIA BlueField DPU, virtual machines with a configured SR-IOV virtual function
cannot power on
Hardware offload mode must be enabled on NVIDIA BlueField DPUs to allow virtual machines with a configured SR-IOV virtual function to
power on and operate.
Workaround: Always keep the default hardware offload mode enabled on NVIDIA BlueField DPUs when you have VMs with a configured
SR-IOV virtual function connected to a virtual switch.
In the Virtual Appliance Management Interface (VAMI), you see a warning message during the pre-upgrade stage
As part of moving vSphere plug-ins to a remote plug-in architecture, vSphere 8.0 deprecates support for local plug-ins. If your 8.0 vSphere
environment has local plug-ins, some breaking changes for such plug-ins might cause the pre-upgrade check by using VAMI to fail.
In the Pre-Update Check Results screen, you see an error such as:
Warning message: The compatibility of plug-in package(s) %s with the new vCenter Server version cannot be
validated. They may not function properly after vCenter Server upgrade.
Resolution: Please contact the plug-in vendor and make sure the package is compatible with the new vCenter
Server version.
Workaround: Refer to the VMware Compatibility Guide and VMware Product Interoperability Matrix or contact the plug-in vendors for
recommendations to make sure local plug-ins in your environment are compatible with vCenter Server 8.0 before you continue with the
upgrade. For more information, see the blog Deprecating the Local Plugins :- The Next Step in vSphere Client Extensibility Evolution
and VMware knowledge base article 87880.
You cannot remove a PCI passthrough device assigned to a virtual Non-Uniform Memory Access (NUMA) node from a virtual
machine with CPU Hot Add enabled
By default, when you enable CPU Hot Add to allow the addition of vCPUs to a running virtual machine, virtual NUMA topology
is deactivated. However, if a PCI passthrough device is assigned to a NUMA node, attempts to remove the device fail with an error. In the
vSphere Client, you see messages such as Invalid virtual machine configuration. Virtual NUMA cannot be configured
when CPU hotadd is enabled.
Workaround: See VMware knowledge base article 89638.
If you configure a VM at HW version earlier than 20 with a Vendor Device Group, such VMs might not work as expected
Vendor Device Groups, which enable binding of high-speed networking devices and the GPU, are supported only on VMs with HW
version 20 and later, but you are not prevented from configuring a VM at a HW version earlier than 20 with a Vendor Device Group. Such VMs
might not work as expected: for example, they might fail to power on.
Workaround: Ensure that the VM HW version is 20 or later before you configure a Vendor Device Group in that VM.
Networking Issues
ESXi reboot takes long due to NFS server mount timeout
When you have multiple mounts on an NFS server that is not accessible, ESXi retries connection to each mount for 30 seconds, which
might add up to minutes of ESXi reboot delay, depending on the number of mounts.
Workaround: ESXi 8.0 Update 1 adds a configurable option to override the default mount timeout: esxcfg-advcfg -s
<timeout val> /NFS/MountTimeout. For example, to reconfigure the mount timeout to 10 seconds, run the command
esxcfg-advcfg -s 10 /NFS/MountTimeout. Use the command esxcfg-advcfg -g /NFS/MountTimeout to verify the
currently configured mount timeout.
You cannot set the Maximum Transmission Unit (MTU) on a VMware vSphere Distributed Switch to a value larger than 9174 on
a Pensando DPU
If you have the vSphere Distributed Services Engine feature with a Pensando DPU enabled on your ESXi 8.0 system, you cannot set
the Maximum Transmission Unit (MTU) on a vSphere Distributed Switch to a value larger than 9174.
Workaround: None.
You see link flapping on NICs that use the ntg3 driver of version 4.1.3 and later
When two NICs that use the ntg3 driver of versions 4.1.3 and later are connected directly, not to a physical switch port, link flapping
might occur. The issue does not occur on ntg3 drivers of versions earlier than 4.1.3 or the tg3 driver. This issue is not related to the
occasional Energy Efficient Ethernet (EEE) link flapping on such NICs. The fix for the EEE issue is to use a ntg3 driver of version 4.1.7
or later, or disable EEE on physical switch ports.
Workaround: Upgrade the ntg3 driver to version 4.1.8 and set the new module parameter noPhyStateSet to 1. The noPhyStateSet
parameter defaults to 0 and is not required in most environments, except in environments that face this issue.
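A minimal sketch of setting the parameter from the ESXi shell, assuming the ntg3 driver has already been upgraded to 4.1.8; the change takes effect after a reboot:
esxcli system module parameters set -m ntg3 -p "noPhyStateSet=1"
esxcli system module parameters list -m ntg3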
VMware NSX installation or upgrade in a vSphere environment with DPUs might fail with a connectivity error
An intermittent timing issue on the ESXi host side might cause NSX installation or upgrade in a vSphere environment with DPUs to fail.
In the nsxapi.log file you see logs such as Failed to get SFHC response. MessageType MT_SOFTWARE_STATUS.
Workaround: Wait for 10 minutes and retry the NSX installation or upgrade.
If you do not reboot an ESXi host after you enable or disable SR-IOV with the icen driver, when you configure a transport node
in ENS Interrupt mode on that host, some virtual machines might not get DHCP addresses
If you enable or disable SR-IOV with the icen driver on an ESXi host and configure a transport node in ENS Interrupt mode, some Rx
(receive) queues might not work if you do not reboot the host. As a result, some virtual machines might not get DHCP addresses.
Workaround: Either add a transport node profile directly, without enabling SR-IOV, or reboot the ESXi host after you enable or disable
SR-IOV.
You cannot use Mellanox ConnectX-5, ConnectX-6 cards Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS)
mode in vSphere 8.0
Due to hardware limitations, ConnectX-5 and ConnectX-6 adapter cards do not support Model 1 Level 2 and Model 2 for Enhanced
Network Stack (ENS) mode in vSphere 8.0.
Workaround: Use Mellanox ConnectX-6 Lx and ConnectX-6 Dx or later cards that support ENS Model 1 Level 2, and Model 2A.
Pensando DPUs do not support Link Layer Discovery Protocol (LLDP) on physical switch ports of ESXi hosts
When you enable LLDP on an ESXi host with a DPU, the host cannot receive LLDP packets.
Workaround: None.
Storage Issues
VASA API version does not automatically refresh after upgrade to vCenter Server 8.0
vCenter Server 8.0 supports VASA API version 4.0. However, after you upgrade your vCenter Server system to version 8.0, the VASA
API version might not automatically change to 4.0. You see the issue in two cases:
1. If a VASA provider that supports VASA API version 4.0 is registered with a previous version of VMware vCenter, the VASA API
version remains unchanged after you upgrade to VMware vCenter 8.0. For example, if you upgrade a VMware vCenter system of
version 7.x with a registered VASA provider that supports both VASA API versions 3.5 and 4.0, the VASA API version does not
automatically change to 4.0, even though the VASA provider supports VASA API version 4.0. After the upgrade, when you navigate
to vCenter Server > Configure > Storage Providers and expand the General tab of the registered VASA provider, you still see
VASA API version 3.5.
2. If you register a VASA provider that supports VASA API version 3.5 with a VMware vCenter 8.0 system and upgrade the VASA API
version to 4.0, even after the upgrade, you still see VASA API version 3.5.
Workaround: Unregister and re-register the VASA provider on the VMware vCenter 8.0 system.
vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File
Copy (NFC) manager
Migrations to a vSAN datastore by using vSphere Storage vMotion of virtual machines that have at least one snapshot and more than
one virtual disk with different storage policies might fail. The issue occurs due to an unauthenticated session of the NFC manager,
because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.
Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only
migration of the remaining disks.
You cannot create snapshots of virtual machines due to an error in the Content Based Read Cache (CBRC) that a digest
operation has failed
A rare race condition when assigning a content ID during the update of the CBRC digest file might cause a discrepancy between the
content ID in the data disk and the digest disk. As a result, you cannot create virtual machine snapshots. You see an error such as An
error occurred while saving the snapshot: A digest operation has failed in the backtrace. The snapshot creation task
completes upon retry.
Workaround: Retry the snapshot creation task.
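For example, assuming you prefer the ESXi shell over the vSphere Client, you can retry the snapshot from the command line; the VM ID (42) and the snapshot name and description are placeholders:
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.create 42 retry-snapshot "Retry after digest error" 0 0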
ESXi hosts might intermittently disconnect from vCenter Server if the vSphere virtual infrastructure continuously uses more than
90% of its hardware capacity
On rare occasions, if the vSphere virtual infrastructure continuously uses more than 90% of its hardware capacity, some ESXi hosts
might intermittently disconnect from vCenter Server. The connection typically restores within a few seconds.
Workaround: If the connection to vCenter Server does not restore within a few seconds, reconnect the ESXi hosts manually by using the
vSphere Client.
In the vSphere Client, you do not see banner notifications for historical data imports
Due to a backend issue, you do not see banner notifications for background migration of historical data in the vSphere Client.
Workaround: Use the vCenter Server Management Interface as an alternative to the vSphere Client. For more information, see Monitor
and Manage Historical Data Migration.
You see an error for Cloud Native Storage (CNS) block volumes created by using API in a mixed vCenter environment
If your environment has vCenter Server systems of versions 8.0 and 7.x, creating a Cloud Native Storage (CNS) block volume by using
the API succeeds, but you might see an error in the vSphere Client when you navigate to the CNS volume details. You see an error
such as Failed to extract the requested data. Check vSphere Client logs for details. + TypeError: Cannot read
properties of null (reading 'cluster'). The issue occurs only if you review volumes managed by the 7.x vCenter Server by
using the vSphere Client of an 8.0 vCenter Server.
Workaround: Log in to vSphere Client on a vCenter Server system of version 7.x to review the volume properties.
ESXi hosts might become unresponsive, and you see a vpxa dump file due to a rare condition of insufficient file descriptors
for the request queue on vpxa
In rare cases, when requests to the vpxa service take a long time, for example while waiting for access to a slow datastore, the request
queue on vpxa might exceed the limit of file descriptors. As a result, ESXi hosts might briefly become unresponsive, and you see a vpxa-
zdump.00* file in the /var/core directory. The vpxa logs contain the line Too many open files.
Workaround: None. The vpxa service automatically restarts and corrects the issue.
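To confirm that the vpxa service is running again and to locate the dump files for a support request, you can run, for example:
ls -l /var/core/vpxa-zdump*
/etc/init.d/vpxa status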
If you use custom update repository with untrusted certificates, vCenter Server upgrade or update by using vCenter Lifecycle
Manager workflows to vSphere 8.0 might fail
If you use a custom update repository with self-signed certificates that the VMware Certificate Authority (VMCA) does not trust, vCenter
Lifecycle Manager fails to download files from such a repository. As a result, vCenter Server upgrade or update operations by using
vCenter Lifecycle Manager workflows fail with the error Failed to load the repository manifest data for the configured
upgrade.
Workaround: Use CLI, the GUI installer, or the Virtual Appliance Management Interface (VAMI) to perform the upgrade. For more
information, see VMware knowledge base article 89493.
operation.
This is a rare issue, caused by an intermittent timeout of the post-remediation scan on the DPU.
Workaround: Reboot the ESXi host and re-run the vSphere Lifecycle Manager compliance check operation, which includes the post-
remediation scan.