
v2.5 | October 2023 | BP-2052

BEST PRACTICES

Cisco ACI with Nutanix


Copyright
Copyright 2023 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and
intellectual property laws. Nutanix and the Nutanix logo are registered trademarks of
Nutanix, Inc. in the United States and/or other jurisdictions. All other brand and product
names mentioned herein are for identification purposes only and may be trademarks of
their respective holders.

Contents

1. Executive Summary................................................................................. 5
Document Version History.................................................................................................................. 5

2. Cisco ACI Overview.................................................................................7


Leaf-Spine Architecture and Encapsulation........................................................................................8
Use Cases and Automation................................................................................................................8

3. Cisco ACI and Nutanix Recommended Topologies..............................9

4. General ACI Best Practices for Nutanix.............................................. 11


Physical Connections........................................................................................................................ 11
Bridge Domains.................................................................................................................................11
Endpoint Groups and Contracts....................................................................................................... 11
Switch Port Channel Configuration: ACI...........................................................................................12
Switch Port Channel Configuration: Nutanix AHV............................................................................13
Switch Port Channel Configuration: VMware ESXi.......................................................................... 13
Default Virtual Switches.................................................................................................................... 14

5. ACI Best Practices for Nutanix and AHV............................................ 15


VMM Domain Best Practices for AHV..............................................................................................15
Physical Domain Best Practices for AHV......................................................................................... 16

6. ACI Best Practices for Nutanix and ESXi............................................20


vSphere Standard Switch................................................................................................................. 20
vSphere Distributed Switch...............................................................................................................22

7. Conclusion.............................................................................................. 25

8. Appendix................................................................................................. 26
Best Practices Checklist................................................................................................................... 26
References........................................................................................................................................ 29
About Nutanix.............................................................................................30
List of Figures.............................................................................................................................................31

1. Executive Summary
Cisco Application Centric Infrastructure (ACI)™ empowers your applications by
automatically translating application requirements into infrastructure configuration.
Combining the power of software-defined networking through Cisco ACI with the Nutanix
Cloud Platform enables you to build a datacenter that performs well and is easy to
manage and scale, freeing IT to focus on the applications instead of the infrastructure.
Cisco ACI defines your desired network state using GUI or API-driven policy. This policy-
based approach to software-defined networking enables the network to scale beyond
the limits of an imperative, controller-oriented model. ACI integrates intelligent control
of hardware switches in a leaf-spine topology with management and automation for
software virtual switches. Using this policy framework, ACI delivers tenant isolation,
microsegmentation, automation, programmability, ease of management, and deep
network visibility.
With the Cisco ACI Virtual Machine Manager (VMM) integration for Nutanix AHV,
introduced in ACI 6.0(3), you can easily configure networks, security, and visibility in
AHV. Cisco and Nutanix have developed recommendations for deploying Nutanix in a
Cisco ACI environment to achieve maximum performance and reliability. Refer to the
best practices checklist in the appendix for a summary of these recommendations.

Document Version History


Version Number | Published | Notes
1.0 | September 2016 | Original publication.
2.0 | August 2019 | Major technical updates throughout.
2.1 | September 2019 | Removed static-channel mode because it's not compatible with LCM or Foundation.
2.2 | April 2021 | Updated the Nutanix overview and the General ACI Best Practices for Nutanix section.
2.3 | May 2022 | Added proxy ARP warning.
2.4 | May 2023 | Content refresh.
2.5 | October 2023 | Added Cisco ACI VMM integration for Nutanix AHV and updates for Cisco Compute Hyperconverged for Nutanix.


2. Cisco ACI Overview


Cisco ACI is an application-centered networking fabric that can provision and enforce
application policies in both physical and virtual switches. The ACI fabric consists of
physical Cisco Nexus 9000 series switches running in ACI mode and a cluster of at least
three centrally managed Application Policy Infrastructure Controller (APIC) servers.
Cisco ACI uses a declarative policy model, so you can configure policy centrally in the
APIC cluster; the APIC cluster then pushes the policy out to all leaf and spine switches
in the fabric. ACI can also integrate with hypervisor environments such as Nutanix AHV and
VMware vSphere (through vCenter), using Virtual Machine Manager (VMM) domains to configure
virtual switches and VM networks and provide microsegmentation.
Most importantly, ACI implements an allowlist policy model, which allows no traffic
by default. Administrators create contracts to explicitly define traffic allowed between
endpoint groups (EPGs). EPGs contain endpoints that require similar treatment on the
network. When you apply a contract between EPGs, only traffic specified in the contract
is allowed on the fabric. For more details on Cisco ACI policy and architecture, see the
Cisco ACI Design Guide.
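
To make these objects concrete, the following minimal Python sketch creates a tenant, an application profile, and two EPGs through the APIC REST API. The APIC address, credentials, and object names are placeholders; the class names (fvTenant, fvAp, fvAEPg) follow the standard APIC object model, but verify the payload details against the APIC REST API reference for your version before using it.

import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only; use valid certificates in production

# Authenticate; the APIC returns a session cookie that requests keeps.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Tenant -> application profile -> two EPGs, expressed as nested managed objects.
tenant = {
    "fvTenant": {
        "attributes": {"name": "prod"},
        "children": [{
            "fvAp": {
                "attributes": {"name": "app-NTNX"},
                "children": [
                    {"fvAEPg": {"attributes": {"name": "epg-ntnx-mgmt"}}},
                    {"fvAEPg": {"attributes": {"name": "epg-ntnx-web"}}},
                ],
            }
        }],
    }
}

# Posting to uni.json creates or updates the objects; under the allowlist model,
# traffic between the two EPGs stays blocked until a contract is applied.
session.post(f"{APIC}/api/mo/uni.json", json=tenant).raise_for_status()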

Figure 1: Cisco ACI Component Overview


Leaf-Spine Architecture and Encapsulation


ACI enforces a true leaf-spine design and automatically detects switches in either
the leaf or spine position. Connections between leaf switches aren't allowed, nor are
connections between spines. In this type of architecture, also called a Clos network,
every leaf connects directly to every spine. All hosts, devices, switches, and routers
connect to the leaf layer. The spine layer serves as a high-speed transit backbone for the
leaf switches. In a single-pod deployment, only the leaves can connect to the spines.
In the ACI fabric, traffic is encapsulated on leaf entry and routed through the most
efficient path to the destination leaf, where it's decapsulated. From a packet's
perspective, the entire ACI fabric acts as one switch, and the packet looks the same
on egress as it did on ingress. The protocols and VLANs used in the fabric are locally
significant—they aren't visible when a packet leaves the fabric.

Use Cases and Automation


Cisco ACI enables an application-centric policy-based approach, automation, ease of
management, multitenancy, network isolation, and microsegmentation in the datacenter.
Administrators control the entire ACI fabric through the APIC using either a GUI
or an open, RESTful API. You don't need to configure individual physical switches
manually, and you can provision new switches automatically as you add them. The fabric's
abstraction and encapsulation layers let the physical network rapidly implement multitenant
policy defined in the APIC. Coupled with hypervisor (VMM) integration, the fabric can also
apply network policies at the virtual switch level.
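
As a small illustration of driving the fabric through the REST API, the following Python sketch authenticates to the APIC and lists the switches the fabric has discovered along with their leaf or spine roles. The APIC address and credentials are placeholders; the fabricNode class query reflects the standard APIC object model, but confirm the attribute names against your APIC version.

import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only

# Authenticate once; the session cookie is reused for later calls.
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
             ).raise_for_status()

# Class query: every switch the fabric has discovered, with its role.
nodes = session.get(f"{APIC}/api/class/fabricNode.json").json()
for item in nodes.get("imdata", []):
    attrs = item["fabricNode"]["attributes"]
    print(attrs.get("id"), attrs.get("role"), attrs.get("name"))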


3. Cisco ACI and Nutanix Recommended Topologies

Nutanix has successfully validated compatibility with a Cisco ACI leaf-spine architecture
using the following network topology and components.

Figure 2: Cisco ACI and Nutanix Topology

Each of the four Nutanix hosts connects to two Cisco Nexus leaf switches in ACI mode.
Two Cisco Nexus switches form the spine of the Cisco ACI fabric. Three Cisco APICs
connect to the leaf switches to manage the ACI fabric. The Nutanix Controller VM (CVM)
and a Linux guest VM run on each hypervisor node.
We used the following combinations of hypervisors and virtual switches to verify
functionality and compatibility in the most common scenarios.
Table: Hypervisor and Virtual Switch Combinations Recommended with Cisco ACI
Hypervisor | Virtual Switch | Description
Nutanix AHV | Open vSwitch (OVS) | Physical domain
Nutanix AHV | Open vSwitch (OVS) | VMM domain
VMware ESXi | vSphere Standard Switch (VSS) | Default standard switch, physical domain
VMware ESXi | vSphere Distributed Switch (VDS) | Distributed switch, VMM domain


4. General ACI Best Practices for Nutanix


Nutanix has developed the following general best practices for deploying Nutanix in a
Cisco ACI environment. For recommendations specific to your hypervisor and virtual
switch, see the corresponding sections.

Physical Connections
Connect each Nutanix node directly to at least two ACI leaf switches for load balancing
and fault tolerance. For non-UCS servers, we recommend establishing a direct
connection to the ACI leaf, without any intermediate switches, to guarantee maximum
throughput and minimal latency between nodes. Topologies with intermediate switches
are allowed, but ensure line-rate, nonblocking connectivity for east-west traffic between
Nutanix nodes in the same cluster with a maximum of three switch hops.

Bridge Domains
In the Cisco ACI bridge domain dedicated to the Nutanix server hypervisors and CVMs,
configure the following settings to allow Nutanix node discovery and addition:
• L3 unknown multicast flooding: flood
• Multidestination flooding: flood in BD
• ARP flooding: enabled
• GARP-based detection: enabled
Without these settings, automated Nutanix node discovery and addition might fail, in which
case you must add new Nutanix nodes to the cluster directly by IPv4 address.
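
A minimal Python sketch of these bridge domain settings applied through the APIC REST API follows. The tenant and bridge domain names are placeholders, and the attribute names (arpFlood, unkMcastAct, multiDstPktAct, epMoveDetectMode) are our mapping of the four settings above onto the fvBD object; confirm them against the APIC REST API reference for your version before use.

import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
             ).raise_for_status()

# Bridge domain for the Nutanix CVMs and hypervisors with the flooding settings
# recommended above for node discovery and cluster expansion.
payload = {
    "fvTenant": {
        "attributes": {"name": "prod"},            # placeholder tenant
        "children": [{
            "fvBD": {
                "attributes": {
                    "name": "bd-ntnx",             # placeholder bridge domain
                    "arpFlood": "yes",             # ARP flooding: enabled
                    "unkMcastAct": "flood",        # L3 unknown multicast flooding: flood
                    "multiDstPktAct": "bd-flood",  # multidestination flooding: flood in BD
                    "epMoveDetectMode": "garp",    # GARP-based detection: enabled
                }
            }
        }],
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()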

Endpoint Groups and Contracts


Use Cisco ACI EPGs as a policy enforcement tool. Contracts between EPGs explicitly
allow traffic to flow from one EPG to another. No traffic is allowed until you apply
contracts between EPGs. Nutanix recommends placing all the CVMs and hypervisor hosts in a
single Nutanix cluster in the same EPG to ensure full network connectivity, low storage
latency, and unrestricted storage communication between nodes in the Nutanix cluster.
Nutanix recommends placing hypervisor hosts and the CVMs in an untagged, or native,
VLAN and placing guest VMs in tagged VLANs:
1. In the Cisco APIC, navigate to the endpoint group that contains the CVM and
hypervisor.
2. In the Static Link section, select Trunk (Native) for Mode.
Don't enable proxy ARP inside the Nutanix EPGs used for CVMs, hosts, or backplane, as
doing so can cause unexpected failure in Nutanix processes that use ARP to determine
endpoint availability. You can enable proxy ARP inside guest VM EPGs.
We recommend using contracts that allow only management traffic into the Nutanix
EPGs to restrict network-level access to the Nutanix compute and storage infrastructure.
Using features that present storage and services to external clients, like Nutanix
Volumes (block storage) and Nutanix Files, requires more permissive contracts.
If necessary, you can place Nutanix CVMs and hypervisor hosts in different EPGs
and separate them using contracts, but you must allow all required ports and protocols
between endpoints. Failure to allow all required ports might lead to loss of the storage
fabric. Even in separate EPGs, Nutanix hosts and CVMs must still be in the same layer 2
broadcast domain and same layer 3 IP subnet.

Switch Port Channel Configuration: ACI


Note: For individual ports in ACI, map each interface directly into the desired EPG.

If you use a virtual port channel (vPC), create a vPC policy group that contains the two
leaf switches for each Nutanix node in the APIC. Specify the desired port channel policy
in each vPC policy group, matching the hypervisor load balancing configuration (Static
Channel Mode On or LACP). Create an interface profile with an interface selector for
each pair of uplinks corresponding to a Nutanix node and associate this interface profile
with the vPC policy group for the node. Associate these interface profiles with the switch
policies for the pair of leaf switches.


Figure 3: ACI Port Channel Policy Matches Hypervisor Policy

Don't use MAC pinning in Cisco ACI. Cisco has documented a limitation of MAC pinning
that can cause traffic disruption during a leaf switch reload.

Switch Port Channel Configuration: Nutanix AHV


For Nutanix AHV, we recommend ACI individual ports instead of a vPC port channel with
the default AHV active-backup configuration. If you want to select active-active on AHV,
use LACP in AHV with an LACP Active port channel policy in ACI and remove Suspend
Individual from the policy control to allow LACP fallback. Nutanix doesn't recommend
using balance-slb because of known limitations with multicast traffic, but you can use
ACI individual ports if you must use balance-slb in AHV. For more information, see the
Nutanix AHV Networking best practice guide.
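
The following Python sketch shows one way to express such an LACP Active port channel policy without the Suspend Individual control through the APIC REST API. The policy name is a placeholder, and the lacpLagPol class and ctrl flag strings are assumptions based on the APIC object model; validate them against your APIC version before relying on them.

import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
             ).raise_for_status()

# LACP Active port channel policy with the Suspend Individual control omitted,
# so member ports still come up individually if LACP negotiation fails (fallback).
# The ctrl string keeps the other commonly defaulted control flags (assumed values).
policy = {
    "lacpLagPol": {"attributes": {
        "name": "lacp-active-no-suspend",              # placeholder policy name
        "mode": "active",
        "ctrl": "fast-sel-hot-stdby,graceful-conv",    # susp-individual removed
    }}
}
session.post(f"{APIC}/api/mo/uni/infra.json", json=policy).raise_for_status()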

Switch Port Channel Configuration: VMware ESXi


With VMware ESXi, Nutanix recommends using individual ports for each Nutanix
node. Configure each Nutanix node with an active-active load-based teaming uplink
configuration to both leaf switches. This configuration aligns with the Nutanix vSphere
networking best practice of using the VDS and the Route Based on Physical NIC Load
option. For ESXi Active Standby, or Route Based on Originating Virtual Port, use
individual ports in ACI as well.
To use LACP with ESXi, change the port channel mode to LACP Active and remove
the Suspend Individual control. Nutanix doesn't recommend Static Channel Mode On
because it drops traffic during the LCM and Foundation reboot process.

Default Virtual Switches


Don't alter the default virbr0 switch in Nutanix AHV or vSwitchNutanix in ESXi. These
internal-only virtual switches contain no external network uplinks and pass traffic
between the CVM and the local hypervisor.


5. ACI Best Practices for Nutanix and AHV


With Nutanix AHV, the virtual switch management domain encompasses all nodes in a
cluster and is managed centrally through Prism. There are two methods of ACI network
provisioning for Nutanix AHV: VMM domains and physical domains. The Cisco ACI VMM
integration for AHV allows automatic creation of subnets in AHV with corresponding
VLANs. With the traditional physical domain, administrators must provision nodes the
same way they provision bare-metal servers in the APIC, extending VLANs to each node
statically with an ACI physical domain.

Figure 4: Cisco ACI AHV Test Topology

VMM Domain Best Practices for AHV


With the Nutanix VMM domain integration, ACI automatically provisions each EPG created
in the APIC as a subnet in Nutanix Prism on the AHV virtual switch. The APIC
configures the subnet from the dynamic or static VLAN pool associated with the Nutanix
VMM domain. With dynamic pools, the APIC selects an available VLAN to assign to the
EPG. With a static pool, the admin selects the specific VLAN when selecting the Nutanix
VMM domain in the EPG.
For more information on the Cisco ACI VMM integration for Nutanix AHV, including
requirements and limitations, see the Cisco ACI and Nutanix AHV Integration tech note.


AHV Two-Uplink Configuration


In AHV host deployments with the VMM domain integration, use preprovision resolution
immediacy in the Cisco ACI endpoint group for the CVM and AHV hosts to guarantee
that the AHV and CVM VLAN is provisioned on the leaf switches. This setting is the
default in the Nutanix AHV VMM integration and you can't change it.

VMM Domain Best Practices for AHV and Cisco UCS Fabric Interconnects
Cisco UCS servers in domain mode, such as Cisco Compute Hyperconverged for
Nutanix servers, connect directly to a pair of fabric interconnects instead of to the ACI
leaf.
With the fabric interconnect as an intermediate switch, you must perform the VLAN
configuration from the EPG on the intermediate switch as well. You can preconfigure these
VLANs from the VLAN pool on the intermediate switch, or you can manage the VLAN
configuration dynamically, which might be preferable at scale when there are many VLANs
and ports. In the case
of Cisco UCS fabric interconnects, the ExternalSwitch application can automate VLAN
configuration between ACI and the Fabric Interconnect. For more information, see the
Cisco ACI Design Guide.

Physical Domain Best Practices for AHV


In addition to VMM domain integration, physical domain configuration with AHV is
possible. You might choose a physical domain if your environment doesn't meet the
requirements of the VMM domain integration, if its limitations affect your deployment, or
if you prefer an external configuration method.
Configure a physical domain in the APIC that encompasses all the switch ports
connected to Nutanix AHV servers and is associated with the required VLAN pools
for the hosts, CVMs, and guest VMs. Create an attachable entity profile (AEP) with an
associated interface policy group and make sure that the AEP contains the physical
domain created in the first step. The following figure shows the association between the
AEP and the physical domain, performed under the Fabric tab.


Figure 5: Associate the AEP and the Physical Domain

The following figure shows the static binding (or static port in newer ACI versions)
configuration for individual ports, located under the Tenant tab. Create EPG static
bindings for each VLAN trunked to the AHV hosts. Here you can see VLAN 3000 on
ports 1/37 through 1/40 placed into epg-prod-ib-mgmt, where the AHV hosts and CVMs
are connected. The EPG Domains (VMs and Bare-Metal) menu item in this figure
contains the physical domain physd-Nutanix, which holds the ports for the Nutanix
servers.

Figure 6: APIC EPG Static Binding
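
For reference, a Python sketch of static bindings like those shown in the figure follows, posted through the APIC REST API. The tenant and application profile names are assumptions (the document names only the EPG and physical domain), and the fvRsPathAtt attributes reflect our reading of the object model; adjust the pod, leaf, and port identifiers to your environment and verify the payload against the APIC REST API reference.

import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
             ).raise_for_status()

# One static binding per AHV uplink: VLAN 3000 delivered as the native VLAN so
# the CVM and AHV host need no VLAN tag. Ports eth1/37 through eth1/40 on leaf
# 101 match the example in the figure.
bindings = [
    {"fvRsPathAtt": {"attributes": {
        "tDn": f"topology/pod-1/paths-101/pathep-[eth1/{port}]",
        "encap": "vlan-3000",
        "mode": "native",   # Trunk (Native) in the APIC GUI
    }}}
    for port in range(37, 41)
]
payload = {
    "fvTenant": {
        "attributes": {"name": "prod"},          # assumed tenant name
        "children": [{
            "fvAp": {
                "attributes": {"name": "app-prod"},   # assumed application profile
                "children": [{
                    "fvAEPg": {"attributes": {"name": "epg-prod-ib-mgmt"},
                               "children": bindings}
                }],
            }
        }],
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()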

In the Nutanix cluster, keep the AHV OVS bond mode at the default active-backup setting
for easy configuration. In the following example, traffic from the CVM and user VMs flows
out from the active adapter eth3 toward Leaf 101. In the event of a failure or link loss,
traffic flows out from eth2 toward Leaf 102. Alternatively, if you need the bandwidth from
both adapters, use LACP and the balance-tcp bond mode combined with a Cisco ACI
LACP Active port channel policy. You can find additional information on AHV bond modes
in the Nutanix AHV Networking best practice guide.

Figure 7: AHV Host Detail

In addition to the CVM and AHV EPG, create EPGs and static bindings for guest VM
VLANs. In our test example, we created an ACI application profile (app-NTNX-WEB) and
an EPG (epg-NTNX-WEB) to separate guest VM traffic from the CVM and AHV traffic.
Guest VM traffic used VLAN 3001; CVM and AHV traffic used VLAN 3000.
Create and apply contracts between the guest VM EPGs and between the guest VM
EPGs and the Nutanix CVM and host EPG to enforce network policies. In our testing
scenarios, we created a simple contract named Nutanix for management purposes that
allows only SSH, ICMP (ping), and Nutanix Prism web traffic on port 9440 from the guest
VM EPG (epg-NTNX-WEB) to the CVM and AHV EPG (epg-prod-ib-mgmt).


Figure 8: APIC Nutanix Contract Between EPGs
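
A hedged Python sketch of a contract along these lines follows, created through the APIC REST API. The tenant name is a placeholder, and the filter entries are one reasonable way to express SSH, ICMP, and Prism (TCP 9440); confirm the vzFilter, vzEntry, vzBrCP, and vzSubj payloads against the APIC REST API reference for your version.

import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
             ).raise_for_status()

def tcp_entry(name, port):
    # Filter entry matching a single destination TCP port.
    return {"vzEntry": {"attributes": {"name": name, "etherT": "ip", "prot": "tcp",
                                       "dFromPort": str(port), "dToPort": str(port)}}}

# Filter for SSH, Prism (TCP 9440), and ICMP, plus a contract with one subject
# that references it.
payload = {
    "fvTenant": {
        "attributes": {"name": "prod"},   # placeholder tenant
        "children": [
            {"vzFilter": {
                "attributes": {"name": "flt-ntnx-mgmt"},
                "children": [
                    tcp_entry("ssh", 22),
                    tcp_entry("prism", 9440),
                    {"vzEntry": {"attributes": {"name": "icmp", "etherT": "ip",
                                                "prot": "icmp"}}},
                ],
            }},
            {"vzBrCP": {
                "attributes": {"name": "Nutanix"},
                "children": [{
                    "vzSubj": {
                        "attributes": {"name": "mgmt"},
                        "children": [{"vzRsSubjFiltAtt": {
                            "attributes": {"tnVzFilterName": "flt-ntnx-mgmt"}}}],
                    }
                }],
            }},
        ],
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()

The CVM and AHV EPG then provides this contract and the guest VM EPG consumes it through the EPGs' provided and consumed contract relationships; that association is the step that actually opens the listed ports on the fabric.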


6. ACI Best Practices for Nutanix and ESXi


VMware vSphere allows you to configure multiple types of virtual switches in the
hypervisor, so choose the virtual switch that works best for your deployment.
vSphere Standard Switch
The VSS is simple to configure for a small number of nodes, but managing
it can become more difficult as node count increases. Standard switches are
local to each host and you must configure them independently of one another,
compounding the complexity every time you add new hosts. The VSS is also
limited to basic network functionality and doesn't provide Cisco ACI VMM
integration.
vSphere Distributed Switch
The VDS provides additional network functionality, easy management at scale, and
integration with the Cisco ACI APIC using VMM domains. VDS requires additional
configuration, licensing, and vCenter. The VDS is configured centrally in vCenter,
and the configuration is pushed to each participating host.
Nutanix recommends the VDS for its ease of management and load-balancing
flexibility. The ability to use the Cisco ACI VMM domain out of the box without any extra
installation also makes the VDS an appealing choice to unify virtual and physical network
administration.
Nutanix has tested these virtual switches in our lab environment and developed the
following recommendations.

vSphere Standard Switch


The VSS is installed by default in the ESXi host. The VSS management domain extends
only to the individual host as shown in the following image, and you must configure each
VSS independently. ACI VMM domain integration requires the VDS, using vCenter as
a central configuration point for integration, so the VSS can't use the VMM domain.
Instead, statically bind VLANs to EPGs and use a physical domain and AEP for the
Nutanix switch ports in ACI. Use the default Route Based on Originating Virtual Port
load balancing method in the virtual switch port groups.

Figure 9: ESXi VSS Topology

Each Nutanix ESXi host contains two virtual switches: the standard vSwitchNutanix for
internal control traffic and the default vSwitch0 for the 10 GbE CVM and user VM traffic.
In our testing we added a third switch, vSwitchMgmt, for dedicated 1 GbE management
connections. The third vSwitch is optional; choose this design if you want to separate
management traffic onto a different uplink NIC team. The following diagram illustrates the
internal host layout as tested.

Figure 10: ESXi VSS Host Detail


vSphere Distributed Switch


The VDS requires additional configuration, licensing, and vCenter, but it also stretches
the management domain among multiple hosts, which means that vCenter centrally
manages virtual switch configuration for all hosts, rather than configuring each host
individually. The VDS also supports ACI VMM domain integration, allowing the APIC to
push policies down to the ESXi host VDS using vCenter.
Note: Using the VMM integration is optional with the VDS, and the other recommendations in this section
still apply even without VMM integration.

Figure 11: ESXi VDS Topology

With the VMware VDS VMM domain integration, ACI automatically provisions each EPG created
in the APIC as a port group in vCenter. The APIC configures
the port group from the dynamic or static VLAN pool associated with the VMM domain.
With dynamic pools, the APIC selects an available VLAN to assign to the EPG. With a
static pool, the admin selects the specific VLAN when selecting the VMM domain in the
EPG.
When you use a VMM domain, you don't need to have a physical domain with static EPG
port bindings. Instead, when you create the VMM domain, select the AEP you want to
associate it with from the Associated Attachable Entity Profile dropdown menu.
In our example, we created two EPGs—epg-ib-mgmt and epg-NTNX-Web—tied to the
VMM domain. The EPG epg-ib-mgmt represents the CVMs and hypervisors, while epg-
NTNX-Web represents the guest VMs in the tests. Creating these EPGs in ACI causes
the APIC to create port groups in vCenter with names based on the combination of ACI
tenant, application, and EPG. The following figure shows how the application profile ties
together the VMM domain and EPG for epg-NTNX-Web.

Figure 12: Application Profile to EPG and Domain Mapping
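
The following Python sketch shows one way to attach an EPG to a VMware VMM domain through the APIC REST API, which is the relationship that triggers port group creation in vCenter. The tenant, application profile, and EPG names are inferred from the port group names in this example; the VMM domain name is a placeholder, and the fvRsDomAtt attributes should be confirmed against your APIC version.

import requests

APIC = "https://apic.example.com"     # placeholder APIC address
VMM_DOMAIN = "vmm-vds"                # placeholder VMware VMM domain name

session = requests.Session()
session.verify = False                # lab only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
             ).raise_for_status()

# Attach epg-NTNX-Web to the VMware VMM domain; this relationship is what makes
# the APIC create the aci_mgmt|app-NTNX-Web|epg-NTNX-Web port group in vCenter.
payload = {
    "fvTenant": {
        "attributes": {"name": "aci_mgmt"},
        "children": [{
            "fvAp": {
                "attributes": {"name": "app-NTNX-Web"},
                "children": [{
                    "fvAEPg": {
                        "attributes": {"name": "epg-NTNX-Web"},
                        "children": [{
                            "fvRsDomAtt": {"attributes": {
                                "tDn": f"uni/vmmp-VMware/dom-{VMM_DOMAIN}",
                                "resImedcy": "lazy",    # on-demand is fine for guest VM EPGs
                                "instrImedcy": "lazy",
                            }}
                        }],
                    }
                }],
            }
        }],
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()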

The following figure shows the configuration of the VDS in each hypervisor host and the
port groups named aci_mgmt|app-prod-ib-mgmt|epg-ib-mgmt and aci_mgmt|app-NTNX-
Web|epg-NTNX-Web that the APIC automatically configured on the VDS. Each EPG has
its own port group.

Figure 13: ESXi VDS Host Detail


Note: If you choose to migrate from the VSS to the VDS, follow Nutanix KB 3289. Ensure that the
internal CVM adapter remains in the port group svm-iscsi-pg by selecting Do not migrate for the adapter.
This setting ensures that the adapter remains in the default vSwitchNutanix. This step can be easy to
overlook when using the vSphere migration wizard.

To avoid disconnecting any nodes in the cluster, ensure that you're only migrating
one physical adapter at a time from the VSS to the VDS. Place the CVM and primary
VMkernel adapter in the same EPG (epg-ib-mgmt in our example) by assigning them to
the same port group in vCenter. Connect user VMs to port group EPGs, such as epg-
NTNX-Web. Optionally, use a second VMkernel adapter created in a VSS (vSwitchMgmt
in our example) to provide a backup connection to the ESXi host while migrating to the
VDS. The virtual switch port groups that the APIC creates should follow the Nutanix VDS
best practice of using Route Based on Physical NIC Load for load balancing.

Two-Uplink Configuration
The previous diagram shows a four-uplink configuration, with a second pair of uplink
adapters used as a management backup in case of communication failures on the ACI-
controlled VDS. If you don't have or want four adapters, you can build a two-uplink
configuration using Cisco ACI preprovision resolution immediacy for the EPG containing
the ESXi VMkernel port and CVM. The preprovision option causes the ACI fabric to
statically provision the VLAN for the Nutanix CVM and ESXi host on the leaf switch
ports where the AEP is associated. Using the preprovision option on the EPG avoids the
scenario that occurs when ACI waits to hear from vCenter to provision the port, but the
host can't talk to vCenter until ACI provisions the port.
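
A minimal Python sketch of this setting follows, assuming the same EPG and VMM domain naming used earlier in this section. The pre-provision value is set as the resolution immediacy on the EPG's VMM domain association (fvRsDomAtt); as with the other sketches, the distinguished names are placeholders and the attribute values should be verified against your APIC version.

import requests

APIC = "https://apic.example.com"     # placeholder APIC address
VMM_DOMAIN = "vmm-vds"                # placeholder VMware VMM domain name

session = requests.Session()
session.verify = False                # lab only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
             ).raise_for_status()

# Pre-provision pushes the CVM and VMkernel VLAN to every leaf port covered by
# the AEP up front, so the ESXi host can reach vCenter before any VMM-driven
# programming takes place.
payload = {
    "fvTenant": {
        "attributes": {"name": "aci_mgmt"},
        "children": [{
            "fvAp": {
                "attributes": {"name": "app-prod-ib-mgmt"},
                "children": [{
                    "fvAEPg": {
                        "attributes": {"name": "epg-ib-mgmt"},
                        "children": [{
                            "fvRsDomAtt": {"attributes": {
                                "tDn": f"uni/vmmp-VMware/dom-{VMM_DOMAIN}",
                                "resImedcy": "pre-provision",
                                "instrImedcy": "immediate",
                            }}
                        }],
                    }
                }],
            }
        }],
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()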


7. Conclusion
Running the Cisco ACI network fabric with Nutanix creates a compute and storage
infrastructure that puts applications first. Whether you're using the native Nutanix
hypervisor, AHV, with the default OVS, or ESXi with the VSS or VDS, Cisco ACI
provides a high-performance, easy-to-manage, and scalable leaf-spine architecture for
building a web-scale Nutanix Cloud Platform. Based on our extensive testing of these
configurations, we provide a best practices checklist in the appendix.
Nutanix eliminates the need to focus on storage and compute infrastructure configuration
by providing an invisible cluster of resources to applications. Similarly, the Cisco ACI
fabric simplifies network setup using policy attuned to application requirements to
automate individual switch configuration. In addition to physical network and L4–7 device
automation, the ACI VMM integration for Nutanix AHV and vSphere extends the network
fabric into the virtual switch, allowing administrators to stop provisioning VLANs manually
on each node and leaf and surpass the existing 4,000 VLAN limit for building security
zones.
For feedback or questions, contact us using the Nutanix NEXT Community forums.


8. Appendix

Best Practices Checklist


General
• Connect each physical host directly to two ACI leaf switches.
UCS hosts can connect to intermediate fabric interconnects.
• Place the Nutanix CVMs and hypervisor hosts in the same ACI EPG to allow full
communication between nodes in the same Nutanix cluster.
• If using separate EPGs for the CVM and hypervisor or a microsegmented Nutanix
EPG, ensure that all ports are open for communication between CVMs and hosts.
• Use ACI contracts between the Nutanix EPG and other EPGs to restrict management
access.
• Use the following bridge domain settings to allow Nutanix cluster expansion and node
addition:
› L3 unknown multicast flooding: flood
› Multidestination flooding: flood in BD
› ARP flooding: enabled
› GARP-based detection: enabled
• Set the EPG port type to Trunk (Native) to allow CVMs and hypervisors to use the
untagged VLAN.
• Don't enable proxy ARP inside the Nutanix EPG for CVMs, hosts, or backplane
interfaces.


AHV
• When using servers behind intermediate switches, such as Cisco UCS fabric
interconnects or blade switches, ensure the correct VLANs are provisioned on the
intermediate switches.
In the case of UCS fabric interconnect, provision VLANs automatically with
the ExternalSwitch App.
• When using the ACI VMM domain for Nutanix, use the default resolution immediacy:
Pre-provision.
• When using an ACI physical domain for AHV, create one EPG for the AHV host and
CVM.
Note: Create additional EPGs for each AHV guest VM network.

• Use the default active-backup bond mode unless you need the bandwidth of multiple
network adapters.
• Note: Use individual ports with a static binding and don't use a port channel policy for active-backup.

• Use balance-tcp with LACP if you need active-active adapters.


› Use an LACP-Active port channel policy in the ACI vPC policy for active-active.
› Remove the Suspend Individual configuration from the port channel policy to enable
LACP fallback.
• Don't alter the default virbr0 in AHV.

ESXi Standard vSwitch


• Use individual port static bindings instead of a vPC.
› Use the default Route Based on Originating Virtual Port load balancing method.
› If you need active-standby, use individual ports as well.
• Don't use a MAC pinning or Static Channel - Mode On port channel policy in the vPC
policy.
• Don't alter the default vSwitchNutanix.


ESXi Distributed vSwitch


• If desired, use a VMware VDS vCenter VMM domain.
VMM domain integration is optional and all other recommendations still apply.
• Use local switching mode in the VMM domain vSwitch configuration.
• Place the CVM and ESXi VMkernel adapter in the VDS following KB 3289.
• Migrate one physical adapter on the host at a time to the VDS.
• Note: Don't migrate the svm-iscsi-pg port group.

• If four network adapters are available and you need out-of-band management, create
a second VMkernel adapter in a VSS to provide a management connection to vCenter.
• If only two network adapters are available or the CVM and VMkernel adapters are in
a VMM domain EPG, set the CVM and VMkernel EPG resolution immediacy to Pre-
provision.
• Use the Route Based on Physical NIC Load load balancing method in the VDS and
individual ports with a static binding.
• If you need LACP on ESXi, use an LACP-Active port channel policy and a vPC.
› Remove the Suspend Individual configuration from the port channel policy to
enable LACP fallback.
› Don't use a MAC pinning or Static Channel - Mode On port channel policy in the
vPC policy.
• Don't alter the default vSwitchNutanix.

ESXi with DVS and Cisco AVE


• If using Cisco Application Virtual Edge, Nutanix recommends native switching mode
for the CVM and VMkernel adapter. Using AVE switching mode for the Nutanix CVM
might add latency that disrupts storage performance.

Note: Review the Cisco End of Sale and End of Life notification if using AVE.

• Use AVE switching mode for microsegmentation in the hypervisor for guest VMs if
desired.


References
1. Cisco ACI Design Guide
2. Cisco ACI Virtual EoS EoL
3. KB 3289 Migrate to ESXi Virtual Switch
4. Cisco ACI ExternalSwitch App
5. Cisco Compute Hyperconverged with Nutanix Field Installation Guide
6. Cisco ACI and Nutanix AHV VMM Integration


About Nutanix
Nutanix offers a single platform to run all your apps and data across multiple clouds
while simplifying operations and reducing complexity. Trusted by companies worldwide,
Nutanix powers hybrid multicloud environments efficiently and cost effectively. This
enables companies to focus on successful business outcomes and new innovations.
Learn more at Nutanix.com.


List of Figures
Figure 1: Cisco ACI Component Overview.................................................................................................... 7

Figure 2: Cisco ACI and Nutanix Topology....................................................................................................9

Figure 3: ACI Port Channel Policy Matches Hypervisor Policy................................................................... 13

Figure 4: Cisco ACI AHV Test Topology......................................................................................................15

Figure 5: Associate the AEP and the Physical Domain...............................................................................17

Figure 6: APIC EPG Static Binding............................................................................................................. 17

Figure 7: AHV Host Detail............................................................................................................................18

Figure 8: APIC Nutanix Contract Between EPGs........................................................................................19

Figure 9: ESXi VSS Topology...................................................................................................................... 21

Figure 10: ESXi VSS Host Detail.................................................................................................................21

Figure 11: ESXi VDS Topology.................................................................................................................... 22

Figure 12: Application Profile to EPG and Domain Mapping.......................................................................23

Figure 13: ESXi VDS Host Detail................................................................................................................ 23
