BP 2052 Cisco ACI
BEST PRACTICES
Contents
1. Executive Summary
Document Version History
7. Conclusion
8. Appendix
Best Practices Checklist
References
About Nutanix
List of Figures
1. Executive Summary
Cisco Application Centric Infrastructure (ACI)™ empowers your applications by
automatically translating application requirements into infrastructure configuration.
Combining the power of software-defined networking through Cisco ACI with the Nutanix
Cloud Platform enables you to build a datacenter that performs well and is easy to
manage and scale, freeing IT to focus on the applications instead of the infrastructure.
Cisco ACI defines your desired network state using GUI- or API-driven policy. This policy-
based approach to software-defined networking enables the network to scale beyond
the limits of an imperative, controller-oriented model. ACI integrates intelligent control
of hardware switches in a leaf-spine topology with management and automation for
software virtual switches. Using this policy framework, ACI delivers tenant isolation,
microsegmentation, automation, programmability, ease of management, and deep
network visibility.
With the Cisco ACI Virtual Machine Manager (VMM) integration for Nutanix AHV,
introduced in ACI 6.0(3), you can easily configure networks, security, and visibility in
AHV. Cisco and Nutanix have developed recommendations for deploying Nutanix in a
Cisco ACI environment to achieve maximum performance and reliability. Refer to the
best practices checklist in the appendix for a summary of these recommendations.
In our validation topology, each of the four Nutanix hosts connects to two Cisco Nexus leaf switches in ACI mode.
Two Cisco Nexus switches form the spine of the Cisco ACI fabric. Three Cisco APICs
connect to the leaf switches to manage the ACI fabric. The Nutanix Controller VM (CVM)
and a Linux guest VM run on each hypervisor node.
We used the following combinations of hypervisors and virtual switches to verify
functionality and compatibility in the most common scenarios.
Table: Hypervisor and Virtual Switch Combinations Recommended with Cisco ACI
Hypervisor | Virtual Switch | Description
Nutanix AHV | Open vSwitch (OVS) | Physical domain
Nutanix AHV | Open vSwitch (OVS) | VMM domain
Physical Connections
Connect each Nutanix node directly to at least two ACI leaf switches for load balancing
and fault tolerance. For non-UCS servers, we recommend establishing a direct
connection to the ACI leaf, without any intermediate switches, to guarantee maximum
throughput and minimal latency between nodes. Topologies with intermediate switches
are allowed, but ensure line-rate, nonblocking connectivity for east-west traffic between
Nutanix nodes in the same cluster with a maximum of three switch hops.
Bridge Domains
In the Cisco ACI bridge domain dedicated to the Nutanix server hypervisors and CVMs,
configure the following settings to allow Nutanix node discovery and addition:
• L3 unknown multicast flooding: flood
• Multidestination flooding: flood in BD
• ARP flooding: enabled
• GARP-based detection: enabled
Without these settings, automated Nutanix node discovery and addition might fail, and you must then add new Nutanix nodes to the cluster directly by IPv4 address.
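If you manage the fabric programmatically, you can verify these bridge domain settings through the APIC REST API. The following Python sketch is illustrative only: it assumes the requests library, placeholder APIC credentials, and example tenant and bridge domain names, and it reads the standard fvBD attributes that correspond to the settings listed above.

```python
# Minimal sketch: query an ACI bridge domain through the APIC REST API and
# check the flood settings that Nutanix node discovery relies on.
# Assumptions: APIC address, credentials, tenant, and BD names are placeholders.
import requests

APIC = "https://apic.example.com"      # placeholder APIC address
TENANT, BD = "prod", "bd-nutanix"      # placeholder tenant and bridge domain

session = requests.Session()
session.verify = False                 # lab only; use a trusted certificate in production

# Log in and keep the session cookie that the APIC returns.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Read the bridge domain managed object.
resp = session.get(f"{APIC}/api/mo/uni/tn-{TENANT}/BD-{BD}.json")
resp.raise_for_status()
attrs = resp.json()["imdata"][0]["fvBD"]["attributes"]

# Expected values for Nutanix node discovery (see the list above).
expected = {
    "arpFlood": "yes",             # ARP flooding: enabled
    "unkMcastAct": "flood",        # L3 unknown multicast flooding: flood
    "multiDstPktAct": "bd-flood",  # Multidestination flooding: flood in BD
    "epMoveDetectMode": "garp",    # GARP-based detection: enabled
}
for attr, want in expected.items():
    print(f"{attr}: {attrs.get(attr)} (expected {want})")
```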
Cisco ACI groups endpoints into endpoint groups (EPGs) and controls the traffic between them with contracts. Nutanix recommends placing all CVMs and hypervisor hosts of a single Nutanix cluster in the same EPG to ensure full network connectivity, low storage latency, and unrestricted storage communication between nodes in the Nutanix cluster.
Nutanix recommends placing hypervisor hosts and the CVMs in an untagged, or native,
VLAN and placing guest VMs in tagged VLANs:
1. In the Cisco APIC, navigate to the endpoint group that contains the CVM and
hypervisor.
2. In the Static Link section, select Trunk (Native) for Mode.
Don't enable proxy ARP inside the Nutanix EPGs used for CVMs, hosts, or backplane, as
doing so can cause unexpected failure in Nutanix processes that use ARP to determine
endpoint availability. You can enable proxy ARP inside guest VM EPGs.
We recommend using contracts that allow only management traffic into the Nutanix
EPGs to restrict network-level access to the Nutanix compute and storage infrastructure.
Using features that present storage and services for external clients, like Nutanix
Volumes Block Storage and Nutanix Files, requires more permissive contracts.
If necessary, you can place Nutanix CVMs and hypervisor hosts in different EPGs and separate them using contracts, but you must allow all required ports and protocols between endpoints. Failure to allow all required ports might lead to loss of the storage
fabric. Even in separate EPGs, Nutanix hosts and CVMs must still be in the same layer 2
broadcast domain and same layer 3 IP subnet.
If you use a virtual port channel (vPC), create a vPC policy group that contains the two
leaf switches for each Nutanix node in the APIC. Specify the desired port channel policy
in each vPC policy group, matching the hypervisor load balancing configuration (Static
Channel Mode On or LACP). Create an interface profile with an interface selector for
each pair of uplinks corresponding to a Nutanix node and associate this interface profile
with the vPC policy group for the node. Associate these interface profiles with the switch
policies for the pair of leaf switches.
Don't use MAC pinning in Cisco ACI. Cisco has documented a limitation of MAC pinning
that can cause traffic disruption during a leaf switch reload.
For ESXi with the VDS, use individual ports in ACI and the Route Based on Physical NIC Load option. For ESXi with Active Standby or Route Based on Originating Virtual Port teaming, use individual ports in ACI as well.
To use LACP with ESXi, change the port channel mode to LACP Active and remove
the Suspend Individual control. Nutanix doesn't recommend Static Channel Mode On
because it drops traffic during the LCM and Foundation reboot process.
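As an illustration, the following Python sketch creates such a port channel policy through the APIC REST API using the lacpLagPol class. The APIC address, credentials, and policy name are placeholders, and you should confirm the ctrl flag strings against your APIC release before adapting anything like this.

```python
# Minimal sketch: create an LACP Active port channel policy with the Suspend
# Individual control removed, for use in vPC policy groups.
# Assumptions: APIC address, credentials, and policy name are placeholders.
import requests

APIC = "https://apic.example.com"

session = requests.Session()
session.verify = False
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# The default ctrl string includes susp-individual; omitting it here
# allows LACP fallback to individual ports.
payload = {"lacpLagPol": {"attributes": {
    "name": "lacp-active-no-suspend",
    "mode": "active",
    "ctrl": "fast-sel-hot-stdby,graceful-conv",
}}}
resp = session.post(f"{APIC}/api/mo/uni/infra.json", json=payload)
resp.raise_for_status()
print("Port channel policy created:", resp.status_code)
```

Reference the resulting policy from each vPC policy group so the leaf switches negotiate LACP with the hypervisor uplinks while still allowing individual ports during staging and reboots.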
VMM Domain Best Practices for AHV and Cisco UCS Fabric Interconnects
Cisco UCS servers in domain mode, such as Cisco Compute Hyperconverged for
Nutanix servers, connect directly to a pair of fabric interconnects instead of to the ACI
leaf.
With the fabric interconnect as an intermediate switch, you must also configure the EPG VLANs on the intermediate switch. You can preconfigure these VLANs from the VLAN pool on the intermediate switch, or you can manage the VLAN configuration dynamically, which might be preferable from a scalability perspective when there are many VLANs and ports. In the case of Cisco UCS fabric interconnects, the ExternalSwitch application can automate VLAN configuration between ACI and the fabric interconnect. For more information, see the Cisco ACI Design Guide.
The following figure shows the static binding (or static port in newer ACI versions)
configuration for individual ports, located under the Tenant tab. Create EPG static
bindings for each VLAN trunked to the AHV hosts. Here you can see VLAN 3000 on
ports 1/37 through 1/40 placed into epg-prod-ib-mgmt, where the AHV hosts and CVMs
are connected. The EPG Domains (VMs and Bare-Metal) menu item in this figure
contains the physical domain physd-Nutanix, which holds the ports for the Nutanix
servers.
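As a programmatic alternative to the GUI workflow in the figure, the following Python sketch posts equivalent static bindings (fvRsPathAtt) through the APIC REST API. The pod, leaf, port range, EPG distinguished name, and credentials are placeholders based on this example, and mode="native" mirrors the Trunk (Native) recommendation for the CVM and AHV EPG; confirm the exact mode value in your APIC release.

```python
# Minimal sketch: create EPG static bindings that carry VLAN 3000 to the AHV
# host ports, matching the example above (ports 1/37-1/40 into epg-prod-ib-mgmt).
# Assumptions: pod, leaf, ports, EPG DN, and credentials are placeholders.
import requests

APIC = "https://apic.example.com"
EPG_DN = "uni/tn-prod/ap-app-prod-ib-mgmt/epg-epg-prod-ib-mgmt"  # placeholder DN

session = requests.Session()
session.verify = False
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# One fvRsPathAtt child per leaf port that connects a Nutanix node.
bindings = {
    "fvAEPg": {
        "attributes": {"dn": EPG_DN},
        "children": [
            {"fvRsPathAtt": {"attributes": {
                "tDn": f"topology/pod-1/paths-101/pathep-[eth1/{port}]",
                "encap": "vlan-3000",
                "mode": "native",  # untagged/native VLAN for the CVM and AHV host
            }}} for port in range(37, 41)
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/{EPG_DN}.json", json=bindings)
resp.raise_for_status()
print("Static bindings created:", resp.status_code)
```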
In the Nutanix cluster, keep the AHV OVS bond mode at the default active-backup setting
for easy configuration. In the following example, traffic from the CVM and user VMs flows
out from the active adapter eth3 toward Leaf 101. In the event of a failure or link loss,
traffic flows out from eth2 toward Leaf 102. Alternatively, if you need the bandwidth from
both adapters, use LACP and the balance-tcp bond mode combined with a Cisco ACI
LACP Active port channel policy. You can find additional information on AHV bond modes
in the Nutanix AHV Networking best practice guide.
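To confirm which bond mode is active on an AHV host, you can inspect the OVS bond directly. The following Python sketch is a simple illustration that runs ovs-appctl bond/show on the host and prints each bond name and its bond_mode; it assumes you run it on the AHV host itself with root privileges and Python 3.7 or later, and that the default bond br0-up is in use.

```python
# Minimal sketch: print the OVS bond names and bond modes on an AHV host by
# parsing "ovs-appctl bond/show" output (for example, "bond_mode: active-backup").
# Assumption: run locally on the AHV host as root; the same check can be done
# by running the command manually.
import subprocess

result = subprocess.run(
    ["ovs-appctl", "bond/show"],   # lists all OVS bonds and their settings
    capture_output=True, text=True, check=True,
)

for line in result.stdout.splitlines():
    line = line.strip()
    # "---- br0-up ----" headers name each bond; bond_mode shows the mode.
    if line.startswith("----") or line.startswith("bond_mode:"):
        print(line)
```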
In addition to the CVM and AHV EPG, create EPGs and static bindings for guest VM
VLANs. In our test example, we created an ACI application profile (app-NTNX-WEB) and
an EPG (epg-NTNX-WEB) to separate guest VM traffic from the CVM and AHV traffic.
Guest VM traffic used VLAN 3001; CVM and AHV traffic used VLAN 3000.
Create and apply contracts between the guest VM EPGs and between the guest VM
EPGs and the Nutanix CVM and host EPG to enforce network policies. In our testing
scenarios, we created a simple contract named Nutanix for management purposes that
allows only SSH, ICMP (ping), and Nutanix Prism web traffic on port 9440 from the guest
VM EPG (epg-NTNX-WEB) to the CVM and AHV EPG (epg-prod-ib-management).
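The following Python sketch illustrates one way to express this management contract through the APIC REST API using the standard vzFilter and vzBrCP classes. The tenant, filter, and subject names are placeholders; only the contract name Nutanix and the SSH, ICMP, and TCP 9440 entries come from the example above.

```python
# Minimal sketch: define a filter and contract that permit SSH, ICMP, and
# Prism (TCP 9440) into the Nutanix CVM and AHV EPG.
# Assumptions: APIC address, credentials, tenant, and object names are placeholders.
import requests

APIC = "https://apic.example.com"
TENANT_DN = "uni/tn-prod"              # placeholder tenant

session = requests.Session()
session.verify = False
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

def tcp_entry(name, port):
    """One TCP filter entry with a single destination port."""
    return {"vzEntry": {"attributes": {
        "name": name, "etherT": "ip", "prot": "tcp",
        "dFromPort": str(port), "dToPort": str(port)}}}

payload = {"fvTenant": {"attributes": {"dn": TENANT_DN}, "children": [
    # Filter with entries for SSH, Prism on TCP 9440, and ICMP.
    {"vzFilter": {"attributes": {"name": "flt-nutanix-mgmt"}, "children": [
        tcp_entry("ssh", 22),
        tcp_entry("prism", 9440),
        {"vzEntry": {"attributes": {"name": "icmp", "etherT": "ip", "prot": "icmp"}}},
    ]}},
    # Contract with one subject that references the filter.
    {"vzBrCP": {"attributes": {"name": "Nutanix"}, "children": [
        {"vzSubj": {"attributes": {"name": "mgmt"}, "children": [
            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "flt-nutanix-mgmt"}}},
        ]}},
    ]}},
]}}

resp = session.post(f"{APIC}/api/mo/{TENANT_DN}.json", json=payload)
resp.raise_for_status()
print("Contract created:", resp.status_code)
```

Provide the contract from the CVM and AHV EPG and consume it from the guest VM EPGs so only the listed management traffic reaches the Nutanix infrastructure.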
For ESXi with the VSS, create static bindings for the Nutanix switch ports in ACI. Use the default Route Based on Originating Virtual Port load balancing method in the virtual switch port groups.
Each Nutanix ESXi host contains two virtual switches: the standard vSwitchNutanix for
internal control traffic and the default vSwitch0 for the 10 GbE CVM and user VM traffic.
In our testing we added a third switch, vSwitchMgmt, for dedicated 1 GbE management
connections. The third vSwitch is optional; choose this design if you want to separate
management traffic onto a different uplink NIC team. The following diagram illustrates the
internal host layout as tested.
Using the VMware VDS VMM domain integration, ACI automatically provisions the EPGs
created in the APIC in the virtual switch as port groups in vCenter. The APIC configures
the port group from the dynamic or static VLAN pool associated with the VMM domain.
With dynamic pools, the APIC selects an available VLAN to assign to the EPG. With a
static pool, the admin selects the specific VLAN when selecting the VMM domain in the
EPG.
When you use a VMM domain, you don't need to have a physical domain with static EPG
port bindings. Instead, when you create the VMM domain, select the AEP you want to
associate it with from the Associated Attachable Entity Profile dropdown menu.
In our example, we created two EPGs—epg-ib-mgmt and epg-NTNX-Web—tied to the
VMM domain. The EPG epg-ib-mgmt represents the CVMs and hypervisors, while epg-
NTNX-Web represents the guest VMs in the tests. Creating these EPGs in ACI causes
the APIC to create port groups in vCenter with names based on the combination of ACI
tenant, application, and EPG. The following figure shows how the application profile ties
together the VMM domain and EPG for epg-NTNX-Web.
The following figure shows the configuration of the VDS in each hypervisor host and the
port groups named aci_mgmt|app-prod-ib-mgmt|epg-ib-mgmt and aci_mgmt|app-NTNX-
Web|epg-NTNX-Web that the APIC automatically configured on the VDS. Each EPG has
its own port group.
Note: If you choose to migrate from the VSS to the VDS, follow Nutanix KB 3289. Ensure that the internal CVM adapter remains in the port group svm-iscsi-pg on the default vSwitchNutanix by selecting Do not migrate for the adapter. This step can be easy to overlook when using the vSphere migration wizard.
To avoid disconnecting any nodes in the cluster, ensure that you're only migrating
one physical adapter at a time from the VSS to the VDS. Place the CVM and primary
VMkernel adapter in the same EPG (epg-ib-mgmt in our example) by assigning them to
the same port group in vCenter. Connect user VMs to port group EPGs, such as epg-
NTNX-Web. Optionally, use a second VMkernel adapter created in a VSS (vSwitchMgmt
in our example) to provide a backup connection to the ESXi host while migrating to the
VDS. The virtual switch port groups that the APIC creates should follow the Nutanix VDS
best practice of using Route Based on Physical NIC Load for load balancing.
Two-Uplink Configuration
The previous diagram shows a four-uplink configuration, with a second pair of uplink
adapters used as a management backup in case of communication failures on the ACI-
controlled VDS. If you don't have or want four adapters, you can build a two-uplink
configuration using Cisco ACI preprovision resolution immediacy for the EPG containing
the ESXi VMkernel port and CVM. The preprovision option causes the ACI fabric to
statically provision the VLAN for the Nutanix CVM and ESXi host on the leaf switch
ports where the AEP is associated. Using the preprovision option on the EPG avoids the circular dependency in which ACI waits to hear from vCenter before provisioning the port, but the host can't reach vCenter until ACI provisions the port.
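The following Python sketch shows one way to set this association programmatically by posting an fvRsDomAtt object with resImedcy set to pre-provision. The EPG and VMM domain distinguished names and the credentials are placeholders based on the example names in this document.

```python
# Minimal sketch: associate the CVM/VMkernel EPG with the VMware VMM domain
# using pre-provision resolution immediacy for the two-uplink design.
# Assumptions: APIC address, credentials, EPG DN, and VMM domain DN are placeholders.
import requests

APIC = "https://apic.example.com"
EPG_DN = "uni/tn-aci_mgmt/ap-app-prod-ib-mgmt/epg-epg-ib-mgmt"  # placeholder
VMM_DOM_DN = "uni/vmmp-VMware/dom-vmm-nutanix"                  # placeholder VMM domain

session = requests.Session()
session.verify = False
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

payload = {"fvRsDomAtt": {"attributes": {
    "tDn": VMM_DOM_DN,
    "resImedcy": "pre-provision",  # program the VLAN on the leaf before vCenter reports the host
    "instrImedcy": "immediate",
}}}
resp = session.post(f"{APIC}/api/mo/{EPG_DN}.json", json=payload)
resp.raise_for_status()
print("VMM domain association created:", resp.status_code)
```

With pre-provision, the VLAN stays programmed on the leaf ports associated with the AEP regardless of vCenter connectivity, so the host and CVM never lose their path to vCenter during provisioning.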
7. Conclusion
Running the Cisco ACI network fabric with Nutanix creates a compute and storage
infrastructure that puts applications first. Whether you're using the native Nutanix
hypervisor, AHV, with the default OVS, or ESXi with the VSS or VDS, Cisco ACI
provides a high-performance, easy-to-manage, and scalable leaf-spine architecture for
building a web-scale Nutanix Cloud Platform. Based on our extensive testing of these
configurations, we provide a best practices checklist in the appendix.
Nutanix eliminates the need to focus on storage and compute infrastructure configuration
by providing an invisible cluster of resources to applications. Similarly, the Cisco ACI
fabric simplifies network setup using policy attuned to application requirements to
automate individual switch configuration. In addition to physical network and L4–7 device
automation, the ACI VMM integration for Nutanix AHV and vSphere extends the network
fabric into the virtual switch, allowing administrators to stop provisioning VLANs manually
on each node and leaf and surpass the existing 4,000 VLAN limit for building security
zones.
For feedback or questions, contact us using the Nutanix NEXT Community forums.
8. Appendix
Best Practices Checklist
AHV
• When using servers behind intermediate switches, such as Cisco UCS fabric
interconnects or blade switches, ensure the correct VLANs are provisioned on the
intermediate switches.
› In the case of UCS fabric interconnects, provision VLANs automatically with the ExternalSwitch app.
• When using the ACI VMM domain for Nutanix, use the default resolution immediacy:
Pre-provision.
• When using an ACI physical domain for AHV, create one EPG for the AHV host and
CVM.
Note: Create additional EPGs for each AHV guest VM network.
• Use the default active-backup bond mode unless you need the bandwidth of multiple
network adapters.
Note: Use individual ports with a static binding and don't use a port channel policy for active-backup.
ESXi
• If four network adapters are available and you need out-of-band management, create
a second VMkernel adapter in a VSS to provide a management connection to vCenter.
• If only two network adapters are available or the CVM and VMkernel adapters are in
a VMM domain EPG, set the CVM and VMkernel EPG resolution immediacy to Pre-
provision.
• Use the Route Based on Physical NIC Load load balancing method in the VDS and
individual ports with a static binding.
• If you need LACP on ESXi, use an LACP Active port channel policy and a vPC.
› Remove the Suspend Individual configuration from the port channel policy to
enable LACP fallback.
› Don't use a MAC pinning or Static Channel - Mode On port channel policy in the
vPC policy.
• Don't alter the default vSwitchNutanix.
• Use AVE switching mode for microsegmentation in the hypervisor for guest VMs if desired.
Note: Review the Cisco End of Sale and End of Life notification if using AVE.
References
1. Cisco ACI Design Guide
2. Cisco ACI Virtual EoS EoL
3. KB 3289 Migrate to ESXi Virtual Switch
4. Cisco ACI ExternalSwitch App
5. Cisco Compute Hyperconverged with Nutanix Field Installation Guide
6. Cisco ACI and Nutanix AHV VMM Integration
About Nutanix
Nutanix offers a single platform to run all your apps and data across multiple clouds
while simplifying operations and reducing complexity. Trusted by companies worldwide,
Nutanix powers hybrid multicloud environments efficiently and cost effectively. This
enables companies to focus on successful business outcomes and new innovations.
Learn more at Nutanix.com.
List of Figures
Figure 1: Cisco ACI Component Overview