BRKARC-3601
Nexus 7000 / Nexus 7700 Architecture and Deployment Models
Satish Kondalam
Technical Marketing Engineer
Session Abstract
This session will discuss the foundations of the Nexus 7000 and 7700 series
switches, including chassis, I/O modules, and NX-OS software. Examples will
show common use-cases for different module types and considerations for
module interoperability. The focus will then shift to key platform capabilities and
features – including VPC, OTV, VDCs, and others – along with real-world designs
and deployment models.
Session Goals
• To provide an understanding of the Nexus 7000 / Nexus 7700 switching
architecture, which provides the foundation for flexible, scalable Data Centre
designs
• To examine key Nexus 7000 / Nexus 7700 design building blocks and illustrate
common design alternatives leveraging those features and functionalities
• To see how the Nexus 7000 / Nexus 7700 platform fits into emerging
technologies and architectures
Agenda
• Introduction to Nexus 7000 / Nexus 7700
• Nexus 7000 / Nexus 7700 Architecture
• Chassis, Supervisor Engines and NX-OS software, I/O modules (M2/F2E/F3)
• I/O Module Interoperability
• Fabric Architecture
• Hardware Forwarding
• Generic Designs with Nexus 7000 / Nexus 7700
• STP/VPC, L4-7 services integration, VDCs, VRF/MPLS VPNs, OTV
Introduction to Nexus 7000 / Nexus 7700 Platform
Data Centre-class Ethernet switches designed to deliver high performance, high
availability, system scale, and investment protection
Designed for a wide range of Data Centre deployments, focused on feature-rich
10G/40G/100G density and performance
[Figure: platform building blocks – chassis, supervisor engines, fabrics, and I/O modules.]
Nexus 7000 / Nexus 7700 – Common Foundation
Nexus 7000: general-purpose DC switching with 10/40/100G. Nexus 7700: targeted at dense 40G/100G deployments. Both families share a common foundation.
[Figure: chassis lineup, front/side/back views (including Nexus 7718 and Nexus 7710), with form factors of 25RU, 21RU, 14RU, 9RU, 7RU, and 3RU; the 3RU chassis runs one supervisor engine and two power supplies, with no fabric modules.]
Supervisor Engine 2 / 2E
• Provides all control plane and management functions
Supervisor Engine 2 (Nexus 7000): base performance – one quad-core 2.1GHz CPU with 12GB DRAM
Supervisor Engine 2E (Nexus 7000 / Nexus 7700): high performance – two quad-core 2.1GHz CPUs with 32GB DRAM
N7K-SUP2/N7K-SUP2E
[Figure: supervisor block diagram – main CPU (second main CPU on Sup2E only) with DRAM, I/O controller, 2GB eUSB bootflash, 32MB NVRAM, 1GE EOBC and 1GE inband into the fabric ASIC, plus dedicated switched paths for EOBC and central arbitration (central arbiter, VOQs).]
Nexus 7000 M2 I/O Modules
N7K-M224XP-23L / N7K-M206FQ-23L / N7K-M202CF-22L
• 10G / 40G / 100G M2 I/O modules
[Figure: M2 module block diagram – replication engines and LinkSec-capable MAC blocks, each configurable as 12 x 10G, 3 x 40G, or 1 x 100G, behind the front-panel ports.]
Nexus 7000 F2E Module Architecture
• Nexus 7000: supported in NX-OS release 6.1(2) and later
• Nexus 7700: supported in NX-OS release 6.2(2) and later
[Figure: module block diagram – LC CPU (inband, EOBC), arbitration aggregator, and fabric ASIC, fronting 12 SOCs of 4 x 10G each that drive 48 front-panel SFP/SFP+ ports.]
Nexus 7700 F2E Module Architecture
N77-F248XP-23E
[Figure: module block diagram – LC CPU, arbitration aggregator, and two fabric ASICs (EOBC, links to fabric modules and central arbiters), fronting 12 SOCs of 4 x 10G each that drive 48 front-panel SFP/SFP+ ports.]
Nexus 7000 F3 I/O Modules
N7K-F348XP-25 / N7K-F312FQ-25 / N7K-F306CK-25
• 10G / 40G / 100G F3 I/O modules
• Share common hardware architecture
• SOC-based forwarding engine design – 6 independent SOC ASICs per module
• F3 closes the F/M feature gap
[Figure: N7K-F348XP-25 block diagram – FSA CPU (x6 inband links via 1G switch), arbitration aggregator, and fabric ASIC, driving 48 front-panel SFP/SFP+ ports.]
Fabric Services Accelerator (FSA) for F3
• High-performance module CPU with on-board acceleration engines
• 6Gbps inband connectivity from SOCs to FSA (6 x 1Gbps module inband)
• Multi-Mpps packet processing
• 2 x 2GB dedicated DRAM
Nexus 7000 F3 12-Port 40G Module Architecture
N7K-F312FQ-25
[Figure: block diagram – FSA CPU (x6 inband links via 1G switch), arbitration aggregator, and fabric ASIC (EOBC, links to fabric modules and central arbiters), driving 12 front-panel QSFP+ ports.]
Nexus 7000 F3 6-Port 100G Module Architecture
N7K-F306CK-25
[Figure: block diagram – FSA CPU (x6 inband links via 1G switch), arbitration aggregator, and fabric ASIC, driving 6 front-panel CPAK ports.]
Nexus 7700 F3 48-Port 1G/10G Module Architecture
N77-F348XP-23
[Figure: block diagram – FSA CPU (x6 inband links via 1G switch), arbitration aggregator, and two fabric ASICs, driving 48 front-panel SFP/SFP+ ports.]
Nexus 7700 F3 24-Port 40G Module Architecture
N77-F324FQ-25
[Figure: block diagram – FSA CPU (x12 inband links), arbitration aggregator, and two fabric ASICs, fronting 12 SOCs of 2 x 40G each that drive 24 front-panel QSFP+ ports.]
Nexus 7700 F3 12-Port 100G Module Architecture
N77-F312CK-26
[Figure: block diagram – FSA CPU (x12 inband links), arbitration aggregator, and two fabric ASICs, fronting 12 SOCs of 1 x 100G each that drive 12 front-panel CPAK ports.]
F3 Module 40G and 100G Flows
• Virtual Queuing Index (VQI) sustains a 10G, 40G, or 100G traffic flow based on the destination interface type – one VQI per 10G, 40G, or 100G egress interface
• No single-flow limit – full 40G/100G flow support
[Figure: ingress modules map destination VQIs across the fabrics to the egress interfaces.]
I/O Module Interoperability
• M2 modules host SVIs and other L3 functions
• From the F2E perspective, the router MAC is reachable via the M2 modules
[Figure: host 10.1.10.100 in VLAN 10 attaches at L2 through the F2E modules; its SVI and other L3 functions live on the M2 modules.]
[Table: VDC module-mix feature matrix – an F3-only VDC supports all listed features at F3 table sizes; an F3 + M2 VDC supports all but two, also at F3 table sizes.]
• F2E + F3 VDC
  • Introduction of 40G/100G into existing 10G environments
  • Migration to larger table sizes
  • Transition to additional features/functionality (OTV, MPLS, VXLAN, etc.)
• M2 + F3 VDC (see the VDC type sketch below)
  • Introduce higher 1G/10G/40G/100G port-density while maintaining feature-set
  • Avoid the proxy-forwarding model for module interoperability
  • Migrate to 40G/100G interfaces with full-rate flow capability
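Which module families may coexist is declared per VDC via its VDC type. A minimal sketch, assuming NX-OS 6.2(x)-style module-type keywords (verify the exact keywords for your release; VDC names are illustrative):

  ! VDC admitting both M2 (m2xl keyword) and F3 modules
  vdc CORE-MIXED
    limit-resource module-type m2xl f3
  ! VDC for an F2E-to-F3 migration
  vdc MIGRATION
    limit-resource module-type f2e f3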
Fabric Architecture
Fabric 2 module part numbers: N7K-C7018-FAB-2 / N7K-C7010-FAB-2 / N7K-C7009-FAB-2
Multistage Crossbar
Nexus 7000 / Nexus 7700 implement a 3-stage crossbar switch fabric
• Stages 1 and 3 on I/O modules
• Stage 2 on fabric modules
[Figure: 3-stage crossbar. Stage 2 is the fabric modules: Nexus 7000 uses five fabric modules at 110G (2 x 55G) per fabric per slot, for 550G per slot; Nexus 7700 uses six fabric modules at 220G (4 x 55G) per fabric per slot, for 1.32T per slot. Stages 1 and 3 are the fabric ASICs on the ingress and egress I/O modules.]
I/O Module Capacity – Nexus 7000
• Fabric 2 modules deliver 110Gbps per slot each; per-slot bandwidth scales 110 / 220 / 330 / 440 / 550Gbps as fabric modules 1–5 are added
• One fabric: any port can pass traffic to any other port in the VDC
• Three fabrics: 240G M2 module reaches its maximum bandwidth (240G local fabric)
• Five fabrics: 480G F2E/F3 module reaches its maximum bandwidth (480G local fabric) – see the worked check below
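These thresholds follow from the per-fabric arithmetic (a worked check using the 110Gbps-per-fabric-per-slot figure above):

  3 fabrics x 110G = 330G >= 240G M2 local fabric     → M2 saturated with three fabrics
  5 fabrics x 110G = 550G >= 480G F2E/F3 local fabric → F2E/F3 saturated with five fabrics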
What About Nexus 7004?
• Nexus 7004 has no fabric modules
• Each I/O module has a local fabric with 10 available fabric channels
• I/O modules connect “back-to-back” via 8 fabric channels
• Two fabric channels “borrowed” to connect supervisor engines
[Figure: I/O modules in slots 3 and 4 (M2/F2E/F3) connect back-to-back through their local fabric ASICs; 2 x 55G fabric channels connect to the supervisor engines.]
I/O Module Capacity – Nexus 7700
• Per-slot bandwidth scales 220 / 440 / 660 / 880 / 1100 / 1320Gbps as fabric modules 1–6 are added (220Gbps per fabric per slot)
• Five fabrics: 960G F3 40G module reaches its maximum bandwidth
• Six fabrics: 1.2T F3 100G module reaches its maximum bandwidth (two local fabric ASICs, 1.2T)
What About Nexus 7702?
• Nexus 7702 has no fabric modules
• Single I/O module – all traffic locally switched
• Two fabric channels connect to supervisor engine
[Figure: the F3 module's two local fabric ASICs connect to the supervisor engine's fabric ASIC via 1 x 55G fabric channel each.]
Hardware Forwarding
Forwarding engine capabilities:
• Layer 3 IPv4/IPv6 unicast and multicast
• MPLS/VPLS/EoMPLS
• Ingress and egress Netflow (full and sampled)
• Ingress and egress policing
[Figure: forwarding engine pipeline. Ingress lookup: ACL/QOS/SNF classification (CL TCAM), ingress MAC table lookup (L2 lookup pre-L3), Layer 3 FIB lookup with port-channel hash result, and ingress policing. Egress lookup: ACL/QOS classification (CL TCAM), egress MAC lookup (L2 lookup post-L3), and egress policing.]
Generic Designs with Nexus 7000 / Nexus 7700
Foundational:
• Spanning Tree (RSTP+/MST)
• Virtual Port Channel (VPC)
• Virtual Device Context (VDC)
• Virtual Routing and Forwarding (VRF) and MPLS VPNs
Innovative:
• Remote Integrated Service Engine (RISE)
• Overlay Transport Virtualisation (OTV)
STP → Virtual Port Channel (VPC)
• Eliminates STP blocked ports, leveraging all available uplink bandwidth and minimising reliance on STP
• Provides active-active HSRP
• Works seamlessly with current network designs/topologies
• Works with any module type (M2/F2E/F3)
• Most customers have taken this step (see the configuration sketch below)
[Figure: without VPC, STP blocks one uplink at the L3/L2 boundary; with VPC, no blocking ports.]
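A minimal VPC sketch for one of the two peers; the addresses, domain number, and port-channel numbers are illustrative assumptions:

  feature vpc
  vpc domain 10
    ! keepalive runs out-of-band over mgmt0 in this sketch
    peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
  interface port-channel1
    switchport
    switchport mode trunk
    vpc peer-link             ! link between the two VPC peers
  interface port-channel20
    switchport
    switchport mode trunk
    vpc 20                    ! same VPC number on both peers

The downstream device sees port-channel 20 as one ordinary port channel, even though its member links terminate on both peers, so no uplink is STP-blocked.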
Collapsed Core/Aggregation
• Nexus 7000 / Nexus 7700 as Data Centre collapsed core/aggregation – a core1/core2 pair at the L3/L2 boundary
• Provides high-density, high-performance 10G / 40G / 100G
Data Centre Core + Aggregation
• Nexus 7000 / Nexus 7700 in both Data Centre aggregation and core
• Same module-type considerations as collapsed core – density, features, scale
• Scales well, but scoping of failure domains imposes some restrictions
• VLAN extension / workload mobility options limited
[Figure: core1/core2 above multiple aggregation pairs (agg1/agg2 … aggX/aggY), with the L3/L2 boundary at aggregation.]
L4-7 Services Integration – VPC Connected
• VPC designs well-suited for L4-7 services integration – a pair of aggregation devices makes service appliance connections simple
[Figure: VPC-attached service appliance at the L3 boundary; client → real-server traffic flow.]
Virtual Device Contexts (VDCs)
[Figure: each VDC (VDC 1 … VDC n) runs its own network stack (L2 / IPv4 / IPv6) on top of the shared infrastructure and kernel.]
VDC Interface Allocation
• Physical interfaces assigned on a per-VDC basis, from the default/admin VDC (see the allocation sketch below)
• All subsequent interface configuration performed within the assigned VDC
• A single interface cannot be shared across multiple VDCs
• VDC type (“limit-resource module-type”) determines the types of interfaces allowed in the VDC
• VDC type driven by operational goals and/or hardware restrictions, e.g.:
  • Mix M2 and F2E in the same VDC to increase MAC scale in FabricPath
  • Restrict a VDC to F3 only to avoid lowest-common-denominator behaviour
  • Cannot mix M1 and F3 in the same VDC
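A minimal sketch of creating a VDC and allocating interfaces, run from the default/admin VDC; the VDC name and interface range are illustrative assumptions, not from the session:

  ! create an F3-only VDC and hand it a block of ports
  vdc AGG-BLUE
    limit-resource module-type f3
    allocate interface Ethernet3/1-8
  ! all further interface configuration happens inside the new VDC
  switchto vdc AGG-BLUE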
VDC Interface Allocation – M2
• Allocate any interface to any VDC
• But be aware of shared hardware resources – backend ASICs may be shared by several VDCs
• Best practice: allocate an entire module to one VDC to minimise shared hardware resources
[Figure: M2-10G and M2-40G modules with interfaces spread across VDC 1–4.]
VDC Interface Allocation – F2E / F3 Modules
• Allocation on port-group boundaries – aligns ASIC resources to VDCs
• Port-group size varies depending on module type (see the allocation sketch below):
  • F2E: 4-port port-group
  • F3-10G: 8-port port-group
  • F3-40G: 2-port port-group
  • F3-100G: 1-port port-group
[Figure: port-groups on each module type mapped to VDC 1–4.]
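Because allocation is enforced on port-group boundaries, interface ranges must land on port-group multiples. A sketch for an F3-10G module with its 8-port port-groups (VDC names and slot numbers are illustrative):

  ! Ethernet4/1-8 is one port-group, Ethernet4/9-16 the next
  vdc AGG-BLUE
    allocate interface Ethernet4/1-8
  vdc AGG-GREEN
    allocate interface Ethernet4/9-16
  ! allocating only part of a port-group (e.g. Ethernet4/1-4) is not permitted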
Communicating Between VDCs
• Must use front-panel ports to communicate between VDCs
• No backplane inter-VDC communication
[Figure: front-panel inter-VDC interconnect options.]
Collapsed Core Design with VDCs
• Maintain administrative segmentation while consolidating network infrastructure
• Maintain fault isolation between zones (independent L2 and routing processes per zone)
• Firewalling between zones facilitated by the VDC port-membership model
[Figure: core1/core2 each carved into Admin Zone 1 (VDC 1), Admin Zone 2 (VDC 2), and Admin Zone 3 (VDC 3), with per-zone aggregation pairs (agg1/agg2 … aggX/aggY) below the L3/L2 boundary.]
VRF / MPLS VPNs
• Provides network virtualisation – one physical network supporting multiple virtual networks
• While maintaining security/segmentation and access to shared services
• VRF-lite segmentation for simple/limited virtualisation environments (see the sketch below)
• MPLS L3VPN for larger-scale, more flexible deployments
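A minimal VRF-lite sketch; the tenant name, interface, and addresses are illustrative assumptions:

  vrf context TENANT-A
    ! per-tenant static default route towards an assumed upstream hop
    ip route 0.0.0.0/0 10.1.1.254
  interface Ethernet3/10
    vrf member TENANT-A       ! placing the port in the VRF clears its L3 config
    ip address 10.1.1.1/24
    no shutdown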
MPLS Layer 3 VPN – Secure Multi-Tenant Data Centre
Requirement:
• Secure segmentation for the hosted / enterprise data centre
• Direct PE-PE or PE-P-PE interconnections in the core (see the PE sketch below)
[Figure: Internet and Campus attach to an MPLS core (P routers); PE functions sit at the Pod 1 / Pod 2 boundaries, with Site 1 and Site 2 Layer 2 adjacent within the pods.]
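A skeletal NX-OS PE-side sketch; the AS number, RD/route-target values, and neighbor and interface addresses are illustrative assumptions (an MPLS-capable module and licence are required):

  install feature-set mpls
  feature-set mpls
  feature mpls l3vpn
  feature mpls ldp
  feature bgp
  vrf context TENANT-A
    rd 65000:10
    address-family ipv4 unicast
      route-target both 65000:10
  interface Ethernet3/1            ! core-facing link towards a P router
    ip address 10.0.12.1/30
    mpls ip                        ! enable LDP on the link
  router bgp 65000
    neighbor 10.255.0.2 remote-as 65000
      update-source loopback0
      address-family vpnv4 unicast
        send-community extended
    vrf TENANT-A
      address-family ipv4 unicast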
OTV VDC Requirement
• Current limitation – an SVI (for VLAN termination at L3) and an OTV overlay interface (for VLAN extension over OTV) cannot exist in the same VDC
• Typical designs move OTV to a separate VDC, or to a separate switch (e.g. Nexus 7702) – see the overlay sketch below
[Figure: at each site, SVI routing and the OTV overlay run in separate VDCs/switches, with L2 VLANs extended between sites over OTV.]
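A minimal OTV-VDC sketch; the site identifier, site VLAN, multicast groups, VLAN range, and join interface are illustrative assumptions (the control/data groups assume a multicast-enabled transport):

  feature otv
  otv site-identifier 0x1          ! unique per site
  otv site-vlan 99                 ! used for adjacency within the site
  interface Overlay1
    otv join-interface Ethernet1/1 ! L3 uplink towards the DCI transport
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/28
    otv extend-vlan 100-110        ! VLANs carried over the overlay
    no shutdown

The SVIs for the extended VLANs live in the adjacent routing VDC; the OTV VDC handles those VLANs at Layer 2 only.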