
Nexus 7000/7700

Architecture and
Deployment Models
Satish Kondalam
Technical Marketing Engineer
Session Abstract
This session will discuss the foundations of the Nexus 7000 and 7700 series
switches, including chassis, I/O modules, and NX-OS software. Examples will
show common use-cases for different module types and considerations for
module interoperability. The focus will then shift to key platform capabilities and
features – including VPC, OTV, VDCs, and others – along with real-world designs
and deployment models.
Session Goals
• To provide an understanding of the Nexus 7000 / Nexus 7700 switching
architecture, which provides the foundation for flexible, scalable Data Centre
designs
• To examine key Nexus 7000 / Nexus 7700 design building blocks and illustrate
common design alternatives leveraging those features and functionalities
• To see how the Nexus 7000 / Nexus 7700 platform fits into emerging
technologies and architectures
Agenda
• Introduction to Nexus 7000 / Nexus 7700
• Nexus 7000 / Nexus 7700 Architecture
• Chassis, Supervisor Engines and NX-OS software, I/O modules (M2/F2E/F3)
• I/O Module Interoperability
• Fabric Architecture
• Hardware Forwarding
• Generic Designs with Nexus 7000 / Nexus 7700
• STP/VPC, L4-7 services integration, VDCs, VRF/MPLS VPNs, OTV
Introduction to Nexus 7000 / Nexus 7700 Platform
Data centre-class Ethernet switches designed to deliver high performance, high
availability, system scale, and investment protection
Designed for a wide range of Data Centre deployments, focused on feature-rich
10G/40G/100G density and performance
Key building blocks: Chassis, Supervisor Engines, I/O Modules, Fabrics
Nexus 7000 / Nexus 7700 – Common Foundation
• Nexus 7000: general-purpose DC switching with 10/40/100G
• Nexus 7700: targeted at dense 40G/100G deployments

Common foundation:
• Same release vehicles, versioning, and feature-sets
• Common configuration model
• Common operational model
• Common fabric ASICs (Fab2) and architecture
• Same central arbitration model
• Same VOQ/QOS model
• Identical forwarding ASICs (F2E, F3)
• Consistent hardware feature sets
• Consistent hardware scale
Agenda
• Introduction to Nexus 7000 / Nexus 7700

• Nexus 7000 / Nexus 7700 Architecture


• Chassis, Supervisor Engines and NX-OS software, I/O modules (M2/F2E/F3)
• I/O Module Interoperability
• Fabric Architecture
• Hardware Forwarding

• Generic Designs with Nexus 7000 / Nexus 7700


• STP/VPC, L4-7 services integration, VDCs, VRF/MPLS VPNs, OTV
Nexus 7000 Chassis Family
• Nexus 7018 – 25RU (N7K-C7018)
• Nexus 7010 – 21RU (N7K-C7010)
• Nexus 7009 – 14RU (N7K-C7009)
• Nexus 7004 – 7RU (N7K-C7004)
(Slide shows front, side, and rear views of each chassis)
Nexus 7700 Chassis Family
• Nexus 7718 – 26RU (N77-C7718)
• Nexus 7710 – 14RU (N77-C7710)
• Nexus 7706 – 9RU (N77-C7706)
(Slide shows front, side, and rear views of each chassis)
Nexus 7702 Chassis
• 3RU chassis (N77-C7702), supported in NX-OS 7.2 and later
• One F3-series I/O module and one supervisor engine
• One fan tray (3 fans), two power supplies
• No fabric modules!
(Slide shows front and rear views of the chassis)
Supervisor Engine 2 / 2E
• Provides all control plane and management functions

Supervisor Engine 2 (Nexus 7000): base performance – one quad-core 2.1GHz CPU with 12GB DRAM
Supervisor Engine 2E (Nexus 7000 / Nexus 7700): high performance – two quad-core 2.1GHz CPUs with 32GB DRAM
(Part numbers: N7K-SUP2 / N7K-SUP2E / N77-SUP2E)

• Connects to fabric via 1G inband interface
• Interfaces with I/O modules via 1G switched EOBC
• Onboard central arbiter ASIC – controls access to fabric bandwidth via dedicated arbitration path to I/O modules

Front panel: ID and status LEDs, console port, management Ethernet, USB host / log flash / expansion flash
Supervisor Engine 2 / 2E Architecture
(Block diagram) Main CPU: quad-core 2.1GHz with 12GB (Sup2) or 32GB (Sup2E) DRAM, with a second quad-core CPU on Sup2E only. The I/O controller provides the console, mgmt0, and USB device / log flash / expansion ports; 2GB eUSB bootflash and 32MB NVRAM attach to the main CPU. A 1GE inband interface connects the CPU to the onboard fabric ASIC toward the fabric modules; a dedicated switched 1GE EOBC connects to the module CPUs; the central arbiter ASIC reaches the module VOQs over a dedicated arbitration path.
Nexus 7000 / 7700 I/O Module Families
• M1 – 1G and 10G
• M2 – 10G / 40G / 100G
• F1 – 10G
• F2 – 10G
• F2E – 10G
• F3 – 10G / 40G / 100G
F3 closes the F/M feature gap!
Nexus 7000 M2 I/O Modules
N7K-M224XP-23L / N7K-M206FQ-23L / N7K-M202CF-22L
• 10G / 40G / 100G M2 I/O modules
• Share common hardware architecture – multi-chipset
• Two integrated forwarding engines (120Mpps)
• Layer 2 / Layer 3 forwarding with L3/L4 services (ACL/QOS) and advanced features (MPLS/OTV/GRE etc.)
• Large forwarding tables (900K FIB / 128K ACL)
Module Port Density Optics Bandwidth


M2 10G 24 x 10G (plus Nexus 2000 FEX support) SFP+ 240G
M2 40G 6 x 40G (or up to 24 x 10G via breakout) QSFP+ 240G
M2 100G 2 x 100G CFP 200G
Nexus 7000 M2 I/O Module Architecture
N7K-M224XP-23L / N7K-M206FQ-23L / N7K-M202CF-22L
(Block diagram) LC CPU on the EOBC; fabric ASIC toward the fabric modules; arbitration aggregator toward the central arbiters. Two forwarding engines, each with VOQs and two replication engines, feed LinkSec-capable MACs arranged as 12 x 10G, 3 x 40G, or 1 x 100G toward the front panel ports.
7000: Supported in NX-OS release 6.1(2) and later
7700: Supported in NX-OS release 6.2(2) and later

Nexus 7000 / Nexus 7700 F2E I/O Modules


N7K-F248XP-25E / N7K-F248XT-25E / N77-F248XP-23E
• 48-port 1G/10G with SFP/SFP+ transceivers
• 480G full-duplex fabric connectivity
• System-on-chip (SOC) forwarding engine design – 12 independent SOC ASICs
• Layer 2 / Layer 3 forwarding with L3/L4 services (ACL/QOS)
• Interoperability with M1/M2 in Layer 2 mode on Nexus 7000 – proxy routing for inter-VLAN/L3 traffic
Nexus 7000 F2E Module Architecture
N7K-F248XP-25E / N7K-F248XT-25E
(Block diagram) LC CPU on the EOBC with an inband path to the SOCs; fabric ASIC toward the fabric modules; arbitration aggregator toward the central arbiters. Twelve 4 x 10G SOCs serve the 48 front panel ports (SFP/SFP+).
Nexus 7700 F2E Module Architecture
N77-F248XP-23E
(Block diagram) Same layout as the Nexus 7000 F2E module, but with two fabric ASICs toward the Nexus 7700 fabric modules. Twelve 4 x 10G SOCs serve the 48 front panel ports (SFP/SFP+).
Nexus 7000 F3 I/O Modules
N7K-F348XP-25 / N7K-F312FQ-25 / N7K-F306CK-25
• 10G / 40G / 100G F3 I/O modules
• Share common hardware architecture
• SOC-based forwarding engine design – 6 independent SOC ASICs per module
• Layer 2 / Layer 3 forwarding with L3/L4 services (ACL/QOS) and advanced features (MPLS/OTV/GRE/VXLAN etc.)
• Require Supervisor Engine 2 / 2E

Module Port Density Optics Bandwidth


F3 10G 48 x 1/10G (plus Nexus 2000 FEX support) SFP+ 480G
F3 40G 12 x 40G (or up to 48 x 10G via breakout) QSFP+ 480G
F3 100G 6 x 100G CPAK 550G
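The 10G breakout noted in the table is configured globally rather than per interface. A sketch, assuming the interface breakout command available in later NX-OS releases (slot and port numbers are hypothetical):

! Split the first four 40G ports on module 3 into 4 x 10G each
interface breakout module 3 port 1-4 map 10g-4x
! The resulting interfaces appear as Ethernet3/1/1-4, Ethernet3/2/1-4, etc.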
Nexus 7700 F3 I/O Modules
N77-F348XP-23 / N77-F324FQ-25 / N77-F312CK-26
• 10G / 40G / 100G F3 I/O modules
• Share common hardware architecture
• SOC-based forwarding engine design – 6 independent SOC ASICs per 10G module, 12 independent SOC ASICs per 40G/100G module
• Layer 2 / Layer 3 forwarding with L3/L4 services (ACL/QOS) and advanced features (MPLS/OTV/GRE/VXLAN etc.)

Module Port Density Optics Bandwidth
F3 10G 48 x 1/10G (plus Nexus 2000 FEX support) SFP+ 480G
F3 40G 24 x 40G (or up to 76 x 10G + 5 x 40G via breakout) QSFP+ 960G
F3 100G 12 x 100G CPAK 1.2T
Nexus 7000 F3 48-Port 1G/10G Module Architecture
N7K-F348XP-25
(Block diagram) FSA CPU on the EOBC, with a 1G switch providing 6 x 1G inband connectivity to the SOCs and a path to the arbitration aggregator; one fabric ASIC toward the fabric modules. Six 8 x 10G SOCs serve the 48 front panel ports (SFP/SFP+).
Fabric Services Accelerator (FSA) for F3
• High-performance module CPU with on-board acceleration engines
• 6Gbps inband connectivity from SOCs to FSA
• Multi-Mpps packet processing
• 2 x 2GB dedicated DRAM
• Performance/scale boost for distributed fabric services, including sampled Netflow and BFD (roadmap)
• Other potential applications include distributed ARP/ping processing, data plane packet analysis (Wireshark), network probing, etc.
Nexus 7000 F3 12-Port 40G Module Architecture
N7K-F312FQ-25
(Block diagram) Same layout as the 48-port module: FSA CPU on the EOBC, 1G switch for inband, arbitration aggregator, one fabric ASIC. Six 2 x 40G SOCs serve the 12 front panel ports (QSFP+).
Nexus 7000 F3 6-Port 100G Module Architecture
N7K-F306CK-25
(Block diagram) Same layout, with six 1 x 100G SOCs serving the 6 front panel ports (CPAK).
Nexus 7700 F3 48-Port 1G/10G Module Architecture
N77-F348XP-23
(Block diagram) As per the Nexus 7000 version, but with two fabric ASICs toward the Nexus 7700 fabric modules. Six 8 x 10G SOCs serve the 48 front panel ports (SFP/SFP+).
Nexus 7700 F3 24-Port 40G Module Architecture
N77-F324FQ-25
(Block diagram) FSA CPU on the EOBC with 12 x 1G inband connectivity to the SOCs, arbitration aggregator, and two fabric ASICs. Twelve 2 x 40G SOCs serve the 24 front panel ports (QSFP+).
Nexus 7700 F3 12-Port 100G Module Architecture
N77-F312CK-26
(Block diagram) Same layout as the 24-port 40G module, with twelve 1 x 100G SOCs serving the 12 front panel ports (CPAK).
F3 Module 40G and 100G Flows
• A Virtual Queuing Index (VQI) sustains a 10G, 40G, or 100G traffic flow based on the destination interface type – one VQI per egress interface
• No single-flow limit – full 40G/100G flow support
(Diagram: ingress modules select destination VQIs across the fabrics toward 10G, 40G, and 100G egress interfaces)
Agenda
• Introduction to Nexus 7000 / Nexus 7700

• Nexus 7000 / Nexus 7700 Architecture


• Chassis, Supervisor Engines and NX-OS software, I/O modules (M2/F2E/F3)
• I/O Module Interoperability
• Fabric Architecture
• Hardware Forwarding

• Generic Designs with Nexus 7000 / Nexus 7700


• STP/VPC, L4-7 services integration, VDCs, VRF/MPLS VPNs, OTV
I/O Module Interoperability
• General module interoperability rule: “+/- 1 generation” in the same Virtual Device Context (VDC)
• Layer 3 forwarding behaviour is the key difference between interop models:
  • “Proxy Forwarding”
  • “Ingress Forwarding” with Lowest Common Denominator

Proxy Forwarding Model
M2 + F2E VDC
• F2E modules run in pure Layer 2 mode – all L3 functions disabled
• M2 modules host SVIs and other L3 functions
• From the F2E perspective, the Router MAC is reachable via the M2 modules (MAC table: rtr-mac → M2 modules)
• All packets destined to the Router MAC are forwarded through the fabric toward one M2 module, selected via port-channel hash
• The M2 module(s) perform all L3 forwarding and policy, then pass packets back over the fabric to the output port
• Key consideration: M-series L3 routing capacity versus F-series front-panel port count – how much Layer 3 routing is required?

(Diagram: Host A 10.1.10.100 in VLAN 10 and Host B 10.1.20.100 in VLAN 20 attach at L2 to F2E ports; the M2 modules host the SVIs)
interface vlan 10
 ip address 10.1.10.1/24
!
interface vlan 20
 ip address 10.1.20.1/24
Ingress Forwarding with Lowest Common Denominator Model
F3 + M2 VDC -or- F3 + F2E VDC
• F3 module interoperability is always “Ingress Forwarding” – NO proxy forwarding
  The ingress module receiving a packet makes all forwarding decisions for that packet
• Supported feature set and scale based on the Lowest Common Denominator
  A feature is available only if all modules in the VDC support it (not all features are supported by software today)
  Table sizes are based on the lowest-capacity module

Module Types in VDC   Layer 2  Layer 3  VPC  MPLS  OTV  FabricPath  VXLAN  Table Sizes
F3                    ✓        ✓        ✓    ✓     ✓    ✓           ✓      F3 size
F3 + M2               ✓        ✓        ✓    ✓     ✓    ✗           ✗      F3 size
F3 + F2E              ✓        ✓        ✓    ✗     ✗    ✓           ✗      F2E size
M2 + F2E + F3         Not supported


Module Interoperability Use Cases
• M2 + F2E VDC
  • Provide higher-density 1G/10G while supporting M2 features and L3 functions
  • Full internet routes, MPLS VPNs
  • FabricPath with increased MAC address scale (proxy L2 learning)
• F2E + F3 VDC
  • Introduction of 40G/100G into existing 10G environments
  • Migration to larger table sizes
  • Transition to additional features/functionality (OTV, MPLS, VXLAN, etc.)
• M2 + F3 VDC
  • Introduce higher 1G/10G/40G/100G port-density while maintaining feature-set
  • Avoid the proxy-forwarding model for module interoperability
  • Migrate to 40G/100G interfaces with full-rate flow capability
Agenda
• Introduction to Nexus 7000 / Nexus 7700

• Nexus 7000 / Nexus 7700 Architecture


• Chassis, Supervisor Engines and NX-OS software, I/O modules (M2/F2E/F3)
• I/O Module Interoperability
• Fabric Architecture
• Hardware Forwarding

• Generic Designs with Nexus 7000 / Nexus 7700


• STP/VPC, L4-7 services integration, VDCs, VRF/MPLS VPNs, OTV
Crossbar Switch Fabric Modules
• Provide interconnection of I/O modules
• Nexus 7000 and Nexus 7700 fabrics based on the Fabric 2 ASIC
• Each installed fabric module increases the available bandwidth per payload slot

Fabric Module         Supported Chassis    Per-fabric module bandwidth   Max fabric modules   Total bandwidth per slot
Nexus 7000 Fabric 2   7009 / 7010 / 7018   110Gbps per slot              5                    550Gbps per slot
Nexus 7700 Fabric 2   7706 / 7710 / 7718   220Gbps per slot              6                    1.32Tbps per slot

• Different I/O modules leverage different amounts of the available fabric bandwidth
• Access to fabric bandwidth controlled using QOS-aware central arbitration with VOQ
(Part numbers: N7K-C7009-FAB-2 / N7K-C7010-FAB-2 / N7K-C7018-FAB-2, N77-C7706-FAB-2 / N77-C7710-FAB-2 / N77-C7718-FAB-2)
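As a worked example of the table above: a 480G F2E/F3 module in a Nexus 7000 chassis needs all five fabric modules to reach full bandwidth (5 x 110G = 550G ≥ 480G), while in a Nexus 7700 chassis three fabric modules already cover the same module (3 x 220G = 660G ≥ 480G), and all six (6 x 220G = 1.32T) are needed only for the 1.2T F3 100G module.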
Multistage Crossbar
Nexus 7000 / Nexus 7700 implement a 3-stage crossbar switch fabric:
• Stages 1 and 3 on the I/O modules
• Stage 2 on the fabric modules
(Diagram: each ingress/egress module's fabric ASIC connects to every fabric module. On Nexus 7000, each of the 5 fabric modules provides 110G (2 x 55G channels) per slot, 550G total; on Nexus 7700, each of the 6 fabric modules provides 220G (4 x 55G channels) per slot, 1.32T total)
I/O Module Capacity – Nexus 7000 Fabric 2 Modules
Per-slot bandwidth grows by 110Gbps with each installed fabric module: 110 / 220 / 330 / 440 / 550Gbps.
• One fabric module: any port can pass traffic to any other port in the VDC
• Three fabric modules: a 240G M2 module reaches maximum bandwidth (local fabric 240G)
• Five fabric modules: a 480G F2E/F3 module reaches maximum bandwidth (local fabric 480G)
What About Nexus 7004?
• Nexus 7004 has no fabric modules
• Each I/O module has a local fabric with 10 available fabric channels
• I/O modules connect “back-to-back” via 8 fabric channels (8 x 55G = 440G)
• Two fabric channels (2 x 55G) are “borrowed” to connect the supervisor engines
(Diagram: supervisor slots 1 and 2 and I/O module slots 3 and 4 interconnected via the modules' local fabric ASICs)
I/O Module Capacity – Nexus 7700 Fabric 2 Modules
Per-slot bandwidth grows by 220Gbps with each installed fabric module: 220 / 440 / 660 / 880 / 1100 / 1320Gbps.
• One fabric module: any port can pass traffic to any other port in the VDC
• Three fabric modules: a 480G F2E/F3 10G module reaches maximum bandwidth (local fabric 480G)
• Five fabric modules: a 960G F3 40G module reaches maximum bandwidth (local fabric 960G)
• Six fabric modules: a 1.2T F3 100G module reaches maximum bandwidth (local fabric 1.2T)
What About Nexus 7702?
• Nexus 7702 has no fabric modules
• Single I/O module – all traffic locally switched
• Two fabric channels connect the I/O module to the supervisor engine
(Diagram: the F3 module's fabric ASICs connect to the supervisor fabric ASIC via 55G fabric channels)
Agenda
• Introduction to Nexus 7000 / Nexus 7700

• Nexus 7000 / Nexus 7700 Architecture


• Chassis, Supervisor Engines and NX-OS software, I/O modules (M2/F2E/F3)
• I/O Module Interoperability
• Fabric Architecture
• Hardware Forwarding

• Generic Designs with Nexus 7000 / Nexus 7700


• STP/VPC, L4-7 services integration, VDCs, VRF/MPLS VPNs, OTV
Hardware Forwarding Lookups
• Layer 2 and Layer 3 packet flow virtually identical in hardware
• Forwarding engine / decision engine pipeline provides consistent L2 and L3
lookup performance
• Pipelined architecture also performs ingress and egress ACL, QOS, and Netflow
lookups, affecting final forwarding result
M2 Forwarding Engine Hardware
• Two hardware forwarding engines integrated on every M2 I/O module
• Layer 2 switching (with hardware MAC learning)
• Layer 3 IPv4/IPv6 unicast and multicast
• MPLS/VPLS/EoMPLS
• OTV / GRE
• RACL/VACL/PACL
• QOS remarking and policing policies
• Ingress and egress Netflow (full and sampled)

Hardware Table                  M-Series Modules without Scale License   M-Series Modules with Scale License
MAC Address Table               128K                                     128K
FIB TCAM                        128K IPv4 / 64K IPv6                     900K IPv4 / 350K IPv6
Classification TCAM (ACL/QOS)   64K                                      128K
Netflow Table                   1M                                       1M
M-Series Forwarding Engine Architecture
(Block diagram of the forwarding-engine daughter card) Packet headers arrive from the I/O module replication engines and pass through an ingress and an egress lookup pipeline. The L2 engine performs ingress and egress MAC table lookups (pre- and post-L3) and computes the port-channel hash result. The L3 engine performs FIB TCAM and adjacency table lookups for Layer 3 forwarding with ECMP hashing, ingress and egress ACL/QOS classification against the classification TCAM, ingress and egress policing, and ingress and egress Netflow collection. The final result (destination + priority) is returned to the replication engines.
F2E Forwarding Engine Hardware
• 4 x 10G SOC with decision engine
• Layer 2 switching (with hardware MAC learning)
• Layer 3 IPv4/IPv6 unicast and multicast
• FabricPath forwarding
• RACL/VACL/PACL
• QOS remarking and policing policies
• Ingress sampled Netflow

Hardware Table                  F2E Capacity (per F2E module)
MAC Address Table               16K
FIB TCAM                        32K IPv4 / 16K IPv6
Classification TCAM (ACL/QOS)   16K
F3 Forwarding Engine Hardware
• 8 x 10G, 2 x 40G, or 1 x 100G SOC with decision engine
• Layer 2 switching (with hardware MAC learning)
• Layer 3 IPv4/IPv6 unicast and multicast
• FabricPath forwarding
• MPLS/VPLS/EoMPLS
• OTV / GRE tunnels
• LISP
• VXLAN
• RACL/VACL/PACL
• QOS remarking and policing policies
• Ingress sampled Netflow

Hardware Table                  F3 Capacity
MAC Address Table               64K
FIB TCAM                        64K IPv4 / 32K IPv6
Classification TCAM (ACL/QOS)   16K
F2E/F3 Decision Engine
(Block diagram of the SOC decision engine) The ingress parser receives the packet from the port logic block, sends the payload to the ingress buffer and the header to the decision engine. The ingress lookup pipeline performs ingress MAC table lookups (pre-L3), port-channel hashing, ingress ACL/QOS/sampled-Netflow classification, and ingress policing; the FIB TCAM and adjacency table provide Layer 3 forwarding with ECMP hashing. The egress lookup pipeline performs egress MAC lookups (post-L3), egress ACL/QOS classification, and egress policing. The final result (destination + priority) is returned to the ingress buffer.
Agenda
• Introduction to Nexus 7000 / Nexus 7700

• Nexus 7000 / Nexus 7700 Architecture


• Chassis, Supervisor Engines and NX-OS software, I/O modules (M2/F2E/F3)
• I/O Module Interoperability
• Fabric Architecture
• Hardware Forwarding

• Generic Designs with Nexus 7000 / Nexus 7700


• STP/VPC, L4-7 services integration, VDCs, VRF/MPLS VPNs, OTV
Nexus 7000 / Nexus 7700 Design Building Blocks
Foundational:
• Spanning Tree (RSTP+/MST)
• Virtual Port Channel (VPC)
• Virtual Routing and Forwarding (VRF) and MPLS VPNs
Innovative:
• Remote Integrated Service Engine (RISE)
• Virtual Device Context (VDC)
• Overlay Transport Virtualisation (OTV)
STP → Virtual Port Channel (VPC)
• Eliminates STP blocked ports, leveraging all available uplink bandwidth and minimising reliance on STP
• Provides active-active HSRP
• Works seamlessly with current network designs/topologies
• Works with any module type (M2/F2E/F3)
• Most customers have taken this step
(Diagrams contrast a topology without VPC, where STP blocks one uplink at the L2/L3 boundary, with a VPC topology that has no blocking ports)
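As an illustration of the VPC building block above, a minimal configuration sketch for one of the two aggregation peers (the domain number, keepalive addressing, VLANs, and interface numbers are hypothetical; the second peer mirrors this configuration):

feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! VPC peer link between the two aggregation switches
interface port-channel1
  switchport mode trunk
  vpc peer-link

! Downlink port-channel to an access switch (same vpc number on both peers)
interface port-channel20
  switchport mode trunk
  vpc 20

interface Ethernet1/1-2
  switchport mode trunk
  channel-group 1 mode active

interface Ethernet1/3
  switchport mode trunk
  channel-group 20 mode active

The access switch sees port-channel 20 as a single logical uplink, so no uplink is blocked by STP.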
Collapsed Core/Aggregation
• Nexus 7000 / Nexus 7700 as Data Centre collapsed core/aggregation
• Consolidate multiple aggregation building blocks into a single switch pair (VPC domain)
• Reduce number of managed devices
• Simplify East-West communication path
• M-series or F-series I/O modules, depending on:
  • Port density, feature-set, and scale requirements
  • Desired level of oversubscription
Traditional 3-Tier Hierarchical Design
• Extremely wide customer-deployment footprint
• Nexus 7000 / Nexus 7700 in both Data Centre aggregation and core
• Provides high-density, high-performance 10G / 40G / 100G
• Same module-type considerations as collapsed core – density, features, scale
• Scales well, but scoping of failure domains imposes some restrictions
• VLAN extension / workload mobility options limited
(Diagram: core pair core1/core2 above multiple aggregation pairs agg1/agg2 … aggX/aggY; L3 in the core, L2/L3 boundary at aggregation)
L4-7 Services Integration – VPC Connected
• VPC designs are well-suited for L4-7 services integration – a pair of aggregation devices makes service appliance connections simple
• Multiple service types possible – transparent services, appliance as gateway, active-standby or active-active models
• VPC-connected appliances preferred:
  • Ensures that all traffic – data plane, fault-tolerance, and management – is sent directly via VPC port-channels
  • Minimises VPC peer link utilisation in steady state
• Use orphan ports with “vpc orphan-port suspend” when the services appliance does not support port-channels, or when Layer 3 peering to the VPC peer is required
(Diagram: active and standby service appliances dual-attached to the VPC primary and secondary switches)
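A sketch of the orphan-port handling mentioned above, for a service appliance attached with a single non-port-channel link (the interface and VLAN are hypothetical):

! Orphan port toward a single-attached appliance: suspend it if the
! VPC peer link fails so the appliance fails over to the other peer
interface Ethernet1/10
  switchport mode access
  switchport access vlan 100
  vpc orphan-port suspend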
L4-7 Services Integration – RISE
Remote Integrated Service Engine (RISE)
• Logical integration of an external services appliance with Nexus 7000 / Nexus 7700 – Citrix NetScaler and Cisco Prime NAM appliances supported today
• Enables tight services integration between the services appliance and Nexus 7000 / Nexus 7700 switches, including:
  • Discovery and bootstrap
  • Automated Policy Based Routing (APBR)
  • Route Health Injection (RHI) (future)
(Diagram contrasts the physical topology with the logical topology using RISE)
RISE Auto-PBR
• User configures a new service in NetScaler
• NetScaler sends the server list and next-hop interface to the Nexus 7000/7700 switch over the RISE control channel
• The switch automatically generates PBR route-maps and applies the APBR rules in data-plane hardware to redirect target traffic – no manual configuration on the switch
• Client traffic destined to the VIP is redirected to the NetScaler for processing, and the destination is rewritten to the Real server IP (Client → VIP, then Client → Real)
• Return traffic is redirected so the Real server IP is rewritten back to the VIP (Real → Client)
(Diagram: APBR rules on the VPC pair steer traffic to and from the NetScaler MPX)
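APBR generates these rules automatically, but to illustrate the mechanism, the redirection resembles ordinary NX-OS policy-based routing along these lines (a hypothetical sketch; the ACL, route-map, addresses, and interface are invented and are not the auto-generated syntax):

feature pbr

ip access-list RISE_REAL_SERVERS
  permit ip 192.168.10.0/24 any

route-map RISE_APBR permit 10
  match ip address RISE_REAL_SERVERS
  set ip next-hop 192.168.1.10

! Applied on the server-facing SVI so return traffic from the real
! servers is steered back through the NetScaler for VIP rewrite
interface Vlan10
  ip policy route-map RISE_APBR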
VDC Details
Note: VDCs do not provide a hypervisor capability, or the ability to run a different OS version in each VDC.

Virtual Device Contexts
• Create multiple logical devices out of one physical device
• Provide data-plane, control-plane, and management-plane separation
• Fault isolation and reduced fate sharing
(Diagram: each VDC (VDC 1 … VDC n) runs its own instance of the Layer 2 protocols (VLAN, STP, LACP, VPC, CDP, CTS) and Layer 3 protocols (OSPF, BGP, PIM, VRRP, SNMP, RIB) on top of its own network stack (L2 / IPv4 / IPv6), all over a shared infrastructure and kernel)
VDC Interface Allocation
• Physical interfaces are assigned on a per-VDC basis, from the default/admin VDC
• All subsequent interface configuration is performed within the assigned VDC
• A single interface cannot be shared across multiple VDCs
• VDC type (“limit-resource module-type”) determines the types of interfaces allowed in the VDC
• VDC type driven by operational goals and/or hardware restrictions, e.g.:
  • Mix M2 and F2E in the same VDC to increase MAC scale in FabricPath
  • Restrict a VDC to F3 only to avoid the lowest common denominator
  • M1 and F3 cannot be mixed in the same VDC
(A configuration sketch follows below.)
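A minimal sketch, from the default/admin VDC, of creating a VDC, restricting it to one module generation, and allocating interfaces (the VDC name and interface range are hypothetical):

vdc Aggregation
  limit-resource module-type f3
  allocate interface Ethernet3/1-8

! Subsequent interface configuration is done inside the new VDC
switchto vdc Aggregation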
VDC Interface Allocation – M2
• Allocate any interface to any VDC
• But be aware of shared hardware resources – backend ASICs may be shared by several VDCs
• Best practice: allocate an entire module to one VDC to minimise shared hardware resources
(Diagram: ports of M2-10G and M2-40G modules spread across VDC 1 – VDC 4)
VDC Interface Allocation – F2E / F3 Modules
• Allocation on port-group boundaries – aligns ASIC resources to VDCs
• Port-group size varies depending on module type:
  • F2E: 4-port port-groups
  • F3-10G: 8-port port-groups
  • F3-40G: 2-port port-groups
  • F3-100G: 1-port port-groups
(Diagram: port-groups of each module type allocated across VDC 1 – VDC 4)
Communicating Between VDCs
• Must use front-panel ports to communicate between VDCs – no backplane inter-VDC communication
• No restrictions on L2/L3 configuration, module types, or physical media type – just like interconnecting two physical switches
• Copper Twinax cables (CX-1) or 40G BiDi optics provide low-cost interconnect options
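For example, a routed interconnect between two VDCs is simply a point-to-point link between two front-panel ports, cabled externally (interfaces and addressing are hypothetical):

! In VDC 1
interface Ethernet3/1
  no switchport
  ip address 10.99.0.1/30
  no shutdown

! In VDC 2, cabled to Ethernet3/1 with Twinax or a 40G BiDi link
interface Ethernet4/1
  no switchport
  ip address 10.99.0.2/30
  no shutdown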
Collapsed Core Design with VDCs
• Maintain administrative segmentation while consolidating network infrastructure – e.g. Admin Zone 1 (VDC 1), Admin Zone 2 (VDC 2), and Admin Zone 3 (VDC 3) on the same core1/core2 pair
• Maintain fault isolation between zones (independent L2 and routing processes per zone)
• Firewalling between zones facilitated by the VDC port membership model
VRF / MPLS VPNs
• Provides network virtualisation – one physical network supporting multiple virtual networks
• While maintaining security/segmentation and access to shared
services
• VRF-lite segmentation for simple/limited virtualisation
environments
• MPLS L3VPN for larger-scale, more flexible deployments
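A minimal VRF-lite sketch of this segmentation: two tenants kept in separate routing tables on one physical switch (VRF names, VLANs, and addresses are hypothetical):

feature interface-vlan

vrf context TENANT-A
vrf context TENANT-B

interface Vlan10
  vrf member TENANT-A
  ip address 10.1.10.1/24
  no shutdown

interface Vlan20
  vrf member TENANT-B
  ip address 10.1.10.1/24    ! overlapping addresses are fine across VRFs
  no shutdown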
MPLS Layer 3 VPN – Secure Multi-Tenant Data Centre
Requirement:
• Secure segmentation for a hosted / enterprise data centre

Solution:
• MPLS Layer 3 VPNs for segmentation
• MPLS PE boundary in the Pod aggregation layer, with VRF membership on SVIs
• Direct PE-PE or PE-P-PE interconnections in the core
• Layer 2 with VLANs below the MPLS boundary
(Diagram: Pods 1 and 2 with L2 access below PE aggregation switches; P routers in the MPLS core connecting toward Internet and Campus)
OTV for Multi-Site VLAN Extension
• Overlay Transport Virtualisation (OTV) provides multi-site Layer 2 Data Centre Interconnect (DCI)
• Dynamic “MAC in IP” encapsulation with forwarding based on a MAC “routing” table
• No pseudo-wire or tunnel state maintained
• Works over any transport – L2, L3, MPLS
(Diagram: Sites 1, 2, and 3 remain Layer 2 adjacent for VLAN x over the OTV virtual Layer 2 interconnect)
OTV at a Glance
• MAC addresses advertised in a routing protocol (control plane learning) between Data Centre sites
• Ethernet traffic between sites encapsulated in IP: “MAC in IP”
(Diagram: the Site 1 OTV edge device learns MAC1 on po1 and MAC2/MAC3 via IP B; a frame MAC1 → MAC2 is encapsulated as an IP packet IP A → IP B, carried across the OTV overlay, decapsulated by the Site 2 edge device, and delivered to MAC2)
OTV VDC Requirement
• Current limitation – an SVI (for VLAN termination at L3) and an OTV overlay interface (for VLAN extension over OTV) cannot exist in the same VDC
• Typical designs move OTV to a separate VDC, or to a separate switch (e.g. Nexus 7702)
(Diagram: SVI and OTV in one VDC is not supported; supported designs place the SVIs in one VDC and the OTV overlay in another VDC or switch, with the L2 VLAN extended between them)

Key Takeaways
• Nexus 7000 / Nexus 7700 switching architecture provides
foundation for flexible and scalable Enterprise network designs
• Nexus 7000 / Nexus 7700 design building blocks interwork and
complement each other to solve customer challenges
• Nexus 7000 / Nexus 7700 platform continues to evolve to support
next-generation/emerging technologies and architectures
Q&A
Complete Your Online Session Evaluation
Give us your feedback and receive a Cisco 2016 T-Shirt by completing the Overall Event Survey and 5 Session Evaluations:
– Directly from your mobile device on the Cisco Live Mobile App
– By visiting the Cisco Live Mobile Site http://showcase.genie-connect.com/ciscolivemelbourne2016/
– Visit any Cisco Live Internet Station located throughout the venue
T-Shirts can be collected Friday 11 March at Registration.

Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com
Thank you
