Data Center Design Power Session
TECDCT-3873
Agenda
Infrastructure Design
LAN Switching Analysis
- Recap on Current Trends
- New Layer 2 Technologies
- Fabric Extender Deep Dive and Design with Virtual Port Channeling
- Break
- Demos: vPC Designs with Server Virtualization
- 10 Gigabit Ethernet to the Server
- Break
- Demo: Nexus 1000V
Blade Switching LAN
Blade Switching SAN
Unified Compute System
Infrastructure Design
Physical Pod
- Pay-as-you-grow modularity: predictable, scalable, and flexible
- Pod server density is affected by power and cooling, cabling, and server connectivity
Overall DC Layout
HDA MDA
336 Servers
Agg1
Agg2
Agg3
Agg4
10 Gigabit Ethernet for Server Connectivity
Ethernet history: mid-1980s, 10 Mb over UTP Cat 3.

10G Options
- SFP+ CU* (copper): Twinax cable, <10m, SFF 8431**
- X2 CX4 (copper): Twinax cable, 15m, IEEE 802.3ak
- SFP+ USR (MMF, ultra short reach): MM OM2 10m / MM OM3 100m
- SFP+ SR (MMF, short reach): MM OM2 82m / MM OM3 300m
- RJ45 10GBASE-T (copper): Cat6 55m / Cat6a/7 100m / Cat6a/7 30m; ~6W*** / ~6W*** / ~4W***; IEEE 802.3an
Power for the in-rack/across-rack options is on the order of 1W per side.
* Terminated cable
** Draft 3.0, not final
10G SFP+ Cu
- SFF 8431
- Supports 10GE passive direct-attach up to 10 meters; active cable options to become available
- Twinax with direct-attached SFP+
- Primarily for in-rack and rack-to-rack links
- Low latency, low cost, low power
- OM1 is equivalent to standard 62.5/125 µm MM fiber
- OM2 is equivalent to standard 50/125 µm MM fiber
- OM3 is laser-enhanced 50/125 µm MM fiber (10 Gig to 300 m)
- OS1 is equivalent to SM 8/125 µm fiber
[Figure: fiber/copper reach at 1G and 10G (roughly 10 m to ~10 km depending on media) mapped to usage: in rack and cross-rack <10 m, across aisles <300 m, across sites <10 km]
Cabling Infrastructure Patch Panels for End of the Row or Middle of the Row
- Category 6A (blue) with OM3 MM (orange) per rack, terminating in a patch rack at EoR
- Cable count varies based on design requirements
- Fiber for SAN or for ToR switches; copper for EoR server connectivity
Common Characteristics
- Typically used for modular access
- Cabling is done at DC build-out
- Model evolving from EoR to MoR
- Lower cabling distances (lower cost)
- Allows denser access (better flexibility)
- 6-12 multi-RU servers per rack
- 4-6 kW per server rack, 10-20 kW per network rack
- Subnets and VLANs: one or many per switch; subnets tend to be medium and large
Fiber Copper
Middle of Row
19
[Diagrams: cabling models - top-of-rack switches with patch panels and X-connects toward the network core; blade chassis with integrated switches (sw1/sw2); blade chassis with pass-through modules cabled to ToR switches or to network aggregation points]
Final Result
12 server PODs, each consisting of:
- 4 switch cabinets for LAN & SAN
- 32 server cabinets
- 12 servers per server cabinet
Core 1 Core 2
Servers: 4032
6509 switches: 30
Server/switch cabinets: 399
Midrange/SAN cabinets allotted for: 124
Agg1
Agg2
Agg3
Agg4
Acc1
Acc2
6 Pair Switches
Acc11
Acc12
Acc13
Acc14
6 Pair Switches
Acc23
Acc24
336 Servers
=
Nexus 1000v Catalyst 6500 with VSS = Nexus 2148T CBS 3100 Blade Switches
LAN Switching
- Evolution of Data Center Architectures
- New Layer 2 Technologies
- Fabric Extender Deep Dive and Design with Virtual Port Channeling
- Break
- Demo: vPC Designs with Server Virtualization
- 10 Gigabit Ethernet to the Server
- Break
- Demo: Nexus 1000V
- LACP + L4 port hash distributed EtherChannel for FT and data VLANs
- Agg1: STP primary root; HSRP primary; HSRP preempt and delay; dual sup with NSF+SSO
- LACP + L4 hash distributed EtherChannel with min-links
- Agg2: STP secondary root; HSRP secondary; HSRP preempt and delay; single sup
- Rapid PVST+: maximum of 8000 STP active logical ports and 1500 virtual ports per linecard
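The Agg1 role above maps to a short IOS sketch along these lines; the VLAN number, addresses, and timers are hypothetical and would be tuned per design (Agg2 mirrors it with secondary root and a lower HSRP priority):

spanning-tree vlan 10 root primary            ! Agg1 is STP primary root for the VLAN
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1                      ! HSRP virtual gateway address
 standby 10 priority 110                      ! higher priority = HSRP primary
 standby 10 preempt delay minimum 180         ! preempt with a delay so the box reconverges before taking over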
STP Root
VSS system
Nexus
Access Layer
Full-featured 10G density for aggregating 10G top-of-rack and 10G blade servers
As virtualization drives host I/O utilization, 10G to the host is becoming a real requirement
8*10GbE 8 10GbE
4*10GbE
- Nexus-based aggregation layer with VDCs, CTS, and vPCs
- Catalyst 6500 services chassis with Firewall Services and ACE modules provides advanced service delivery
- Possibility of converting the Catalyst 6500 to VSS mode
New options (highlighted in red):
- 10 Gigabit top-of-the-rack connectivity with the Nexus 5k
- Fabric Extender (Nexus 2k)
- Server virtual switching (Nexus 1000V)
Nexus 7018 Catalyst 6500 1GbE Top-of-Rack Nexus 2148T Nexus 5000
10GbE End-of-Row
LAN Switching
Evolution of Data Center Architectures New Layer 2 Technologies Fabric Extender Deep dive and Design with virtual Port Channeling Break Demo: vPC Designs with Server Virtualization 10 Gigabit Ethernet to the Server Break Demo: Nexus1kv
- Fabric Extender (Nexus 2148T)
- Nexus 1000V
- Future on Nexus products: Nexus 5k, Nexus 2k, SR-IOV
- 10 Gigabit adapters: Nexus 5k and future linecards, Converged Network Adapters
- Layer 2 extension
Cisco Public
34
Cisco TrustSec
TrustSec Linksec (802.1ae) Frame Format
The encryption used by TrustSec follows IEEE Standards-based LinkSec (802.1ae) encryption, where the upper layers are unaware of the L2 header/encryption.
Frame layout: DMAC | SMAC | .1Q (4) | CMD (8 octets) | Payload | CRC
CMD field (8 octets): CMD EtherType, Version, Length, SGT Value, Variable
Per 802.1AE, the payload is encrypted and the frame is authenticated.
DC2
Nexus-7000-1(config)# interface ethernet 2/45
Nexus-7000-1(config-if)# cts manual
Nexus-7000-1(config-if-cts-manual)# sap pmk 12344219
Nexus-7000-1(config-if-cts-manual)# exit
Nexus-7000-1# show cts
CTS Global Configuration
==============================
CTS support              : enabled
CTS device identity      : test1
CTS caching support      : disabled
Number of CTS interfaces in DOT1X mode : 0
  Manual mode            : 1

Nexus-7000-2(config)# interface ethernet 2/3
Nexus-7000-2(config-if)# cts manual
Nexus-7000-2(config-if-cts-manual)# sap pmk 12344219
Nexus-7000-2(config-if-cts-manual)# exit
Nexus-7000-2# show cts
CTS Global Configuration
==============================
CTS support              : enabled
CTS device identity      : test2
CTS caching support      : disabled
Number of CTS interfaces in DOT1X mode : 0
  Manual mode            : 1
Nexus-7000-1# show cts interface e 2/3
CTS Information for Interface Ethernet2/3:
    CTS is enabled, mode: CTS_MODE_MANUAL
    IFC state: CTS_IFC_ST_CTS_OPEN_STATE
    Authentication Status: CTS_AUTHC_SKIPPED_CONFIG
      Peer Identity:
      Peer is: Not CTS Capable
      802.1X role: CTS_ROLE_UNKNOWN
      Last Re-Authentication:
    Authorization Status: CTS_AUTHZ_SKIPPED_CONFIG
      PEER SGT: 0
      Peer SGT assignment: Not Trusted
      Global policy fallback access list:
    SAP Status: CTS_SAP_SUCCESS
      Configured pairwise ciphers: GCM_ENCRYPT
      Replay protection: Disabled
      Replay protection mode: Strict
      Selected cipher: GCM_ENCRYPT
      Current receive SPI: sci:225577f0860000 an:1
      Current transmit SPI: sci:1b54c1a7a20000 an:1
Core Devices
Core
Aggregation Devices
agg1
agg2
agg3
agg4
Aggregation VDCs
acc1
acc2
accN
accY
acc1
acc2
accN
accY
38
Default VDC
The default VDC (VDC_1) is different from other configured VDCs.
Default VDC
vrf
VDC Admin
- Can create/delete VDCs
- Can allocate/de-allocate resources to/from VDCs
- Can intercept control-plane and potentially some data-plane traffic from all VDCs (using Wireshark)
- Has control over all global resources and parameters such as the management0 interface, console, CoPP, etc.
Network Admin
Can have the network-admin role, which has super-user privileges over all VDCs
VDC2
VDC Admin
vrf
VDC3
VDC Admin
vrf
VDC4
vrf
With this in mind, for high-security or critical environments the default VDC should be treated differently: it needs to be secured.
for 4.0(3)
32-port 10GE module: once a port has been assigned to a VDC, all subsequent configuration is done from within that VDC. On the 32-port 10GE module, ports must be assigned to a VDC in 4-port groups.
VDC B
VDC C
http://www.cisco.com/en/US/docs/switches/datacenter/sw/4_1/nx-os/virtual_device_context/configuration/guide/ vdc_overview.html#wp1073104
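A minimal NX-OS sketch of allocating ports to a VDC (the VDC name and interface range are illustrative; on the 32-port 10GE module the range must align to a 4-port group):

N7K(config)# vdc VDC-B
N7K(config-vdc)# allocate interface ethernet 2/1-4    ! ports move to VDC-B; further configuration is done from within that VDC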
[Figure: VDC 20 and VDC 30 span linecards 2-8; each linecard has its own 64K FIB TCAM]
MAC Pinning
LAN
L2MP
LAN
vPC/MEC
MAC A
MAC B Active-Active
L2 ECMP
MAC A
MAC B
L2 ECMP
Virtual switch (VSS on Catalyst 6500, vPC on Nexus 7000):
- The virtual port channel mechanism is transparent to hosts or switches connected to the virtual switch
- STP remains as a fail-safe mechanism to prevent loops even in the case of control-plane failure
Host mode (MAC pinning):
- Eliminates STP on uplink bridge ports
- Allows multiple active uplinks from switch to network
- Prevents loops by pinning a MAC address to only one port
- Completely transparent to the next-hop switch
L2MP:
- Uses an IS-IS based topology
- Up to 16-way ECMP
- Eliminates STP from the L2 domain
- Preferred path selection
vPC Terminology
STP Root vPC FT link
- vPC peer: a vPC switch, one of a pair
- vPC member port: one of a set of ports (port channels) that form a vPC
- vPC: the combined port channel between the vPC peers and the downstream device
- vPC peer link: link used to synchronize state between vPC peer devices; must be 10GbE
- vPC ft-link: the fault-tolerant link between vPC peer devices, i.e., the backup to the vPC peer link
10 Gig uplinks
CFS Cisco Fabric Services protocol, used for state synchronization and configuration validation between vPC peer devices
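The terminology maps to a minimal NX-OS configuration sketch along these lines (domain ID, keepalive addresses, and port-channel numbers are hypothetical):

feature vpc
vpc domain 1
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management   ! the vPC ft-link / keepalive
interface port-channel 10
  switchport mode trunk
  vpc peer-link                                                        ! 10GbE peer link; CFS state sync runs over it
interface port-channel 20
  switchport mode trunk
  vpc 20                                                               ! vPC member port-channel to the downstream device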
VSS: unified control plane; yes; a single IP address (no HSRP needed); yes; sub-second convergence; configuration consistency is automatic because of the unified control plane; dual-active detection via BFD and PAgP+.
vPC: separated control planes; yes (2 sups per chassis); 2 entities; yes; convergence in the order of seconds in the current release; CFS verifies configurations and warns about mismatches; dual-active detection via the fault-tolerant link.
Pinning
1 Border interface 2 3 4
A
TECDCT-3873_c2
B
Cisco Public
F
50
Traffic sourced by a station connected to a SIF can go to one of the locally connected servers or, if no local match is found, goes out of its pinned border interface.
Local replication to all SIFs is done by the End Host Virtualizer switch. One copy of the packet is sent out of the source SIF's pinned border interface.
Reverse Path Forwarding protects from loops: packets destined to a station behind a SIF are accepted only on that SIF's pinned border interface.
The multicast/broadcast portal protects from loops: one border interface is elected to receive broadcast, multicast, and unknown unicast traffic for all the SIFs.
The deja-vu check prevents loops. If the source MAC belongs to a local station:
- The multicast/broadcast portal drops the packet
- The pinned port accepts the packet, but no replication is done
- This applies regardless of the destination MAC (known/unknown unicast, multicast, or broadcast)
Border interface
Border interface
Layer 2 Multipathing
Clos Networks
L2
L2
Layer 2 multipathing enables designs that until today were only possible with InfiniBand.
Layer 2 Multipathing
Edge switches:
- Determine which edge ID can reach a given MAC address
- Set the destination ID
- IS-IS computes the shortest path to the ID
Core switches:
- Forward from edge switch to edge switch based on the destination ID
- IS-IS computes the shortest path to the ID
When a source MAC sends to a destination MAC, the edge switch looks up the ID attached to the destination MAC:
- If found, forward based on the ID
- If not found, flood on the broadcast tree
[Diagrams: L2MP forwarding tables - edge switches map MAC addresses (A, B, C and D, E, F) to destination switch IDs; core switches forward between edge switches based on switch ID]
[Diagram: ESX hosts with vSwitches, vNICs, and VMs - physical and logical elements]
VNTAG
VNTAG Format
Frame layout: DA[6] | SA[6] | VNTAG[6] | 802.1Q[4] | Frame Payload | CRC[4]
VNTAG fields: VNTAG EtherType, d (direction), p, l (looped), source virtual interface, destination virtual interface
- direction indicates to/from adapter
- source virtual interface indicates the frame source
- looped indicates the frame came back to the source adapter
Interface Virtualizer v v OS v v OS v v OS
Application Payload P l d TCP VNTAG Ethertype l source virtual interface d p destination virtual interface IP VNTAG Ethernet
Interface Virtualizer v v OS v v OS v v OS
VNTAG
Interface Virtualizer
- direction is set to 1
- destination virtual interface and pointer select a single vNIC or a list
- source virtual interface and l (looped) filter out a single vNIC when sending a frame back to the source adapter
v v OS
v v OS
v v OS
Interface Virtualizer v v OS v v OS v v OS
x
v v OS v v OS v v OS
x
v v OS
VNTAG(2)
69
1. OS stack formulates frames traditionally
2. Interface Virtualizer adds the VNTAG
3. Virtual Interface Switch ingress processing
4. Virtual Interface Switch egress processing
5. Interface Virtualizer forwards based on the VNTAG
6. OS stack receives the frame as if directly connected to the switch
Interface Virtualizer v v OS v v OS v OS
LAN Switching
Evolution of Data Center Architectures New Layer 2 Technologies Fabric Extender Deep dive and Design with virtual Port Channeling Break Demo: vPC Designs with Server Virtualization 10 Gigabit Ethernet to the Server Break Demo: Nexus1kv
TECDCT-3873_c2 2009 Cisco Systems, Inc. All rights reserved. Cisco Public
72
Logical Topology
Core Layer
VSS
VSS
L3 L2
4x10G uplinks from each rack
FE
L3 L2
Nexus 5020
FEX
FEX
FEX
FEX
FEX
FEX
12 FEX
Servers
Servers
Rack-1
Rack-N
Rack-1
Rack-N
Rack-1
Rack-2
Rack-3
Rack-4
Rack-5
Rack-12
...
Nexus 5000/2000 Mixed ToR & EoR
Combination of EoR and ToR cabling
- Top-of-rack Fabric Extenders provide 1G server connectivity
- The Nexus 5000 in the middle of the row connects to the Fabric Extenders with CX1 copper 10G between racks
- Suitable for small server rows where each FEX is no farther than 5 meters from the 5Ks; the CX1 copper between racks is not patched
- Middle-of-row Nexus 5000s can also provide 10G server connectivity within their rack
SDP exchange
Err-disable
- Static pinning is not supported in redundant supervisor mode
- Server ports appear on both N5Ks
- Currently the configuration for all ports must be kept in sync manually on both N5Ks
vPC peers
78
N5KA N5K
N5KB
79
Port Channel
80
BPDU Guard
Bridge Assurance
Global BPDU Filter reduces the spanning-tree load (BPDUs generated on a host port). A VMware server trunk needs to carry multiple VLANs, which can increase the STP load.
VSwitch
VM #1 VM #2 VM VM #3 #4
UDLD
LAN Switching
Evolution of Data Center Architectures New Layer 2 Technologies Fabric Extender Deep dive and Design with virtual Port Channeling Break Demo: vPC Designs with Server Virtualization 10 Gigabit Ethernet to the Server Break Demo: Nexus1kv
vPC domain
vPC election generates vPC role (primary/secondary) for each switch. vPC role is used only when dual-active topology is detected.
The vPC FT (fault-tolerant) link is an additional mechanism to detect liveness of the peer. It can use any L3 port; by default, it uses the management network.
- Used only when the peer link is down
- Does NOT carry any state information
VRF FT
VDC A (e.g. 2)
Peer-link
Rare likelihood of a dual-active topology. vPC operates within the context of a VDC.
vPC Deployment
Recommended Configurations
- vPC is a Layer 2 feature; a port has to be in switchport mode before configuring vPC
- vPC/vPC peer link support the following port/port-channel modes:
  - Port modes: access or trunk
  - Port-channel modes: on mode or LACP (active/passive) mode
- Recommended port mode: trunk. The vPC peer link should support multiple VLANs and should trunk the access VLANs
- Recommended port-channel mode: Link Aggregation Control Protocol (LACP)
  - Dynamically reacts to runtime changes and failures
  - Lossless membership change
  - Detection of misconfiguration
- Maximum 8 ports in a port-channel in on-mode; 16 ports with 8 operational ports in an LACP port-channel
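For the LACP recommendation, a hedged sketch of a vPC member-port configuration (interface numbers and VLAN range are hypothetical):

interface ethernet 1/1-2
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  channel-group 20 mode active      ! LACP active mode; the port-channel then carries "vpc 20"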
VRF FT
VDC A (e.g. 2)
Peer-link
LACP
87
cfs
cfsoe
netstack
sw-1
sw-2
cfs
cfsoe
netstack
CFS (Cisco Fabric Services), over Ethernet (CFSoE), provides a reliable transport layer to all applications that need to cooperate with the peer vPC switch. CFSoE:
- uses retransmissions and acknowledgements per segment transmitted
- supports fragmentation and reassembly for payloads larger than the MTU
- uses a BPDU-class address and is treated with the highest QoS/drop thresholds
Each component has (one or more) request-response handshakes (over CFSoE) with its peer. Protocols (STP/IGMP/FHRP) continue to exchange regular protocol BPDUs; in addition, they'll use CFS for state synchronization.
CFS Distribution
CFS only checks that the VLANs assigned to a vPC are the same on both devices that are on the same vPC. This warns the operator of the other 7k that configuration changes are needed to include the same exact VLANs. Distribution is automatically enabled by enabling vPC.
(config)# cfs distribute enable
(config)# cfs ethernet distribute enable
tc-nexus7k01-vdc3# show cfs status
Distribution               : Enabled
Distribution over IP       : Disabled
IPv4 multicast address     : 239.255.70.83
IPv6 multicast address     : ff15::efff:4653
Distribution over Ethernet : Enabled
CFSoIP vs CFSoE
vPC uses CFSoE, Roles Leverage CFSoIP
vPC domain (CFSoE) CFSoIP Cloud
Role Definition
1. The user creates a new role
2. The user commits the changes
3. The role gets automatically propagated to the other switches
Detecting Mis-Configuration
Sw1(config)# show vpc brief
vPC domain id                    : 1
Peer status                      : peer adjacency formed ok
vPC keep-alive status            : Disabled
Configuration consistency status : success

vPC status
---------------------------------------------------
id   Port   Consistency   Reason
---- ------ ------------- -------------------------
1    Po2    success       success
2    Po3    failed        vpc port channel mis-config due to vpc links in the 2 switches connected to different partners
In case the vPC peer link fails:
- Check the active status of the remote vPC peer via the vPC ft-link (heartbeat)
- If both peers are active, the secondary will disable all vPC ports to prevent loops
- Data will automatically forward down the remaining active port-channel ports
- Failover is gated on CFS message failure or UDLD/link-state detection
CFSoE
[Flowchart: vPC peer link failed? (UDLD/link state) -> vPC ft-link heartbeat detected? -> if yes, the secondary disables its vPC ports; if no, other processes take over based on priority (STP root, HSRP active, PIM DR)]
Peer link
Eth2/9
Peer link
Eth7/9 Eth7/25 Eth8/5
Eth2/25
Eth2/26
Eth2/3 Po30
Eth7/3 N7kD-DC2
N7kB-DC1
95
Routing Design
for the Extended VLANs
DC1 gw 1.1.1.1; DC2 gw 1.1.1.2; failover direction for HSRP Group 2 (e.g., 1.1.1.2)
HSRP Group 1
HSRP Group 2
G  60  0000.0c07.ac3c  static   << group that is active or standby
*  60  0000.0c07.ac3d  static   << group that is in listen mode
G  60  0000.0c07.ac3d  static   << group that is active or standby
*  60  0000.0c07.ac3c  static   << group that is in listen mode
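A hedged IOS sketch of the DC1 side of this design (VLAN, addresses, and priorities are illustrative; DC2 mirrors it with the priorities swapped so each site is active for one group):

interface Vlan60
 ip address 1.1.1.3 255.255.255.0
 standby 1 ip 1.1.1.1          ! group 1: DC1 is active (higher priority here)
 standby 1 priority 110
 standby 1 preempt
 standby 2 ip 1.1.1.2          ! group 2: DC2 is active; DC1 stays in standby/listen
 standby 2 priority 90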
LAN Switching
- Evolution of Data Center Architectures
- New Layer 2 Technologies
- Fabric Extender Deep Dive and Design with Virtual Port Channeling
- Break
- Demo: vPC Designs with Server Virtualization
Nexus 1000V:
- Components
- Operational benefits
- VEM forwarding: NIC teaming and EtherChannels
- LAN switching infrastructure requirements
- Designs with blade servers
MAC2
MAC1
MAC2
?
VM2
Cisco Public
VM1
TECDCT-3873_c2
98
MAC2
=
MAC1 MAC2 Nexus1kv
VM1
VM2
OS
OS
OS
vmnics
Nexus 1000v
Distributed Virtual Switch
N1k-VSM# sh module
Linecards Equivalent
Mod  Ports  Module-Type              Model              Status
1    1      Supervisor Module        Cisco Nexus 1000V  active *
2    1      Supervisor Module        Cisco Nexus 1000V  standby
3    48     Virtual Ethernet Module                     ok
4    48     Virtual Ethernet Module                     ok
[Diagram: VMs (App/OS) running on four hypervisor hosts, their VEMs appearing as linecards of one distributed virtual switch]
vCenter
Fabric Function
Nexus 1000V
Virtual Interface
veth = Virtual Machine port (vnic)
Hypervisor
App App App App OS OS OS OS
veth3
veth7 veth68
Mod Host
Net Adapter 1 Ubuntu VM 1 pe-esx1 Net Adapter 1 Ubuntu VM 2 pe-esx1 Net Adapter 1 Ubuntu VM 3 pe-esx1
Cisco VSMs
Nexus 1000v
Ethernet Interface
App
App
App
App
eth3/1 th3/1
OS
OS
OS
OS
eth3/2
Hypervisor
WS-C6504E-VSS# sh cdp neighbors
Device ID   Local Intrfce   Platform
N1k-VSM     Gig 1/1/1       Nexus1000
N1k-VSM     Gig 2/1/2       Nexus1000
N1k-VSM     Gig 1/8/1       Nexus1000
N1k-VSM     Gig 2/8/2       Nexus1000
eth4/1
App
App
App
App
OS
OS
OS
OS
eth4/2
TECDCT-3873_c2 2009 Cisco Systems, Inc. All rights reserved. Cisco Public
Hypervisor
103
Supported commands include:
- Port management
- VLAN, PVLAN
- Port-channel
- ACL
- NetFlow
- Port security
- QoS
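These capabilities are applied through port profiles; a minimal Nexus 1000V sketch (the profile name and VLAN number are hypothetical):

port-profile type vethernet WebVMs
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
  vmware port-group          ! exported to vCenter as a port group of the same name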
VSM1
Cisco VSMs
VSM2
Cisco VSMs
Port Profiles
Port Profiles
107
VMotion Requires the Destination vSwitch to Have the Same Port Groups/Port-Profiles as the Originating ESX Host
Rack1 Rack10
vmnic1
Prior to DVS you had to manually ensure that the same Port-Group existed on ESX Host 1 as ESX Host 2
vmnic1
vSwitch
App OS
App OS
App OS
App OS
App OS
App OS
VM1
TECDCT-3873_c2
VM2
VM3
Cisco Public
VM4
VM5
VM6
108
Server 1
VM #1 VM #2 VM #3 VM #4 VM #1
Server 2
VM #2 VM #3 VM #4
109
LAN Switching
Evolution of Data Center Architectures New Layer 2 Technologies Fabric Extender Deep dive and design with virtual Port Channeling Break Demo: vPC Designs with Server Virtualization
Nexus1kv Components Operational benefits VEM Forwarding: NIC Teaming and Etherchannels LAN switching infrastructure requirements Designs with Blade Servers
Server 1
VM #1 VM #2 VM #3
vSwitch
VMW ESX
- ACLs need to specify the IP address of the VM, otherwise you risk dropping both VM1 and VM3 traffic
- SPAN will get all traffic from VM1, VM2, VM3, and VM4 - you need to filter that!
- Port security CAN'T be used
ACLs (complicated)
- Is VM#1 on Server 1? It doesn't matter: the ACL follows the VM
- SPAN will get only the traffic from the virtual Ethernet port
- Port security ensures that VMs won't generate fake MAC addresses
vNIC Security
Server
VM #1
VM #2
VM #3
VM #4
vnics i
Nexus 1000 DVS
Private VLANs Can Be Extended Across ESX Servers by Using the Nexus1kv
- Promiscuous ports receive from and transmit to all hosts
- Communities allow communication between groups
- Isolated ports talk to promiscuous ports only
App App App App App App App
Promiscuous Port
Promiscuous Port
x x
App
Primary VLAN
OS OS OS OS OS OS OS OS
.11
.12
.13
.14
.15
.16
.17
.18
Community A
Cisco Public
Community B
Isolated Ports
114
115
SPAN Traffic to a Catalyst 6500 or a Nexus 7k Where You Have a Sniffer Attached
Capture here
Ease of Provisioning
Plug-and-play designs with VBS
1 Add or replace a VBS Switch to the Cluster 2 Switch config and code automatically propagated Virtual Ethernet Module
3 Add a blade Server 4 Its always booted from the same LUN
Ease of Provisioning
Making Blade Servers Deployment Faster
1 Physically Add a new blade (or replace an old one)
LAN Switching
Evolution of Data Center Architectures New Layer 2 Technologies Fabric Extender Deep dive and design with virtual Port Channeling Break Demo: vPC Designs with Server Virtualization
Nexus1kv Components Operational benefits VEM Forwarding: NIC Teaming and Etherchannels LAN switching infrastructure requirements Designs with Blade Servers
Eth3/2
Veth1
Veth2
Eth4/2
X
Cisco VEM
X
VM1 VM2 VM3 VM4 VM5 VM6 VM7 VM7 VM9 VM10 VM11 VM12
MAC Learning
Each VEM learns independently and maintains a separate MAC table VM MACs are statically mapped
Other vEths are learned this way (vmknics and vswifs) No aging while the interface is up
Eth3/1 Cisco VEM Eth4/1 Cisco VEM
VM1
VM2
VM3
VM4
Port Channels
Po1
Cisco VEM
N5K View
SG0
Po1
SG1
VEM View
VM #2
VM #3
VM #4
124
- Each interface in the channel must have consistent speed/duplex
- The channel-group does not need to exist and will automatically be created
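This auto-creation behavior is typically driven from an uplink port profile; a hedged Nexus 1000V sketch (profile name and VLAN range are hypothetical):

port-profile type ethernet Uplink-PC
  switchport mode trunk
  switchport trunk allowed vlan 100-110
  channel-group auto mode on       ! the channel is created automatically as member vmnics are added
  no shutdown
  state enabled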
Cisco VEM
VM1
VM2
VM3
VM4
126
System VLANs
System VLANs enable interface connectivity before an interface is programmed, i.e., while the VEM can't yet communicate with the VSM during boot.
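A hedged sketch of marking the control/packet VLAN as a system VLAN so the VEM stays reachable before it receives its programming (VLAN number is hypothetical):

port-profile type vethernet n1kv-control
  switchport mode access
  switchport access vlan 900
  system vlan 900            ! forwarded even before the VEM is programmed by the VSM
  no shutdown
  state enabled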
Cisco VSM
Packet
No EtherChannel
Cisco VEM Ci
C P
VEM Configuration
Source Based Hashing
Use Case
Medium 1Gb servers (rack or blade) Need to separate VMotion from Data
TECDCT-3873_c2 2009 Cisco Systems, Inc. All rights reserved. Cisco Public
SC
VM Data
VMK
128
SG0
SG1
SG0
SG1
VM
VMK
SC
VM
VMK
SC
Do not use CDP to create the sub-groups in this type of topology (manually configure the sub-groups)
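A sketch of what a manual sub-group configuration could look like, under the assumption that the vPC host-mode syntax of this Nexus 1000V release is used (exact commands may differ by version; interface names are hypothetical):

port-profile type ethernet Uplink-vPC-HM
  switchport mode trunk
  channel-group auto mode on sub-group manual
  no shutdown
  state enabled
interface ethernet 3/2
  sub-group-id 0              ! uplink toward the first upstream switch
interface ethernet 3/3
  sub-group-id 1              ! uplink toward the second upstream switch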
SG0
SG1
VM 1
VM 2
VM 3
LAN Switching
Evolution of Data Center Architectures New Layer 2 Technologies Fabric Extender Deep dive and design with virtual Port Channeling Break Demo: vPC Designs with Server Virtualization
Nexus1kv Components Operational benefits VEM Forwarding: NIC Teaming and Etherchannels LAN switching infrastructure requirements Designs with Blade Servers
FCoE
10 Gigabit Ethernet
DCE
LAN Switching
- Evolution of Data Center Architectures
- New Layer 2 Technologies
- Fabric Extender Deep Dive and Design with Virtual Port Channeling
- Break
- Demo: vPC Designs with Server Virtualization
Nexus 1000V:
- Components
- Operational benefits
- VEM forwarding: NIC teaming and EtherChannels
- Scalability considerations
- LAN switching infrastructure requirements
- Designs with blade servers
Nexus 1000V: mapping of servers to VLANs/port profiles. Profile definition: vCenter and the Nexus 1000V CLI.
LAN Switching
Evolution of Data Center Architectures New Layer 2 Technologies Fabric Extender Deep dive and design with virtual Port Channeling Break Demo: vPC Designs with Server Virtualization
Nexus1kv Components Operational benefits VEM Forwarding: NIC Teaming and Etherchannels Scalability Considerations LAN switching infrastructure requirements Designs with Blade Servers
LAN HPC
SAN A
SAN B
Consolidation Vision
Why?
- VM integration
- Cable reduction
- Power consumption reduction
- Foundation for unified fabrics
- IPC
(*) RDMA = Remote Direct Memory Access (**) iWARP = Internet Wide Area RDMA Protocol
TECDCT-3873_c2 2009 Cisco Systems, Inc. All rights reserved. Cisco Public
FCoE
142
LAN Switching
- Evolution of Data Center Architectures
- New Layer 2 Technologies
- Fabric Extender Deep Dive and Design with Virtual Port Channeling
- Designs with Server Virtualization
- Break
- Demo: vPC Designs with Server Virtualization
- 10 Gigabit Ethernet to the Server
10 Gigabit Ethernet:
- Performance considerations
- 10 Gigabit performance in virtualized environments
- Data Center Ethernet
143
- Large Send Offload (LSO): allows the TCP layer to build a TCP message up to 64 KB and send it in one call down the stack through the device driver; segmentation is handled by the network adapter
- Receive Side Scaling (RSS) queues: 2, 4, or disabled; allows distributing incoming traffic to the available cores
- VLAN offload in hardware
- NetDMA support
OS Enablers
- TCP Chimney Offload
- Receive Side Scaling (RSS, plus an RSS-capable NIC)
- In Windows 2003 this requires the Scalable Networking Pack (SNP); in Windows 2008 it is already part of the OS
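On Windows Server 2008 these can be toggled with netsh; a hedged example (verify the defaults on your build):

C:\> netsh int tcp set global chimney=enabled
C:\> netsh int tcp set global rss=enabled
C:\> netsh int tcp show global        ! confirm the resulting settings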
Preliminary Tests
Maximum Throughput Is ~3.2 Gbps
Why?
- Only one core is dealing with TCP/IP processing
- The OS doesn't know that the adapter is TOE capable, so it doesn't really use it
- A lot of memory copies between user space and kernel space
- Is the card plugged into a PCIe x8 slot?
Solution:
- Make sure that the OS uses TCP offloading in hardware
- Enable Large Segment Offload
- Enable TCP/IP distribution to all available cores
CPU 1
CPU 2
Receive FIFOs
[Figure: core overhead of segmenting a data record into MSS-sized segments in the TCP/IP stack versus in the I/O adapter]
V2 (Windows 2008):
- Allows the TCP layer to build a TCP message up to 256 KB and send it in one call down the stack through the device driver; segmentation is handled by the network adapter
- Supports IPv4/IPv6
- Main benefit: reduces CPU utilization
- Key use cases: large-I/O applications such as storage, backup, and ERP
[Figure: with LSO, MSS segmentation is performed by the I/O adapter, reducing core overhead]
Set to 1
But the RX Side Cannot Keep Up With the TX Hence You Need to Enable SACK in HW
CPU
s/w h/w
NIC
Kernel bypass: direct user-level access to hardware; dramatically reduces application context switches.
iWARP
The Internet Wide Area RDMA Protocol (iWARP) is an Internet Engineering Task Force (IETF) update of the RDMA Consortium's RDMA over TCP standard. iWARP is a superset of the Virtual Interface Architecture that permits zero-copy transmission over legacy TCP. It may be thought of as the features of InfiniBand (IB) applied to Ethernet. The OpenFabrics stack (http://www.openfabrics.org/) runs on top of iWARP.
Latency Fundamentals
What matters is the application-to-application latency and jitter
Driver/Kernel software Adapter Network components Kernel NIC NIC Kernel Application
Data Packet
Application
Latency
[Table: relative impact (+ to +++) of the offload features on TX/RX CPU %, TCP workload transactions/s, TCP workload throughput, UDP throughput, and latency]
LAN Switching
Evolution of Data Center Architectures New Layer 2 Technologies Fabric Extender Deep dive and Design with virtual Port Channeling Designs with Server Virtualization Break Demo: vPC Designs with Server Virtualization 10 Gigabit Ethernet to the Server
10 Gigabit Ethernet Performance Considerations 10 Gigabit Performance in Virtualized Environments Datacenter Ethernet
165
ESX 3.5 U2 CPU 2 x dual core Xeon5140 Guest OS Windows 2003 R2 SP2 Memory 8 GB
Catalyst C t l t 6500
GigE 2/13 - 16
10 GigE
client 1
ESX 1 ESX 1
client 2
1 GigE
vmnic0
vNIC
vNIC
vNIC
vNIC
ESX 2
Catalyst C t l t 6500
10 GigE Te4/3
10 GigE
client 1
client 2
10 GigE
vmnic0
vNIC
vNIC
vNIC
vNIC
ESX 2
ESX 3.5 U2 CPU 2 x dual core Xeon5140 Guest OS Windows 2003 R2 SP2 Memory 8 GB
ESX 3.5 U2 CPU 2 x dual core Xeon5140 Guest OS Windows 2003 R2 SP2 Memory 8 GB
LAN Switching
Evolution of Data Center Architectures New Layer 2 Technologies Fabric Extender Deep dive and Design with virtual Port Channeling Designs with Server Virtualization Break Demo: vPC Designs with Server Virtualization 10 Gigabit Ethernet to the Server
10 Gigabit Ethernet Performance Considerations 10 Gigabit Performance in Virtualized Environments Datacenter Ethernet
175
I/O Consolidation
I/O consolidation supports all three types of traffic onto a single network Servers have a common interface adapter that supports all three types of traffic
176
Benefit
- Provides class-of-service flow control; ability to support storage traffic
- Grouping classes of traffic into "service lanes" (IEEE 802.1Qaz, CoS-based enhanced transmission)
- End-to-end congestion management for the L2 network
- Auto-negotiation of Enhanced Ethernet capabilities: DCBX (switch to NIC)
- Eliminate Spanning Tree for L2 topologies; utilize full bisectional bandwidth with ECMP
SAN Switching
Don't forget to activate your Cisco Live Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com.
Recommended Readings
www.datacenteruniversity.com
Recommended Readings
Agenda
- Infrastructure Design (Mauricio Arregoces)
- LAN Switching Analysis (Maurizio Portolani)
  - Recap on Current Trends and Past Best Practices
  - New Layer 2 Technologies
  - Fabric Extender Deep Dive and Design with Virtual Port Channeling
- Break
- Unified I/O
- Unified Compute System
OEM
CBS31x0X
10G VBS
CBS31x0G
Nevertheless, power and cooling constraints need to be considered on a case-by-case basis when implementing blade servers.
Nexus 7018
Design with Pass-Thru Module and Top of the Rack (TOR) Switches
- High cable density within the rack
- High-capacity uplinks provide aggregation-layer connectivity
Rack example:
- Up to four blade enclosures per rack
- Up to 128 cables for server traffic
- Up to 8 cables for server management
- Up to four rack switches support local blade servers
- An additional switch for server management
- Requires up to 136 cables within the rack
- 10 GigE uplinks to the aggregation layer
Aggregation Layer
10 GigE Or GE Uplinks
Aggregation Layer
10 GigE Or GE Uplinks
Additional Options
By combining above three scenarios, the user can:
- Deploy up to 8 switches per enclosure
- Build smaller rings with fewer switches
- Split the VBS between LAN-on-Motherboard (LOM) and daughter-card Ethernet NICs
- Split the VBS across racks
- Connect unused uplinks to other devices, such as additional rack servers or appliances such as storage
3 Add a blade Server 4 Its always booted from the same LUN
ENC 4
ENC 3 No Yes
ENC 2
ENC 1
~2 FT
~2 FT
Aggregation Layer
Core Layer
Spanning-Tree Blocking
22
Spanning-Tree Blocking
23
Deployment Example
- Switch numbering: 1 to 8, left to right, top to bottom
- Master switch is member 1
- Alternate masters will be 3, 5, 7
- Uplink switches will be members 2, 4, 6, 8
1 2
Configuration Commands
switch 1 priority 15                      ! Sets Sw 1 to primary master
switch 3 priority 14                      ! Sets Sw 3 to secondary master
switch 5 priority 13                      ! Sets Sw 5 to 3rd master
switch 7 priority 12                      ! Sets Sw 7 to 4th master
spanning-tree mode rapid-pvst             ! Enables Rapid STP
vlan 1-10
 state active                             ! Configures VLANs
interface range gig1/0/1 - gig1/0/16
 switchport access vlan xx                ! Assign ports to VLANs
Configuration Commands
interface range ten2/0/1, ten4/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1-10
 channel-group 1 mode active
interface range ten6/0/1, ten8/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1-10
 channel-group 2 mode active
interface po1
 spanning-tree vlan 1,3,5,7,9 port-priority 0
 spanning-tree vlan 2,4,6,8,10 port-priority 16
interface po2
 spanning-tree vlan 1,3,5,7,9 port-priority 16
 spanning-tree vlan 2,4,6,8,10 port-priority 0
27
Single Switch / Node (for Spanning Tree or Layer 3 or Management) All Links Forwarding
VBS 1
VBS 2
VBS 3
VBS 4
VSS vPC
VBS 1
VBS 2
VSS vPC
VBS 3
TECDCT-3873 2009 Cisco Systems, Inc. All rights reserved. Cisco Public
VBS 4
30
NIC teaming software typically requires Layer 2 adjacency.
Active Standby
Not as Popular
Flexlink Overview
- Achieve Layer 2 resiliency without using STP
- Access switches have backup links to aggregation switches
- Target of sub-100 ms convergence upon forwarding-link failover
- Convergence time is independent of the number of VLANs and MAC addresses
- Interrupt-based link detection for Flexlink ports; link-down detected at a 24 ms poll
- No STP instance for Flexlink ports
- Forwarding on all VLANs on the up Flexlink port occurs with a single update operation (low cost)
Flexlink Preemption
Flexlink enhanced to :
provide flexibility in choosing FWD link, optimizing available bandwidth utilization
The user can configure what the Flexlink pair does when the previous forwarding link comes back up:
- Preemption mode Off: the current forwarding link continues forwarding
- Preemption mode Forced: the previous forwarding link preempts the current one and begins forwarding instead
- Preemption mode Bandwidth: the higher-bandwidth interface preempts the other and goes forwarding
Note: by default, Flexlink preemption mode is Off. When configuring preemption delay, the user can specify a preemption delay time (0 to 300 sec); the default preemption delay is 35 secs.
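A minimal IOS sketch of a Flexlink pair with preemption, matching the Po1/Po2 output shown below (the delay value is illustrative):

interface Port-channel1
 switchport backup interface Port-channel2                      ! Po1 forwards, Po2 is the backup
 switchport backup interface Port-channel2 preemption mode bandwidth
 switchport backup interface Port-channel2 preemption delay 45  ! wait 45 s before preempting back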
Active Interface    Backup Interface    State
Bandwidth: 20000000 Kbit (Po1), 10000000 Kbit (Po2)
Mac Address Move Update Vlan: auto
CBS3120-VBS-TOP#
Management Screenshot
Topology View
Management Screenshot
Front Panel View
16 internal copper 1/2/4-Gbps Fibre Channel connecting to blade servers through blade chassis backplane Up to 8 SFP uplinks Offered in 12-port and 24-port configurations via port licensing
14 internal copper 1/2/4-Gbps Fibre Channel connecting to blade servers through blade chassis backplane Up to 6 SFP uplinks Offered in 10-port and 20-port configurations via port licensing
SAN Islands
Resilient SAN Extension Standard solution (ANSI T11 FC-FS-2 section 10)
Tape SAN
FC
FC
Test SAN
FC
FC
FC
SAN C DomainID=3
SAN D DomainID=4
SAN E DomainID=5
VSAN Technology
The Virtual SANs Feature Consists of Two Primary Functions:
- Hardware-based isolation of tagged traffic belonging to different VSANs
- An independent instance of Fibre Channel services is created for each newly created VSAN (e.g., Fibre Channel services for the Blue VSAN and for the Red VSAN)
- The VSAN header is added at the ingress point, indicating membership, and removed at the egress point
- An Enhanced ISL (EISL) trunk carries tagged traffic from multiple VSANs
- No special support is required by end nodes
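A minimal MDS sketch of creating VSANs and assigning interface membership (VSAN numbers, names, and ports are hypothetical):

switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name Blue
switch(config-vsan-db)# vsan 20 name Red
switch(config-vsan-db)# vsan 10 interface fc1/1     ! traffic from this port is tagged as VSAN 10
switch(config-vsan-db)# vsan 20 interface fc1/2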
Enhanced zoning versus basic zoning:
- Enhanced: all configuration changes are made within a single session; the switch locks the entire fabric to implement the change. Advantage: one configuration session for the entire fabric ensures consistency within the fabric.
- Basic: if a zone is a member of multiple zonesets, an instance is created per zoneset. Enhanced: references to the zone are used by the zonesets as required once you define the zone. Advantage: reduced payload size as the zone is referenced; the reduction is more pronounced with a bigger database.
- Basic: the default zone policy is defined per switch. Enhanced: enforces and exchanges the default zone setting throughout the fabric. Advantage: fabric-wide policy enforcement reduces troubleshooting time.
- Basic: the managing switch provides combined status about activation and will not identify a failing switch. Enhanced: retrieves the activation results and the nature of the problem from each remote switch.
- Basic: to distribute the zoneset, you must re-activate the same zoneset. Enhanced: implements changes to the zoning database and distributes it without activation. Advantage: this avoids hardware changes for hard zoning in the switches.
- Basic: during a merge, MDS-specific types can be misunderstood by non-Cisco switches. Enhanced: provides a vendor ID along with a vendor-specific type value to uniquely identify a member type. Advantage: unique vendor type.
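A hedged sketch of switching a VSAN to enhanced zoning and committing a change (zone names and pWWNs are placeholders):

switch(config)# zone mode enhanced vsan 10
switch(config)# zone name Host1_Array1 vsan 10
switch(config-zone)# member pwwn 10:00:00:00:c9:00:00:01
switch(config-zone)# member pwwn 50:06:01:60:00:00:00:01
switch(config)# zoneset name ZS10 vsan 10
switch(config-zoneset)# member Host1_Array1
switch(config)# zoneset activate name ZS10 vsan 10
switch(config)# zone commit vsan 10          ! releases the fabric-wide lock and distributes the change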
IVR
Marketing VSAN_2
Quick Review 1
- VSANs enable creation of multiple virtual fabrics on top of a consolidated physical SAN infrastructure
- Enhanced zoning is recommended and helpful from both scalability and troubleshooting standpoints
- Inter-VSAN Routing (IVR) is required when selective communication between shared devices on distinct fabrics is needed
Application Server
FC Switch
Email I/O N_Port_ID 1 Web I/O N_Port_ID 2 File Services I/O N_Port_ID 3 F_Port
Web
File Services
FC
FC
FC
FC
FC
FC
FC
FC
NP_Port
Blade System
Blade N
Blade N
Blade 2
Blade 2
Blade 1
Blade 1
FC Switch
FC Switch
NPV
NPV
N-Port
SAN
SAN
F-Port
Storage
Storage
Blade switch attributes in FC switch mode (E-Port): deployment model; number of Domain IDs used: one per FC blade switch; interoperability issues with a multi-vendor core SAN switch: yes; level of management coordination between server and SAN administrators: medium.
NPV is also available on the MDS 9124 & 9134 Fabric Switches
10.1.1 10 1 1
20.2.1 20 2 1
F-port
NP-port
MDS 9124 MDS 9134
Can have multiple uplinks, on different VSANs (port channel and trunking in a later release)
10.5.2
FC
10.5.7 20.5.1
Initiator (no FL ports)
NPV Device
Uses the same domain(s) as the NPV-core switch(es)
Target
When an NP port comes up on an NPV edge switch, it first does FLOGI and PLOGI into the core to register in the FC name server. End devices connected to the NPV edge switch do FLOGI, but the NPV switch converts the FLOGI to an FDISC command, creating a virtual pWWN for the end device and allowing it to log in through the physical NP port. All I/O of an end device always flows through the same NP port.
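A hedged sketch of the two sides of this relationship (note that enabling NPV on the edge switch typically erases its configuration and reloads it):

! NPV core switch (keeps the domain ID, must accept multiple logins per port)
core(config)# feature npiv
! NPV edge / blade switch (no domain ID; proxies FLOGI as FDISC toward the core)
edge(config)# feature npv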
F
NP P1 NP P2
P4 = vP2
P5 = vP3
FlexAttach
Because even physical devices move. How does it work? Based on WWN NAT of the server's WWN.
Bl lade 1
Blade Server
.
NPV
Re eplaced Blade B
Bl lade N
Key Benefit:
- Flexibility for server mobility: adds, moves, and changes
- Eliminates the need for the SAN and server teams to coordinate changes
- Two modes: lock the identity to the port, or have the identity follow the physical pWWN
Flex Attach
SAN
Storage
Flex Attach
Example
- Creation of a virtual pWWN (vPWWN) on the NPV switch F-port
- Zone the vPWWN to storage; LUN masking is done on the vPWWN
- Can swap the server or replace the physical HBA with no zoning modification and no LUN masking change required
Before: switch 1
After: switch 2
1
FC1/1 vPWWN1 FC1/6 vPWWN1
PWWN 1
Server 1
TECDCT-3873 2009 Cisco Systems, Inc. All rights reserved. Cisco Public
Server 1 58
PWWN 2
Whats Coming:
Enhanced Blade Switch Resiliency
F-Port Port Channel
F-Port Port Channel Blade N Blade 2 Blade 1
Blade System
SAN
N-Port
F-Port
F-Port Trunking
Core Director Storage Blade System
Blade N VSAN 1
F-Port Trunking
Blade 2 Blade 1
VSAN 2
SAN
Whats Coming:
F-Port Trunking for the End-Host / Storage
Hardware-based isolation of tagged traffic belonging to different VSANs up to Servers or Storage Devices
Non VSANTrunking capable end node
Fibre Channel Services for Blue VSAN Fibre Channel Services for Red VSAN
Trunking E_Port
Enhanced ISL (EISL) Trunk carries tagged traffic from multiple VSANs Trunking E_Port
Fibre Channel Services for Blue VSAN Fibre Channel Services for Red VSAN
Trunking F_Port
60
Quick Review 2
- NPIV: standard mechanism enabling F-port (switch and HBA) virtualization
- NPV: allows an FC switch to work in HBA mode. The switch behaves like a proxy of WWNs and doesn't consume a Domain ID, enhancing SAN scalability (mainly in blade environments)
- FlexAttach: adds flexibility for server mobility by allowing the server FC identity to follow the physical pWWN (for blade and rack-mount servers)
- F-port port-channel: in NPV scenarios, the ability to bundle multiple physical ports into one logical link
- F-port trunking: extends VSAN tagging to the N_Port-to-F_Port connection. Works between switches together with NPV. For hosts, it needs VSAN support on the HBA and allows per-VM VSAN allocation
SAN Design
Factors to Consider
- Topologies - Bandwidth reservation - Networking / gear capacity
TECDCT-3873 2009 Cisco Systems, Inc. All rights reserved. Cisco Public
Parameters
- Number of end devices - Speed variation
62
Parameters:
1. Number of end-devices (servers, storage and tape) 2. Speed: Majority of end device connection speeds will be primarily 1G, 2G or 4G
Factors to consider:
1. Required topology (core-edge, collapsed core-edge, edge-core-edge, etc.) 2. Bandwidth reservation versus oversubscription 3. Networking capacity needed (VSANs, ISLs, fabric logins, zones, NPIV instances, etc.)
- Disk oversubscription: disks do not sustain wire-rate I/O with realistic I/O mixtures. A major vendor promotes a 12:1 host:disk fan-out.
- Tape oversubscription: low sustained I/O rates. All technologies currently have a max theoretical native transfer rate well below wire-speed FC (LTO, SDLT, etc.).
- ISL oversubscription: typical oversubscription in a two-tier design can approach 8:1, some even higher.
- Host oversubscription: most hosts suffer from PCI bus, OS, and application limitations, thereby limiting the maximum I/O and bandwidth rate.
Design by the maximum values leads to over engineered and underutilized SANs. Oversubscription helps to achieve best cost / performance ratio. Rule of thumb: limit the number of hosts per storage port based on the array fan-out. For instance, 10:1 or 12:1.
Core-Edge
Traditional SAN design for growing SANs: high-density directors in the core and, on the edge:
- Unified I/O (FCoE) switches [1]
- Directors [2]
- Fabric switches [3]
- Blade switches [4]
A A B B A B A B A B
[1]
[2]
[3]
[4]
70
Being able to remove and reinsert a new blade without having to change zoning configurations. VMware integration (discussed later in this Techtorial).
8 bits
Device
73
Switch Domain
Cisco Public
NPIV
Blade server design using 2 x 4G ISLs per blade switch. Oversubscription can be reduced for individual blade centers by adding additional ISLs as needed.
- [A] Storage ports (2G dedicated): 240, or [B] storage ports (4G dedicated): 120
- Host ports (4G HBAs): 1152
- ISL oversubscription (ports): 8:1
- Disk oversubscription (ports): 10:1
- Core-edge design oversubscription: 9.6:1
NPIV
Each Cisco MDS FC blade switch (two switches per HP c-Class enclosure):
- 2 ISLs to the core @ 4G
- 16 host ports per HP c-Class enclosure @ 4G
- 8:1 oversubscription
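Reading the numbers above as a worked check (assuming all links run at 4G):

Per blade switch:  16 host ports x 4G  /  (2 ISLs x 4G)              =  8:1   ISL oversubscription
End to end:        1152 host ports     /  120 storage ports (4G)     =  9.6:1 core-edge oversubscription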
Virtual Machines
Fabric
Tier 1
Tier 2
Tier 3
77
VM
VM
VM
VM
VM
VM
FC
FC
SCSI
iSCSI is popular in SMB market DAS is not popular because it prohibits VMotion
Access control is delegated to the storage array's LUN masking and mapping; it is based on the physical HBA pWWN and is the same for all VMs. The hypervisor is in charge of the mapping; errors may be disastrous.
Hypervisor
MDS9124e Mapping
FC
HW
pWWN-P
FC
pWWN-P
Zone
TECDCT-3873
FC Name Server
80
Virtual Servers s
Hypervisor
FC
FC
FC
FC
FC
To pWWN-1
pWWN-1 pWWN-2 pWWN-3 pWWN-4
HW
pWWN-P
FC
FC Name Server
81
VM-1
VM-2
Congested Link Cisco MDS 9124e Multilayer Fabric Switch Cisco MDS 9000 Multilayer Fabric Switch Storage Array (SAN A or B)
Storage Array
FC
H Hypervisor
QoS
FC
Q QoS IVR
pWWN-T
HW
pWWN-P
FC
Low-Priority Traffic
VM-1
VM-2
IVR-Zone-P includes the physical devices pWWN-P and pWWN-T. IVR-Zone-Vx includes virtual machine x and the physical target only.
FC
IVR-Zone-V2 MDS9124e
MDS9000 VSAN-20
FC
ESX Hypervisor X
pWWN-V2
pWWN-T2 WWN T2
FC
IVR-Zone-V1
pWWN-V1
VSAN-1
HW
pWWN-P
FC
VSAN-10
FC
IVR
pWWN-T1
IVR-Zone-P
Standard HBAs
WWPN
All configuration parameters are based on the World Wide Port Name (WWPN) of the physical HBA
FC
All LUNs must be exposed to every server to ensure disk access during live migration (single zone).
No need to reconfigure zoning or LUN masking Dynamically reprovision VMs without impact to existing infrastructure
FC
Centralized management of VMs and resources Redeploy VMs and support live migration
Frame Tagged on Trunk Cisco MDS 9124e Blade Data CenterGreen VSAN-10 VSAN-20 VSAN-30 Cisco MDS 9000 Family
VSAN-10 VSAN 10
Storage Array
VSAN-20
Storage Array
VSAN VSAN30
Data Center Yellow Administrator Privileges Admininistrative Team Red Green Yellow
TECDCT-3873 2009 Cisco Systems, Inc. All rights reserved. Cisco Public
Virtual Machines Data Center Red Data Center Green Data Center Yellow
Unified IO (FCoE)
Data Center Ethernet is an architectural collection of Ethernet extensions designed to improve Ethernet networking and management in the Data Center.
Nothing! All three acronyms describe the same thing: the architectural collection of Ethernet extensions (based on open standards). Cisco has co-authored many of the associated standards and is focused on providing a standards-based solution for a Unified Fabric in the data center. The IEEE has decided to use the term DCB (Data Center Bridging) to describe these extensions to the industry. http://www.ieee802.org/1/pages/dcbridges.html
Benefit
- Provides class-of-service flow control; ability to support storage traffic
- Grouping classes of traffic into "service lanes" (IEEE 802.1Qaz, CoS-based enhanced transmission)
- End-to-end congestion management for the L2 network
- Auto-negotiation of Enhanced Ethernet capabilities: DCBX
- Eliminate Spanning Tree for L2 topologies; utilize full bisectional bandwidth with ECMP
- Ability to transport various traffic types (e.g., storage, RDMA)
Benefit
Provides class of service flow control. Ability to support storage traffic
- Enables lossless fabrics for each class of service
- PAUSE is sent per virtual lane when the buffer limit is exceeded
- Network resources are partitioned between VLs (e.g., input buffer and output queue)
- The switch behavior is negotiable per VL
Benefit
Provides class of service flow control. Ability to support storage traffic Grouping classes of traffic into Service Lanes IEEE 802.1Qaz, CoS based Enhanced Transmission
- Enables intelligent sharing of bandwidth between traffic classes with control of the bandwidth
- Being standardized in IEEE 802.1Qaz
- Also known as Priority Grouping
Benefit
Provides class of service flow control. Ability to support storage traffic Grouping classes of traffic into Service Lanes IEEE 802.1Qaz, CoS based Enhanced Transmission End to End Congestion Management for L2 network
Moves congestion out of the core to avoid congestion spreading and allows end-to-end congestion management. On the standards track as IEEE 802.1Qau.
DCBX: auto-negotiation of Enhanced Ethernet capabilities.
Handshaking and negotiation for: CoS and bandwidth management; class-based flow control; congestion management (BCN/QCN); application (user_priority usage); logical link down.
L2 multipathing: eliminate Spanning Tree for L2 topologies and utilize full bisectional bandwidth with ECMP.
[Diagram: Phase 2 - a virtual switch (vPC) presents MAC A and MAC B active-active to the LAN ("we are here"); Phase 3 - L2 ECMP across the LAN fabric.]
vPC (virtual switch):
- Eliminates STP on uplink bridge ports and allows multiple active uplinks from the switch to the network
- Prevents loops by pinning a MAC address to only one port; completely transparent to the next-hop switch
- The virtual switch retains the physical switches' independent control and data planes
- The virtual port channel mechanism is transparent to hosts or switches connected to the virtual switch
- STP is kept as a fail-safe mechanism to prevent loops even in the case of a control-plane failure
L2 ECMP:
- Uses an IS-IS based topology and eliminates STP from the L2 domain
- Allows preferred path selection; TRILL is the work-in-progress standard
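A minimal vPC sketch in NX-OS syntax for one of the two peer switches; the domain ID, keepalive addresses and port-channel numbers are assumptions for illustration:

feature vpc
vpc domain 10
  peer-keepalive destination 10.1.1.2 source 10.1.1.1
interface port-channel 1
  switchport mode trunk
  vpc peer-link          ! link between the two vPC peers
interface port-channel 20
  switchport mode trunk
  vpc 20                 ! member port-channel toward the access switch or host

The downstream device simply sees one regular port channel, which is what makes the mechanism transparent to attached hosts and switches.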
Ability to transport various traffic types (e.g. Storage, RDMA).
Virtual Links - an example: up to 8 VLs per physical link, with the ability to support QoS queues within the lanes. [Diagram: DCE links between CNAs and a storage gateway, with VL2 configured as a no-drop service lane for storage.]
[Diagram: FCoE mapping - the FC-2/FC-3 layers are mapped directly onto an Ethernet MAC/PHY; a complete FC frame (FC payload, FCS, EOF) is carried unchanged as the payload of an Ethernet frame, between the Ethernet header and the Ethernet FCS.] FCoE enablers: 10 Gbps Ethernet and lossless Ethernet, which matches the lossless behavior guaranteed in FC by buffer-to-buffer credits.
Encapsulation technologies. [Diagram: SCSI carried as FCP over FCoE on Ethernet (1, 10 ... Gbps), compared with SRP over InfiniBand (10, 20 Gbps).]
Encapsulation technologies: the FCP layer is untouched. [Stack: OS / applications - SCSI layer - FCP - FCoE - Ethernet (1, 10 ... Gbps).] This allows the same management tools, the same Fibre Channel drivers, and the same multipathing software as native Fibre Channel.
[Diagram: a Converged Network Adapter (CNA) carries FC traffic together with LAN, management, backup, and IPC traffic over a single interface.]
[Diagram: today each server uses separate NICs toward the Ethernet core switches and separate FC HBAs toward SAN A and SAN B.]
Unified I/O:
- Reduction of server adapters and fewer cables
- Simplification of the access layer and cabling
- Gateway-free implementation - fits into the installed base of existing LANs and SANs
- L2 multipathing at access and distribution
- Lower TCO and investment protection (LANs and SANs)
- Consistent operational model, one set of ToR switches
[Diagram: an FCoE access switch splits the converged server link into Ethernet toward the LAN and FC toward SAN A and SAN B.]
[Diagram: first-generation CNA - a Cisco ASIC multiplexes a 10GE interface and an FC interface onto the host PCIe bus.]
Emulex
Qlogic
TECDCT-3873
Cisco Public
115
Both Emulex and QLogic use the Intel Oplin 10 Gigabit Ethernet chip on their CNAs.
Disk Management
Storage is zoned to the FC initiator of the host.
Network Admin
Login: Net_admin Password: abc1234
SAN Admin
Login: SAN_admin Password: xyz6789
Ethernet FC
Network Admin
Login: Net_admin Password: abc1234
SAN Admin
Login: SAN_admin Password: xyz6789
[Diagram: a pair of Nexus 5000 switches terminates the servers' CNAs and splits the converged traffic onto the Ethernet LAN and the FC SAN, while the network admin and the SAN admin keep their separate management logins.]
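A minimal FCoE configuration sketch for a Nexus 5000 in this role; the VLAN/VSAN numbers and interface IDs are placeholders:

feature fcoe
vlan 100
  fcoe vsan 10                  ! map FCoE VLAN 100 to VSAN 10
vsan database
  vsan 10
interface vfc11
  bind interface ethernet 1/11  ! virtual FC interface bound to the CNA-facing port
  no shutdown
vsan database
  vsan 10 interface vfc11
interface ethernet 1/11
  switchport mode trunk
  switchport trunk allowed vlan 1,100
  no shutdown

The SAN admin manages the vfc/VSAN side while the network admin manages the Ethernet side, preserving the operational split shown above.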
[Diagram: converged access - Nexus 5000 access switches (A, B, D, E) present VF_Ports to the servers' VN_Ports; their Ethernet uplinks go to the N7K/C6K L3/L2 aggregation layer, and their native FC uplinks go to SAN Edge A and SAN Edge B of the SAN fabric.]
[Diagram: the same design with link counts - each Nexus 5000 access switch (A, B, D, E) uses bundles of 4 uplinks toward the N7K/C6K aggregation layer and toward the MDS 9500 switches of SAN Fabric A and B.]
[Diagram: a variant with redundant N7Ks at aggregation; the Nexus 5000 access switches are paired (A/B and D/E), again with 4-link uplink bundles to the LAN aggregation and to the MDS 9500 SAN core.]
Nexus 5000 at the aggregation layer: VE interfaces are not supported so far.
Unified Fabric: wire-once infrastructure; low-latency, lossless; virtualization aware.
Unified Computing: consolidated fabric & I/O; stateless; VN-tagging; management.
Ethernet
Fibre Channel
Virtual
Ethernet
Fibre Channel
Virtual
VN-Link (Nexus 1000v) Manage virtual the same as physical
Physical
Virtual
Ethernet
Fibre Channel
Virtual
VN-Link (Nexus 1000v) Manage virtual the same as physical
Scale Physical
Fabric Extender (Nexus 2000) Scale without increasing points of management
Mgmt Server
Network: unified fabric. Compute: industry-standard x86. Storage: access options. Virtualization optimized.
Efficient scale: Cisco network scale & services; fewer servers with more memory.
Lower cost: fewer servers, switches, adapters, and cables; lower power consumption; fewer points of management.
UCS Blade Server Chassis: flexible bay configurations. UCS Blade Server: industry-standard architecture. UCS Virtual Adapters: choice of multiple adapters.
Nexus Products
Chassis: blade enclosure with redundant Fabric Extenders. [Diagram: each Fabric Extender connects to the blades over the backplane (x8 connections) and handles host-to-uplink traffic engineering; each x86 compute blade carries mezzanine adapters.]
Adapter - 3 options for the compute blade: the Cisco virtualized adapter; compatibility CNAs (Emulex and QLogic: native FC plus Intel Oplin); and Intel Oplin (10GE only).
Half-width server blade: up to eight per enclosure; hot-swap SAS drive (optional). Full-width server blade: up to four per enclosure; blade types can be mixed. 6U enclosure with ejector handles.
Fan Handle
Compatibility: existing driver stacks. Cost: proven 10GbE technology. Converged network adapters (CNAs): ability to mix and match adapter types within a system, with automatic discovery of component types.
Note: only one adapter on the half-slot blade.
rtp-6100-B# scope adapter 1/5/2
Error: Managed object does not exist
Ethernet stats
Virtualized Adapter
Unified Management
Server: identity (UUID); adapters (number; type: FC, Ethernet; identity; characteristics; firmware revisions; configuration settings).
Server: identity (UUID); adapters (number; type: FC, Ethernet; identity; characteristics; firmware revisions; configuration settings); uplinks.
Network: LAN settings (VLAN, QoS, etc.); SAN settings (VSAN); firmware revisions.
Storage: optional; disk usage; SAN settings (LUNs, persistent binding); firmware revisions.
Server: identity (UUID); adapters (number; type: FC, Ethernet; identity; characteristics; firmware revisions; configuration settings); uplinks.
Network: LAN settings (VLAN, QoS, etc.); SAN settings (VSAN); firmware revisions.
Configure the switch: zoning, VSANs, QoS. Configure policies: QoS, ACLs. Configure BIOS settings, NIC settings, HBA settings, and boot parameters.
These tasks must be performed for each server, which inhibits pay-as-you-grow incremental deployment, needs admin coordination every time, and may incur downtime during deployments.
Definable attributes: disks & usage; network type (FC, Ethernet, etc.); number; identity; characteristics; LAN settings (VLAN, QoS, etc.); SAN settings (LUNs, VSAN & persistent binding); firmware revisions; configuration settings; identity (BIOS).
Storage: optional; disk usage; SAN settings (LUNs, persistent binding, VSAN); firmware revisions.
Server: identity (UUID); adapters (number; type: FC, Ethernet; identity; characteristics; firmware revisions; configuration settings).
Network: uplinks; LAN settings (VLAN, QoS, etc.); firmware revisions.
BMC Firmware
Separate firmware, addresses, and parameter settings from the server hardware; separate access port settings from the physical ports. Physical servers become interchangeable hardware components, making it easy to move the OS & applications across server hardware.
BMC Firmware
Server virtualization and hardware state abstraction are independent of each other; the hypervisor (or OS) is unaware of the underlying hardware state abstraction.
Server Upgrades:
Within a UCS
Server Name: finance-01 UUID: 56 4d cd 3f 59 5b 61 MAC : 08:00:69:02:01:FC WWN: 5080020000075740 Boot Order: SAN, LAN Firmware: xx.yy.zz
Old Server
New Server
Disassociate the server profile from the old server, then associate it to the new server. The old server can be retired or re-purposed.
Server Upgrades:
Across UCS Instances
Old UCS System New UCS System
Server profile finance-01 (UUID 56 4d cd 3f 59 5b 61, MAC 08:00:69:02:01:FC, WWN 5080020000075740, boot order SAN then LAN, firmware xx.yy.zz), shown in both the old and the new UCS system.
Server Upgrades:
Across UCS Instances
Old System New System
The same finance-01 server profile (identity as above), shown while it is moved between the old and the new system.
Server Upgrades:
Across UCS Instances
Old System New System
The same finance-01 server profile (identity as above), shown in the old and the new system.
1. Disassociate server profiles from the servers in the old UCS system.
2. Migrate the server profiles to the new UCS system.
3. Associate the server profiles to hardware in the new UCS system.
Apply the appropriate profile to provision a specific server type. The same hardware can dynamically be deployed as different server types, so there is no need to purchase custom-configured servers for specific applications.
[Diagram: comparison of web-server and VMware blade pools - 18 total servers when each pool carries its own dedicated burst-capacity and HA spare blades, versus 14 total servers when spare blades (normal use, burst capacity, hot spare, HA spare) can be re-deployed across pools via profiles.]
Virtualized Adapter
Unified Management
Unified Fabric (LAN, SAN, IPC).
Today's approach: every fabric type has its own switches in each blade chassis - repackaged switches; complex to manage; blade-chassis configuration dependency; costly; small network domain.
Unified Fabric: fewer switches and fewer adapters; 10GE & FCoE carry all I/O types (LAN, SAN, IPC) in each chassis; easier to manage; blades can work with any chassis; small network domain.
[Diagram: compute blades connect across the backplane to the chassis Fabric Extender.] The Fabric Extender manages oversubscription from 2:1 to 8:1, carries FCoE from the blade to the fabric switch, and offers customizable bandwidth.
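As a rough illustration of where those ratios come from (assuming eight 10GE blade-facing ports per Fabric Extender and one to four 10GE uplinks, which is not spelled out on the slide): with all four uplinks active the ratio is $\frac{8 \times 10\,\mathrm{Gb/s}}{4 \times 10\,\mathrm{Gb/s}} = 2{:}1$, and with a single uplink it becomes $\frac{8 \times 10\,\mathrm{Gb/s}}{1 \times 10\,\mathrm{Gb/s}} = 8{:}1$.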
Virtualized Adapter
Unified Management
Compatibility: existing driver stacks. Cost: free SAN access for any Ethernet-equipped host via software FCoE. [Diagram: the adapter options side by side - a 10GbE/FCoE compatibility CNA exposing Ethernet and FC functions, the Cisco virtualized adapter exposing vNICs 0 through 127, and a plain 10GbE NIC running software FCoE.]
The virtualized adapter presents up to 128 vNICs (Ethernet, FC or SCSI) over a PCIe x16 connection, delivers up to 500K IOPS, and supports both initiator and target modes.
[Diagram: guest OS device drivers are mapped directly to vNICs on the adapter through the IOMMU, bypassing the hypervisor's device manager.] Use cases: I/O appliances, high-performance VMs.
Network Interface Virtualization (NIV) adapter: vary the nature and number of PCIe interfaces presented to the OS on the compute blade - Ethernet, FC, SCSI, and IPC in any mix.
System classes on the adapter:
- FC: CoS 3, no-drop, weight 1 (20%)
- Gold: CoS 1, drop, weight 3 (60%)
- Ethernet best effort: CoS 0, drop, weight 1 (20%)
Per-vNIC rate limiting (class / rate / burst):
- Adapter carrying FC traffic: vNIC1 - FC, 4000, 300; vNIC2 - FC, 4000, 400; vNIC3 - Ethernet BE, 5000, 100
- Second adapter: vNIC1 - Gold, 600, 100; vNIC2 - Ethernet BE, 4000, 300
Virtualized Adapter
Unified Management
Blade Overview
Common attributes (half-width and full-width blades): 2 x Intel Nehalem-EP processors; 2 x SAS hard drives (optional); blade service processor; blade and HDD hot-plug support; stateless blade design; 10Gb CNA and 10GbE adapter options.
Differences: the half-width blade has 12 DIMM slots; the full-width blade has 4x the memory and 2x the I/O bandwidth.
Full-width blade: 2-socket Nehalem-EP; 48 x DDR3 DIMMs; 2 x mezzanine cards; 2 x hot-swap disk drives; up to 384GB per 2-socket blade; transparent to the OS and applications.
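The 384GB figure follows directly from the DIMM count, assuming the 8GB DIMMs shown in the memory diagram that follows: $48 \text{ DIMMs} \times 8\,\mathrm{GB} = 384\,\mathrm{GB}$ per 2-socket blade.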
[Diagram: physical view vs. logical view of the extended-memory blade - physical 8GB DIMMs in slots 0 through 23, grouped per memory channel (channel 0 green, channel 1 blue, channel 2 red), are presented to the Nehalem-EP processor as 32GB logical DIMMs.]
[Diagram: balancing I/O, CPU, and memory.]
Virtualized Adapter
Unified Management
Two failure domains with separate fabrics; central supervisor and forwarding logic; distributed Fabric Extenders. [Diagram: the two fabrics connect over 10GE/FCoE to each blade chassis, and chassis management is embedded in every chassis.]
[Diagram: UCS Manager exposes a GUI, a CLI, and an XML API; standard APIs allow custom portals and other tools to drive it.]
Physical: server blades, adapters. Logical: UUIDs, VLANs, IP addresses, MAC addresses, VSANs, WWNs.
MAC pool example: 01:23:45:67:89:0a, 01:23:45:67:89:0b, 01:23:45:67:89:0c, 01:23:45:67:89:0d. WWN-style identities (e.g. 05:00:1B:32:00:00:00:01) are pooled in the same way.
Profiles example: servers, virtual machines, Ethernet adapters, Fibre Channel adapters, IPMI profiles.
IPMI
CIM XML
Remote KVM
[Screenshot: the UCS Manager GUI with the navigation pane on the left and the content pane on the right.]
Creation Wizards
[Diagram: UCS Manager policies partition the same physical infrastructure - blade chassis, fabric extenders, and compute blades - among organizations such as Network Management, HR, and Facilities.]
Custom Portal
Server Array
Virtualized Adapter
Unified Management
UCS Integration
[Diagram: data center topology - Nexus 7010 at the core and distribution layers interconnected over 10GE; Nexus 5000 switches with Fabric Extenders (FEX) at the access layer provide GigE and 10GE server connectivity across racks 1 through 12.]
[Diagram: the same topology with UCS added - UCS 6100 fabric interconnects sit at the access layer alongside the Nexus 5000/FEX infrastructure, and each UCS chassis contributes blades 1 through 8 (slots 1-8) across the racks.]
Recommended Readings
Don't forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com.