
Medium Enterprise Design Profile (MEDP) - LAN Design

This document discusses the LAN design for a medium-sized enterprise with multiple campuses. It uses a hierarchical, tiered design with either three or two tiers. The tiers are the access, aggregation, and core layers. This design provides modularity, resiliency, and flexibility to allow for growth and overlay of security, mobility, and unified communication features.

Medium Enterprise Design Profile (MEDP)—LAN Design

LAN Design
The Medium Enterprise LAN design is a multi-campus design, where a campus consists
of multiple buildings and services at each location, as shown in Figure 1.
Figure 1 Medium Enterprise LAN Design

[figure: the main site, with large, medium, small, and extra small buildings, a services block, a data center, and the WAN/PSTN/Internet edge, interconnected with remote large, medium, and small sites that each have their own services block and data center]



Figure 2 shows the service fabric design model used in the medium enterprise LAN design.

Figure 2 Medium Enterprise LAN Service Fabric Design Model
[figure: the service fabric design model — mobility, security, and unified communications overlaid on the Local Area Network (LAN) and Wide Area Network (WAN)]

This chapter focuses on the LAN component of the overall design. The LAN component consists of the LAN framework and network foundation technologies that provide baseline routing and switching guidelines. The LAN design interconnects several other components, such as endpoints, data center, WAN, and so on, to provide a foundation on which mobility, security, and unified communications (UC) can be integrated into the overall design.

This LAN design provides guidance on building the next-generation medium enterprise network, which becomes a common framework along with critical network technologies to deliver the foundation for the service fabric design. This chapter is divided into the following sections:
• LAN design principles—Provides proven design choices to build various types of LANs.
• LAN design model for the medium enterprise—Leverages the design principles of the tiered network design to facilitate a geographically dispersed enterprise campus network made up of various elements, including networking role, size, capacity, and infrastructure demands.
• Considerations of a multi-tier LAN design model for medium enterprises—Provides guidance for the enterprise campus LAN network as a platform with a wide range of next-generation products and technologies to integrate applications and solutions seamlessly.
• Designing network foundation services for LAN designs in the medium enterprise—Provides guidance on deploying various types of Cisco IOS technologies to build a simplified and highly available network design that provides continuous network operation. This section also provides guidance on designing network-differentiated services that can be used to customize the allocation of network resources to improve user experience and application performance, and to protect the network against unmanaged devices and applications.

LAN Design Principles

Any successful design or system is based on a foundation of solid design theory and principles. Designing the LAN component of the overall medium enterprise LAN service fabric design model is no different than designing any large networking system. The use of a guiding set of fundamental engineering design principles serves to ensure that the LAN design provides for the balance of availability, security, flexibility, and manageability required to meet current and future advanced and emerging technology needs. This chapter provides design guidelines that are built upon the following principles to allow a medium enterprise network architect to build enterprise campuses located in different geographical locations:
• Hierarchical
– Facilitates understanding the role of each device at every tier
– Simplifies deployment, operation, and management
– Reduces fault domains at every tier
• Modularity—Allows the network to grow on an on-demand basis
• Resiliency—Satisfies user expectations for keeping the network always on
• Flexibility—Allows intelligent traffic load sharing by using all network resources
These are not independent principles. The successful design and implementation of a campus network requires an understanding of how each of these principles applies to the overall design. In addition, understanding how each principle fits in the context of the others is critical in delivering the hierarchical, modular, resilient, and flexible network required by medium enterprises today.

Designing the medium enterprise LAN building blocks in a hierarchical fashion creates a flexible and resilient network foundation that allows network architects to overlay the security, mobility, and UC features essential to the service fabric design model, as well as providing an interconnect point for the WAN aspect of the network. The two proven, time-tested hierarchical design frameworks for LAN networks are the three-tier and two-tier models, as shown in Figure 3.

Figure 3 Three-Tier and Two-Tier LAN Design Models
[figure: the three-tier LAN design (core, distribution, and access layers) alongside the two-tier LAN design (collapsed core/distribution and access layers)]

The key layers are access, distribution, and core. Each layer can be seen as a well-defined, structured module with specific roles and functions in the LAN network. Introducing modularity in the LAN hierarchical design further ensures that the LAN network remains resilient and flexible to provide critical network services as well as to allow for growth and changes that may occur in a medium enterprise.

• Access layer
The access layer represents the network edge, where traffic enters or exits the campus network. Traditionally, the primary function of an access layer switch is to provide network access to the user. Access layer switches connect to the distribution layer switches to perform network foundation technologies such as routing, quality of service (QoS), and security.
To meet network application and end-user demands, the next-generation Cisco Catalyst switching platforms no longer simply switch packets, but now provide intelligent services to various types of endpoints at the network edge. Building intelligence into access layer switches allows them to operate more efficiently, optimally, and securely.

• Distribution layer
The distribution layer interfaces between the access layer and the core layer to provide many key functions, such as the following:
– Aggregating and terminating Layer 2 broadcast domains
– Aggregating Layer 3 routing boundaries
– Providing intelligent switching, routing, and network access policy functions to access the rest of the network
– Providing high availability through redundant distribution layer switches to the end user and equal-cost paths to the core, as well as providing differentiated services to various classes of service applications at the edge of the network

• Core layer
The core layer is the network backbone that connects all the layers of the LAN design, providing for connectivity between end devices, computing and data storage services located within the data center and other areas, and services within the network. The core layer serves as the aggregator for all the other campus blocks, and ties the campus together with the rest of the network.

Note For more information on each of these layers, see the enterprise class network framework at the following URL: http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/campover.html.

Figure 4 shows a sample three-tier LAN network design for medium enterprises where the access, distribution, and core are all separate layers. To build a simplified, cost-effective, and efficient physical cable layout design, Cisco recommends building an extended-star physical network topology from a centralized building location to all other buildings on the same campus.

Figure 4 Three-Tier LAN Network Design Example
[figure: an extended-star topology with the core in Building A (Management) connecting distribution and access layers in Building B (Marketing and Sales), Building C (Engineering), Building D (Research and Development), Building E (Information Technology), and Building F (Data Center)]

The primary purpose of the core layer is to provide fault isolation and backbone connectivity. Isolating the distribution and core into separate layers creates a clean delineation for change control between activities affecting end stations (laptops, phones, and printers) and those that affect the data center, WAN, or other parts of the network. A core layer also provides for flexibility in adapting the campus design to meet physical cabling and geographical challenges. If necessary, a separate core layer can use a different transport technology, routing protocols, or switching hardware than the rest of the campus, providing for more flexible design options when needed.
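As a concrete illustration of these layer roles, the sketch below shows a distribution switch terminating a Layer 2 access VLAN at a switched virtual interface and routing upstream toward the core over a Layer 3 link. The VLAN number, addresses, and interface names are illustrative assumptions, not taken from this design.

```
! Layer 2 VLAN terminated at the distribution layer (hypothetical VLAN)
vlan 10
 name Marketing-Data
!
! Switched virtual interface: the Layer 3 boundary and default gateway
! for the access layer VLAN
interface Vlan10
 ip address 10.10.10.1 255.255.255.0
!
! Layer 2 trunk down to an access layer switch
interface GigabitEthernet1/1
 description Downlink to access switch
 switchport mode trunk
 switchport trunk allowed vlan 10
!
! Routed point-to-point uplink toward the core (one of the equal-cost paths)
interface TenGigabitEthernet2/1
 description Uplink to core
 no switchport
 ip address 10.10.255.1 255.255.255.252
```

In this pattern the broadcast domain stops at the distribution switch, while redundant routed uplinks give the access-originated traffic equal-cost paths into the core.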
In some cases, because of either physical or network scalability, having separate distribution and core layers is not required. In smaller locations where there are fewer users accessing the network, or in campus sites consisting of a single building, separate core and distribution layers are not needed. In this scenario, Cisco recommends the two-tier LAN network design, also known as the collapsed core network design.
Figure 5 shows a two-tier LAN network design example for a medium enterprise LAN
where the distribution and core layers are collapsed into a single layer.
Figure 5 Two-Tier Network Design Example
[figure: a collapsed distribution/core layer connecting access switches on Floor 2 (Management), Floor 3 (Information Technology), Floor 4 (Serverfarm), Floor 5 (Engineering), and Floor 6 (Research and Development), plus the WAN and PSTN edge]

If using the small-scale collapsed campus core design, the enterprise network architect must understand the network and application demands to ensure that this design delivers a hierarchical, modular, resilient, and flexible LAN network.

Medium Enterprise LAN Design Models


Both LAN design models (three-tier and two-tier) have been developed with the following considerations:
• Scalability—Based on Cisco enterprise-class, high-speed 10G core switching platforms for seamless integration of the next-generation applications required for medium enterprises. The platforms chosen are cost-effective and provide investment protection to upgrade the network as demand increases.
• Simplicity—Reduced operational and troubleshooting cost via the use of network-wide configuration, operation, and management.
• Resiliency—Sub-second network recovery during abnormal network failures or even network upgrades.
• Cost-effectiveness—Integrates specific network components that fit budgets without compromising performance.
As shown in Figure 6, multiple campuses can co-exist within a single medium enterprise system.

Figure 6 Medium Enterprise LAN Design Model
[figure: end-to-end reference topology — the main large site with Cisco Catalyst 6500 (VSS), Catalyst 4500, and Catalyst 3750 StackWise building platforms and a Catalyst 6500 VSS core; a service block (CUCM/Unity, ACS/CSA-MC, NAC Manager, WCS, VSOM/VSMS, DMM/CVP, WLC, DHCP/DNS, NTP); a data center block; a DMZ/Internet edge (web/email, ESA, WSA, NAC); and ASR WAN edge routers connecting over Metro Ethernet, HDLC, the PSTN, and a GigaPOP (Internet, NLR) to remote large, medium, and small sites, each with its own appliance-based services and data center block]

Depending on the remote campus office facility, the number of employees and networked devices in remote campuses may be equal to or less than at the main site. Campus network designs for the remote campus may require adjusting based on overall campus capacity.

Using high-speed WAN technology, all the remote medium enterprise campuses interconnect to a centralized main site that provides shared services to all the employees independent of their physical location. The WAN design is discussed in greater detail in the next chapter, but it is worth mentioning in the LAN section because some remote sites may integrate LAN and WAN functionality into a single platform. Collapsing the LAN and WAN functionality into a single Cisco platform can provide all the needed requirements for a particular remote site as well as reduce the cost of the overall design, as discussed in more detail in the following section.

Table 1 shows a summary of the LAN design models as they are applied in the overall medium enterprise network design.

Table 1 Medium Enterprise Recommended LAN Design Model

Medium Enterprise Location     Recommended LAN Design Model
Main campus                    Three-tier
Remote large campus            Three-tier
Remote medium campus           Three-tier with collapsed WAN edge
Remote small campus            Two-tier

Main Site Network Design

The main site in the medium enterprise design consists of a centralized hub campus location that interconnects several sizes of remote campuses to provide end-to-end shared network access and services, as shown in Figure 7.

Figure 7 Main Site Reference Design
[figure: large, medium, small, and extra small buildings with access and distribution layers connecting to the core, data center block, service block, DMZ, WAN edge (QFP), PSTN gateway, and Internet edge (QFP) toward the WAN, PSTN, GigaPOP, Internet, and NLR]

The main site typically consists of building facilities of various sizes and various organization department groups. The network scale factor in the main site is higher than at the remote campus sites, and includes end users, IP-enabled endpoints, servers, and security and network edge devices. Multiple buildings of various sizes exist in one location, as shown in Figure 8.

Figure 8 Main Site Reference Design
[figure: large, medium, and small buildings with access and distribution layers connecting to the core, data center block, service block, WAN edge (QFP), and PSTN gateway]

The three-tier LAN design model for the main site meets all key technical aspects to provide a well-structured and strong network foundation. The modularity and flexibility in a three-tier LAN design model allows easier expansion and integration in the main site network, and keeps all network elements protected and available.

To enforce external network access policy for each end user, the three-tier model also provides external gateway services to the employees for accessing the Internet.

Note The WAN design is a separate element in this location, because it requires a separate WAN device that connects to the three-tier LAN model. WAN design is discussed in more detail in Chapter 3, “Medium Enterprise Design Profile (MEDP)—WAN Design.”

Remote Large Campus Site Design

From the location size and network scale perspective, the remote large site is not much different from the main site. Geographically, it can be distant from the main campus site and requires a high-speed WAN circuit to interconnect both campuses. The remote large site can also be considered an alternate campus to the main campus site, with the same common types of applications, endpoints, users, and network services. As at the main site, separate WAN devices are recommended to provide application delivery and access to the main site, given the size and number of employees at this location.

Similar to the main site, Cisco recommends the three-tier LAN design model for the remote large site campus, as shown in Figure 9.

Figure 9 Remote Large Campus Site Reference Design
[figure: medium and small buildings with access and distribution layers connecting to the core, data center block, service block, and WAN/PSTN edge gateway]

Remote Medium Campus Site Design

Remote medium campus locations differ from a main or remote large site campus in that there are fewer buildings with distributed organization departments. A remote medium campus may have fewer network users and endpoints, reducing the need to build a campus network similar to that recommended for the main and large campuses. Because there are fewer employees and networked devices at this site compared to the main or remote large campus sites, a separate WAN device may not be necessary. A remote medium campus network is designed similarly to a three-tier large campus LAN design. All the LAN benefits are achieved in a three-tier design model as in the main and remote large site campus; in addition, the platform chosen in the core layer also serves as the WAN edge, thus collapsing the WAN and core LAN functionality into a single platform. Figure 10 shows the remote medium campus in more detail.

Figure 10 Remote Medium Campus Site Reference Design
[figure: access switches connecting to a collapsed distribution/core, with a data center block, service block, and WAN/PSTN edge gateway]

Remote Small Campus Network Design

The remote small campus is typically confined to a single building that spans multiple floors with different departments. The network scale factor in this design is reduced compared to the other larger campuses. However, the application and services demands are still consistent across the medium enterprise locations.

In such smaller-scale campus network deployments, the distribution and core layer functions can collapse into the two-tier LAN model without compromising basic network demands. Before deploying a collapsed core and distribution layer in the remote small campus network, considering all the scale and expansion factors prevents physical network re-design and improves overall network efficiency and manageability.

WAN bandwidth requirements must be assessed appropriately for this remote small campus network design. Although the network scale factor is reduced compared to other larger campus locations, sufficient WAN link capacity is needed to deliver consistent network services to employees. As at the remote medium campus location, the WAN functionality is collapsed into the LAN functionality. A single Cisco platform can provide the collapsed core and distribution LAN layers. This design model is recommended only in smaller locations, and WAN traffic and application needs must be considered. Figure 11 shows the remote small campus in more detail.

Figure 11 Remote Small Campus Site Reference Design
[figure: medium and small buildings with access switches connecting to a collapsed core/distribution layer, with a data center block, service block, and WAN/PSTN edge gateway]

Multi-Tier LAN Design Models for Medium Enterprise

The previous section discussed the recommended LAN design model for each medium enterprise location. This section provides more detailed design guidance for each tier in the LAN design model. Each design recommendation is optimized to keep the network simplified and cost-effective without compromising network scalability, security, and resiliency. Each LAN design model for a medium enterprise location is based on the key LAN layers of core, distribution, and access.

Campus Core Layer Network Design

As discussed in the previous section, the core layer becomes a high-speed intermediate transit point between distribution blocks in different premises and other devices that interconnect to the data center, WAN, and Internet edge.

Similarly to choosing a LAN design model based on a location within the medium enterprise design, choosing a core layer design also depends on the size and location within the design. Three core layer design models are available, each of which is based on either the Cisco Catalyst 6500-E Series or the Cisco Catalyst 4500-E Series Switches. Figure 12 shows the three core layer design models.

Figure 12 Core Layer Design Models for Medium Enterprises
[figure: core design option 1 — two Cisco Catalyst 6500 switches (Switch-1 and Switch-2) clustered over a VSL; core design option 2 — a single Cisco Catalyst 4500 core; core design option 3 — a single Cisco Catalyst 4500 collapsed core/distribution]

Each design model offers consistent network services, high availability, expansion flexibility, and network scalability. The following sections provide detailed design and deployment guidance for each model as well as where they fit within the various locations of the medium enterprise design.

Core Layer Design Option 1—Cisco Catalyst 6500-E-Based Core Network

Core layer design option 1 is specifically intended for the main and remote large site campus locations. It is assumed that the number of network users, high-speed and low-latency applications (such as Cisco TelePresence), and the overall network scale capacity is common to both sites, and thus similar core design principles are required. Core layer design option 1 is based on Cisco Catalyst 6500 Series switches using the Cisco Virtual Switching System (VSS), which is a software technology that builds a single logical core system by clustering two redundant core systems in the same tier. Building a VSS-based network changes network design, operation, cost, and management dramatically. Figure 13 shows the physical and operational view of VSS.

Figure 13 VSS Physical and Operational View
[figure: Switch-1 and Switch-2 joined by a virtual switch link (VSL) within a virtual switch domain, operating as VSS — a single logical switch]

To provide end-to-end network access, the core layer interconnects several other network systems that are implemented in different roles and service blocks. Using VSS to virtualize the core layer into a single logical system remains transparent to each network device that interconnects to the VSS-enabled core. The single logical connection between the core and the peer network devices builds a reliable, point-to-point connection that develops a simplified network topology and builds distributed forwarding tables to fully use all resources. Figure 14 shows a reference VSS-enabled core network design for the main campus site.
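To make the clustering concrete, pairing two Catalyst 6500-E chassis into one logical VSS system generally follows the configuration pattern sketched below. The domain ID, switch priorities, port-channel numbers, and interface ranges are assumptions for illustration, not recommendations from this design; consult the VSS documentation for the full conversion procedure.

```
! Chassis 1: assign it to a virtual switch domain and define its VSL bundle
switch virtual domain 10
 switch 1
 switch 1 priority 110
!
interface Port-channel1
 description Virtual switch link (VSL) on chassis 1
 switchport
 switch virtual link 1
!
interface range TenGigabitEthernet5/4 - 5
 channel-group 1 mode on
!
! Chassis 2: same domain, switch number 2, VSL on its own port-channel
switch virtual domain 10
 switch 2
 switch 2 priority 100
!
interface Port-channel2
 description Virtual switch link (VSL) on chassis 2
 switchport
 switch virtual link 2
!
interface range TenGigabitEthernet5/4 - 5
 channel-group 2 mode on
!
! Convert each chassis to virtual switch mode (each reloads and the two
! merge into one logical switch with a single control plane)
switch convert mode virtual
```

After conversion, peer devices connect to the pair with ordinary port-channels that the VSS sees as Multichassis EtherChannels, which is what yields the single logical point-to-point topology described above.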

Figure 14 VSS-Enabled Core Network Design
[figure: large, medium, small, and extra small buildings with access and distribution layers, an Internet edge block with DMZ, a data center block, a service block, and the WAN/PSTN edge (QFP gateways toward the WAN, PSTN, GigaPOP, Internet, and NLR), all interconnected through a VSS-enabled core joined by a VSL]

Note For more detailed VSS design guidance, see the Campus 3.0 Virtual Switching System Design Guide at the following URL: http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/VSS30dg/campusVSS_DG.html.

Core Layer Design Option 2—Cisco Catalyst 4500-E-Based Campus Core Network

Core layer design option 2 is intended for a remote medium-sized campus and is built on the same principles as for the main and remote large site campus locations. The size of this remote site may not be large, and it is assumed that this location contains distributed building premises within the remote medium campus design. Because this site is smaller in comparison to the main and remote large site campus locations, a fully redundant, VSS-based core layer design may not be necessary. Therefore, core layer design option 2 was developed to provide a cost-effective alternative while providing the same functionality as core layer design option 1. Figure 15 shows the remote medium campus core design option in more detail.

Figure 15 Remote Medium Campus Core Network Design
[figure: medium and small buildings with access and distribution layers, a shared service block, a data center block, and the WAN/PSTN edge gateway, connected to a single-chassis core]

The cost of implementing and managing redundant systems in each tier may introduce complications in selecting the three-tier model, especially when the network scale factor is not too high. This cost-effective core network design provides protection against various types of hardware and software failure and offers sub-second network recovery. Instead of a redundant node in the same tier, a single Cisco Catalyst 4500-E Series Switch can be deployed in the core role and bundled with 1+1 redundant in-chassis network components. The Cisco Catalyst 4500-E Series modular platform is a one-size platform that helps enable the high-speed core backbone to provide uninterrupted network access within a single chassis. Although a fully redundant, two-chassis design using VSS as described in core layer option 1 provides the greatest redundancy for large-scale locations, the redundant supervisors and line cards of the Cisco Catalyst 4500-E provide adequate redundancy for smaller locations within a single platform. Figure 16 shows the redundancy of the Cisco Catalyst 4500-E Series in more detail.
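The in-chassis supervisor redundancy described here typically relies on Stateful Switchover, with Nonstop Forwarding keeping the data plane active while the standby supervisor takes over. A minimal Cisco IOS sketch follows; the EIGRP process number is an assumption for illustration, and the same NSF capability exists for other routing protocols.

```
! Run redundant supervisors in Stateful Switchover (SSO) mode so the
! standby supervisor maintains synchronized state with the active one
redundancy
 mode sso
!
! Enable NSF so packet forwarding continues through a supervisor
! switchover while routing adjacencies gracefully restart
router eigrp 100
 nsf
```

With SSO plus NSF, a supervisor failure becomes a switchover event rather than a chassis outage, which is what allows the single-chassis core to approach the sub-second recovery goal stated above.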

Figure 16 Highly Redundant Single Core Design Using the Cisco Catalyst 4500-E Platform
[figure: a single Catalyst 4500-E core chassis with redundant supervisors, redundant line cards, and redundant power, connected over diverse fiber paths to the distribution layer]

This core network design builds a network topology that has similar common design principles to the VSS-based campus core in core layer design option 1. Future expansion from a single core to a dual VSS-based core system becomes easier to deploy, and helps retain the original network topology and management operation. This cost-effective single resilient core system for a medium-size enterprise network meets the following four key goals:
• Scalability—The modular Cisco Catalyst 4500-E chassis enables flexibility for core network expansion with high-throughput modules and port scalability without compromising network performance.
• Resiliency—Because hardware or software failure conditions may create catastrophic results in the network, the single core system must be equipped with redundant system components such as supervisors, line cards, and power supplies. Implementing redundant components increases the core network resiliency during various types of failure conditions using Non-Stop Forwarding/Stateful Switchover (NSF/SSO) and EtherChannel technology.
• Simplicity—The core network can be simplified with redundant network modules and diverse fiber connections between the core and other network devices. The Layer 3 network ports must be bundled into a single point-to-point logical EtherChannel to simplify the network, as in the VSS-enabled campus design. An EtherChannel-based campus network offers benefits similar to a Multichassis EtherChannel (MEC)-based network.
• Cost-effectiveness—A single core system in the core layer helps reduce capital, operational, and management cost for the medium-sized campus network design.

Core Layer Design Option 3—Cisco Catalyst 4500-E-Based Collapsed Core Campus Network

Core layer design option 3 is intended for the remote small campus network that has consistent network services and application service-level requirements but at reduced network scale. The remote small campus is considered to be confined within a single multi-story building that may span departments across different floors. To provide consistent services and optimal network performance, the scalability, resiliency, simplification, and cost-effectiveness of the small campus network design must not be compromised.

As discussed in the previous section, the remote small campus has a two-tier LAN design model, so the role of the core system is merged with the distribution layer. Remote small campus locations follow the consistent design guidance and best practices defined for the main, remote large site, and remote medium-sized campus cores. However, for platform selection, the remote medium campus core layer design must be leveraged to build this two-tier campus core.

A single, highly resilient Cisco Catalyst 4500-E switch with a Cisco Sup6L-E supervisor must be deployed in a centralized collapsed core and distribution role that interconnects to wiring-closet switches, a shared service block, and a WAN edge router. The cost-effective supervisor version supports key technologies such as robust QoS, high availability, and security at a lower scale, making it an ideal solution for small-scale network designs. Figure 17 shows the remote small campus core design in more detail.

Figure 17 Core Layer Option 3 Collapsed Core/Distribution Network Design in Remote Small Campus Location
[figure: medium and small buildings with access switches connecting to a single collapsed distribution/core, with a shared service block, data center block, service block, and WAN/PSTN edge gateway]

Campus Distribution Layer Network Design

The distribution or aggregation layer is the network demarcation boundary between wiring-closet switches and the campus core network. The framework of the distribution layer system in the medium enterprise design is based on best practices that reduce network complexities and accelerate reliability and performance. To build a strong campus network foundation with the three-tier model, the distribution layer has a vital role in consolidating networks and enforcing network edge policies.

Following the core layer design options in the different campus locations, the distribution layer design provides consistent network operation and configuration tools to enable various network services. Three simplified distribution layer design options can be deployed in main or remote campus locations, depending on network scale, application demands, and cost, as shown in Figure 18. Each design model offers consistent network services, high availability, expansion flexibility, and network scalability.

Figure 18 Distribution Layer Design Model Options
[figure: three distribution layer design options, with option 1 showing Switch-1 and Switch-2 clustered over a VSL into a single logical distribution system, and each option connecting down to the access layer]

Distribution Layer Design Option 1—Cisco Catalyst 6500-E-Based Distribution Network

Distribution layer design option 1 is intended for main campus and remote large site campus locations, and is based on Cisco Catalyst 6500-E Series switches using the Cisco VSS, as shown in Figure 19.

Figure 19 VSS-Enabled Distribution Layer Network Design
[figure: a VSS-enabled distribution pair aggregating access switches that serve conferencing rooms, an engineering lab, and a lobby]

The distribution block and core network operation changes significantly when redundant Cisco Catalyst 6500-E Series switches are deployed in VSS mode in both the distribution and core layers. Clustering redundant distribution switches into a single logical system with VSS introduces the following technical benefits:
• A single logical system reduces operational, maintenance, and ownership cost.
• A single logical IP gateway develops a unified point-to-point network topology in the distribution block, which eliminates traditional protocol limitations and enables the network to operate at full capacity.
• Implementing the distribution layer in VSS mode eliminates or reduces several deployment barriers, such as spanning-tree loops, Hot Standby Router Protocol (HSRP)/Gateway Load Balancing Protocol (GLBP)/Virtual Router Redundancy Protocol (VRRP), and control plane overhead.
• Cisco VSS introduces unique inter-chassis traffic engineering to develop a fully distributed forwarding design that helps with increased bandwidth, load balancing, predictable network recovery, and network stability.

Deploying VSS mode in both the distribution layer switch and the core layer switch provides numerous technology deployment options that are not available when not using VSS. Designing a common core and distribution layer option using VSS provides greater redundancy and is able to handle the amount of traffic typically present in the main and remote large site campus locations. Figure 20 shows five unique VSS domain interconnect options. Each variation builds a unique network topology that has a direct impact on steering traffic and network recovery.

Figure 20 Core/Distribution Layer Interconnection Design Considerations
[figure: five interconnection design options between two VSS pairs (virtual switch domain ID 1 and ID 2), each pair joined by a VSL, varying in the number of physical links and how they are logically grouped]

The various core/distribution layer interconnects offer the following:
• Core/distribution layer interconnection option 1—A single physical link between each core switch and the corresponding distribution switch.
• Core/distribution layer interconnection option 2—A single physical link between each core switch and the corresponding distribution switch, but each link is logically grouped to appear as one single link between the core and distribution layers.
• Core/distribution layer interconnection option 3—Two physical links between each core switch and the corresponding distribution switch. This design creates four
equal cost multi-path (ECMP) with multiple control plane adjacency and redundant
Switch-1 Switch-2
path information. Multiple links provide greater redundancy in case of link failover.
VSL • Core/distribution layer interconnection option 4—Two physical links between each
Distribution
core switch with the corresponding distribution switch. There is one link direction
229358

between each switch as well as one link connecting to the other distribution switch.
The additional link provides greater redundancy in case of link failover. Also these
links are logically grouped to appear like option 1 but with greater redundancy.

• Core/distribution layer interconnection option 5—This provides the most redundancy between the VSS-enabled core and distribution switches as well as the most simplified configuration, because it appears as if there is only one logical link between the core and the distribution. Cisco recommends deploying this option because it provides higher redundancy and simplicity compared to any other deployment option.

Distribution Layer Design Option 2—Cisco Catalyst 4500-E-Based Distribution Network

Two cost-effective distribution layer models have been designed for the medium-sized and small-sized buildings within each campus location that interconnect to the centralized core layer design option and distributed wiring closet access layer switches. Both models are based on a common physical LAN network infrastructure and can be chosen based on overall network capacity and distribution block design. Both distribution layer design options use a cost-effective, single, and highly resilient Cisco Catalyst 4500-E as the aggregation layer system, offering consistent network operation like a VSS-enabled distribution layer switch. The Cisco Catalyst 4500-E Series provides the same technical benefits of VSS for a smaller network capacity within a single Cisco platform. The two Cisco Catalyst 4500-E-based distribution layer options are shown in Figure 21.

Figure 21 Two Cisco Catalyst 4500-E-Based Distribution Layer Options

The hybrid distribution block must be deployed with the next-generation supervisor Sup6-E module. Implementing redundant Sup6-Es in the distribution layer can interconnect access layer switches and core layer switches using a single point-to-point logical connection. This cost-effective and resilient distribution design option leverages core layer design option 2 to take advantage of all the operational consistency and architectural benefits.

Alternatively, the multilayer distribution block option requires the Cisco Catalyst 4500-E Series Switch deployed with the next-generation supervisor Sup6L-E. The Sup6L-E supervisor is a cost-effective distribution layer solution that meets all network foundation requirements and can operate at moderate capacity, which can handle a medium-sized enterprise distribution block.

This distribution layer network design provides protection against various types of hardware and software failure, and can deliver consistent sub-second network recovery. A single Catalyst 4500-E with multiple redundant system components can be deployed to offer 1+1 in-chassis redundancy, as shown in Figure 22.

Figure 22 Highly Redundant Single Distribution Design

Distribution layer design option 2 is intended for the remote medium-sized campus locations, and is based on the Cisco Catalyst 4500-E Series switches. Although the remote medium and the main and remote large site campus locations share similar design principles, the remote medium campus location is smaller and may not need a VSS-based redundant design. Fortunately, network upgrades and expansion become easier to deploy using distribution layer option 2, which helps retain the original network topology and the management operation. Distribution layer design option 2 meets the following goals:
• Scalability—The modular Cisco Catalyst 4500-E chassis provides the flexibility for distribution block expansion with high-throughput modules and port scalability without compromising network performance.
• Resiliency—The single distribution system must be equipped with redundant system components, such as supervisors, line cards, and power supplies. Implementing redundant components increases network resiliency during various types of failure conditions using NSF/SSO and EtherChannel technology.
• Simplicity—This cost-effective design simplifies the distribution block similarly to a VSS-enabled distribution system. The single IP gateway design develops a unified point-to-point network topology in the distribution block to eliminate traditional protocol limitations, enabling the network to operate at full capacity.
• Cost-effectiveness—The single distribution system in the core layer helps reduce capital, operational, and ownership cost for the medium-sized campus network design.

Distribution Layer Design Option 3—Cisco Catalyst 3750-X StackWise-Based Distribution Network

Distribution layer design option 3 is intended for a very small building with a limited number of wiring closet switches in the access layer that connect remote classroom or office networks with a centralized core, as shown in Figure 23.

Figure 23 Cisco StackWise Plus-Enabled Distribution Layer Network Design

While consistent network services are provided throughout the campus, the number of network users and IT-managed remote endpoints can be limited in this building. This distribution layer design option recommends using the Cisco Catalyst 3750-X StackWise Plus Series platform for the distribution layer switch.

The fixed-configuration Cisco Catalyst 3750-X Series switch is a multilayer platform that supports Cisco StackWise Plus technology to simplify the network and offers flexibility to expand the network as it grows. With Cisco StackWise Plus technology, multiple Catalyst 3750-X switches can be stacked into a high-speed backplane stack ring to logically build a single large distribution system. Cisco StackWise Plus supports up to nine switches in a single stack ring for incremental network upgrades, and increases effective throughput capacity up to 64 Gbps. Chassis redundancy is achieved via stacking, in which member chassis replicate the control functions, with each member providing distributed packet forwarding. This is achieved by stacked group members acting as a single virtual Catalyst 3750-X switch. The logical switch is represented as one switch by having one stack member act as the master switch. Thus, when failover occurs, any member of the stack can take over as master and continue the same services. It is a 1:N form of redundancy where any member can become the master. This distribution layer design option is ideal for the remote small campus location.

Campus Access Layer Network Design

The access layer is the first tier or edge of the campus, where end devices such as PCs, printers, cameras, Cisco TelePresence, and so on attach to the wired portion of the campus network. It is also the place where devices that extend the network out one more level, such as IP phones and wireless access points (APs), are attached. The wide variety of possible types of devices that can connect, and the various services and dynamic configuration mechanisms that are necessary, make the access layer one of the most feature-rich parts of the campus network. Not only does the access layer switch allow users to access the network, the access layer switch must also provide network protection so that unauthorized users or applications do not enter the network. The challenge for the network architect is determining how to implement a design that meets this wide variety of requirements, the need for various levels of mobility, and the need for a cost-effective and flexible operations environment, while being able to provide the appropriate balance of security and availability expected in more traditional, fixed-configuration environments. The next-generation Cisco Catalyst switching portfolio includes a wide range of fixed and modular switching platforms, each designed with unique hardware and software capability to function in a specific role.

Enterprise campuses may deploy a wide range of network endpoints. The campus network infrastructure resources operate in shared service mode, and include IT-managed devices such as Cisco TelePresence and non-IT-managed devices such as employee laptops. Based on several endpoint factors, such as function and network demands and capabilities, two access layer design options can be deployed with campus network edge platforms, as shown in Figure 24.

Figure 24 Access Layer Design Models

Access Layer Design Option 1—Modular/StackWise Plus/FlexStack Access Layer Network

Access layer design option 1 is intended to address network scalability and availability for the IT-managed, critical voice and video communication network edge devices. To accelerate the user experience and campus physical security protection, these devices require a low-latency, high-performance, and constantly available network switching infrastructure. Implementing modular, Cisco StackWise Plus-capable, and Cisco's latest FlexStack-capable platforms provides the flexibility to increase network scale in the densely populated campus network edge.

The Cisco Catalyst 4500-E with supervisor Sup6L-E can be deployed to protect devices against access layer network failure. Cisco Catalyst 4500-E Series platforms offer consistent and predictable sub-second network recovery using NSF/SSO technology to minimize the impact of outages on enterprise business and IT operation.

The Cisco Catalyst 3750-X Series is the alternate Cisco switching platform in this design option. Cisco StackWise Plus technology provides flexibility and availability by clustering multiple Cisco Catalyst 3750-X Series Switches into a single high-speed stack ring that simplifies operation and allows incremental access layer network expansion. The Cisco Catalyst 3750-X Series leverages EtherChannel technology for protection during member link or stack member switch failure.

The Catalyst 2960-S with FlexStack technology is Cisco's latest innovation in the access layer tier. Based on the StackWise Plus architecture, the FlexStack design is currently supported on Layer 2 Catalyst 2960-S Series switches. Following the success of the Catalyst 3750-X StackWise Plus, the Catalyst 2960-S model offers high availability and increased port density with a unified single control plane and management to reduce cost for the small enterprise network. However, the architecture of FlexStack on the Catalyst 2960-S Series platform differs from StackWise Plus. Cisco FlexStack comprises a hardware module and software capabilities. The FlexStack module must be installed in each Catalyst 2960-S switch that is intended to be deployed in a stack group. The Cisco FlexStack module is hot-swappable, providing the flexibility to deploy FlexStack without impacting business network operation.
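The stack master election described above can also be influenced deterministically. As an illustrative sketch only (the member numbers and priority values here are assumptions, not taken from this design; verify the syntax for your platform and software release), assigning a higher member priority makes a specific stack member the preferred master on a Catalyst 3750-X StackWise Plus or Catalyst 2960-S FlexStack group:

Stack(config)# switch 1 priority 15
Stack(config)# switch 2 priority 14

The member with the highest priority value (15 is the highest) is preferred as master after a reload; if it fails, any remaining member can still take over, preserving the 1:N redundancy model described above.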

Access Layer Design Option 2—Fixed Configuration Access Layer Network

This entry-level access layer design option is widely chosen for enterprise environments. The fixed-configuration Cisco Catalyst switching portfolio supports a wide range of access layer technologies that allow seamless service integration and enable intelligent network management at the edge.

The next-generation fixed-configuration Cisco Catalyst 3560-X and Catalyst 2960 Series are commonly deployed platforms for wired network access that can be in a mixed configuration with critical devices such as Cisco IP Phones and non-mission-critical endpoints such as library PCs, printers, and so on. For non-stop network operation during power outages, the Catalyst 3560-X must be deployed with an internal or external redundant power supply solution using the Cisco RPS 2300. Increasing aggregated power capacity allows the flexibility to scale with enhanced Power over Ethernet (PoE+) on a per-port basis. With its wire-speed 10G uplink forwarding capacity, this design reduces network congestion and latency to significantly improve application performance.

For a campus network, the Cisco Catalyst 3560-X is an alternate switching solution for the multilayer distribution block design option discussed in the previous section. The Cisco Catalyst 3560-X Series Switches offer limited software feature support and can function only in a traditional Layer 2 network design. To provide a consistent end-to-end enhanced user experience, the Cisco Catalyst 2960-S supports critical network control services to secure the network edge and intelligently provide differentiated services to various class-of-service traffic, as well as simplified management. The Cisco Catalyst must leverage the dual 1G uplink ports to interconnect the distribution system for increased bandwidth capacity and network availability.

Both design options offer consistent network services at the campus edge to provide differentiated, intelligent, and secured network access to trusted and untrusted endpoints. The distribution options recommended in the previous section can accommodate both access layer design options.

Deploying Medium Enterprise Network Foundation Services

After each tier in the model has been designed, the next step for the medium enterprise design is to establish key network foundation services. Regardless of the application function and requirements that medium enterprises demand, the network must be designed to provide a consistent user experience independent of the geographical location of the application. The following network foundation design principles or services must be deployed in each campus location to provide resiliency and availability for all users to obtain and use the applications the medium enterprise offers:
• Implementing LAN network infrastructure
• Network addressing hierarchy
• Network foundation technologies for LAN designs
• Multicast for applications delivery
• QoS for application performance optimization
• High availability to ensure user experience even with a network failure

Design guidance for each of these six network foundation services is discussed in the following sections, including where they are deployed in each tier of the LAN design model, the campus location, and capacity.

Implementing LAN Network Infrastructure

The preceding sections provided various design options for deploying the Cisco Catalyst platform in multi-tier centralized main campus and remote campus locations. The Medium Enterprise Reference network is designed with consistency to build a simplified network topology for easier operation, management, and troubleshooting independent of campus location. Depending on network size, scalability, and reliability requirements, the Medium Enterprise Reference design applies the following common set of Cisco Catalyst platforms in different campus network layers:
• Cisco Catalyst 6500-E in VSS mode
• Cisco Catalyst 4500-E
• Cisco Catalyst 3750-X StackWise and Catalyst 2960-S FlexStack
• Cisco Catalyst 3560-X and 2960

This subsection focuses on building the initial LAN network infrastructure setup to bring the network up to the stage of establishing network protocol communication with peer devices. The deployment and configuration guidelines remain consistent for each recommended Catalyst platform independent of its network role. Advanced network services implementation and deployment guidelines are explained in subsequent sections.

Deploying Cisco Catalyst 6500-E in VSS Mode

All the VSS design principles and foundational technologies defined in this subsection remain consistent when the Cisco Catalyst 6500-E is deployed in VSS mode at the campus core or distribution layer.

Prior to enabling the Cisco Catalyst 6500-E in VSS mode, enterprise network administrators must adhere to Cisco recommended best practices to take complete advantage of the virtualized system and minimize network operation downtime when migration is required in a production network. Migrating to VSS from the standalone Catalyst 6500-E system requires multiple pre- and post-migration steps that include building the virtual system itself and migrating the existing standalone network configuration to operate in the virtual system environment. Refer to the following document for the step-by-step migration procedure:

http://www.cisco.com/en/US/products/ps9336/products_tech_note09186a0080a7c74c.shtml

This subsection is divided into the following categories that provide guidance on the mandatory steps and procedures for implementing VSS and its components in the campus distribution and core:
• VSS Identifiers
• Virtual Switch Link
• Unified Control-Plane
• Multi-Chassis EtherChannel
• VSL Dual-Active Detection and Recovery
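Once the identifiers and VSL described in the following subsections have been configured on both standalone chassis, the conversion itself is typically triggered from privileged EXEC mode on each switch. The following is a minimal illustrative sketch only (the hostnames are assumptions; confirm the exact procedure and any release-specific prompts in the migration document referenced above):

VSS-SW1# switch convert mode virtual
VSS-SW2# switch convert mode virtual

Each chassis renumbers its interfaces into the <switch-id>/<slot#>/<port#> format and reloads; after both chassis boot and the VSL comes up, they merge into a single logical system.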

VSS Identifiers

This is the first pre-migration step, implemented on two standalone Cisco Catalyst 6500-Es in the same campus tier that are planned to be clustered into a single logical entity. Cisco VSS defines the following two types of physical node identifiers to distinguish the remote node within the logical entity, as well as to set a logical VSS domain identity that is unique beyond the single VSS domain boundary.

Domain ID

Defining the domain identifier (ID) is the initial step in creating a VSS with two physical chassis. The domain ID value ranges from 1 to 255. A Virtual Switch Domain (VSD) comprises two physical switches, and both must be configured with a common domain ID. When implementing VSS in a multi-tier campus network design, a unique domain ID for each VSS pair prevents network protocol conflicts and allows simplified network operation, troubleshooting, and management.

Switch ID

In the current software version, each VSD supports up to two physical switches to build a logical virtual switch. The switch ID value is 1 or 2. Within the VSD, each physical chassis must be configured with a unique switch ID to successfully deploy VSS. After VSS migration, when the two physical chassis are clustered, a single large system is created from the control-plane and management-plane perspective; therefore, all the distributed physical interfaces between the two chassis are automatically prepended with the switch ID (i.e., <switch-id>/<slot#>/<port#>, such as TenGigabitEthernet 1/1/1). The significance of the switch ID remains within the VSD, and all the interface IDs associated with the switch ID are retained independent of control-plane ownership. See Figure 25.

Figure 25 VSS Domain and Switch ID

The following simple configuration shows how to configure the VSS domain ID and switch ID:

Standalone Switch 1:
VSS-SW1(config)# switch virtual domain 20
VSS-SW1(config-vs-domain)# switch 1

Standalone Switch 2:
VSS-SW2(config)# switch virtual domain 20
VSS-SW2(config-vs-domain)# switch 2

Switch Priority

During the virtual-switch bootup process, the switch priority is negotiated between both virtual switches to determine control-plane ownership. The virtual switch configured with the higher priority takes control-plane ownership, while the lower-priority switch boots up in redundant mode. The default switch priority is 100; the lower switch ID is the tie-breaker when both virtual-switch nodes are deployed with default settings.

Cisco recommends deploying both virtual-switch nodes with identical hardware and software to take full advantage of the distributed forwarding architecture with centralized control and management planes. The control-plane operation is identical on either of the virtual-switch nodes. Modifying the default switch priority is an optional setting, since either of the virtual switches can provide transparent operation to the network and the user.

Virtual Switch Link

To cluster two physical chassis into a single logical entity, Cisco VSS technology enables the capability to extend various types of single-chassis internal system components to the multi-chassis level. Each virtual switch must be deployed with direct physical links that extend the backplane communication boundary; these special links are known as the Virtual Switch Link (VSL).

The VSL can be considered a Layer 1 physical link between the two virtual-switch nodes and is designed not to operate any network control protocols. Therefore, the VSL links cannot establish network protocol adjacencies and are excluded when building the network topology tables. With customized traffic engineering on the VSL, it is tailored to carry the following major traffic categories:
• Inter-Switch Control Traffic
– Inter-Chassis Ethernet Out Band Channel (EOBC) traffic—Serial Communication Protocol (SCP), IPC, and ICC.
– Virtual Switch Link Protocol (VSLP)—LMP and RRP control-link packets.
• Network Control Traffic
– Layer 2 protocols—STP BPDU, PAgP+, LACP, CDP, UDLD, LLDP, 802.1x, DTP, etc.
– Layer 3 protocols—ICMP, EIGRP, OSPF, BGP, MPLS LDP, PIM, IGMP, BFD, etc.
• Data Traffic
– End-user data application traffic in single-homed network designs.
– Integrated service modules with centralized forwarding architecture (i.e., FWSM).
– Remote SPAN.

Using EtherChannel technology, the VSS software design provides the flexibility to increase on-demand VSL bandwidth capacity and to protect network stability during VSL link failure or malfunction.

The following sample configuration shows how to configure the VSL EtherChannel:

Standalone Switch 1:
VSS-SW1(config)# interface Port-Channel 1
VSS-SW1(config-if)# switch virtual link 1
VSS-SW1(config)# interface range Ten 1/1 , Ten 5/4
VSS-SW1(config-if)# channel-group 1 mode on

Standalone Switch 2:
VSS-SW2(config)# interface Port-Channel 2
VSS-SW2(config-if)# switch virtual link 2
VSS-SW2(config)# interface range Ten 1/1 , Ten 5/4
VSS-SW2(config-if)# channel-group 2 mode on

VSL Design Consideration

Implementing the VSL EtherChannel is a simple task; however, the VSL requires careful design for high reliability, availability, and optimized performance, to keep the system virtualization intact during a VSS system component failure on either virtual-switch node. A reliable VSL design requires planning in the following three categories:
• VSL Links Diversification
• VSL Bandwidth Capacity
• VSL QoS

VSL Links Diversification

Complete VSL link failure may break the system virtualization and create network instability. Designing VSL link redundancy through diverse physical paths on both systems prevents network instability, reduces single point-of-failure conditions, and optimizes the bootup process.

All traffic that traverses the VSL is encoded with a special encapsulation header; hence, the VSL protocol is not designed to operate on all Catalyst 6500-E-supported linecard modules. The next-generation, specialized Catalyst 6500-E 10G-based supervisor and linecard modules are fully capable and equipped with modern hardware ASICs to build VSL communication. The VSL EtherChannel can bundle 10G member links from any of the following next-generation hardware modules:
• Sup720-10G
• WS-X6708
• WS-X6716 (must be deployed in performance mode to enable VSL capability)

Figure 26 shows an example of how to build the VSL EtherChannel with multiple diverse physical fiber paths using the supervisor 10G uplink ports and the VSL-capable 10G hardware modules.

Figure 26 Recommended VSL Links Design

Deploying the VSL with a diversified multiple-link design offers the following benefits:
• Leverage 10G ports from the supervisor and use the remaining available ports for other network connectivity.
• Use 10G ports from the VSL-capable WS-X6708 or WS-X6716 linecard modules to protect against any abnormal failure on a supervisor uplink port (i.e., GBIC failure).
• Reduce the chance of a single point of failure, as triggering multiple hardware faults across diversified cables, GBICs, and hardware modules is a rare condition.
• VSL-enabled 10G modules boot up more rapidly than other installed modules in the system. This software design is required to initialize VSL protocols and communication during the bootup process. If the same 10G module is shared to connect other network devices, then depending on the network module type and slot bootup order, it is possible to minimize traffic losses during the system initialization process.
• Use the built-in four-class QoS model on each VSL member link to optimize inter-chassis communication traffic, network control, and user data traffic.

VSL Bandwidth Capacity

From each virtual-switch node, the VSL EtherChannel can bundle up to eight physical member links; therefore, the VSL can be bundled up to 80G of bandwidth capacity. The exact capacity requirement depends on a number of factors:
• Aggregated network uplink bandwidth capacity on a per-virtual-switch-node basis—for example, 2 x 10GE diversified to the same remote peer system.
• Designing the network with single-homed device connectivity (no MEC) forces at least half of the downstream traffic to flow over the VSL link. This type of connectivity is highly discouraged.
• Remote SPAN from one switch member to the other. The SPANed traffic is considered a single flow, so the traffic hashes over only a single VSL link, which can lead to oversubscription of that particular link. The only way to improve the probability of traffic distribution is to have an additional VSL link. Adding a link increases the chance of distributing the normal traffic that was hashed onto the same link carrying the SPAN traffic, which may then be sent over a different link.
• If the VSS is carrying services hardware (such as FWSM, WiSM, etc.), then depending on the service module forwarding design, its traffic may be carried over the VSL. Capacity planning for each of the supported services blades is beyond the scope of this design guide.

For optimal traffic load-sharing between VSL member links, it is recommended to bundle VSL member links in powers of 2 (i.e., 2, 4, and 8).

VSL QoS

The network infrastructure and the application demands of next-generation enterprise networks place a tremendous dependency on a strong and resilient network for constant network availability and on-demand bandwidth allocation to provide services

compromising performance. Cisco VSS is designed with application intelligence and Figure 27 Inter-Chassis SSO Operation in VSS
automatically enables QoS on VSL interface to provide bandwidth and resource
allocation for different class-of-service traffic. Switch-1 Switch-2
The QoS implementation on VSL EtherChannel operates in restricted mode as it carries
critical inter-chassis backplane traffic. Independent of global QoS settings, the VSL
CFC or DFC Line Cards CFC or DFC Line Cards
member-links are automatically configured with system generated QoS settings to
protect different class of applications. To retain system stability, the inter-switch VSLP CFC or DFC Line Cards CFC or DFC Line Cards
protocols the QoS settings are fine tuned to protect high priority traffic with different CFC or DFC Line Cards CFC or DFC Line Cards
thresholds even during VSL link congestion. Active VSL Standby
Virtual SF RP PFC SF RP PFC Virtual
To deploy VSL in non-blocking mode and increase the queue depth, the Sup720-10G
uplink ports can be configured in one of the following two QoS modes: Switch Active Supervisor Standby HOT Supervisor Switch
• Default (Non-10G-only mode)—In this mode, all ports must follow a single queuing CFC or DFC Line Cards CFC or DFC Line Cards
mode. If any 10-Gbps port is used for the VSL link, the remaining ports (10 Gbps or
1Gbps) follow the same CoS-mode of queuing for any other non-VSL connectivity CFC or DFC Line Cards CFC or DFC Line Cards
because VSL only allows class of service (CoS)-based queuing. CFC or DFC Line Cards CFC or DFC Line Cards
• Non-blocking (10G-only mode)—In this mode, all 1-Gbps ports are disabled, as the
entire supervisor module operates in a non-blocking mode. Even if only one 10G port SF: Switch Fabric CFC: Centralized Forwarding Card

228957
used as VSL link, still both 10-Gbps ports are restricted to CoS-based trust model. RP: Route Processor DFC: Distributed Forwarding Card
PFC: Policy Forwarding Card
Implementing 10G mode may assist in increasing the number of transmit and receive
queue depth level; however, restricted VSL QoS prevents reassigning different
To successfully establish SSO communication between two virtual-switch nodes, the
class-of-service traffic in different queues. Primary benefit in implementing 10G-only
following criteria must match between both virtual-switch node:
mode is to deploy VSL port in non-blocking mode to dedicate complete 10G bandwidth
on port. Deploying VSS network based on Cisco’s recommendation significantly reduces • Identical software version
VSL link utilization, thus minimizing the need to implement 10G-only mode and using all • Consistent VSD and VSL interface configuration
1G ports for other network connectivities (i.e., out-of-band network management port). • Power mode and VSL-enabled module power settings
Unified Control-Plane

Deploying redundant supervisors with common hardware and software components in a
single standalone Cisco Catalyst 6500-E platform automatically enables the Stateful
Switchover (SSO) capability, providing in-chassis supervisor redundancy in a highly
redundant network environment. In SSO operation, the active supervisor holds
control-plane ownership and communicates with remote Layer 2 and Layer 3 neighbors
to build distributed forwarding information. The SSO-enabled active supervisor is tightly
synchronized with the standby supervisor across several components (protocol state
machines, configuration, forwarding information, and so on). As a result, if the active
supervisor fails, the hot-standby supervisor takes over control-plane ownership and
initializes graceful protocol recovery with peer devices. During the graceful-recovery
process, the forwarding information remains undisrupted, allowing nonstop packet
switching in hardware.
Leveraging the same SSO and NSF technology, Cisco VSS supports inter-chassis SSO
redundancy by extending the supervisor redundancy capability from the single-chassis
to the multi-chassis level. Cisco VSS uses the VSL EtherChannel as a backplane path to
establish SSO communication between the active and hot-standby supervisors deployed
in separate physical chassis. The entire virtual-switch node is reset during an abnormal
active or hot-standby virtual-switch node failure. See Figure 27.
[Figure 27: inter-chassis SSO operation—the active and hot-standby supervisors (SF:
Switch Fabric, RP: Route Processor, PFC: Policy Forwarding Card) reside in separate
virtual-switch chassis connected over the VSL, each chassis populated with CFC- or
DFC-based line cards (CFC: Centralized Forwarding Card, DFC: Distributed Forwarding
Card).]
To successfully establish SSO communication between two virtual-switch nodes, the
following criteria must match on both nodes:
• Identical software version
• Consistent VSD and VSL interface configuration
• Power mode and VSL-enabled module power settings
• Global PFC mode
• SSO and NSF enabled
During the bootup process, SSO synchronization checks all the above criteria against the
remote virtual system. If any criterion fails to match, the virtual-switch node is forced to
boot in RPR or cold-standby state, which cannot synchronize protocol and forwarding
information.

VSL Dual-Active Detection and Recovery

The preceding section described how the VSL EtherChannel functions as an extended
backplane link that enables system virtualization by transporting inter-chassis control
traffic, network control-plane traffic, and user data traffic. The state machines of the
unified control-plane protocols and the distributed forwarding entries are dynamically
synchronized between the two virtual-switch nodes. Any fault triggered on a VSL
component can lead to catastrophic instability in the VSS domain and beyond. The
virtual-switch member in the hot-standby role keeps constant communication with the
active switch, and its role is to assume the active role as soon as it detects a loss of
communication with its peer via all VSL links, without knowing the operational state of
the remote active peer node. Such a network condition is known as dual-active, where
both virtual switches split with a common configuration and each takes control-plane
ownership. The network protocols detect inconsistency and instability when VSS peering
devices see two split systems claiming the same addressing and identification.
Figure 28 depicts the campus topology in the single-active state and during the
dual-active state.
Figure 28 Single Active and Dual-Active Campus Topology
[Figure 28: in the single-active network state, SW1 is active and SW2 is hot-standby; in
the dual-active network state, both SW1 and SW2 claim the active role.]
System virtualization is impacted during the dual-active network state, which splits the
single virtual system into two identical Layer 2/Layer 3 systems. This condition can
destabilize campus network communication, with the two split systems advertising
duplicate information. To prevent such network instability, Cisco VSS introduces the
following two methods to rapidly detect the dual-active condition and recover by
isolating the old active virtual switch from network operation before the network is
destabilized:
• Direct detection method—This method requires an extra physical connection between
the virtual-switch nodes. The Dual-Active Fast-Hello (Fast-Hello) and Bidirectional
Forwarding Detection (BFD) protocols are specifically designed to detect the dual-active
condition and prevent network malfunction. All VSS-supported Ethernet media and
modules can be used to deploy this method. For additional redundancy, VSS allows
configuring up to four dual-active Fast-Hello links between the virtual-switch nodes.
Cisco recommends deploying Fast-Hello in lieu of BFD for the following reasons:
– Fast-Hello rapidly detects the dual-active condition and triggers the recovery
procedure. Independent of routing protocols and network topology, Fast-Hello offers
faster network recovery.
– Fast-Hello enables dual-active detection in multi-vendor campus or data center
network environments.
– Fast-Hello optimizes the protocol communication procedure without reserving
higher system CPU and link overhead.
– Fast-Hello supersedes the BFD-based detection mechanism.
• Indirect detection method—This method relies on an intermediate trusted L2/L3 MEC
Cisco Catalyst remote platform to detect the failure and notify the old active switch of
the dual-active condition. Cisco extended the PAgP protocol with extra TLVs to signal
the dual-active condition and initiate the recovery procedure. Most Cisco Catalyst
switching platforms can be used as a trusted PAgP+ partner to deploy the indirect
detection method.
All dual-active detection protocols and methods can be implemented in parallel. As
depicted in Figure 29, in a VSS network deployment peering with Cisco Catalyst
platforms, Cisco recommends deploying the Fast-Hello and PAgP+ methods for rapid
detection, to minimize network topology instability and keep application performance
intact.
Figure 29 Recommended Dual-Active Detection Method
[Figure 29: SW1 and SW2 interconnected by a dedicated Fast-Hello link, with a trusted
MEC (Po101) toward a remote Catalyst platform for PAgP+ detection.]
The following sample configuration illustrates implementing both methods:
• Dual-Active Fast-Hello

cr23-VSS-Core(config)#interface range Gig1/5/1 , Gig2/5/1
cr23-VSS-Core(config-if-range)# dual-active fast-hello

! Following logs confirm fast-hello adjacency is established on
! both virtual-switch nodes.
%VSDA-SW1_SP-5-LINK_UP: Interface Gi1/5/1 is now dual-active
detection capable
%VSDA-SW2_SPSTBY-5-LINK_UP: Interface Gi2/5/1 is now dual-active
detection capable

cr23-VSS-Core#show switch virtual dual-active fast-hello
Fast-hello dual-active detection enabled: Yes
Fast-hello dual-active interfaces:
Port       Local State    Peer Port    Remote State
---------------------------------------------------
Gi1/5/1    Link up        Gi2/5/1      Link up
• PAgP+
Enabling or disabling dual-active trusted mode on a L2/L3 MEC requires the MEC to be in
the administratively shutdown state. Prior to implementing the trust settings, the network
administrator must plan for downtime to provision the PAgP+-based dual-active
configuration settings:

cr23-VSS-Core(config)#int range Port-Channel 101 - 102
cr23-VSS-Core(config-if-range)#shutdown

cr23-VSS-Core(config)#switch virtual domain 20
cr23-VSS-Core(config-vs-domain)#dual-active detection pagp trust
channel-group 101
cr23-VSS-Core(config-vs-domain)#dual-active detection pagp trust
channel-group 102

cr23-VSS-Core(config)#int range Port-Channel 101 - 102
cr23-VSS-Core(config-if-range)#no shutdown

cr23-VSS-Core#show switch virtual dual-active pagp
PAgP dual-active detection enabled: Yes
PAgP dual-active version: 1.1

Channel group 101 dual-active detect capability w/nbrs
Dual-Active trusted group: Yes
          Dual-Active     Partner        Partner   Partner
Port      Detect Capable  Name           Port      Version
Te1/1/2   Yes             cr22-6500-LB   Te2/1/2   1.1
Te1/3/2   Yes             cr22-6500-LB   Te2/1/4   1.1
Te2/1/2   Yes             cr22-6500-LB   Te1/1/2   1.1
Te2/3/2   Yes             cr22-6500-LB   Te1/1/4   1.1

Channel group 102 dual-active detect capability w/nbrs
Dual-Active trusted group: Yes
          Dual-Active     Partner         Partner   Partner
Port      Detect Capable  Name            Port      Version
Te1/1/3   Yes             cr24-4507e-MB   Te4/2     1.1
Te1/3/3   Yes             cr24-4507e-MB   Te3/1     1.1
Te2/1/3   Yes             cr24-4507e-MB   Te4/1     1.1
Te2/3/3   Yes             cr24-4507e-MB   Te3/2     1.1

Virtual Routed MAC

The MAC address allocation for the interfaces does not change during a switchover event
when the hot-standby switch takes over as the active switch. This avoids gratuitous ARP
updates (MAC address changed for the same IP address) from devices connected to the
VSS. However, if both chassis are rebooted at the same time and the order of the active
switch changes (the old hot-standby switch comes up first and becomes active), then the
entire VSS domain uses that switch's MAC address pool. This means that the interfaces
inherit new MAC addresses, which triggers gratuitous ARP updates on all Layer 2 and
Layer 3 interfaces. Any networking device connected one hop away from the VSS (and
any networking device that does not support gratuitous ARP) will experience traffic
disruption until the MAC address of the default gateway/interface is refreshed or timed
out. To avoid such a disruption, Cisco recommends using the configuration option
provided with the VSS in which the MAC addresses for Layer 2 and Layer 3 interfaces are
derived from a reserved pool that incorporates the virtual-switch domain identifier into
the MAC address. The MAC addresses of the VSS domain then remain consistent with
the usage of virtual MAC addresses, regardless of the boot order.
The following configuration illustrates how to configure the virtual routed MAC address
for Layer 3 interfaces under switch-virtual configuration mode:

cr23-VSS-Core(config)#switch virtual domain 20
cr23-VSS-Core(config-vs-domain)#mac-address use-virtual

Deploying Cisco Catalyst 4500-E

In a mid-size medium enterprise campus network, it is recommended to deploy a single
highly redundant Cisco Catalyst 4500-E Series platform in the different campus network
tiers—access, distribution, and core. The Cisco Catalyst 4500-E Series switch is a
multi-slot, modular, scalable, and high-speed resilient platform. A single Catalyst 4500-E
Series platform in the medium enterprise design is built with multiple redundant
hardware components to develop a network topology consistent with the Catalyst
6500-E VSS-based large network design. For Catalyst 4500-E in-chassis supervisor
redundancy, network administrators must consider the Catalyst 4507R-E or 4510R-E
chassis to accommodate redundant supervisors and use the remaining slots for LAN
network modules.
The Cisco Catalyst 4500-E Series supports a wide range of supervisor modules designed
for high-performance Layer 2 and Layer 3 networks. This reference design recommends
deploying the next-generation Sup6E and Sup6L-E, which support next-generation
hardware switching capabilities, scalability, and performance for the various types of
applications and services deployed in the campus network.

Implementing Redundant Supervisor

The Cisco Catalyst 4507R-E supports intra-chassis (single-chassis) supervisor
redundancy with dual-supervisor support. Implementing a single Catalyst 4507R-E in
highly resilient mode at the various campus layers, with multiple redundant hardware
components, protects against different types of abnormal failures. This reference design
guide recommends deploying redundant Sup6E or Sup6L-E supervisor modules for full
high-availability feature parity. The mid-size core or distribution layer Cisco Catalyst
4507R-E Series platform currently does not support inter-chassis supervisor and node
redundancy with VSS
technology; therefore, implementing intra-chassis supervisor redundancy simplifies the
initial network infrastructure setup for medium and small campus networks. Figure 30
illustrates Cisco Catalyst 4500-E-based intra-chassis SSO and NSF capability.
Figure 30 Intra-Chassis SSO Operation
[Figure 30: a Cisco Catalyst 4507R-E/4510R-E chassis with active and standby
Sup6E/Sup6L-E supervisors and stub line cards. PP: Packet Processor, FE: Forwarding
Engine, CPU: Control-Plane Processing, FPGA: Hardware-Based Forwarding
Information.]
During the bootup process, SSO synchronization checks various criteria to assure that
both supervisors can provide consistent and transparent network services during a
failure event. If any of the criteria fails to match, the standby supervisor is forced to boot
in RPR or cold-standby state, which cannot synchronize protocol and forwarding
information from the active supervisor. The following sample configuration illustrates
how to implement SSO mode on Catalyst 4507R-E and 4510R-E chassis deployed with
redundant Sup6E or Sup6L-E supervisors:

cr24-4507e-MB#config t
cr24-4507e-MB (config)#redundancy
cr24-4507e-MB (config-red)#mode sso

cr24-4507e-MB#show redundancy states
my state = 13 - ACTIVE
peer state = 8 - STANDBY HOT
< snippet >

Sup6L-E Enhancement

Starting in IOS Release 12.2(53)SG, Cisco introduced the new Catalyst 4500 Sup6L-E
supervisor module, designed and built on the next-generation Sup6E supervisor
architecture. As a cost-effective solution, the Sup6L-E supervisor is built with reduced
system resources, but still addresses several key business and technical challenges of
mid- to small-scale Layer 2 network designs.
The initial IP Base IOS release for the Sup6L-E supports SSO capability for multiple
types of Layer 2 protocols. To extend its high availability and enterprise-class Layer 3
feature-parity support on the Sup6L-E supervisor, it is recommended to deploy the IOS
Release 12.2(53)SG2 software version with the Enterprise license.
Note This validated design guide provides the Sup6L-E supervisor deployment
guidance and validated test results based on the above recommended software
version.

Deploying Supervisor Uplinks

Every supported supervisor module in the Catalyst 4500-E provides uplink ports for core
network connectivity. Each Sup6E and Sup6L-E supervisor module supports up to two
10G uplinks, which can instead be deployed as four 1G uplinks using Twin-Gigabit
converters. To build a high-speed, low-latency campus backbone, it is recommended to
deploy the 10G uplinks to accommodate the various bandwidth-demanding network
applications operating in the network.
Cisco Catalyst 4500-E Series supervisors are designed with a unique architecture that
provides constant network availability and reliability during a supervisor reset. Even
during supervisor switchover or administrative reset events, the state machines of all
deployed uplinks remain operational and, with the centralized forwarding architecture,
continue to switch packets without impacting time-sensitive applications like Cisco
TelePresence. This architecture protects bandwidth capacity both during administrative
supervisor switchover (e.g., to upgrade IOS software) and when an abnormal software
condition triggers a supervisor reset.

Sup6E Uplink Port Design

Non-Redundant Mode
In non-redundant mode, a single supervisor module is deployed in the Catalyst 4500-E
chassis. By default, both uplink physical ports can be deployed as 10G or as 1G with
Twin-Gigabit converters. Each port operates in a non-blocking state and can switch
traffic at wire-rate performance.

Redundant Mode
In the recommended redundant mode, the Catalyst 4507R-E chassis is deployed with
dual supervisors. To provide wire-rate switching performance, by default port group 1 on
the active and hot-standby supervisors is in active mode and port group 2 is in the
inactive state. The default configuration can be modified by changing the Catalyst
4500-E backplane settings to shared mode. The shared backplane mode enables
operation of port group 2 on both supervisors. Note that sharing the 10G backplane ASIC
between two 10G ports does not increase switching capacity; it creates 2:1
oversubscription. If the upstream device is deployed with chassis redundancy (i.e.,
Catalyst 6500-E VSS), it is highly recommended to deploy all four uplink ports for the
following reasons:
• Helps develop a full-mesh or "V"-shaped physical network topology from each
supervisor module.
• Increases high availability in the network during an individual link, supervisor, or other
hardware component failure event.
• Reduces latency and network congestion when rerouting traffic through a
non-optimal path.
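One way to realize this recommendation is to bundle all four uplinks into a single Layer 3 EtherChannel toward the VSS core; a minimal sketch, with PAgP desirable mode so the bundle can also serve as a PAgP+ trusted MEC (the hostname matches earlier examples, but the channel-group number and addressing are illustrative, not from the validated design):

```
! Hypothetical sketch: all four Sup6E uplinks in one Layer 3 EtherChannel
! toward a Catalyst 6500-E VSS core (numbers and addresses are examples)
cr24-4507e-MB(config)#interface range Te3/1 - 2 , Te4/1 - 2
cr24-4507e-MB(config-if-range)# no switchport
cr24-4507e-MB(config-if-range)# channel-group 1 mode desirable
cr24-4507e-MB(config)#interface Port-channel 1
cr24-4507e-MB(config-if)# ip address 10.125.0.14 255.255.255.254
```

With one member link per supervisor per virtual-switch chassis, the bundle survives any single link, supervisor, or VSS-member failure without a routing-topology change.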
Figure 31 summarizes the uplink port support on the Sup6E in the non-redundant and
redundant deployment scenarios.
Figure 31 Catalyst 4500-E Sup6E Uplink Mode
[Figure 31: Sup6E front-panel views showing port group 1 and port group 2 in
non-redundant mode (both groups active), redundant mode (port group 1 active, port
group 2 inactive on each supervisor), and redundant shared backplane mode (both port
groups active on both supervisors).]
The following sample configuration provides a guideline to modify the default backplane
settings on a Catalyst 4507R-E platform deployed with Sup6E supervisors in redundant
mode. The new backplane settings become effective only after the complete chassis is
reset; therefore, it is important to plan downtime for this implementation:

cr24-4507e-MB#config t
cr24-4507e-MB(config)#hw-module uplink mode shared-backplane

!A 'redundancy reload shelf' or power-cycle of chassis is required
! to apply the new configuration

cr24-4507e-MB#show hw-module uplink mode
Active uplink mode configuration is Shared-backplane

cr24-4507e-MB#show hw-module mod 3 port-group
Module   Port-group   Active    Inactive
----------------------------------------
3        1            Te3/1-2   Gi3/3-6

cr24-4507e-MB#show hw-module mod 4 port-group
Module   Port-group   Active    Inactive
----------------------------------------
4        1            Te4/1-2   Gi4/3-6

Sup6L-E Uplink Port Design

The Sup6L-E uplink ports function the same as on the Sup6E in non-redundant mode.
However, in redundant mode the hardware design of the Sup6L-E differs from the
Sup6E—it currently does not support the shared backplane mode that allows using all
uplink ports actively. The Catalyst 4507R-E deployed with Sup6L-E may use the 10G
uplink of port group 1 from the active and standby supervisors when the upstream
device is a single, highly redundant Catalyst 4507R-E chassis. If the upstream device is
deployed with chassis redundancy (i.e., Cisco VSS), it is recommended to build a
full-mesh network design between each supervisor and each virtual-switch node. For
such a design, the network administrator must leverage the existing WS-4606 Series
10G linecard to build full-mesh uplinks. Figure 32 illustrates the deployment guideline for
a highly resilient Catalyst 4507R-E-based Sup6L-E uplink.
Figure 32 Catalyst 4500-E Sup6L-E Uplink Mode
[Figure 32: Sup6L-E front-panel views showing port group 1 active in non-redundant
and redundant modes, with a WS-4606 (WS-X4606-X2-E) line card added in redundant
mode to provide the additional active 10G uplinks for full-mesh connectivity.]

Deploying Cisco Catalyst 3750-X StackWise Plus

The next-generation Cisco Catalyst 3750-X switches can be deployed in StackWise
mode using a special stack cable that develops a bidirectional physical ring topology. Up
to nine switches can be integrated into a single stack ring that offers a robust distributed
forwarding architecture and a unified single control and management plane. Device-level
redundancy in StackWise mode is achieved by stacking multiple switches using Cisco
StackWise Plus technology. A single switch from the stack ring is selected in the master
role to manage the centralized control-plane process, keeping all other switches in the
member role. The Cisco StackWise Plus solution is designed on a 1:N redundancy
option. Master switch election in the stack ring is determined by internal protocol
negotiation. On active master switch failure, the new master is selected by a re-election
process that takes place internally through the stack ring. See Figure 33.
Figure 33 Cisco StackWise Plus Switching Architecture
[Figure 33: two Catalyst 3750-X switches—Switch-1 in the master role, Switch-2 in the
member role—joined by the stack ring to form a single virtual switch.]
Since the Cisco StackWise Plus solution is developed with high redundancy, it offers a
unique centralized control and management plane with a distributed forwarding
architecture. To logically appear as a single virtual switch, the master switch manages all
management-plane and Layer 3 control-plane operations (IP routing, CEF, PBR, etc.).
Depending on the implemented network protocols, the master switch communicates
with the rest of the Layer 3 network through the stack ring, dynamically develops the
best-path global routing, and updates the local hardware with forwarding information.
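Stack roles and ring cabling can be confirmed from the master once the ring is built; a brief sketch using the standard `show switch` command family (the hostname is illustrative, matching later examples):

```
! Verify stack membership and ring health from the master switch
cr36-3750x-xSB#show switch               ! switch numbers, roles, priorities, states
cr36-3750x-xSB#show switch stack-ports   ! per-switch stack-port status (Ok/Down)
```

A stack port reporting Down indicates a broken ring; the stack still forwards, but at half the ring bandwidth and without the redundant path.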
Unlike the centralized Layer 3 management function on the master switch, Layer 2
network topology development is completely distributed. Each member switch in the
stack ring dynamically learns MAC entries from its local ports and uses the internal stack
ring network to synchronize the MAC address table on each member switch in the stack
ring. Table 2 lists the network protocols that operate in the centralized versus distributed
model in the Cisco StackWise Plus architecture.

Table 2 Cisco StackWise Plus Centralized and Distributed Control-Plane

Protocols            Function                  Model
Layer 2 Protocols    MAC Table                 Distributed
                     Spanning-Tree Protocol    Distributed
                     CDP                       Centralized
                     VLAN Database             Centralized
                     EtherChannel - LACP       Centralized
Layer 3 Protocols    Layer 3 Management        Centralized
                     Layer 3 Routing           Centralized

Using the stack ring as a backplane communication path, the master switch updates the
Layer 3 forwarding information base (FIB) on each member switch in the stack ring.
Synchronizing a common FIB to the member switches develops the distributed
forwarding architecture. Each member switch performs a local forwarding physical path
lookup to transmit the frame, instead of the master switch performing the forwarding
path lookup, which could cause traffic hair-pinning problems.

SSO Operation in 3750-X StackWise Plus

The Cisco StackWise Plus solution offers network and device resiliency with distributed
forwarding, but the control plane is not a 1+1 redundant design. This is because the
Cisco Catalyst 3750-X StackWise switch is not an SSO-capable platform that can
synchronize control-plane state machines to a standby switch in the ring. However, it
can be configured in NSF-capable mode to gracefully recover the network from a
master switch failure. Therefore, when the master switch fails, all the Layer 3 functions
that are primarily deployed on the uplink ports may be disrupted until the new master
election occurs and Layer 3 adjacencies are re-formed. Although the new master switch
in the stack ring is identified within 0.7 to 1 second, the time to rebuild the network and
forwarding topology depends on the protocol function and scalability.
To prevent Layer 3 disruption in the network caused by a master switch failure, the
designated master switch (with the higher switch priority) can be isolated from the
uplink Layer 3 EtherChannel bundle path, using physical ports from switches in the
member role instead. With the Non-Stop Forwarding (NSF) capabilities of the Cisco
StackWise Plus architecture, this network design helps decrease network downtime
during a master switch failure.

Implementing StackWise Mode

As described earlier, the Cisco Catalyst 3750-X switch dynamically detects and
provisions member switches in the stack ring without any extra configuration. For a
planned deployment, the network administrator can pre-provision a switch in the ring
with the following command in global configuration mode:

cr36-3750x-xSB(config)#switch 3 provision WS-C3750E-48PD

cr36-3750x-xSB#show running-config | include interface GigabitEthernet3/
interface GigabitEthernet3/0/1
interface GigabitEthernet3/0/2

Switch Priority

The centralized control plane and management plane are managed by the master switch
in the stack. By default, the master switch selection within the ring is performed
dynamically by negotiating several parameters and capabilities between each switch
within the stack. Each StackWise-capable member switch is by default configured with
switch priority 1.

cr36-3750x-xSB#show switch
Switch/Stack Mac Address : 0023.eb7b.e580
                                           H/W     Current
Switch#  Role    Mac Address     Priority  Version  State
----------------------------------------------------------
* 1      Master  0023.eb7b.e580  1         0        Ready
  2      Member  0026.5284.ec80  1         0        Ready

As described in the previous section, the Cisco StackWise architecture is not
SSO-capable. This means all the centralized Layer 3 functions must be re-established
with the neighbor switch during a master switch outage. To minimize the control-plane
impact and improve network convergence, the Layer 3 uplinks should be diverse,
originating from member switches instead of the master switch. The default switch
priority must be increased manually after identifying the master switch and switch
number. The new switch priority becomes effective after a switch reset.

cr36-3750x-xSB (config)#switch 1 priority 15
Changing the Switch Priority of Switch Number 1 to 15
cr36-3750x-xSB (config)#switch 2 priority 14
Changing the Switch Priority of Switch Number 2 to 14

cr36-3750x-xSB # show switch
Switch/Stack Mac Address : 0023.eb7b.e580
                                           H/W     Current
Switch#  Role    Mac Address     Priority  Version  State
----------------------------------------------------------
  1      Master  0023.eb7b.e580  15        0        Ready
* 2      Member  0026.5284.ec80  14        0        Ready

Stack-MAC Address

To provide a single unified logical network view, the MAC addresses of Layer 3
interfaces on the StackWise stack (physical, logical, SVIs, port-channel) are derived from
the Ethernet MAC address pool of the master switch. All Layer 3 communication from
the StackWise switch to endpoints (such as IP phones, PCs, servers, and core network
systems) is based on the MAC address pool of the master switch.

cr36-3750x-xSB#show switch
Switch/Stack Mac Address : 0023.eb7b.e580
                                           H/W     Current
Switch#  Role    Mac Address     Priority  Version  State
----------------------------------------------------------
  1      Master  0023.eb7b.e580  15        0        Ready
* 2      Member  0026.5284.ec80  14        0        Ready

cr36-3750x-xSB#show version
. . .
Base ethernet MAC Address : 00:23:EB:7B:E5:80
. . .

To prevent network instability, the old MAC address assignments on Layer 3 interfaces
can be retained even after the master switch fails. The new active master switch then
continues to use the MAC addresses assigned by the old master switch, which prevents
ARP and routing outages in the network. The default stack-mac timer settings must be
changed in Catalyst 3750-X StackWise switch mode using the global configuration CLI
as shown below:

cr36-3750x-xSB (config)#stack-mac persistent timer 0
cr36-3750x-xSB #show switch
Switch/Stack Mac Address : 0026.5284.ec80
Mac persistency wait time: Indefinite
                                           H/W     Current
Switch#  Role    Mac Address     Priority  Version  State
----------------------------------------------------------
  1      Master  0023.eb7b.e580  15        0        Ready
* 2      Member  0026.5284.ec80  14        0        Ready

Deploying Cisco Catalyst 3560-X and 2960-S FlexStack

The Medium Enterprise Reference design recommends deploying the fixed-configuration
Cisco Catalyst 3560-X and 2960 Series platforms at the campus network edge. The
hardware architecture of these access-layer platforms is standalone and non-modular
in design. These switches are designed to go beyond the traditional access-layer
switching function to provide robust next-generation network services (e.g., edge
security, PoE+, EnergyWise).
The Cisco Catalyst 3560-X and 2960 Series platforms do not support StackWise
technology; as standalone platforms, they are nevertheless ready to deploy a wide range
of network services at the access layer. All recommended access-layer features and
configurations are explained in the following relevant sections.
The access-layer Cisco Catalyst 2960-S Series switches can be stacked using Cisco
FlexStack technology, which allows stacking up to four switches into a single stack ring
using a special proprietary cable. Cisco FlexStack leverages several architectural
components from Cisco Catalyst 3750-X StackWise Plus, but offers the flexibility to
upgrade the hardware capability of a standalone Cisco Catalyst 2960-S with a
hot-swappable FlexStack module. The FlexStack module supports two on-board
StackPorts, each designed to support up to 10G of switching capacity. The StackPorts
on the FlexStack module are not network ports, so they do not run any Layer 2 network
protocols (e.g., STP); to develop the virtual-switch environment, each participating Cisco
Catalyst 2960-S in the stack ring runs the FlexStack protocol to keep protocols, ports,
and forwarding information synchronized within the ring. The port and QoS configuration
of the StackPorts is preset and cannot be modified by the user, a design intended to
minimize network impact due to misconfiguration. From an operational perspective,
Cisco Catalyst 2960-S FlexStack is identical to Cisco Catalyst 3750-X StackWise Plus.
Therefore, all the deployment guidelines and best practices defined in "Deploying Cisco
Catalyst 3750-X StackWise Plus" should be leveraged to deploy Cisco Catalyst 2960-S
FlexStack in the campus access layer.

Designing EtherChannel Network

In this reference design, multiple parallel physical paths are recommended to build a
highly scalable and resilient medium enterprise network. Without optimizing the network
configuration, by default each interface requires its own network configuration, protocol
adjacencies, and forwarding information to load-share traffic and provide network
redundancy.
The reference architecture of the medium enterprise network is built upon a small- to
mid-size enterprise-class network. Depending on network applications, scalability, and
performance requirements, it offers a wide range of campus network designs, platform,
and technology deployment options in different campus locations and building
premises. Each campus network design offers the following operational benefits:
• Common network topologies and configuration (all campus network designs)
• Simplified network protocols (eases network operations)
• Increased network bandwidth capacity with symmetric forwarding paths
• Deterministic network recovery performance

Diversified EtherChannel Physical Design

As a general best practice for building resilient network designs, it is highly
recommended to interconnect all network systems with full-mesh diverse physical
paths. Such a network design automatically creates multiple parallel paths, providing
load-sharing capabilities and path redundancy during network fault events. Deploying a
single physical connection from a standalone system to each of two separate redundant
upstream systems creates a "V"-shaped physical network design, instead of the
non-recommended partial-mesh "square" network design.
Medium Enterprise Design Profile (MEDP)—LAN Design

Cisco recommends building full-mesh fiber paths between each Layer 2 or Layer 3 system
operating in standalone, redundant (dual-supervisor), or virtual-system (Cisco VSS and
StackWise Plus) mode. Independent of network tier and platform role, this design
principle is applicable to all systems across the campus network. Figure 34 demonstrates
the recommended physical network deployment design for the various Catalyst platforms.

Figure 34   Designing Diverse Full-mesh Network Topology
[Figure 34 illustrates full-mesh Layer 2 trunk and Layer 3 routed-port connections from
standalone, StackWise (master/slave), and redundant Catalyst 4507R-E systems at the
access and distribution layers to the VSS-enabled (SW1/SW2) core.]

Deploying a diverse physical network design with redundant-mode standalone systems,
or with virtual systems running a single control plane, requires extra network design
tuning to gain all the EtherChannel benefits. Without designing the campus network with
EtherChannel technology, the individual redundant parallel paths create the network
operational state depicted in Figure 35. Such a network design cannot leverage the
distributed forwarding architecture, and it increases operational and troubleshooting
complexities. Figure 35 demonstrates the default network design, with redundant and
complex control-plane operation and an under-utilized forwarding plane.

Figure 35   Non-optimized Campus Network Design
[Figure 35 shows per-physical-port Layer 3 IGP adjacencies between the distribution and
core layers, and per-physical-port Layer 2 STP operation—with STP block (non-forwarding)
ports and asymmetric traffic paths—between the access switches and the STP root at the
distribution layer.]

The design in Figure 35 suffers from the following challenges for the different network
modes:
• Layer 3—Multiple routing adjacencies between two Layer-3 systems. This
configuration doubles or quadruples the control-plane load between each of the
Layer-3 devices. It also uses more system resources, such as CPU and memory, to store
redundant dynamic-routing information with different Layer-3 next-hop addresses
connected to the same router. It develops Equal Cost Multi Path (ECMP) symmetric
forwarding paths between the same Layer-3 peers and offers network scale-dependent,
Cisco CEF-based network recovery.
• Layer 2—Multiple parallel Layer-2 paths between the STP root (distribution) and the
access switch build a network loop. To build a loop-free network topology, STP blocks
the non-preferred individual link path from the forwarding state. With a single STP root
virtual switch, such network topologies cannot fully use all the network resources, and
they create a non-optimal and asymmetric traffic forwarding design.
• VSL Link Utilization—In a Cisco VSS-based distribution network, it is highly
recommended to prevent conditions that create a hardware- or network protocol-driven
asymmetric forwarding design (i.e., a single-homed connection or an STP block port). As
described in the "Deploying Cisco Catalyst 4500-E" section, the VSL is not a regular
network port; it is a special inter-chassis backplane connection used to build the virtual
system, and the network must be designed to switch traffic across the VSL only as a
last resort.
Implementing campus-wide MEC or EtherChannel across all the network platforms is the
solution for all of the above challenges. Bundling multiple parallel paths into a single
logical connection builds a single loop-free, point-to-point topology that helps to
eliminate all protocol-driven forwarding restrictions and programs the hardware for
distributed forwarding to fully use all network resources.

EtherChannel Fundamentals

In standalone EtherChannel mode, multiple diverse member links are physically
connected in parallel between the same two physical systems. All the key network
devices in the Medium Enterprise Reference design support EtherChannel technology.
Independent of campus location and network layer—campus, data center, WAN/Internet
edge—all the EtherChannel fundamentals and configuration guidelines described in this
section remain consistent.

Multi-Chassis EtherChannel Fundamentals

Cisco's Multi-Chassis EtherChannel (MEC) technology is a breakthrough innovation that
lifts the barrier to creating a logical point-to-point EtherChannel by distributing the
physical connections to each highly resilient virtual-switch node in the VSS domain.
Deploying Layer 2 or Layer 3 MEC with VSS introduces the following benefits:
• In addition to all the EtherChannel benefits, the distributed forwarding architecture
in MEC helps increase network bandwidth capacity.
• Increases network reliability by eliminating the single point-of-failure limitation of
traditional EtherChannel technology.
• Simplifies the network control plane, topology, and system resources with a single
logical bundled interface instead of multiple individual parallel physical paths.
• Independent of network scalability, MEC provides deterministic hardware-based
subsecond network recovery.
• MEC operation remains transparent to remote peer devices.

Implementing EtherChannel

Port-Aggregation Protocols

The member links of an EtherChannel must join the port-channel interface using the
Cisco PAgP+ or industry-standard LACP port-aggregation protocol. Both protocols are
designed to provide identical benefits. Implementing these protocols provides the
following additional benefits:
• Ensures consistency and compatibility of link-aggregation parameters between the
two systems.
• Ensures compliance with aggregation requirements.
• Dynamically reacts to runtime changes and failures on local and remote EtherChannel
systems.
• Detects and removes unidirectional links and multidrop connections from the
EtherChannel bundle.

Figure 36   Network-Wide Port-Aggregation Protocol Deployment Guidelines
[Figure 36 shows the recommended protocol per platform pair: PAgP+ between Catalyst
4507R-E and 6500-E systems; LACP between Catalyst 3750-X/2960-S
StackWise/FlexStack or 3560-X/2960 access switches and the distribution, core, service
block, and WAN/Internet edge (ASR 1006 QFP, Cisco 3800) systems; and static ON mode
only where no protocol is supported.]

Port-aggregation protocol support varies among Cisco platform types; therefore,
depending on the device type at each end of the EtherChannel, Cisco recommends
deploying the port-channel settings specified in Table 3.

Table 3   MEC Port-Aggregation Protocol Recommendation

Port-Agg Protocol   Local Node   Remote Node   Bundle State
PAgP+               Desirable    Desirable     Operational
LACP                Active       Active        Operational
None (1)            ON           ON            Operational

1. None or static-mode EtherChannel configuration must be deployed only in
exceptional cases, when the remote node does not support either of the
port-aggregation protocols. To prevent network instability, the network
administrator must implement the static-mode port-channel with special
attention to ensure there is no configuration incompatibility between the
bundled member-link ports.

The implementation guidelines to deploy EtherChannel and MEC in Layer 2 or Layer 3
mode are simple and consistent. The following sample configuration provides guidance
to implement a single point-to-point Layer-3 MEC from diverse physical ports, in
different module slots that physically reside in the two virtual-switch chassis, to a
single redundant-mode standalone Catalyst 4507R-E system:
• MEC—VSS-Core

cr23-VSS-Core(config)#interface Port-channel 102
cr23-VSS-Core(config-if)# ip address 10.125.0.14 255.255.255.254
! Bundling a single MEC from diverse physical ports and modules on a per-node basis.
cr23-VSS-Core(config)#interface range Ten1/1/3 , Ten1/3/3 , Ten2/1/3 , Ten2/3/3
cr23-VSS-Core(config-if-range)#channel-protocol pagp
cr23-VSS-Core(config-if-range)#channel-group 102 mode desirable

cr23-VSS-Core#show etherchannel 102 summary | inc Te
102    Po102(RU)    PAgP    Te1/1/3(P)    Te1/3/3(P)    Te2/1/3(P)    Te2/3/3(P)

cr23-VSS-Core#show pagp 102 neighbor | inc Te
Te1/1/3    cr24-4507e-MB    0021.d8f5.45c0    Te4/2    27s    SC    10001
Te1/3/3    cr24-4507e-MB    0021.d8f5.45c0    Te3/1    28s    SC    10001
Te2/1/3    cr24-4507e-MB    0021.d8f5.45c0    Te4/1    11s    SC    10001
Te2/3/3    cr24-4507e-MB    0021.d8f5.45c0    Te3/2    11s    SC    10001

• EtherChannel—Catalyst 4507R-E Distribution

cr24-4507e-MB(config)#interface Port-channel 1
cr24-4507e-MB(config-if)# ip address 10.125.0.15 255.255.255.254
! Bundling a single EtherChannel from diverse physical ports on a per-supervisor basis.
cr24-4507e-MB(config)#interface range Ten3/1 - 2 , Ten4/1 - 2
cr24-4507e-MB(config-if-range)#channel-protocol pagp
cr24-4507e-MB(config-if-range)#channel-group 1 mode desirable

cr24-4507e-MB#show etherchannel 1 summary | inc Te
1      Po1(RU)      PAgP    Te3/1(P)    Te3/2(P)    Te4/1(P)    Te4/2(P)

cr24-4507e-MB#show pagp 1 neighbor | inc Te
Te3/1    cr23-VSS-Core    0200.0000.0014    Te1/3/3    26s    SC    660001
Te3/2    cr23-VSS-Core    0200.0000.0014    Te2/3/3    15s    SC    660001
Te4/1    cr23-VSS-Core    0200.0000.0014    Te2/1/3    25s    SC    660001
Te4/2    cr23-VSS-Core    0200.0000.0014    Te1/1/3    11s    SC    660001

EtherChannel Load-Sharing

The number of applications and their functions in a campus network design becomes
highly variable, especially when the network is provided as a common platform for
business operations, campus security, and open accessibility to users. It becomes
important for the network to be more intelligence-aware, with deep packet inspection,
and to load-share traffic by fully using all network resources.
Fine-tuning EtherChannel and MEC adds extra computing intelligence to the network to
make protocol-aware egress forwarding decisions between the multiple local
member-link paths. For each traffic flow, such tuning optimizes the egress
path-selection procedure with multiple levels of variable information originated by the
source host (i.e., Layer 2 to Layer 4). EtherChannel load-balancing method support
varies among Cisco Catalyst platforms. Table 4 summarizes the currently supported
EtherChannel load-balancing methods.

Table 4   EtherChannel Load Balancing Support Matrix

Packet Type   Classification Layer   Load Balancing Mechanism     Supported Cisco Catalyst Platform
Non-IP        Layer 2                src-mac                      29xx, 35xx, 3750, 4500, 6500
                                     dst-mac
                                     src-dst-mac
IP            Layer 3                src-ip                       29xx, 35xx, 3750, 4500, 6500
                                     dst-ip
                                     src-dst-ip (recommended)
IP            Layer 4                src-port                     4500, 6500
                                     dst-port
                                     src-dst-port
IP            XOR L3 and L4          src-dst-mixed-ip-port        6500
                                     (recommended)

Implementing EtherChannel Load-Sharing

EtherChannel load-sharing is based on a polymorphic algorithm. On a per-protocol
basis, load-sharing is done based on the source XOR destination address or port from
the Layer 2 to Layer 4 headers. For higher granularity and optimal utilization of each
member-link port, an EtherChannel can intelligently load-share egress traffic using
different algorithms. All Cisco Catalyst 29xx-S, 3xxx-X, and 4500-E switches must be
tuned with optimal EtherChannel load-sharing capabilities, similar to the following
sample configuration:

cr24-4507e-MB(config)#port-channel load-balance src-dst-ip

cr24-4507e-MB#show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        src-dst-ip
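The PAgP-based samples above apply where both ends of the bundle support PAgP+. For
access-layer platforms where Table 3 and Figure 36 recommend LACP, the equivalent
bundling uses LACP in active mode on both ends. The following is a minimal sketch only;
the port-channel number and uplink interface numbers are illustrative assumptions and
are not taken from the reference topology:

```
! Hypothetical Catalyst 3750-X StackWise access switch: bundle two diverse
! uplinks (one per stack member) into a single logical Layer-2 EtherChannel
cr36-3750x-xSB(config)#interface range Ten1/1/1 , Ten2/1/1
cr36-3750x-xSB(config-if-range)#channel-protocol lacp
cr36-3750x-xSB(config-if-range)#channel-group 10 mode active
```

Because both member links terminate on different stack members, the bundle survives
the failure of either stack member or uplink without an STP or routing reconvergence.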

Implementing MEC Load-Sharing

The next-generation Catalyst 6500-E Sup720-10G supervisor introduces more
intelligence and flexibility to load-share traffic with up to 13 different traffic patterns.
Independent of the virtual-switch role, each node in the VSD uses the same polymorphic
algorithm to load-share egress Layer 2 or Layer 3 traffic across the different member
links of the local chassis. When computing the load-sharing hash, each virtual-switch
node includes the local physical ports of the MEC rather than the remote switch ports;
this customized load-sharing is designed to prevent traffic from being rerouted over the
VSL. It is recommended to implement the following MEC load-sharing configuration in
the global configuration mode:

cr23-VSS-Core(config)#port-channel load-balance src-dst-mixed-ip-port

cr23-VSS-Core#show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        src-dst-mixed-ip-port vlan included

Note MEC load-sharing becomes effective only when each virtual-switch node has
more than one physical path in the same bundle interface.

MEC Hash Algorithm

Like MEC load-sharing, the hash algorithm is computed independently by each virtual
switch to load-share via its local physical ports. The traffic load-share is defined based
on the number of internal bits allocated to each local member-link port. The Cisco
Catalyst 6500-E system in VSS mode assigns 8 bits to every MEC; the 8 bits can be
represented as 100 percent of the switching load. Depending on the number of local
member-link ports in an MEC bundle, the 8-bit hash is computed and allocated to each
port for an optimal load-sharing result. Like the standalone network design, VSS
supports the following EtherChannel hash algorithms:
• Fixed—Default setting. Keep the default if each virtual-switch node has a single local
member-link port bundled in the same L2/L3 MEC (a total of two ports in the MEC).
• Adaptive—Best practice is to modify to the adaptive hash method if each
virtual-switch node has two or more physical ports in the same L2/L3 MEC.
When deploying the full-mesh V-shape network to the VSS-enabled campus core, it is
recommended to modify the MEC hash algorithm from its default setting as shown in
the following sample configuration:

cr23-VSS-Core(config)#port-channel hash-distribution adaptive

Modifying the MEC hash algorithm to adaptive mode requires the system to internally
reprogram the hash result on each MEC. Therefore, plan for additional downtime to
make the new configuration effective.

cr23-VSS-Core(config)#interface Port-channel 101
cr23-VSS-Core(config-if)#shutdown
cr23-VSS-Core(config-if)#no shutdown

cr23-VSS-Core#show etherchannel 101 detail | inc Hash
Last applied Hash Distribution Algorithm: Adaptive

Network Addressing Hierarchy

Developing a structured and hierarchical IP address plan is as important as any other
design aspect of the medium enterprise network, in order to create an efficient,
scalable, and stable network design. Identifying an IP addressing strategy for the entire
medium enterprise network design is essential.

Note This section does not explain the fundamentals of TCP/IP addressing; for more
details, see the many Cisco Press publications that cover this topic.

The following are key benefits of using hierarchical IP addressing:
• Efficient address allocation
– Hierarchical addressing provides the advantage of grouping all possible
addresses contiguously.
– In non-contiguous addressing, a network can create addressing conflicts and
overlapping problems, which may not allow the network administrator to use the
complete address block.
• Improved routing efficiencies
– Building centralized main and remote campus site networks with contiguous IP
addresses provides an efficient way to advertise summarized routes to neighbors.
– Route summarization simplifies the routing database and computation during
topology change events.
– Reduces the network bandwidth utilization used by routing protocols.
– Improves overall routing protocol performance by flooding fewer messages, and
improves network convergence time.
• Improved system performance
– Reduces the memory needed to hold large-scale discontiguous and
non-summarized route entries.
– Reduces the CPU power needed to re-compute large-scale routing databases
during topology change events.
– Becomes easier to manage and troubleshoot.
– Helps in overall network and system stability.

Network Foundational Technologies for LAN Design

In addition to a hierarchical IP addressing scheme, it is also essential to determine which
areas of the medium enterprise design are Layer 2 or Layer 3, to determine whether
routing or switching fundamentals need to be applied. The following applies to the three
layers in a LAN design model:
• Core layer—Because this is a Layer 3 network that interconnects several remote
locations and shared devices across the network, choosing a routing protocol is
essential at this layer.

• Distribution layer—The distribution block uses a combination of Layer 2 and Layer 3
switching to provide for the appropriate balance of policy and access controls,
availability, and flexibility in subnet allocation and VLAN usage. Both routing and
switching fundamentals need to be applied.
• Access layer—This layer is the demarcation point between the network infrastructure
and computing devices. It is designed for critical network edge functions: to provide
intelligent application and device-aware services, to set the trust boundary to
distinguish applications, to provide identity-based network access to protected data
and resources, to provide physical infrastructure services that reduce greenhouse
emissions, and more. This subsection provides design guidance to enable various
types of Layer 1 to Layer 3 intelligent services, and to optimize and secure network
edge ports.
The recommended routing or switching scheme of each layer is discussed in the
following sections.

Designing the Core Layer Network

Because the core layer is a Layer 3 network, routing principles must be applied.
Choosing a routing protocol is essential; routing design principles and routing protocol
selection criteria are discussed in the following subsections.

Routing Design Principles

Although enabling routing functions in the core is a simple task, the routing blueprint
must be well understood and designed before implementation, because it provides the
end-to-end reachability path of the enterprise network. For an optimized routing design,
the following three routing components must be identified and designed to allow more
network growth and provide a stable network, independent of scale:
• Hierarchical network addressing—Structured IP network addressing in the medium
enterprise LAN and/or WAN design is required to make the network scalable, optimal,
and resilient.
• Routing protocol—Cisco IOS supports a wide range of Interior Gateway Protocols
(IGPs). Cisco recommends deploying a single routing protocol across the medium
enterprise network infrastructure.
• Hierarchical routing domain—Routing protocols must be designed in a hierarchical
model that allows the network to scale and operate with greater stability. Building a
routing boundary and summarizing the network minimizes the topology size and
synchronization procedure, which improves overall network resource use and
re-convergence.

Routing Protocol Selection Criteria

The criteria for choosing the right protocol vary based on the end-to-end network
infrastructure. Although all the routing protocols that Cisco IOS currently supports can
provide a viable solution, network architects must consider all the following critical
design factors when selecting the right routing protocol to be implemented throughout
the internal network:
• Network design—Requires a proven protocol that can scale in full-mesh campus
network designs and can optimally function in hub-and-spoke WAN network
topologies.
• Scalability—The routing protocol function must be network- and system-efficient and
operate with a minimal number of updates and re-computations, independent of the
number of routes in the network.
• Rapid convergence—Link-state versus DUAL re-computation and synchronization.
Network re-convergence also varies based on network design, configuration, and a
multitude of other factors that may be more than a specific routing protocol can
handle. The best convergence time can be achieved from a routing protocol if the
network is designed to the strengths of the protocol.
• Operational—A simplified routing protocol that can provide ease of configuration,
management, and troubleshooting.
Cisco IOS supports a wide range of routing protocols, such as Routing Information
Protocol (RIP) v1/2, Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest
Path First (OSPF), and Intermediate System-to-Intermediate System (IS-IS). However,
Cisco recommends using EIGRP or OSPF for this network design. EIGRP is a popular
version of an Interior Gateway Protocol (IGP) because it has all the capabilities needed
for small to large-scale networks, offers rapid network convergence, and above all is
simple to operate and manage. OSPF is a popular link-state protocol for large-scale
enterprise and service provider networks. OSPF enforces hierarchical routing domains
in two tiers by implementing backbone and non-backbone areas. The OSPF area
function depends on the network connectivity model and the role of each OSPF router
in the domain. OSPF can scale higher, but the operation, configuration, and management
might become too complex for the medium enterprise LAN network infrastructure.
Other technical factors must be considered when implementing OSPF in the network,
such as OSPF router type, link type, maximum transmission unit (MTU) considerations,
designated router (DR)/backup designated router (BDR) priority, and so on. This
document provides design guidance for using simplified EIGRP in the medium enterprise
campus and WAN network infrastructure.

Note For detailed information on EIGRP and OSPF, see the following URL:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/routed-ex.html.

Designing an End-to-End EIGRP Routing Network

EIGRP is a balanced hybrid routing protocol that builds neighbor adjacency and a flat
routing topology on a per autonomous system (AS) basis. Cisco recommends
considering the following three critical design tasks before implementing EIGRP in the
medium enterprise LAN core layer network:
• EIGRP autonomous system—The Layer 3 LAN and WAN infrastructure of the medium
enterprise design must be deployed in a single EIGRP AS, as shown in Figure 37. A
single EIGRP AS reduces operational tasks and prevents route redistribution, loops,
and other problems that may occur because of misconfiguration. Figure 37 illustrates
the end-to-end single EIGRP autonomous system design in the medium enterprise
network.

Figure 37   Sample End-to-End EIGRP Routing Design in Medium Enterprise LAN Network
[Figure 37 shows a single EIGRP AS 100 spanning the main site (VSS core and
distribution), the WAN edge (QFP), and the remote large, medium, and small sites.]

Implementing EIGRP Routing Protocol

The following sample configuration provides deployment guidelines for implementing
the EIGRP routing protocol on all Layer-3 network devices in a single autonomous
system (AS):

cr23-VSS-Core(config)#router eigrp 100
cr23-VSS-Core(config-router)# network 10.0.0.0
cr23-VSS-Core(config-router)# eigrp router-id 10.125.200.254
cr23-VSS-Core(config-router)# no auto-summary

cr23-VSS-Core#show ip eigrp neighbors
EIGRP-IPv4 neighbors for process 100
H   Address        Interface   Hold    Uptime   SRTT   RTO   Q Cnt   Seq Num
                               (sec)            (ms)
7   10.125.0.13    Po101       12      3d16h    1      200   0       62
0   10.125.0.15    Po102       10      3d16h    1      200   0       503
1   10.125.0.17    Po103       11      3d16h    1      200   0       52
…

cr23-VSS-Core#show ip route eigrp | inc /16|/20|0.0.0.0
     10.0.0.0/8 is variably subnetted, 41 subnets, 5 masks
D       10.126.0.0/16 [90/3072] via 10.125.0.23, 08:33:16, Port-channel106
D       10.125.128.0/20 [90/3072] via 10.125.0.17, 08:33:15, Port-channel103
D       10.125.96.0/20 [90/3072] via 10.125.0.13, 08:33:18, Port-channel101
D       10.125.0.0/16 is a summary, 08:41:12, Null0
D*EX 0.0.0.0/0 [170/515072] via 10.125.0.27, 08:33:20, Port-channel108

• EIGRP adjacency protection—This increases network infrastructure efficiency and
protection by securing the EIGRP adjacencies with internal systems. This task
involves two subset implementation tasks on each EIGRP-enabled network device:
– Increase system efficiency—Block EIGRP processing with the passive-mode
configuration on physical or logical interfaces connected to non-EIGRP devices
in the network, such as PCs. This best practice helps reduce CPU utilization and
secures the network against unprotected EIGRP adjacencies with untrusted
devices. The following sample configuration provides guidelines to enable EIGRP
protocol communication on trusted interfaces and block it on all other system
interfaces. This recommended best practice must be enabled on all the EIGRP
Layer 3 systems in the network:
cr23-VSS-Core(config)#router eigrp 100

cr23-VSS-Core(config-router)# passive-interface default
cr23-VSS-Core(config-router)# no passive-interface Port-channel101
cr23-VSS-Core(config-router)# no passive-interface Port-channel102
<snippet>

– Network security—Each EIGRP neighbor in the LAN/WAN network must be
trusted by implementing and validating the Message-Digest algorithm 5 (MD5)
authentication method on each EIGRP-enabled system in the network. The
following recommended EIGRP MD5 adjacency authentication configuration
must be applied on each non-passive EIGRP interface to establish secure
communication with remote neighbors. This recommended best practice must
be enabled on all the EIGRP Layer 3 systems in the network:

cr23-VSS-Core(config)#key chain eigrp-key
cr23-VSS-Core(config-keychain)# key 1
cr23-VSS-Core(config-keychain-key)#key-string <password>

cr23-VSS-Core(config)#interface range Port-Channel 101 - 108
cr23-VSS-Core(config-if-range)# ip authentication mode eigrp 100 md5
cr23-VSS-Core(config-if-range)# ip authentication key-chain eigrp 100 eigrp-key

• Optimizing the EIGRP topology—EIGRP allows network administrators to summarize
multiple individual and contiguous networks into a single summary network before
advertising to a neighbor. Route summarization helps improve network performance,
stability, and convergence by hiding the fault of an individual network that would
otherwise require each router in the network to re-synchronize the routing topology.
Each aggregating device must summarize a large number of networks into a single
summary route. Figure 38 shows an example of the EIGRP topology for the medium
enterprise LAN design.

Figure 38   EIGRP Route Aggregator Design
[Figure 38 shows the route aggregation points: the distribution layer and the WAN edge
in the main site, and the distribution layer of each remote site, each summarizing its
block toward the core.]

The following configuration must be applied on each EIGRP route aggregator system,
as depicted in Figure 38. EIGRP route summarization must be implemented on the
upstream logical port-channel interface to announce a single prefix from each block.

cr22-6500-LB(config)#interface Port-channel100
cr22-6500-LB(config-if)# ip summary-address eigrp 100 10.125.96.0 255.255.240.0

cr22-6500-LB#show ip protocols
…
  Address Summarization:
    10.125.96.0/20 for Port-channel100
<snippet>

cr22-6500-LB#show ip route | inc Null0
D       10.125.96.0/20 is a summary, 3d16h, Null0

• EIGRP Timers—By default, EIGRP speakers transmit Hello packets every 5 seconds,
and terminate the EIGRP adjacency if the neighbor fails to receive them within the
15-second hold-down time. In this network design, Cisco recommends retaining the
default EIGRP Hello and Hold timers on all EIGRP-enabled platforms.

Designing the Campus Distribution Layer Network

This section provides design guidelines for deploying various types of Layer 2 and
Layer 3 technology in the distribution layer. Independent of which distribution layer
design model is deployed, the deployment guidelines remain consistent in all designs.
Because the distribution layer can be deployed with both Layer 2 and Layer 3
technologies, the following two network designs are recommended:
• Multilayer
• Routed access

Designing the Multilayer Network

A multilayer network is a traditional, simple, and widely deployed design, regardless of
network scale. The access layer switches at the campus network edge interface with
various types of endpoints and provide intelligent Layer 1/2 services. The access layer
switches interconnect to the distribution switches with Layer 2 trunks, and rely on the
distribution layer aggregation switch to perform intelligent Layer 3 forwarding and to
set policies and access control.
There are three design variations for building a multilayer network; all variations must
be deployed in a V-shape physical network design and must be built to provide a
loop-free topology:
• Flat—Certain applications and user access require that the broadcast domain design
span more than a single wiring-closet switch. The multilayer network design provides
the flexibility to build a single large broadcast domain with an extended star
topology. Such flexibility introduces scalability, performance, and security
challenges, and may require extra attention to protect the network against
misconfiguration and miswiring that can create spanning-tree loops and de-stabilize
the network.
• Segmented—Provides a unique VLAN for different organizational divisions and
enterprise business function segments to build a per-department logical network. All
network communication between the various enterprise and administrative groups
passes through the routing and forwarding policies defined at the distribution layer.
• Hybrid—A hybrid logical network design segments VLAN workgroups that do not
span different access layer switches, and allows certain VLANs (for example, the
network management VLAN) to span across the access-distribution block. The hybrid
network design enables flat Layer 2 communication without impacting the network,
and also helps reduce the number of subnets used.
Figure 39 shows the three design variations for the multilayer network.

Figure 39   Multilayer Design Variations
[Figure 39 shows the three variations: a flat network with a single root bridge, V-shape
topology, and a single loop-free EtherChannel spanning one VLAN; a segmented network
with per-department Marketing, Sales, and Engineering VLANs (10, 20, 30); and a hybrid
network that additionally spans the network management VLAN 900 across the
access-distribution block.]

Cisco recommends that the hybrid multilayer access-distribution block design use a
loop-free network topology, and span only the few VLANs that require such flexibility,
such as the management VLAN.
The following sample configurations provide guidelines to deploy several types of
multilayer network components for the hybrid multilayer access-distribution block. All
the configurations and best practices remain consistent and can be deployed
independent of Layer 2 platform type and campus location.

VTP

VLAN Trunking Protocol (VTP) is a Cisco proprietary Layer-2 messaging protocol that
manages the addition, deletion, and renaming of VLANs on a network-wide basis. VTP
simplifies administration in a switched network. VTP can be configured in three
modes—server, client, and transparent. It is recommended to deploy VTP in transparent
mode: set the VTP domain name and change the mode to transparent.

cr22-3750-LB(config)#vlan 900
cr22-3750-LB(config-vlan)#name Mgmt_VLAN

cr22-3750-LB#show vlan | inc 101|102|900
101  Untrusted_PC_VLAN       active    Gi1/0/1
102  Lobby_IP_Phone_VLAN     active    Gi1/0/2
900  Mgmt_VLAN               active

Implementing Layer 2 Trunk

In a typical campus network design, a single access switch is deployed with more than a
single VLAN, for example a data VLAN and a voice VLAN. The Layer-2 network
connection between the distribution and access device is a trunk interface. A VLAN tag
is added to maintain logical separation between VLANs across the trunk. It is
recommended to implement 802.1Q trunk encapsulation in static mode instead of
negotiating mode, to improve rapid link bring-up performance.
Enabling the Layer-2 trunk on a port-channel automatically enables communication for
all of the active VLANs between the access and distribution layers. This may create an
adverse impact in a large-scale network; the access-layer switch may receive flooded
traffic destined to another access switch. Hence it is important to limit traffic on
Layer-2 trunk ports by statically allowing only the active VLANs, to ensure efficient and
secure network performance. Allowing only the assigned VLANs on a trunk port
automatically filters the rest.
By default on Cisco Catalyst switches, the native VLAN on each Layer 2 trunk port is
VLAN 1, and it cannot be disabled or removed from the VLAN database. The native
VLAN remains active on all access-switch Layer 2 ports. The default native VLAN must
be properly configured to avoid several security risks—attacks, worms and viruses, or
data theft. Any malicious traffic originated in VLAN 1 will span across the access-layer
network. With a VLAN-hopping attack, it is possible to attack a system that does not
reside in VLAN 1.
cr22-3750-LB(config)#vtp domain CCVE-LB Best practice to mitigate this security risk is to implement a unused and unique VLAN ID
cr22-3750-LB(config)#vtp mode transparent as a native VLAN on the Layer-2 trunk between the access and distribution switch. For
cr22-3750-LB(config)#vtp version 2 example, configure VLAN 801 in the access-switch and in the distribution switch. Then
change the default native VLAN setting in both the switches. Thereafter, VLAN 801 must
cr22-3750-LB#show vtp status not be used anywhere for any purpose in the same access-distribution block.
VTP Version capable:1 to 3 The following is the configuration example to implement Layer-2 trunk, filter VLAN list and
VTP version running:2 configure the native-VLAN to prevent attacks and optimize port channel interface. When
VTP Domain Name:CCVE-LB the following configurations are applied on port-channel interface (i.e., Port-Channel 1),
they are automatically inherited on each bundled member-link (i.e., Gig1/0/49 and
Gig1/0/50):
VLAN
cr22-3750-LB(config)#vlan 101 Access-Layer
cr22-3750-LB(config-vlan)#name Untrusted_PC_VLAN
cr22-3750-LB(config)#vlan 102 cr22-3750-LB(config)#vlan 801
cr22-3750-LB(config-vlan)#name Lobby_IP_Phone_VLAN cr22-3750-LB(config-vlan)#name Hopping_VLAN
cr22-3750-LB(config)#interface Port-channel1
cr22-3750-LB(config-if)#description Connected to cr22-6500-LB
cr22-3750-LB(config-if)#switchport
cr22-3750-LB(config-if)#switchport trunk encapsulation dot1q
cr22-3750-LB(config-if)#switchport trunk native vlan 801
cr22-3750-LB(config-if)#switchport trunk allowed vlan 101-110,900
cr22-3750-LB(config-if)#switchport mode trunk

cr22-3750-LB#show interface port-channel 1 trunk

Port        Mode         Encapsulation  Status        Native vlan
Po1         on           802.1q         trunking      801

Port        Vlans allowed on trunk
Po1         101-110,900

Port        Vlans allowed and active in management domain
Po1         101-110,900

Port        Vlans in spanning tree forwarding state and not pruned
Po1         101-110,900

Spanning-Tree in Multilayer Network

Spanning Tree Protocol (STP) is a Layer-2 protocol that prevents logical loops in switched networks
with redundant links. The medium enterprise LAN network design uses an EtherChannel or
MEC (point-to-point logical Layer-2 bundle) connection between the access-layer and
distribution switch, which inherently simplifies the STP topology and operation. In this
point-to-point network design, the STP operation is performed on a logical port; therefore, it will
be assigned automatically to the forwarding state.

Over the years, STP has evolved into the following versions:
• Per-VLAN Spanning Tree Plus (PVST+)—Provides a separate 802.1D STP instance for each
active VLAN in the network.
• IEEE 802.1w Rapid PVST+—Provides an instance of RSTP (802.1w) per VLAN. It is
easy to implement, is proven in large-scale networks that support up to 3000 logical
ports, and greatly improves network restoration time.
• IEEE 802.1s MST—Provides up to 16 instances of RSTP (802.1w) and combines many
VLANs with the same physical and logical topology into a common RSTP instance.

The following is an example configuration for enabling STP in the multilayer network:

Distribution-Layer

cr22-6500-LB(config)#spanning-tree mode rapid-pvst

cr22-6500-LB#show spanning-tree summary | inc mode
Switch is in rapid-pvst mode

Access-Layer

cr22-3750-LB(config)#spanning-tree mode rapid-pvst

Hardening Spanning-Tree Toolkit

Ensuring a loop-free topology is critical in a multilayer network design. Spanning-Tree
Protocol (STP) dynamically develops a loop-free multilayer network topology that can
compute the best forwarding path and provide redundancy. Although STP behavior is
deterministic, it is not optimally designed to mitigate network instability caused by
hardware miswiring or software misconfiguration. Cisco has developed several STP
extensions to protect against network malfunctions, and to increase stability and
availability. All Cisco Catalyst LAN switching platforms support the complete STP toolkit
suite, which must be enabled globally on individual logical and physical ports of the
distribution and access layer switches.

Figure 40 shows an example of enabling various STP extensions on distribution and
access layer switches in all campus sites.

Figure 40 Protecting Multilayer Network with Cisco STP Toolkit
(Figure shows the STP root bridge on the VSS distribution pair, Root Guard and UDLD on the Layer 2 trunk ports toward the access layer, and BPDU Guard, Root Guard, PortFast, and edge-port settings on access-layer ports.)

Note For additional STP information, see the following URL:
http://www.cisco.com/en/US/tech/tk389/tk621/tsd_technology_support_troubleshooting_technotes_list.html.

Designing the Routed Access Network

Routing functions in the access layer network simplify configuration, optimize distribution
performance, and provide end-to-end troubleshooting tools. Implementing Layer 3
functions in the access layer replaces the Layer 2 trunk configuration with a single point-to-point
Layer 3 interface to a collapsed core system in the aggregation layer. Pushing Layer 3
functions one tier down to Layer 3 access switches changes the traditional multilayer
network topology and forwarding development path. Implementing Layer 3 functions in
the access switch does not require any physical or logical link reconfiguration; the
access-distribution block can be used, and is as resilient as in the multilayer network
design. Figure 41 shows the differences between the multilayer and routed access
network designs, as well as where the Layer 2 and Layer 3 boundaries exist in each
network design.

Figure 41 Layer 2 and Layer 3 Boundaries for Multilayer and Routed Access Network Design
(Figure contrasts the multilayer network, where STP runs between the access and distribution layers and routing begins at the distribution layer, with the routed-access network, where the Layer 3 boundary is pushed down to the access layer.)

The routed-access network design enables Layer 3 access switches to act as the Layer 2
demarcation point and to provide inter-VLAN routing and gateway functions to the endpoints.
The Layer 3 access switches make more intelligent, multi-function, and policy-based
routing and switching decisions, like distribution-layer switches.

Although Cisco VSS and the single redundant distribution design are simplified with a
single point-to-point EtherChannel, the benefits of implementing the routed access
design in medium enterprises are as follows:
• Eliminates the need for implementing STP and the STP toolkit on the distribution
system. As a best practice, the STP toolkit must be hardened at the access layer.
• Shrinks the Layer 2 fault domain, thus minimizing the number of denial-of-service
(DoS)/distributed denial-of-service (DDoS) attacks.
• Bandwidth efficiency—Improves Layer 3 uplink network bandwidth efficiency by
suppressing Layer 2 broadcasts at the edge port.
• Improves overall collapsed core and distribution resource utilization.

Enabling Layer 3 functions in the access-distribution block must follow the same core
network designs as mentioned in previous sections to provide network security as well as to
optimize the network topology and system resource utilization:
• EIGRP autonomous system—Layer 3 access switches must be deployed in the
same EIGRP AS as the distribution and core layer systems.
• EIGRP adjacency protection—EIGRP processing must be enabled on uplink Layer 3
EtherChannels, and the remaining Layer 3 ports must be blocked by default in passive mode.
Access switches must establish a secured EIGRP adjacency with the aggregation system
using the MD5 hash algorithm.
• EIGRP network boundary—All EIGRP neighbors must be in a single AS to build a
common network topology. The Layer 3 access switches must be deployed in EIGRP
stub mode for a concise network view.

Implementing Routed Access in Access-Distribution Block

The Cisco IOS configuration to implement the Layer 3 routing function on the Catalyst
access-layer switch remains consistent. Refer to the EIGRP routing configuration and best
practices defined in the Designing End-to-End EIGRP Network section to implement the routing
function in access-layer switches.

EIGRP creates and maintains a single flat routing topology network between EIGRP peers.
Building a single routing domain in a large-scale campus core design allows for complete
network visibility and reachability that may interconnect multiple campus components,
such as distribution blocks, services blocks, the data center, the WAN edge, and so on.

In the three- or two-tier deployment models, the Layer 3 access switch must always have a
single physical or logical forwarding path to a distribution switch. The Layer 3 access switch
dynamically develops the forwarding topology pointing to a single distribution switch as a
single Layer 3 next hop. Because the distribution switch provides a gateway function to the
rest of the network, the routing design on the Layer 3 access switch can be optimized with
the following two techniques to improve performance and network reconvergence in the
access-distribution block, as shown in Figure 42:
• Deploying the Layer 3 access switch in EIGRP stub mode—An EIGRP stub router in the
Layer-3 access switch can announce routes to a distribution-layer router with great flexibility.

The following is an example configuration to enable EIGRP stub routing in the Layer-3
access switch; no configuration changes are required in the distribution system:

• Access layer

cr22-4507-LB(config)#router eigrp 100
cr22-4507-LB(config-router)#eigrp stub connected

cr22-4507-LB#show eigrp protocols detailed
Address Family Protocol EIGRP-IPv4:(100)
  EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
  EIGRP maximum hopcount 100
  EIGRP maximum metric variance 1
  EIGRP NSF-aware route hold timer is 240
  EIGRP NSF enabled
    NSF signal timer is 20s
    NSF converge timer is 120s
    Time since last restart is 2w2d
  EIGRP stub, connected
  Topologies : 0(base)

• Distribution layer
cr22-6500-LB#show ip eigrp neighbors detail port-channel 101
EIGRP-IPv4 neighbors for process 100
H   Address       Interface   Hold   Uptime   SRTT   RTO   Q Cnt   Seq Num
                              (sec)           (ms)
2   10.125.0.1    Po101       13     3d18h    4      200   0       98
   Version 4.0/3.0, Retrans: 0, Retries: 0, Prefixes: 6
   Topology-ids from peer - 0
   Stub Peer Advertising ( CONNECTED ) Routes
   Suppressing queries

• Summarizing the network view with a default route to the Layer 3 access switch for
intelligent routing functions

Figure 42 Designing and Optimizing EIGRP Network Boundary for the Access Layer
(Figure contrasts an EIGRP stub network, where the access switch in EIGRP stub mode advertises only connected routes in EIGRP AS-100, with summarized EIGRP route advertisement, where the aggregator sends summarized networks plus a default route instead of the non-summarized network view.)

The following sample configuration demonstrates the procedure to implement route
filtering at the distribution layer, which allows summarized and default-route advertisement
to build a concise network topology at the access layer:

• Distribution layer

cr22-6500-LB(config)# ip prefix-list EIGRP_STUB_ROUTES seq 5 permit 0.0.0.0/0
cr22-6500-LB(config)# ip prefix-list EIGRP_STUB_ROUTES seq 10 permit 10.122.0.0/16
cr22-6500-LB(config)# ip prefix-list EIGRP_STUB_ROUTES seq 15 permit 10.123.0.0/16
cr22-6500-LB(config)# ip prefix-list EIGRP_STUB_ROUTES seq 20 permit 10.124.0.0/16
cr22-6500-LB(config)# ip prefix-list EIGRP_STUB_ROUTES seq 25 permit 10.125.0.0/16
cr22-6500-LB(config)# ip prefix-list EIGRP_STUB_ROUTES seq 30 permit 10.126.0.0/16

cr22-6500-LB(config)#router eigrp 100
cr22-6500-LB(config-router)#distribute-list route-map EIGRP_STUB_ROUTES out Port-channel101
cr22-6500-LB(config-router)#distribute-list route-map EIGRP_STUB_ROUTES out Port-channel102
cr22-6500-LB(config-router)#distribute-list route-map EIGRP_STUB_ROUTES out Port-channel103

cr22-6500-LB#show ip protocols
Outgoing update filter list for all interfaces is not set
  Port-channel101 filtered by
  Port-channel102 filtered by
  Port-channel103 filtered by

• Access layer

cr22-4507-LB#show ip route eigrp
     10.0.0.0/8 is variably subnetted, 12 subnets, 4 masks
D       10.122.0.0/16 [90/3840] via 10.125.0.0, 07:49:11, Port-channel1
D       10.123.0.0/16 [90/3840] via 10.125.0.0, 01:42:22, Port-channel1
D       10.126.0.0/16 [90/3584] via 10.125.0.0, 07:49:11, Port-channel1
D       10.124.0.0/16 [90/64000] via 10.125.0.0, 07:49:11, Port-channel1
D       10.125.0.0/16 [90/768] via 10.125.0.0, 07:49:13, Port-channel1
D*EX 0.0.0.0/0 [170/515584] via 10.125.0.0, 07:49:13, Port-channel1

Multicast for Application Delivery

Because unicast communication is based on the one-to-one forwarding model, it
becomes easier in routing and switching decisions to perform destination address
lookup, determine the egress path by scanning forwarding tables, and switch traffic. Beyond
the unicast routing and switching technologies discussed in the previous section, the
network can be made more efficient for certain applications where the
same content or application must be replicated to multiple users. IP multicast delivers
source traffic to multiple receivers using the least amount of network resources as
possible without placing an additional burden on the source or the receivers. Multicast
packet replication in the network is done by Cisco routers and switches enabled with
Protocol Independent Multicast (PIM) as well as other multicast routing protocols.

Similar to the unicast methods, multicast requires the following design guidelines:
• Choosing a multicast addressing design
• Choosing a multicast routing protocol
• Providing multicast security regardless of the location within the medium enterprise
design

Multicast Addressing Design

The Internet Assigned Numbers Authority (IANA) controls the assignment of IP multicast
addresses. A range of class D address space is assigned for use by IP multicast
applications. All multicast group addresses fall in the range from 224.0.0.0 through
239.255.255.255. Layer 3 addresses in multicast communications operate differently:
while the destination address of IP multicast traffic is in the multicast group range, the
source IP address is always in the unicast address range. Multicast addresses are
assigned in various pools for well-known multicast-based network protocols or
inter-domain multicast communications, as listed in Table 5.

Table 5 Multicast Address Range Assignments

Application                                                      Address Range
Reserved—Link local network protocols.                           224.0.0.0/24
Global scope—Group communication between an                      224.0.1.0 – 238.255.255.255
organization and the Internet.
Source Specific Multicast (SSM)—PIM extension for                232.0.0.0/8
one-to-many unidirectional multicast communication.
GLOP—Inter-domain multicast group assignment with                233.0.0.0/8
reserved global AS.
Limited scope—Administratively scoped address that               239.0.0.0/8
remains constrained within a local organization or AS.
Commonly deployed in enterprise, education, and other
organizations.

During the multicast network design phase, medium enterprise network architects must
select a range of multicast sources from the limited scope pool (239/8).

Multicast Routing Design

To enable end-to-end dynamic multicast operation in the network, each intermediate
system between the multicast receiver and source must support the multicast feature.
Multicast develops the forwarding table differently than the unicast routing and switching
model. To enable communication, multicast requires specific multicast routing protocols
and dynamic group membership.

The medium enterprise LAN design must be able to build packet distribution trees that
specify a unique forwarding path between the subnet of the source and each subnet
containing members of the multicast group. A primary goal in distribution tree
construction is to ensure that no more than one copy of each packet is forwarded on each
branch of the tree. The two basic types of multicast distribution trees are as follows:
• Source trees—The simplest form of a multicast distribution tree is a source tree, with
its root at the source and branches forming a tree through the network to the
receivers. Because this tree uses the shortest path through the network, it is also
referred to as a shortest path tree (SPT).
• Shared trees—Unlike source trees that have their root at the source, shared trees
use a single common root placed at a selected point in the network. This shared root
is called a rendezvous point (RP).

The PIM protocol is divided into the following two modes to support both types of
multicast distribution trees:
• Dense mode (DM)—Assumes that almost all routers in the network need to distribute
multicast traffic for each multicast group (for example, almost all hosts on the network
belong to each multicast group). PIM in DM mode builds distribution trees by initially
flooding the entire network and then pruning back the small number of paths without
receivers.
• Sparse mode (SM)—Assumes that relatively few routers in the network are involved
in each multicast. The hosts belonging to the group are widely dispersed, as might be
the case for most multicasts over the WAN. Therefore, PIM-SM begins with an empty
distribution tree and adds branches only as the result of explicit Internet Group
Management Protocol (IGMP) requests to join the distribution. PIM-SM mode is ideal
for a network without dense receivers and for multicast transport over WAN
environments, and it adjusts its behavior to match the characteristics of each receiver
group.

Selecting the PIM mode depends on the multicast applications that use various
mechanisms to build multicast distribution trees. Based on the multicast scale factor and
the centralized source deployment design for one-to-many multicast communication in
medium enterprise LAN infrastructures, Cisco recommends deploying PIM-SM because
it is efficient and intelligent in building the multicast distribution tree. All of the recommended
platforms in this design support PIM-SM mode on physical or logical (switched virtual
interface [SVI] and EtherChannel) interfaces.

Designing PIM Rendezvous Point

The following sections discuss best practices in designing and deploying the PIM-SM
rendezvous point.

PIM-SM RP Placement

It is assumed that each medium enterprise site has a wide range of local multicast sources
in the data center for distributed medium enterprise IT-managed media and employee
research and development applications. In such a distributed multicast network design,
Cisco recommends deploying the PIM RP on each site so that wired or wireless multicast
receivers and sources can join and register at the closest RP. The Medium Enterprise
Reference design recommends PIM-SM RP placement on the VSS-enabled and single
resilient core system in the three-tier campus design, and on the collapsed
core/distribution system in the two-tier campus design model. See Figure 43.
Figure 43 Distributed PIM-SM RP Placement
(Figure shows a PIM-SM RP on the VSS-enabled core at the main site, with its data center and multicast sources, and a PIM-SM RP with a local data center and multicast sources at each remote large, medium, and small site.)

PIM-SM RP Mode

PIM-SM supports RP deployment in the following three modes in the network:
• Static—In this mode, the RP must be statically identified and configured on each PIM
router in the network. RP load balancing and redundancy can be achieved using
anycast RP.
• Auto-RP—This mode is a dynamic method for discovering and announcing the RP in
the network. Auto-RP implementation is beneficial when there are multiple RPs and
groups that often change in the network. To prevent network reconfiguration during a
change, the RP mapping agent router must be designated in the network to receive
RP group announcements and to arbitrate conflicts, as part of the PIM version 1
specification.
• BootStrap Router (BSR)—This mode performs the same tasks as Auto-RP but in a
different way, and is part of the PIM version 2 specification. Auto-RP and BSR cannot
co-exist or interoperate in the same network.

In a small- to mid-sized multicast network, static RP configuration is recommended over
the other modes. Static RP implementation offers RP redundancy and load sharing, and
an additional simple access control list (ACL) can be applied to deploy the RP without
compromising multicast network security. Cisco recommends designing the medium
enterprise LAN multicast network using the static PIM-SM mode configuration. See
Figure 44.

Figure 44 PIM-SM Network Design in Medium Enterprise Network
(Figure shows PIM-SM enabled end to end—access, distribution, core, and WAN edge at the main site, and across the WAN to each remote large, medium, and small site—with the PIM-SM RP on the main-site core and on each remote-site core or collapsed core.)

The following is an example configuration to deploy PIM-SM RP on all PIM-SM running
systems. To provide transparent PIM-SM redundancy, the static PIM-SM RP configuration
must be identical across the campus LAN network and on each PIM-SM RP router.
• Core layer

cr23-VSS-Core(config)#ip multicast-routing

cr23-VSS-Core(config)#interface Loopback100
cr23-VSS-Core(config-if)#description Anycast RP Loopback
cr23-VSS-Core(config-if)#ip address 10.100.100.100 255.255.255.255

cr23-VSS-Core(config)#ip pim rp-address 10.100.100.100

cr23-VSS-Core#show ip pim rp
Group: 239.192.51.1, RP: 10.100.100.100, next RP-reachable in 00:00:34
Group: 239.192.51.2, RP: 10.100.100.100, next RP-reachable in 00:00:34
Group: 239.192.51.3, RP: 10.100.100.100, next RP-reachable in 00:00:34

cr23-VSS-Core#show ip pim interface

Address       Interface         Ver/Mode  Nbr Count  Query Intvl  DR Prior  DR
10.125.0.12   Port-channel101   v2/S      1          30           1         10.125.0.13
10.125.0.14   Port-channel102   v2/S      1          30           1         10.125.0.15

cr23-VSS-Core#show ip mroute sparse
(*, 239.192.51.8), 3d22h/00:03:20, RP 10.100.100.100, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Port-channel105, Forward/Sparse, 00:16:54/00:02:54
    Port-channel101, Forward/Sparse, 00:16:56/00:03:20

(10.125.31.147, 239.192.51.8), 00:16:54/00:02:35, flags: A
  Incoming interface: Port-channel105, RPF nbr 10.125.0.21
  Outgoing interface list:
    Port-channel101, Forward/Sparse, 00:16:54/00:03:20

cr23-VSS-Core#show ip mroute active
Active IP Multicast Sources - sending >= 4 kbps

Group: 239.192.51.1, (?)
   Source: 10.125.31.153 (?)
     Rate: 2500 pps/4240 kbps(1sec), 4239 kbps(last 30 secs), 12 kbps(life avg)

• Distribution layer

cr22-6500-LB(config)#ip multicast-routing
cr22-6500-LB(config)#ip pim rp-address 10.100.100.100

cr22-6500-LB(config)#interface range Port-channel 100 - 103
cr22-6500-LB(config-if-range)#ip pim sparse-mode
cr22-6500-LB(config)#interface range Vlan 101 - 120
cr22-6500-LB(config-if-range)#ip pim sparse-mode

cr22-6500-LB#show ip pim rp
Group: 239.192.51.1, RP: 10.100.100.100, uptime 00:10:42, expires never
Group: 239.192.51.2, RP: 10.100.100.100, uptime 00:10:42, expires never
Group: 239.192.51.3, RP: 10.100.100.100, uptime 00:10:41, expires never
Group: 224.0.1.40, RP: 10.100.100.100, uptime 3d22h, expires never

cr22-6500-LB#show ip pim interface

Address          Interface         Ver/Mode  Nbr Count  Query Intvl  DR Prior  DR
10.125.0.13      Port-channel100   v2/S      1          30           1         10.125.0.13
10.125.0.0       Port-channel101   v2/S      1          30           1         10.125.0.1
10.125.103.129   Vlan101           v2/S      0          30           1         10.125.103.129

cr22-6500-LB#show ip mroute sparse
(*, 239.192.51.1), 00:14:23/00:03:21, RP 10.100.100.100, flags: SC
  Incoming interface: Port-channel100, RPF nbr 10.125.0.12, RPF-MFD
  Outgoing interface list:
    Port-channel102, Forward/Sparse, 00:13:27/00:03:06, H
    Vlan120, Forward/Sparse, 00:14:02/00:02:13, H
    Port-channel101, Forward/Sparse, 00:14:20/00:02:55, H
    Port-channel103, Forward/Sparse, 00:14:23/00:03:10, H
    Vlan110, Forward/Sparse, 00:14:23/00:02:17, H
cr22-6500-LB#show ip mroute active
Active IP Multicast Sources - sending >= 4 kbps

Group: 239.192.51.1, (?)
  RP-tree:
   Rate: 2500 pps/4240 kbps(1sec), 4240 kbps(last 10 secs), 4011 kbps(life avg)

• Access layer

cr22-3560-LB(config)#ip multicast-routing distributed
cr22-3560-LB(config)#ip pim rp-address 10.100.100.100

cr22-3560-LB(config)#interface range Vlan 101 - 110
cr22-3560-LB(config-if-range)#ip pim sparse-mode

cr22-3560-LB#show ip pim rp
Group: 239.192.51.1, RP: 10.100.100.100, uptime 00:01:36, expires never
Group: 239.192.51.2, RP: 10.100.100.100, uptime 00:01:36, expires never
Group: 239.192.51.3, RP: 10.100.100.100, uptime 00:01:36, expires never
Group: 224.0.1.40, RP: 10.100.100.100, uptime 5w5d, expires never

cr22-3560-LB#show ip pim interface

Address         Interface       Ver/Mode  Nbr Count  Query Intvl  DR Prior  DR
10.125.0.5      Port-channel1   v2/S      1          30           1         10.125.0.5
10.125.101.1    Vlan101         v2/S      0          30           1         0.0.0.0
10.125.103.65   Vlan110         v2/S      0          30           1         10.125.103.65

cr22-3560-LB#show ip mroute sparse
(*, 239.192.51.1), 00:06:06/00:02:59, RP 10.100.100.100, flags: SC
  Incoming interface: Port-channel1, RPF nbr 10.125.0.4
  Outgoing interface list:
    Vlan101, Forward/Sparse, 00:06:08/00:02:09
    Vlan110, Forward/Sparse, 00:06:06/00:02:05

• WAN edge layer

cr11-asr-we(config)#ip multicast-routing distributed
cr11-asr-we(config)#ip pim rp-address 10.100.100.100

cr11-asr-we(config)#interface range Port-channel1 , Gig0/2/0 , Gig0/2/1.102
cr11-asr-we(config-if-range)#ip pim sparse-mode
cr11-asr-we(config)#interface Ser0/3/0
cr11-asr-we(config-if)#ip pim sparse-mode

cr11-asr-we#show ip pim rp
Group: 239.192.57.1, RP: 10.100.100.100, uptime 00:23:16, expires never
Group: 239.192.57.2, RP: 10.100.100.100, uptime 00:23:16, expires never
Group: 239.192.57.3, RP: 10.100.100.100, uptime 00:23:16, expires never

cr11-asr-we#show ip mroute sparse
(*, 239.192.57.1), 00:24:08/stopped, RP 10.100.100.100, flags: SP
  Incoming interface: Port-channel1, RPF nbr 10.125.0.22
  Outgoing interface list: Null

(10.125.31.156, 239.192.57.1), 00:24:08/00:03:07, flags: T
  Incoming interface: Port-channel1, RPF nbr 10.125.0.22
  Outgoing interface list:
    Serial0/3/0, Forward/Sparse, 00:24:08/00:02:55

cr11-asr-we#show ip mroute active
Active IP Multicast Sources - sending >= 4 kbps

Group: 239.192.57.1, (?)
   Source: 10.125.31.156 (?)
     Rate: 625 pps/1130 kbps(1sec), 1130 kbps(last 40 secs), 872 kbps(life avg)

PIM-SM RP Redundancy

PIM-SM RP redundancy and load sharing become imperative in the medium enterprise
LAN design, because each recommended core layer design model provides resiliency
and simplicity. In the Cisco Catalyst 6500 VSS-enabled core layer, the dynamically
discovered group-to-RP entries are fully synchronized to the standby switch. Combining
NSF/SSO capabilities with IPv4 multicast reduces the network recovery time and retains
the user and application performance at an optimal level. In the non-VSS-enabled network
design, PIM-SM uses Anycast RP and Multicast Source Discovery Protocol (MSDP) for
node failure protection. PIM-SM redundancy and load sharing is simplified with the Cisco
VSS-enabled core. Because VSS is logically a single system and provides node Implementing MSDP Anycast RP
protection, there is no need to implement Anycast RP and MSDP on a VSS-enabled
PIM-SM RP. Main Campus
Inter-Site PIM Anycast RP
cr23-VSS-Core(config)#ip msdp peer 10.122.200.2 connect-source
MSDP allows PIM RPs to share information about the active sources. PIM-SM RPs Loopback0
discover local receivers through PIM join messages, while the multicast source can be in cr23-VSS-Core(config)#ip msdp description 10.122.200.2
a local or remote network domain. MSDP allows each multicast domain to maintain an ANYCAST-PEER-6k-RemoteLrgCampus
independent RP that does not rely on other multicast domains, but does enable RPs to cr23-VSS-Core(config)#ip msdp peer 10.123.200.1 connect-source
forward traffic between domains. PIM-SM is used to forward the traffic between the
multicast domains.
Anycast RP is a useful application of MSDP. Originally developed for interdomain
multicast applications, MSDP used with Anycast RP is an intradomain feature that
provides redundancy and load-sharing capabilities. Large networks typically use Anycast
RP for configuring a PIM-SM network to meet fault tolerance requirements within a single
multicast domain.
The medium enterprise LAN multicast network must be designed with Anycast RP. The
PIM-SM RP at the main or centralized core must establish an MSDP session with the RP at
each remote site to exchange distributed multicast source information and allow the RPs
to join the SPT to active sources as needed. Figure 45 shows an example of a medium
enterprise LAN multicast network design.

Figure 45 Medium Enterprise Inter-Site Multicast Network Design

(Figure placeholder: the main site core, cr23-VSS-Core (Loopback0 10.125.200.254/32),
acts as PIM-SM RP and establishes MSDP peerings with the PIM-SM RPs at the remote
large site (Loopback0 10.122.200.2/32), remote medium site (Loopback0
10.123.200.1/32), and remote small site (Loopback0 10.124.200.2/32). All four RPs share
the Anycast RP address 10.100.100.100/32 on a loopback interface.)

cr23-VSS-Core(config)#ip msdp peer 10.123.200.1 connect-source Loopback0
cr23-VSS-Core(config)#ip msdp description 10.123.200.1 ANYCAST-PEER-4k-RemoteMedCampus
cr23-VSS-Core(config)#ip msdp peer 10.124.200.2 connect-source Loopback0
cr23-VSS-Core(config)#ip msdp description 10.124.200.2 ANYCAST-PEER-4k-RemoteSmlCampus
cr23-VSS-Core(config)#ip msdp cache-sa-state
cr23-VSS-Core(config)#ip msdp originator-id Loopback0

cr23-VSS-Core#show ip msdp peer | inc MSDP Peer|State
MSDP Peer 10.122.200.2 (?), AS ?
    State: Up, Resets: 0, Connection source: Loopback0 (10.125.200.254)
MSDP Peer 10.123.200.1 (?), AS ?
    State: Up, Resets: 0, Connection source: Loopback0 (10.125.200.254)
MSDP Peer 10.124.200.2 (?), AS ?
    State: Up, Resets: 0, Connection source: Loopback0 (10.125.200.254)

Remote Large Campus

cr14-6500-RLC(config)#ip msdp peer 10.125.200.254 connect-source Loopback0
cr14-6500-RLC(config)#ip msdp description 10.125.200.254 ANYCAST-PEER-6k-MainCampus
cr14-6500-RLC(config)#ip msdp cache-sa-state
cr14-6500-RLC(config)#ip msdp originator-id Loopback0

cr14-6500-RLC#show ip msdp peer | inc MSDP Peer|State|SAs learned
MSDP Peer 10.125.200.254 (?), AS ?
    State: Up, Resets: 0, Connection source: Loopback0 (10.122.200.2)
    SAs learned from this peer: 94
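For these MSDP sessions to provide Anycast RP redundancy, each RP must also carry the shared RP address (10.100.100.100/32, per Figure 45) on a dedicated loopback, in addition to the unique Loopback0 address used for the MSDP peering itself. The following is a minimal sketch of that piece on the main site core; the loopback number is an assumption and is not taken from the validated configuration:

```
cr23-VSS-Core(config)#interface Loopback1
cr23-VSS-Core(config-if)# description Anycast-RP-Address
cr23-VSS-Core(config-if)# ip address 10.100.100.100 255.255.255.255
cr23-VSS-Core(config-if)# ip pim sparse-mode
cr23-VSS-Core(config-if)# exit
cr23-VSS-Core(config)#ip pim rp-address 10.100.100.100
```

The same shared address would be configured on the RP at each remote site, so that receivers and sources always register with the topologically closest RP.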
Remote Medium Campus

cr11-4507-RMC(config)#ip msdp peer 10.125.200.254 connect-source Loopback0
cr11-4507-RMC(config)#ip msdp description 10.125.200.254 ANYCAST-PEER-6k-MainCampus
cr11-4507-RMC(config)#ip msdp cache-sa-state
cr11-4507-RMC(config)#ip msdp originator-id Loopback0

cr11-4507-RMC#show ip msdp peer | inc MSDP Peer|State|SAs learned
MSDP Peer 10.125.200.254 (?), AS ?
    State: Up, Resets: 0, Connection source: Loopback0 (10.123.200.1)
    SAs learned from this peer: 94

Remote Small Campus

cr14-4507-RSC(config)#ip msdp peer 10.125.200.254 connect-source Loopback0
cr14-4507-RSC(config)#ip msdp description 10.125.200.254 ANYCAST-PEER-6k-MainCampus
cr14-4507-RSC(config)#ip msdp cache-sa-state
cr14-4507-RSC(config)#ip msdp originator-id Loopback0

cr14-4507-RSC#show ip msdp peer | inc MSDP Peer|State|SAs learned
MSDP Peer 10.125.200.254 (?), AS ?
    State: Up, Resets: 0, Connection source: Loopback0 (10.124.200.2)
    SAs learned from this peer: 94

Dynamic Group Membership

Multicast receiver registration is done via IGMP protocol signaling. IGMP is an integrated
component of the IP multicast framework that allows receiver hosts and transmitting
sources to be dynamically added to and removed from the network. Without IGMP, the
network is forced to flood rather than multicast the transmissions for each group. IGMP
operates between a multicast receiver host in the access layer and the Layer 3 router at
the distribution layer.
The multicast system role changes when the access layer is deployed in the multilayer
and routed access models. Because multilayer access switches do not run PIM, it
becomes complex to make forwarding decisions out of the receiver port. In such a
situation, Layer 2 access switches flood the traffic on all ports. This multilayer limitation in
access switches is solved by the IGMP snooping feature, which is enabled by default and
should not be disabled.
IGMP is still required when a Layer 3 access layer switch is deployed in the routed access
network design. Because the Layer 3 boundary is pushed down to the access layer, IGMP
communication is limited between a receiver host and the Layer 3 access switch. In
addition to the unicast routing protocol, PIM-SM must be enabled at the Layer 3 access
switch to communicate with the RPs in the network.

Implementing IGMP

By default, the Layer 2 access switch dynamically detects IGMP hosts and
multicast-capable Layer 3 PIM routers in the network. IGMP snooping and multicast
router detection function on a per-VLAN basis and are globally enabled by default for all
VLANs.
The multicast routing function changes when the access switch is deployed in
routed-access mode. PIM operation is performed at the access layer; therefore, the
multicast router detection process is eliminated. The following output from a Layer 3
switch verifies that the local multicast ports are in router mode and that a snooped
Layer 2 uplink port-channel, connected to the collapsed core router, is used for multicast
routing. The IGMP configuration can be validated using the following show commands on
the Layer 2 and Layer 3 access switches:

Layer 2 Access

cr22-3750-LB#show ip igmp snooping groups
Vlan     Group             Type      Version    Port List
-----------------------------------------------------------------------
110      239.192.51.1      igmp      v2         Gi1/0/20, Po1
110      239.192.51.2      igmp      v2         Gi1/0/20, Po1
110      239.192.51.3      igmp      v2         Gi1/0/20, Po1

cr22-3750-LB#show ip igmp snooping mrouter
Vlan     ports
-------  -------
110      Po1(dynamic)

Layer 3 Access

cr22-3560-LB#show ip igmp membership
Channel/Group        Reporter        Uptime    Exp.   Flags  Interface
*,239.192.51.1       10.125.103.106  00:52:36  02:09  2A     Vl110
*,239.192.51.2       10.125.103.107  00:52:36  02:12  2A     Vl110
*,239.192.51.3       10.125.103.109  00:52:35  02:16  2A     Vl110
*,224.0.1.40         10.125.0.4      3d22h     02:04  2A     Po1
*,224.0.1.40         10.125.101.129  4w4d      02:33  2LA    Vl103
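The Layer 3 access outputs above assume that multicast routing and PIM-SM have already been enabled on the routed-access switch, as the routed access design requires. A minimal sketch of that prerequisite configuration is shown below; it is illustrative only (the interface and RP address follow the earlier examples) and is not taken from the validated configuration:

```
cr22-3560-LB(config)#ip multicast-routing distributed
cr22-3560-LB(config)#ip pim rp-address 10.100.100.100
cr22-3560-LB(config)#interface Vlan110
cr22-3560-LB(config-if)# ip pim sparse-mode
```

With PIM sparse mode enabled on the access VLAN interface, the switch itself answers IGMP membership reports and joins the shared tree toward the RP.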
cr22-3560-LB#show ip igmp snooping mrouter
Vlan     ports
-------  -------
103      Router
106      Router
110      Router

Designing Multicast Security

When designing multicast security in the medium enterprise LAN design, two key
concerns are preventing a rogue source and preventing a rogue PIM-RP.

Preventing Rogue Source

In a PIM-SM network, an unwanted traffic source can be controlled with the pim
accept-register command. When the source traffic hits the first-hop router, the first-hop
router (DR) creates the (S,G) state and sends a PIM source register message to the RP. If
the source is not listed in the accept-register filter list (configured on the RP), the RP
rejects the register and sends back an immediate Register-Stop message to the DR. The
drawback with this method of source filtering is that, with the pim accept-register
command on the RP, the PIM-SM (S,G) state is still created on the first-hop router of the
source. This can result in traffic reaching receivers local to the source and located
between the source and the RP. Furthermore, because the pim accept-register command
works on the control plane of the RP, it can be used to overload the RP with fake register
messages and possibly cause a DoS condition.
The following is a sample configuration with a simple ACL that has been applied to the
RP to filter only on the source address. It is also possible to filter the source and the group
using an extended ACL on the RP:

cr23-VSS-Core(config)#ip access-list extended PERMIT-SOURCES
cr23-VSS-Core(config-ext-nacl)# permit ip 10.120.31.0 0.7.0.255 239.192.0.0 0.0.255.255
cr23-VSS-Core(config-ext-nacl)# deny ip any any

cr23-VSS-Core(config)#ip pim accept-register list PERMIT-SOURCES

Preventing Rogue PIM-RP

Like the multicast source, any router can be misconfigured or can maliciously advertise
itself as a multicast RP in the network with a valid multicast group address. With a static
RP configuration, each PIM-enabled router in the network can be configured to use the
static RP for the multicast source and override any other Auto-RP or BSR multicast router
announcement from the network.
The following is a sample configuration that must be applied to each PIM-enabled
router in the campus network to accept PIM announcements only from the static RP and
ignore dynamic multicast group announcements from any other RP:

cr23-VSS-Core(config)#ip access-list standard Allowed_MCAST_Groups
cr23-VSS-Core(config-std-nacl)# permit 224.0.1.39
cr23-VSS-Core(config-std-nacl)# permit 224.0.1.40
cr23-VSS-Core(config-std-nacl)# permit 239.192.0.0 0.0.255.255
cr23-VSS-Core(config-std-nacl)# deny any

cr23-VSS-Core(config)#ip pim rp-address 10.100.100.100 Allowed_MCAST_Groups override

QoS for Application Performance Optimization

The function and guaranteed low-latency bandwidth expectations of network users and
endpoints have evolved significantly over the past few years. Application and device
awareness has become a key tool in providing differentiated service treatment at the
campus LAN edge. Media applications, and particularly video-oriented media
applications, are evolving as enterprise networks enter the digital era of doing business
and face increased campus network and asset security requirements. Integrating video
applications in the medium enterprise LAN network exponentially increases bandwidth
utilization and fundamentally shifts traffic patterns. Business drivers behind this media
application growth include remote learning, as well as leveraging the network as a
platform to build an energy-efficient network to minimize cost and go "green".
High-definition media is transitioning from the desktop to conference rooms, and social
networking phenomena are crossing over into enterprise settings. Besides internal and
enterprise research applications, media applications are fueling a new wave of IP
convergence, requiring the ongoing development of converged network designs.
Converging media applications onto an IP network is much more complex than
converging voice over IP (VoIP) alone. Media applications are generally
bandwidth-intensive and bursty (as compared to VoIP), and many different types of media
applications exist; in addition to IP telephony, applications can include live and
on-demand streaming media applications, digital signage applications, high-definition
room-based conferencing applications, as well as an infinite array of data-oriented
applications. By embracing media applications as the next cycle of convergence,
medium enterprise IT departments can think holistically about their network design and
its readiness to support the coming tidal wave of media applications, and develop a
network-wide strategy to ensure high-quality end-user experiences.
The medium enterprise LAN infrastructure must set administrative policies to provide
differentiated forwarding services to network applications, users, and endpoints to
prevent contention. The characteristics of network services and applications must be well
understood, so that policies can be defined that allow network resources to be used for
internal applications, provide best-effort services for external traffic, and keep the
network protected from threats.
The policy for providing network resources to an internal application is further
complicated when interactive video and real-time VoIP applications are converged over
the same network that is switching mid-to-low priority data traffic. Deploying QoS
technologies in the campus allows different types of traffic to contend inequitably for
network resources. Real-time applications such as voice, interactive video, and physical
security video can be given priority or preferential service over generic data applications,
but not to the point that data applications are starved for bandwidth.
Medium Enterprise LAN QoS Framework

Each group of managed and un-managed applications with unique traffic patterns and
service level requirements requires a dedicated QoS class to provision and guarantee
these service level requirements. The medium enterprise LAN network architect may
need to determine the number of classes for various applications, as well as how these
individual classes should be implemented to deliver differentiated services consistently
in main and remote campus sites. Cisco recommends following relevant industry
standards and guidelines whenever possible, to extend the effectiveness of your QoS
policies beyond your direct administrative control.
With minor changes, the medium enterprise LAN QoS framework is developed based on
RFC 4594, which follows industry standards and guidelines to function consistently in
heterogeneous network environments. These guidelines are to be viewed as industry
best-practice recommendations. Enterprise organizations and service providers are
encouraged to adopt these marking and provisioning recommendations, with the aim of
improving QoS consistency, compatibility, and interoperability. However, because these
guidelines are not standards, modifications can be made to these recommendations as
specific needs or constraints require. To this end, to meet specific business requirements,
Cisco has made a minor modification to its adoption of RFC 4594, namely the switching of
call-signaling and broadcast video markings (to CS3 and CS5, respectively).
RFC 4594 outlines twelve classes of media applications that have unique service level
requirements, as shown in Figure 46.

Figure 46 Campus 12-Class QoS Policy Recommendation

Application Class        Media Application Examples  PHB  Admission Control  Queuing and Dropping
VoIP Telephony           Cisco IP Phone              EF   Required           Priority Queue (PQ)
Broadcast Video          Cisco IPVS, Enterprise TV   CS5  Required           (Optional) PQ
Real-Time Interactive    Cisco TelePresence          CS4  Required           (Optional) PQ
Multimedia Conferencing  Cisco CUPC, WebEx           AF4  Required           BW Queue + DSCP WRED
Multimedia Streaming     Cisco DMS, IP/TV            AF3  Recommended        BW Queue + DSCP WRED
Network Control          EIGRP, OSPF, HSRP, IKE      CS6                     BW Queue
Call-Signaling           SCCP, SIP, H.323            CS3                     BW Queue
Ops/Admin/Mgmt (OAM)     SNMP, SSH, Syslog           CS2                     BW Queue
Transactional Data       ERP Apps, CRM Apps          AF2                     BW Queue + DSCP WRED
Bulk Data                E-mail, FTP, Backup         AF1                     BW Queue + DSCP WRED
Best Effort              Default Class               DF                      Default Queue + RED
Scavenger                YouTube, Gaming, P2P        CS1                     Min BW Queue

The twelve classes are as follows:
• VoIP telephony—This service class is intended for VoIP telephony (bearer-only)
traffic (VoIP signaling traffic is assigned to the call-signaling class). Traffic assigned to
this class should be marked EF. This class is provisioned with expedited forwarding
(EF) per-hop behavior (PHB). The EF PHB, defined in RFC 3246, is a strict-priority
queuing service and, as such, admission to this class should be controlled
(admission control is discussed in the following section). Examples of this type of
traffic include G.711 and G.729a.
• Broadcast video—This service class is intended for broadcast TV, live events, video
surveillance flows, and similar inelastic streaming video flows, which are highly drop
sensitive and have no retransmission and/or flow control capabilities. Traffic in this
class should be marked class selector 5 (CS5) and may be provisioned with an EF
PHB; as such, admission to this class should be controlled. Examples of this traffic
include live Cisco Digital Media System (DMS) streams to desktops or to Cisco Digital
Media Players (DMPs), live Cisco Enterprise TV (ETV) streams, and Cisco IP Video
Surveillance.
• Real-time interactive—This service class is intended for (inelastic) room-based,
high-definition interactive video applications and is intended primarily for the voice
and video components of these applications. Whenever technically possible and
administratively feasible, data sub-components of this class can be separated out
and assigned to the transactional data traffic class. Traffic in this class should be
marked CS4 and may be provisioned with an EF PHB; as such, admission to this class
should be controlled. A sample application is Cisco TelePresence.
• Multimedia conferencing—This service class is intended for desktop software
multimedia collaboration applications and is intended primarily for the voice and
video components of these applications. Whenever technically possible and
administratively feasible, data sub-components of this class can be separated out
and assigned to the transactional data traffic class. Traffic in this class should be
marked assured forwarding (AF) Class 4 (AF41) and should be provisioned with a
guaranteed bandwidth queue with Differentiated Services Code Point (DSCP)-based
Weighted Random Early Detection (WRED) enabled. Admission to this class should
be controlled; additionally, traffic in this class may be subject to policing and
re-marking. Sample applications include Cisco Unified Personal Communicator,
Cisco Unified Video Advantage, and the Cisco Unified IP Phone 7985G.
• Multimedia streaming—This service class is intended for video-on-demand (VoD)
streaming video flows, which, in general, are more elastic than broadcast/live
streaming flows. Traffic in this class should be marked AF Class 3 (AF31) and should
be provisioned with a guaranteed bandwidth queue with DSCP-based WRED
enabled. Admission control is recommended on this traffic class (though not strictly
required), and this class may be subject to policing and re-marking. Sample
applications include Cisco Digital Media System VoD streams.
• Network control—This service class is intended for network control plane traffic,
which is required for reliable operation of the enterprise network. Traffic in this class
should be marked CS6 and provisioned with a (moderate, but dedicated) guaranteed
bandwidth queue. WRED should not be enabled on this class, because network
control traffic should not be dropped (if this class is experiencing drops, the
bandwidth allocated to it should be re-provisioned). Sample traffic includes EIGRP,
OSPF, Border Gateway Protocol (BGP), HSRP, Internet Key Exchange (IKE), and so on.
• Call-signaling—This service class is intended for signaling traffic that supports IP
voice and video telephony. Traffic in this class should be marked CS3 and
provisioned with a (moderate, but dedicated) guaranteed bandwidth queue. WRED
should not be enabled on this class, because call-signaling traffic should not be
dropped (if this class is experiencing drops, the bandwidth allocated to it should be
re-provisioned). Sample traffic includes Skinny Call Control Protocol (SCCP), Session
Initiation Protocol (SIP), H.323, and so on.
• Operations/administration/management (OAM)—This service class is intended for
network operations, administration, and management traffic. This class is critical to
the ongoing maintenance and support of the network. Traffic in this class should be
marked CS2 and provisioned with a (moderate, but dedicated) guaranteed
bandwidth queue. WRED should not be enabled on this class, because OAM traffic
should not be dropped (if this class is experiencing drops, the bandwidth allocated to
it should be re-provisioned). Sample traffic includes Secure Shell (SSH), Simple
Network Management Protocol (SNMP), Syslog, and so on.
• Transactional data (or low-latency data)—This service class is intended for
interactive, "foreground" data applications (foreground refers to applications from
which users expect a response via the network to continue with their tasks;
excessive latency directly impacts user productivity). Traffic in this class should be
marked AF Class 2 (AF21) and should be provisioned with a dedicated bandwidth
queue with DSCP-WRED enabled. This traffic class may be subject to policing and
re-marking. Sample applications include data components of multimedia
collaboration applications, Enterprise Resource Planning (ERP) applications,
Customer Relationship Management (CRM) applications, database applications, and
so on.
• Bulk data (or high-throughput data)—This service class is intended for
non-interactive "background" data applications (background refers to applications
from which users are not awaiting a response via the network to continue with their
tasks; excessive latency in response times of background applications does not
directly impact user productivity). Traffic in this class should be marked AF Class 1
(AF11) and should be provisioned with a dedicated bandwidth queue with
DSCP-WRED enabled. This traffic class may be subject to policing and re-marking.
Sample applications include E-mail, backup operations, FTP/SFTP transfers, video
and content distribution, and so on.
• Best effort (or default class)—This service class is the default class. The vast majority
of applications will continue to default to this best-effort service class; as such, this
default class should be adequately provisioned. Traffic in this class is marked default
forwarding (DF or DSCP 0) and should be provisioned with a dedicated queue. WRED
is recommended to be enabled on this class.
• Scavenger (or low-priority data)—This service class is intended for
non-business-related traffic flows, such as data or video applications that are
entertainment and/or gaming-oriented. The approach of a less-than-best-effort
service class for non-business applications (as opposed to shutting these down
entirely) has proven to be a popular political compromise. These applications are
permitted on enterprise networks as long as resources are always available for
business-critical voice, video, and data applications. However, as soon as the
network experiences congestion, this class is the first to be penalized and
aggressively dropped. Traffic in this class should be marked CS1 and should be
provisioned with a minimal bandwidth queue that is the first to starve should network
congestion occur. Sample traffic includes YouTube, Xbox Live/360 movies, iTunes,
BitTorrent, and so on.

Designing Medium Enterprise LAN QoS Trust Boundary and Policies

To build an end-to-end QoS framework that offers transparent and consistent QoS
service without compromising performance, it is important to create a blueprint of the
network, classifying a set of trusted applications, devices, and forwarding paths, and then
define common QoS policy settings independent of how QoS is implemented within the
system.
QoS settings applied at the LAN network edge set the ingress rules based on deep
packet classification and mark the traffic before it is forwarded inside the campus core.
To retain the markings set by access layer switches, it is important that the other LAN
network devices in the campus trust the marking and apply the same policy to retain the
QoS settings and offer symmetric treatment. Bi-directional network communication
between applications, endpoints, or other network devices requires the same treatment
when traffic enters or leaves the network, and must be taken into account when designing
the trust model between network endpoints and the core and edge campus devices.
The trust or un-trust model simplifies the rules for defining bi-directional QoS policy
settings. Figure 47 shows the QoS trust model setting that sets the QoS implementation
guidelines in medium enterprise campus networks.

Figure 47 Campus LAN QoS Trust and Policies

(Figure placeholder: the QoS trust boundary sits at the access layer, facing trusted,
conditional-trusted, or un-trusted endpoints. The ingress QoS policy at the access edge
performs trust, classification, marking, policing, and queuing; the distribution and core
layers trust the markings and apply queuing and WTD as the egress QoS policy.)

Medium Enterprise LAN QoS Overview

With an overall application strategy in place, end-to-end QoS policies can be designed
for each device and interface, as determined by their roles in the network infrastructure.
However, because the Cisco QoS toolset provides many QoS design and deployment
options, a few succinct design principles can help simplify strategic QoS deployments,
as discussed in the following sections.

Hardware versus Software QoS

A fundamental QoS design principle is to always enable QoS policies in hardware rather
than software whenever possible. Cisco IOS routers perform QoS in software, which
places incremental loads on the CPU, depending on the complexity and functionality of
the policy. Cisco Catalyst switches, on the other hand, perform QoS in dedicated
hardware application-specific integrated circuits (ASICs) on Ethernet-based ports, and as
such do not tax their main CPUs to administer QoS policies. This allows complex policies
to be applied at line rates even up to Gigabit or 10-Gigabit speeds.

Classification and Marking

When classifying and marking traffic, a recommended design principle is to classify and
mark applications as close to their sources as technically and administratively feasible.
This principle promotes end-to-end differentiated services and PHBs.
In general, it is not recommended to trust markings that can be set by users on their PCs
or other similar devices, because users can easily abuse provisioned QoS policies if
permitted to mark their own traffic. For example, if an EF PHB has been provisioned over
the network, a PC user can easily configure all their traffic to be marked to EF, thus
hijacking network priority queues to service non-realtime traffic. Such abuse can easily
ruin the service quality of realtime applications throughout the campus. On the other
hand, if medium enterprise network administrator controls are in place that centrally
administer PC QoS markings, it may be possible and advantageous to trust these.
Following this rule, it is recommended to use DSCP markings whenever possible,
because these are end-to-end, more granular, and more extensible than Layer 2
markings. Layer 2 markings are lost when the media changes (such as a
LAN-to-WAN/VPN edge). There is also less marking granularity at Layer 2. For example,
802.1P supports only three bits (values 0-7), as does Multiprotocol Label Switching
Experimental (MPLS EXP). Therefore, only up to eight classes of traffic can be supported
at Layer 2, and inter-class relative priority (such as RFC 2597 Assured Forwarding Drop
Preference markdown) is not supported. Layer 3-based DSCP markings allow for up to 64
classes of traffic, which provides more flexibility and is adequate in large-scale
deployments and for future requirements.
As the network border blurs between the enterprise network and service providers, the
need for interoperability and complementary QoS markings is critical. Cisco recommends
following the IETF standards-based DSCP PHB markings to ensure interoperability and
future expansion. Because the medium enterprise voice, video, and data application
marking recommendations are standards-based, as previously discussed, medium
enterprises can easily adopt these markings to interface with service provider classes of
service.

Policing and Markdown

There is little reason to forward unwanted traffic that gets policed and dropped by a
subsequent tier node, especially when unwanted traffic is the result of DoS or worm
attacks in the enterprise network. Excessive volume attack traffic can destabilize network
systems, which can result in outages. Cisco recommends policing traffic flows as close to
their sources as possible. This principle applies also to legitimate flows, because
worm-generated traffic can masquerade under legitimate, well-known TCP/UDP ports
and cause extreme amounts of traffic to be poured into the network infrastructure. Such
excesses should be monitored at the source and marked down appropriately.
Whenever supported, markdown should be done according to standards-based rules,
such as RFC 2597 (AF PHB). For example, excess traffic marked to AFx1 should be
marked down to AFx2 (or AFx3 whenever dual-rate policing such as defined in RFC 2698
is supported). Following such markdowns, congestion management policies, such as
DSCP-based WRED, should be configured to drop AFx3 more aggressively than AFx2,
which in turn should be dropped more aggressively than AFx1.

Queuing and Dropping

Critical media applications require uncompromised performance and service
guarantees regardless of network conditions. Enabling outbound queuing in each
network tier provides end-to-end service guarantees during potential network
congestion. This common principle applies to campus-to-WAN/Internet edges, where
speed mismatches are most pronounced, and to campus interswitch links, where
oversubscription ratios create the greater potential for network congestion.
Because each application class has unique service level requirements, each should
optimally be assigned a dedicated queue. A wide range of platforms in varying roles exist
in medium enterprise networks, so each must be bounded by a limited number of
hardware or service provider queues. No fewer than four queues are required to support
QoS policies for various types of applications, specifically as follows:
• Realtime queue (to support an RFC 3246 EF PHB service)
• Guaranteed-bandwidth queue (to support RFC 2597 AF PHB services)
• Default queue (to support an RFC 2474 DF service)
• Bandwidth-constrained queue (to support an RFC 3662 scavenger service)
Additional queuing recommendations for these classes are discussed next.

Strict-Priority Queuing

The realtime or strict-priority class corresponds to the RFC 3246 EF PHB. The amount of
bandwidth assigned to the realtime queuing class is variable. However, if the majority of
bandwidth is provisioned with strict-priority queuing (which is effectively a FIFO queue),
the overall effect is a dampening of QoS functionality, both for latency- and jitter-sensitive
realtime applications (contending with each other within the FIFO priority queue) and for
non-realtime applications (because these may periodically receive significant bandwidth
allocation fluctuations, depending on the instantaneous amount of traffic being serviced
by the priority queue). Remember that the goal of convergence is to enable voice, video,
and data applications to transparently co-exist on a single medium enterprise network
infrastructure. When realtime applications dominate a link, non-realtime applications
fluctuate significantly in their response times, destroying the transparency of the
converged network.
For example, consider a 45 Mbps DS3 link configured to support two Cisco TelePresence
CTS-3000 calls with an EF PHB service. Assuming that both systems are configured to
support full high definition, each such call requires 15 Mbps of strict-priority queuing.
Before the TelePresence calls are placed, non-realtime applications have access to 100
percent of the bandwidth on the link; to simplify the example, assume there are no other
realtime applications on this link. However, after these TelePresence calls are
established, all non-realtime applications are suddenly contending for less than 33
percent of the link. TCP windowing takes effect and many applications hang, timeout, or
become stuck in a non-responsive state, which usually translates into users calling the IT
help desk to complain about the network (which happens to be functioning properly,
albeit in a poorly-configured manner).
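The one-third guideline implied by this example can be expressed with an MQC low-latency queue at the WAN edge. The following sketch (the hostname, class and policy names, and interface are illustrative assumptions, not part of the validated design) caps strict-priority traffic at 15 Mbps, one-third of the 45 Mbps DS3:

```
cr-WANEdge(config)#class-map match-any REALTIME
cr-WANEdge(config-cmap)# match dscp ef
cr-WANEdge(config-cmap)# match dscp cs4
cr-WANEdge(config)#policy-map DS3-EDGE-QUEUING
cr-WANEdge(config-pmap)# class REALTIME
cr-WANEdge(config-pmap-c)# priority 15000
cr-WANEdge(config-pmap)# class class-default
cr-WANEdge(config-pmap-c)# fair-queue
cr-WANEdge(config)#interface Serial3/0
cr-WANEdge(config-if)# service-policy output DS3-EDGE-QUEUING
```

With this policy, strict-priority traffic cannot consume more than one-third of the link even when the offered realtime load is higher, which preserves predictable response times for the non-realtime classes.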
Note As previously discussed, Cisco IOS software allows the abstraction (and thus
configuration) of multiple strict-priority LLQs. In such a multiple-LLQ context, this
design principle applies to the sum of all LLQs, which should be within one-third of
link capacity.

It is vitally important to understand that this strict-priority queuing rule is simply a best
practice design recommendation and is not a mandate. There may be cases where
specific business objectives cannot be met while holding to this recommendation. In
such cases, the medium enterprise network administrator must provision according to
their detailed requirements and constraints. However, it is important to recognize the
tradeoffs involved with over-provisioning strict-priority traffic and its negative
performance impact, both on other realtime flows and on non-realtime-application
response times.
And finally, any traffic assigned to a strict-priority queue should be governed by an
admission control mechanism.

Best Effort Queuing

The best effort class is the default class for all traffic that has not been explicitly assigned
to another application-class queue. Only if an application has been selected for
preferential/deferential treatment is it removed from the default class. Because most
medium enterprises may have several types of applications running in their networks,
adequate bandwidth must be provisioned for this class as a whole to handle the number
and volume of applications that default to it. Therefore, Cisco recommends reserving at
least 25 percent of link bandwidth for the default best effort class.

Scavenger Class Queuing

Whenever the scavenger queuing class is enabled, it should be assigned a minimal
amount of link bandwidth capacity, such as 1 percent, or whatever minimal bandwidth
allocation the platform supports. On some platforms, queuing distinctions between bulk
data and scavenger traffic flows cannot be made, either because queuing assignments
are determined by class of service (CoS) values (and both of these application classes
share the same CoS value of 1), or because only a limited number of hardware queues
exist, precluding the use of separate dedicated queues for each of these two classes. In
such cases, the scavenger/bulk queue can be assigned a moderate amount of
bandwidth, such as 5 percent.
These queuing rules are summarized in Figure 48, where the inner pie chart represents a
hardware or service provider queuing model that is limited to four queues and the outer
pie chart represents a corresponding, more granular queuing model that is not bound by
such constraints.

Figure 48 Compatible 4-Class and 12-Class Queuing Models

(Figure placeholder: the inner four-queue model consists of realtime,
guaranteed-bandwidth, best effort, and scavenger queues. The outer twelve-class model
maps VoIP telephony, broadcast video, and real-time interactive to the realtime queue;
multimedia conferencing, multimedia streaming, network control, signaling, OAM,
transactional data, and bulk data to the guaranteed-bandwidth queue; best effort to the
best effort queue; and scavenger to the scavenger queue.)

Deploying QoS in Campus LAN Network

All Layer 2 and Layer 3 systems in IP-based networks forward traffic on a best-effort
basis, providing no differentiated services between different classes of network
applications. The routing protocol forwards packets over the best low-metric or
low-delay path, but offers no guarantee of delivery. This model works well for TCP-based
data applications that adapt gracefully to variations in latency, jitter, and loss. The
medium enterprise LAN and WAN is a multi-service network designed to support a wide
range of low-latency voice and high-bandwidth video along with critical and non-critical
data traffic over a single network infrastructure. For an optimal user experience, real-time
applications (such as voice and video) require packets to be delivered within specified
loss, delay, and jitter parameters. Cisco quality of service (QoS) is a collection of features
and hardware capabilities that allow the network to intelligently dedicate network
resources to higher-priority real-time applications while reserving sufficient network
resources to service medium- to lower-priority non-real-time traffic. QoS accomplishes
this by creating a more application-aware Layer 2 and Layer 3 network that provides
differentiated services to network applications and traffic. For a detailed discussion of
QoS, refer to the Enterprise QoS Design Guide at the following URL:
http://www.cisco.com/en/US/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND/Q
oS-SRND-Book.html
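To make the queuing guidelines above concrete, the egress queue shares on a Catalyst 3750-X access port can be tuned roughly as follows. This is a minimal sketch only; the interface name and the exact SRR share weights are illustrative assumptions, not values mandated by this guide:

```
cr22-3750-LB(config)#interface GigabitEthernet1/0/1
cr22-3750-LB(config-if)# priority-queue out
cr22-3750-LB(config-if)# srr-queue bandwidth share 1 30 35 5
```

Here queue 1 is serviced as strict priority for realtime traffic (its share weight is ignored while `priority-queue out` is enabled), queue 2 approximates the guaranteed-bandwidth classes, queue 3 carries the best effort class with more than the recommended 25 percent minimum, and queue 4 gives the scavenger/bulk queue a small residual share. The DSCP-to-output-queue mapping that places each class into these queues is configured separately (via `mls qos srr-queue output dscp-map`).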

While the QoS design principles across the network are common, the QoS implementation in hardware- and software-based switching platforms varies due to internal system design. This section discusses the internal switching architecture and the differentiated QoS structure on a per-hop basis.

QoS in Catalyst Fixed Configuration Switches

The QoS implementation in the Cisco Catalyst 2960, 3560-X, and 3750-X Series switches is similar across the platforms. There is no difference in the ingress or egress packet classification, marking, queuing, and scheduling implementation among these Catalyst platforms. The Cisco Catalyst switches allow users to create policy-maps by classifying incoming traffic (Layer 2 to Layer 4), and then attaching the policy-map to an individual physical port or to logical interfaces (SVI or port-channel). This creates a common QoS policy that may be used in multiple networks. To prevent switch fabric and egress physical port congestion, the ingress QoS policing structure can strictly filter excessive traffic at the network edge. All ingress traffic from edge ports passes through the switch fabric and moves to the egress ports, where congestion may occur. Congestion in access-layer switches can be prevented by tuning the queuing scheduler and Weighted Tail Drop (WTD) parameters. See Figure 49.

Figure 49 QoS Implementation in Cisco Catalyst Switches

[Figure 49 shows the ingress and egress QoS paths: Classify, Policer, and Marker functions feed two ingress queues (Normal-Q and Priority-Q) serviced by SRR onto the internal ring; on egress, four queues (Q1 through Q4) are serviced by SRR toward the transmit port.]

The main difference between these platforms is the switching capacity, which ranges from 1G to 10G. The switching architecture and some of the internal QoS structure also differ between these switches. The following are some important differences to consider when selecting an access switch:
• The Cisco Catalyst 2960 does not support multilayer switching and does not support per-VLAN or per-port/per-VLAN policies.
• The Cisco Catalyst 2960 can police to a minimum rate of 1 Mbps; all other switches in this product family, including the next-generation Cisco Catalyst 2960-S Series, can police to a minimum rate of 8 kbps.
• Only the Cisco Catalyst 3560-X and 3750-X support IPv6 QoS.
• Only the Cisco Catalyst 3560-X and 3750-X support policing on 10-Gigabit Ethernet interfaces.
• Only the Cisco Catalyst 3560-X and 3750-X support SRR shaping weights on 10-Gigabit Ethernet interfaces.

The next-generation Cisco Catalyst 2960-S Series platform introduces a modified QoS architecture. To reduce latency and improve application performance, the new 2960-S platform does not support the ingress queuing and buffering function in hardware. All other ingress and egress queuing, buffering, and bandwidth-sharing functions remain consistent with the Catalyst 2960 platform. Each physical port, including the StackPort, has 2 MB of buffer capacity to prevent traffic drops during congestion. This buffer allocation is static and cannot be modified by the user. However, when the Catalyst 2960-S is deployed in FlexStack configuration mode, there is flexibility to assign a different buffer size on the egress queue of the StackPort. Figure 50 illustrates the QoS architecture on the Catalyst 2960-S Series platform.

Figure 50 QoS Implementation in Catalyst 2960-S Switches

[Figure 50 shows the 2960-S QoS path: Classify, Policer, and Marker functions on ingress with no ingress queuing, and four egress queues (Q1 through Q4) serviced by SRR toward the transmit port.]

QoS in Cisco Modular Switches

The Cisco Catalyst 4500-E and 6500-E are high-density, resilient switches for large-scale networks. The medium enterprise LAN network design uses both platforms across the network; therefore, all the QoS recommendations in this section for these platforms remain consistent. Both Catalyst platforms are modular in design; however, there are significant internal hardware architecture differences between the two platforms that impact the QoS implementation model.

Catalyst 4500-E QoS

The Cisco Catalyst 4500-E Series platforms are widely deployed with classic and next-generation supervisors. This design guide recommends deploying the next-generation Sup6E and Sup6L-E supervisors, which offer a number of technical benefits beyond QoS.

The Cisco Catalyst 4500 with the next-generation Sup6E and Sup6L-E (see Figure 51) is designed to offer better differentiated and preferential QoS services for various classes of traffic. New QoS capabilities in the Sup6E and Sup6L-E enable administrators to take advantage of hardware-based intelligent classification and to take action to optimize application performance and network availability. The QoS implementation in the Sup6E and Sup6L-E supports the Modular QoS CLI (MQC) as implemented in IOS-based routers, which enhances QoS capabilities and eases implementation and operations. The following are some of the key QoS features that differentiate the Sup6E and Sup6L-E from classic supervisors:
• Trust and Table-Map—MQC-based QoS implementation offers a number of implementation and operational benefits over classic supervisors, which rely on the Trust model and an internal table-map as the tools to classify and mark ingress traffic.
• Internal DSCP—Queue placement in the Sup6E and Sup6L-E is simplified by leveraging MQC capabilities to explicitly map DSCP or CoS traffic to a hard-coded egress queue structure. For example, DSCP 46 can be classified with an ACL and matched in the PQ class-map of an MQC policy in the Sup6E and Sup6L-E.

• Sequential vs. Parallel Classification—With MQC-based QoS classification, the Sup6E and Sup6L-E provide sequential rather than parallel classification. The sequential classification method allows network administrators to classify traffic at egress based on the ingress markings.

Figure 51 Catalyst 4500—Supervisor 6-E and 6L-E QoS Architecture

[Figure 51 shows the Sup6E/Sup6L-E QoS model: ingress QoS (port trust state derived from CoS, IP Precedence, or DSCP; classification; policing; unconditional marking) ahead of the forwarding lookup, and egress QoS (classification, policing, unconditional marking) feeding eight egress queues (Q1 through Q8) with WRR/PQ scheduling rules and DBL-based queuing/shaping.]

Catalyst 6500-E QoS

The Cisco Catalyst 6500-E Series are enterprise-class switches with next-generation hardware and software capabilities designed to deliver innovative, secure, converged network services regardless of their place in the network. The Cisco Catalyst 6500-E can be deployed as a service node in the campus network to offer high-performance, robust, and intelligent application and network awareness services. The Catalyst 6500-E provides leading-edge Layer 2-Layer 7 services, including rich high availability, manageability, virtualization, security, and QoS feature sets, as well as integrated Power-over-Ethernet (PoE), allowing for maximum flexibility in virtually any role within the campus.

Depending on the network services and application demands placed on the Cisco Catalyst 6500-E, the platform can be deployed with different types of supervisor modules—Sup720-10GE, Sup720, and Sup32. This design guide uses the Sup720-10GE supervisor, which is built with next-generation hardware that allows administrators to build virtual network systems in the enterprise LAN network. These supervisors leverage various daughter cards, including the Multilayer Switch Feature Card (MSFC) that serves as the routing engine, the Policy Feature Card (PFC) that serves as the primary QoS engine, and various Distributed Feature Cards (DFCs) that scale policies and processing. Specifically relating to QoS, the PFC sends a copy of the QoS policies to the DFC to provide local support for the QoS policies, which enables the DFCs to support the same QoS features that the PFC supports. Since Cisco VSS is designed with a distributed forwarding architecture, the PFC and DFC functions are enabled and active on both the active and hot-standby virtual switch nodes. Figure 52 shows the internal PFC-based QoS architecture.

Figure 52 Cisco Catalyst 6500-E PFC QoS Architecture

[Figure 52 shows the PFC QoS path: the ingress port applies trust (DSCP, IP Precedence, MPLS EXP) with configurable scheduler queues and thresholds; the PFC/DFC identifies traffic based on match criteria (Layer 2/IP ACL, DSCP, IP Precedence, MPLS EXP, class-map), applies policy-map actions (trust, mark to internal DSCP, police/rate limit/drop), and derives a final internal DSCP that is mapped to CoS; the egress port scheduler (WRR, DWRR, SP) queues the packet, rewrites ToS for IP packets, and sets CoS on trunk ports.]

Deploying Access-Layer QoS

The campus access switches provide the entry point to the network for various types of end devices managed by the medium enterprise IT department, as well as for employees' personal devices (laptops, etc.). The access switch must decide whether to accept the QoS markings from each endpoint, or whether to change them. This is determined by the QoS policies and by the trust model with which the endpoint is deployed.

QoS Trust Boundary

QoS needs to be designed and implemented considering the entire network. This includes defining trust points and determining which policies to enforce at each device within the network. Developing the trust model guides the policy implementation for each device.

The devices (routers, switches, WLC) within the internal network boundary are managed by the system administrator, and hence are classified as trusted devices. Access-layer switches communicate with devices that are beyond the network boundary as well as within the internal network domain. The QoS trust boundary at the access layer therefore touches various devices that could be deployed in different trust models (trusted, conditionally-trusted, or untrusted). Figure 53 illustrates several types of devices in the network edge.

Figure 53 Campus LAN QoS Trust Boundary

[Figure 53 shows a Catalyst 3560 PoE-48 access switch with three classes of attached endpoints: trusted devices (secured PC, printer, Cisco TelePresence, Cisco wireless access point, Cisco IP video surveillance camera, Cisco UC video server), a conditionally-trusted device (Cisco IP Phone with an untrusted PC behind it), and untrusted devices (unsecured PC, mobile PC).]

The enterprise network administrator must identify and classify each of these device types into one of three trust models, each with its own unique security and QoS policies for accessing the network:
• Untrusted—An unmanaged device that does not pass through the network security policies; for example, an employee-owned PC or a network printer. Packets with 802.1p or DSCP markings set by untrusted endpoints are reset to default by the access-layer switch at the edge. Otherwise, it is possible for an unsecured user to take away network bandwidth, which may impact network availability and security for other users.
• Trusted—Devices that pass through network access security policies and are managed by the network administrator. Even when these devices are maintained and secured by the network administrator, QoS policies must still be enforced to classify traffic and assign it to the appropriate queue to provide bandwidth assurance and proper treatment during network congestion.
• Conditionally-trusted—A single physical connection with one trusted endpoint and an indirect untrusted endpoint must be deployed with the conditionally-trusted model. The trusted endpoint is still managed by the network administrator, but it is possible that the untrusted user behind the endpoint may or may not be secure (for example, a Cisco Unified IP Phone plus PC). These deployment scenarios require a hybrid QoS policy that intelligently distinguishes between, and applies different QoS policy to, the trusted and untrusted endpoints connected to the same port.

The ingress QoS policy at the access switches needs to be established, since this is the trust boundary where traffic enters the network. The following ingress QoS techniques are applied to provide appropriate service treatment and prevent network congestion:
• Trust—After classifying the endpoint, the trust settings must be explicitly set by the network administrator. By default, Catalyst switches set each port to untrusted mode when QoS is enabled.
• Classification—An IETF standard defines a set of application classes and provides recommended DSCP settings. This classification determines the priority that traffic receives in the network. Using the IETF standard simplifies the classification process and improves application and network performance.
• Policing—To prevent network congestion, the access-layer switch limits the amount of inbound traffic up to its maximum setting. Additional policing can be applied to well-known applications to ensure that the bandwidth of an egress queue is not completely consumed by a single application.
• Marking—Based on the trust model, classification, and policer settings, the QoS marking is set at the edge before approved traffic enters the access-layer switching fabric. Marking traffic with the appropriate DSCP value is important to ensure that traffic is mapped to the appropriate internal queue and treated with the appropriate priority.
• Queuing—To provide differentiated services internally in the Catalyst 29xx and 3xxx switching fabric, all approved traffic is queued into a priority or non-priority ingress queue. The ingress queuing architecture ensures that realtime applications, such as VoIP traffic, are given appropriate priority (e.g., transmitted before data traffic).

Enabling QoS

By default, QoS is disabled on all Catalyst 29xx and 3xxx Series switches and must be explicitly enabled in global configuration mode. The QoS configuration is the same for a multilayer or routed-access deployment. The following sample QoS configuration must be enabled on all the access-layer switches deployed in the campus LAN network.

Access-Layer 29xx and 3xxx (Multilayer or Routed Access)

cr24-2960-S-LB(config)#mls qos

cr24-2960-S-LB#show mls qos
QoS is enabled
QoS ip packet dscp rewrite is enabled

Note: The QoS function on the Catalyst 4500-E with Sup6E and Sup6L-E is enabled with the policy-map attached to the port and does not require any additional global configuration.

Upon enabling QoS in the Catalyst switches, all physical ports are assigned untrusted mode. The network administrator must explicitly enable the trust settings on the physical ports where trusted or conditionally-trusted endpoints are connected. The Catalyst switches can trust ingress packets based on 802.1p (CoS-based), ToS (IP Precedence-based), or DSCP (DSCP-based) values. The best practice is to deploy DSCP-based trust mode on all trusted and conditionally-trusted endpoints, because it offers a higher level of classification and marking granularity than the other methods. The following sample DSCP-based trust configuration must be enabled on the access-switch ports connecting to trusted or conditionally-trusted endpoints.

QoS Trust Mode (Multilayer or Routed-Access)

Trusted Port
• 29xx and 3xxx (Multilayer or Routed Access)

cr22-3560-LB(config)#interface GigabitEthernet0/5
cr22-3560-LB(config-if)# description CONNECTED TO IPVS 2500 - CAMERA
cr22-3560-LB(config-if)# mls qos trust dscp

cr22-3560-LB#show mls qos interface Gi0/5
GigabitEthernet0/5
trust state: trust dscp
trust mode: trust dscp

trust enabled flag: ena
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
qos mode: port-based

• 4500-E-Sup6LE (Multilayer or Routed Access)
By default, all Sup6E and Sup6L-E ports are in trusted mode. This configuration leverages the internal DSCP mapping table to automatically classify the QoS bit settings of incoming traffic and place the traffic in the appropriate queue based on the mapping table. To apply an appropriate network policy, the default settings must be modified by implementing an ingress QoS policy-map. Refer to the "Implementing Ingress QoS Policing" section for further details.

Conditionally-Trusted Port

cr22-3560-LB(config)#interface Gi0/4
cr22-3560-LB(config-if)# description CONNECTED TO PHONE+PC
cr22-3560-LB(config-if)# mls qos trust device cisco-phone
cr22-3560-LB(config-if)# mls qos trust dscp

cr22-3560-LB#show mls qos interface Gi0/4
GigabitEthernet0/4
trust state: not trusted
trust mode: trust dscp
trust enabled flag: dis
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: cisco-phone
qos mode: port-based

• 4500-E-Sup6LE (Multilayer or Routed Access)

cr22-4507-LB(config)#interface GigabitEthernet3/3
cr22-4507-LB(config-if)# qos trust device cisco-phone

cr22-4507-LB#show qos interface Gig3/3
Operational Port Trust State: Trusted
Trust device: cisco-phone
Default DSCP: 0 Default CoS: 0
Appliance trust: none

UnTrusted Port

As described earlier, the default trust mode is untrusted when the QoS function is globally enabled. Without an explicit trust configuration on the Gi0/1 port, the following show command verifies the current trust state and mode:
• 29xx and 3xxx (Multilayer or Routed Access)

cr22-3560-LB#show mls qos interface Gi0/1
GigabitEthernet0/1
trust state: not trusted
trust mode: not trusted
trust enabled flag: ena
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
qos mode: port-based

• 4500-E-Sup6LE (Multilayer or Routed Access)
The QoS trust function on the Cisco Catalyst 4500-E with Sup6E and Sup6L-E is enabled by default and must be modified with the policy-map attached to the port.

cr22-4507-LB#show qos interface GigabitEthernet3/1
Operational Port Trust State: Trusted
Trust device: none
Default DSCP: 0 Default CoS: 0
Appliance trust: none

Implementing Ingress QoS Classification

When creating QoS classification policies, the network administrator needs to consider what applications are present at the access edge (in the ingress direction) and whether these applications are sourced from trusted or untrusted endpoints. If PC endpoints are secured and centrally administered, then endpoint PCs may be considered trusted endpoints. In most deployments this is not the case, so PCs are considered untrusted endpoints for the remainder of this document.

Not every application class, as defined in the Cisco-modified RFC 4594-based model, is present in the ingress direction at the access edge; therefore, it is not necessary to provision the following application classes at the access layer:
• Network Control—It is assumed that the access-layer switch will not transmit or receive network control traffic from endpoints; hence this class is not implemented.
• Broadcast Video—Broadcast video and multimedia streaming servers can be distributed across the campus network. Live video feeds broadcast over multicast streams must originate from trusted, distributed data center servers.
• Operation, Administration and Management—Primarily generated by network devices (routers, switches) and collected by management stations, which are typically deployed in the trusted data center network or a network control center.
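Several of the ingress policers shown later in this section use `exceed-action policed-dscp-transmit`, which marks down out-of-contract traffic according to the global policed-DSCP map on the Catalyst 29xx and 3xxx platforms. The following one-line sketch shows such a map consistent with the "Remark to CS1" exceed-action used in this design; the specific DSCP values remarked (0 for best effort, 10 for AF11 bulk data, 18 for AF21 transactional data, marked down to 8/CS1) are an illustrative assumption:

```
cr22-3750-LB(config)#mls qos map policed-dscp 0 10 18 to 8
```

With this map in place, best effort, bulk data, and transactional data traffic that exceeds its policing rate is remarked to scavenger (CS1) rather than dropped.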

All applications present at the access edge need to be assigned a classification, as shown in Figure 54. Voice traffic is primarily sourced from Cisco IP telephony devices residing in the voice VLAN (VVLAN). These are trusted devices, or conditionally-trusted (if users also attach PCs, etc.) to the same port. Voice communication may also be sourced from PCs with soft-phone applications, such as Cisco Unified Personal Communicator (CUPC). Since such applications share the same UDP port range as multimedia conferencing traffic (UDP/RTP ports 16384-32767), this soft-phone VoIP traffic is indistinguishable from multimedia conferencing and should therefore be classified with the multimedia conferencing streams. See Figure 54.

Figure 54 Ingress QoS Application Model

Application             | PHB | Application Examples        | Present at Campus Access-Edge (Ingress)? | Trust Boundary
Network Control         | CS6 | EIGRP, OSPF, HSRP, IKE      | —                                        | —
VoIP                    | EF  | Cisco IP Phone              | Yes                                      | Trusted
Broadcast Video         | CS5 | Cisco IPVS, Enterprise TV   | —                                        | —
Realtime Interactive    | CS4 | Cisco TelePresence          | Yes                                      | Trusted
Multimedia Conferencing | AF4 | Cisco CUPC, WebEx           | Yes                                      | Untrusted
Multimedia Streaming    | AF3 | Cisco DMS, IP/TV            | —                                        | —
Signaling               | CS3 | SCCP, SIP, H.323            | Yes                                      | Trusted
Transactional Data      | AF2 | ERP Apps, CRM Apps          | Yes                                      | Untrusted
OAM                     | CS2 | SNMP, SSH, Syslog           | —                                        | —
Bulk Data               | AF1 | Email, FTP, Backups         | Yes                                      | Untrusted
Best Effort             | DF  | Default Class               | Yes                                      | Untrusted
Scavenger               | CS1 | YouTube, Gaming, P2P        | Yes                                      | Untrusted

The Modular QoS CLI (MQC) offers scalability and flexibility in configuring QoS to classify all eight application classes by using match statements or extended access-lists that match the exact value or range of the well-known Layer 4 ports each application uses to communicate on the network. The following sample configuration creates an extended access-list for each application and then applies it under class-map configuration mode.
• Catalyst 29xx, 3xxx and 4500-E (MultiLayer and Routed Access)

cr22-4507-LB(config)#ip access-list extended MULTIMEDIA-CONFERENCING
cr22-4507-LB(config-ext-nacl)# remark RTP
cr22-4507-LB(config-ext-nacl)# permit udp any any range 16384 32767

cr22-4507-LB(config-ext-nacl)#ip access-list extended SIGNALING
cr22-4507-LB(config-ext-nacl)# remark SCCP
cr22-4507-LB(config-ext-nacl)# permit tcp any any range 2000 2002
cr22-4507-LB(config-ext-nacl)# remark SIP
cr22-4507-LB(config-ext-nacl)# permit tcp any any range 5060 5061
cr22-4507-LB(config-ext-nacl)# permit udp any any range 5060 5061

cr22-4507-LB(config-ext-nacl)#ip access-list extended TRANSACTIONAL-DATA
cr22-4507-LB(config-ext-nacl)# remark HTTPS
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 443
cr22-4507-LB(config-ext-nacl)# remark ORACLE-SQL*NET
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 1521
cr22-4507-LB(config-ext-nacl)# permit udp any any eq 1521
cr22-4507-LB(config-ext-nacl)# remark ORACLE
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 1526
cr22-4507-LB(config-ext-nacl)# permit udp any any eq 1526
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 1575
cr22-4507-LB(config-ext-nacl)# permit udp any any eq 1575
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 1630

cr22-4507-LB(config-ext-nacl)#ip access-list extended BULK-DATA
cr22-4507-LB(config-ext-nacl)# remark FTP
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq ftp
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq ftp-data
cr22-4507-LB(config-ext-nacl)# remark SSH/SFTP
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 22
cr22-4507-LB(config-ext-nacl)# remark SMTP/SECURE SMTP
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq smtp
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 465
cr22-4507-LB(config-ext-nacl)# remark IMAP/SECURE IMAP
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 143
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 993
cr22-4507-LB(config-ext-nacl)# remark POP3/SECURE POP3
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq pop3
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 995
cr22-4507-LB(config-ext-nacl)# remark CONNECTED PC BACKUP
cr22-4507-LB(config-ext-nacl)# permit tcp any eq 1914 any

cr22-4507-LB(config-ext-nacl)#ip access-list extended DEFAULT
cr22-4507-LB(config-ext-nacl)# remark EXPLICIT CLASS-DEFAULT
cr22-4507-LB(config-ext-nacl)# permit ip any any

cr22-4507-LB(config-ext-nacl)#ip access-list extended SCAVENGER
cr22-4507-LB(config-ext-nacl)# remark KAZAA
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 1214
cr22-4507-LB(config-ext-nacl)# permit udp any any eq 1214
cr22-4507-LB(config-ext-nacl)# remark MICROSOFT DIRECT X GAMING
cr22-4507-LB(config-ext-nacl)# permit tcp any any range 2300 2400
cr22-4507-LB(config-ext-nacl)# permit udp any any range 2300 2400
cr22-4507-LB(config-ext-nacl)# remark APPLE ITUNES MUSIC SHARING
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 3689
cr22-4507-LB(config-ext-nacl)# permit udp any any eq 3689

cr22-4507-LB(config-ext-nacl)# remark BITTORRENT
cr22-4507-LB(config-ext-nacl)# permit tcp any any range 6881 6999
cr22-4507-LB(config-ext-nacl)# remark YAHOO GAMES
cr22-4507-LB(config-ext-nacl)# permit tcp any any eq 11999
cr22-4507-LB(config-ext-nacl)# remark MSN GAMING ZONE
cr22-4507-LB(config-ext-nacl)# permit tcp any any range 28800 29100

Create a class-map for each application service and apply the corresponding match statement:

cr22-4507-LB(config)#class-map match-all VVLAN-SIGNALING
cr22-4507-LB(config-cmap)# match ip dscp cs3

cr22-4507-LB(config-cmap)#class-map match-all VVLAN-VOIP
cr22-4507-LB(config-cmap)# match ip dscp ef

cr22-4507-LB(config-cmap)#class-map match-all MULTIMEDIA-CONFERENCING
cr22-4507-LB(config-cmap)# match access-group name MULTIMEDIA-CONFERENCING

cr22-4507-LB(config-cmap)#class-map match-all SIGNALING
cr22-4507-LB(config-cmap)# match access-group name SIGNALING

cr22-4507-LB(config-cmap)#class-map match-all TRANSACTIONAL-DATA
cr22-4507-LB(config-cmap)# match access-group name TRANSACTIONAL-DATA

cr22-4507-LB(config-cmap)#class-map match-all BULK-DATA
cr22-4507-LB(config-cmap)# match access-group name BULK-DATA

cr22-4507-LB(config-cmap)#class-map match-all DEFAULT
cr22-4507-LB(config-cmap)# match access-group name DEFAULT

cr22-4507-LB(config-cmap)#class-map match-all SCAVENGER
cr22-4507-LB(config-cmap)# match access-group name SCAVENGER

Implementing Ingress QoS Policing

It is important to limit how much bandwidth each class may use at the ingress to the access layer for two primary reasons:
• Bandwidth Bottleneck—To prevent network congestion, each physical port at the trust boundary must be rate-limited. The rate-limit value may differ based on several factors, such as end-to-end network bandwidth capacity and end-station and application performance capacities.
• Bandwidth Security—Well-known applications like Cisco IP telephony use a fixed amount of bandwidth per device, based on the codec. It is important to police high-priority application traffic that is assigned to the high-priority queue; otherwise it could consume too much overall network bandwidth and impact other application performance.

In addition to policing, the rate-limit function also provides the ability to take different actions on excess incoming traffic that exceeds the established limits. The exceed-action for each class must be carefully designed based on the nature of the application, to provide best-effort service based on network bandwidth availability. Table 6 provides best-practice policing guidelines for the different classes to be implemented for trusted and conditionally-trusted endpoints at the network edge.

Table 6 Access-Layer Ingress Policing Guidelines

Application             | Policing Rate | Conform-Action | Exceed-Action
VoIP Signaling          | <32 kbps      | Pass           | Drop
VoIP Bearer             | <128 kbps     | Pass           | Drop
Multimedia Conferencing | <5 Mbps (1)   | Pass           | Drop
Signaling               | <32 kbps      | Pass           | Drop
Transactional Data      | <10 Mbps (1)  | Pass           | Remark to CS1
Bulk Data               | <10 Mbps (1)  | Pass           | Remark to CS1
Best Effort             | <10 Mbps (1)  | Pass           | Remark to CS1
Scavenger               | <10 Mbps (1)  | Pass           | Drop

1. Rates vary based on several factors as defined earlier. This table depicts sample rate-limiting values.

Catalyst 29xx

As described earlier, the Catalyst 2960 can only police to a minimum rate of 1 Mbps; all other platforms in this switch product family, including the next-generation Cisco Catalyst 2960-S, can police to a minimum rate of 8 kbps.
• Trusted or Conditionally-Trusted Port Policer

cr22-2960-LB(config)#policy-map Phone+PC-Policy
cr22-2960-LB(config-pmap)# class VVLAN-VOIP
cr22-2960-LB(config-pmap-c)# police 1000000 8000 exceed-action drop
cr22-2960-LB(config-pmap-c)# class VVLAN-SIGNALING
cr22-2960-LB(config-pmap-c)# police 1000000 8000 exceed-action drop
cr22-2960-LB(config-pmap-c)# class MULTIMEDIA-CONFERENCING
cr22-2960-LB(config-pmap-c)# police 5000000 8000 exceed-action drop
cr22-2960-LB(config-pmap-c)# class SIGNALING
cr22-2960-LB(config-pmap-c)# police 1000000 8000 exceed-action drop
cr22-2960-LB(config-pmap-c)# class TRANSACTIONAL-DATA
cr22-2960-LB(config-pmap-c)# police 10000000 8000 exceed-action policed-dscp-transmit
cr22-2960-LB(config-pmap-c)# class BULK-DATA
cr22-2960-LB(config-pmap-c)# police 10000000 8000 exceed-action policed-dscp-transmit
cr22-2960-LB(config-pmap-c)# class SCAVENGER
cr22-2960-LB(config-pmap-c)# police 10000000 8000 exceed-action drop
cr22-2960-LB(config-pmap-c)# class DEFAULT

cr22-2960-LB(config-pmap-c)# police 10000000 8000 exceed-action policed-dscp-transmit

Catalyst 2960-S, 3xxx and 4500-E (Multilayer and Routed-Access)
• Trusted or Conditionally-Trusted Port Policer

cr22-4507-LB(config)#policy-map Phone+PC-Policy
cr22-4507-LB(config-pmap)# class VVLAN-VOIP
cr22-4507-LB(config-pmap-c)# police 128000 8000 exceed-action drop
cr22-4507-LB(config-pmap-c)# class VVLAN-SIGNALING
cr22-4507-LB(config-pmap-c)# police 32000 8000 exceed-action drop
cr22-4507-LB(config-pmap-c)# class MULTIMEDIA-CONFERENCING
cr22-4507-LB(config-pmap-c)# police 5000000 8000 exceed-action drop
cr22-4507-LB(config-pmap-c)# class SIGNALING
cr22-4507-LB(config-pmap-c)# police 32000 8000 exceed-action drop
cr22-4507-LB(config-pmap-c)# class TRANSACTIONAL-DATA
cr22-4507-LB(config-pmap-c)# police 10000000 8000 exceed-action policed-dscp-transmit
cr22-4507-LB(config-pmap-c)# class BULK-DATA
cr22-4507-LB(config-pmap-c)# police 10000000 8000 exceed-action policed-dscp-transmit
cr22-4507-LB(config-pmap-c)# class SCAVENGER
cr22-4507-LB(config-pmap-c)# police 10000000 8000 exceed-action drop
cr22-4507-LB(config-pmap-c)# class DEFAULT
cr22-4507-LB(config-pmap-c)# police 10000000 8000 exceed-action policed-dscp-transmit

• UnTrusted Port Policer
All ingress traffic (default class) from an untrusted endpoint must be policed without any explicit classification that requires differentiated services. The following sample configuration shows how to deploy policing on untrusted ingress ports in access-layer switches:

Catalyst 29xx, 3xxx and 4500-E (Multilayer and Routed-Access)

cr22-2960-LB(config)#policy-map UnTrusted-PC-Policy
cr22-2960-LB(config-pmap)# class class-default
cr22-2960-LB(config-pmap-c)# police 10000000 8000 exceed-action drop

The following sample configuration shows how to implement explicit marking for multiple classes on trusted and conditionally-trusted ingress ports in access-layer switches:

Trusted or Conditionally-Trusted Port
• Catalyst 29xx, 3xxx and 4500-E (Multilayer and Routed-Access)

cr22-3750-LB(config)#policy-map Phone+PC-Policy
cr22-3750-LB(config-pmap)# class VVLAN-VOIP
cr22-3750-LB(config-pmap-c)# set dscp ef
cr22-3750-LB(config-pmap-c)# class VVLAN-SIGNALING
cr22-3750-LB(config-pmap-c)# set dscp cs3
cr22-3750-LB(config-pmap-c)# class MULTIMEDIA-CONFERENCING
cr22-3750-LB(config-pmap-c)# set dscp af41
cr22-3750-LB(config-pmap-c)# class SIGNALING
cr22-3750-LB(config-pmap-c)# set dscp cs3
cr22-3750-LB(config-pmap-c)# class TRANSACTIONAL-DATA
cr22-3750-LB(config-pmap-c)# set dscp af21
cr22-3750-LB(config-pmap-c)# class BULK-DATA
cr22-3750-LB(config-pmap-c)# set dscp af11
cr22-3750-LB(config-pmap-c)# class SCAVENGER
cr22-3750-LB(config-pmap-c)# set dscp cs1
cr22-3750-LB(config-pmap-c)# class DEFAULT
cr22-3750-LB(config-pmap-c)# set dscp default

All ingress traffic (default class) from an untrusted endpoint must be marked without explicit classification. The following sample configuration shows how to implement explicit DSCP marking:

Untrusted Port
• Catalyst 29xx, 3xxx and 4500-E (Multilayer and Routed-Access)

cr22-3750-LB(config)#policy-map UnTrusted-PC-Policy
cr22-3750-LB(config-pmap)# class class-default
cr22-3750-LB(config-pmap-c)# set dscp default

Applying Ingress Policies

After creating the complete policy-map on all the Layer 2 and Layer 3 access switches with QoS policies defined, the service-policy must be applied on the edge interface of the
Implementing Ingress Marking access-layer to enforce the QoS configuration. Cisco Catalyst switches offers three
simplified methods to apply service-policies; depending on the deployment model either
Accurate DSCP marking of ingress traffic at the access-layer switch is critical to ensure of the methods can be implemented:
proper QoS service treatment as traffic traverses through the network. All classified and
• Port-Based QoS—Applying the service-policy on per physical port basis will force
policed traffic must be explicitly marked using the policy-map configuration based on an
8-class QoS model as shown in Figure 59. traffic to pass-through the QoS policies before entering in to the campus network.
Port-Based QoS discretely functions on a per-physical port basis even if it is
The best practice is to use a explicit marking command (set dscp) even for trusted associated with a logical VLAN which is applied on multiple physical ports.
application classes (like VVLAN-VOIP and VVLAN-SIGNALING), rather than a trust
policy-map action. A trust statement in a policy map requires multiple hardware entries,
with the use of an explicit (seemingly redundant) marking command, and improves the
hardware efficiency.
• VLAN-Based QoS—Applying the service-policy on a per-VLAN basis requires the policy-map to be attached to a logical Layer 3 SVI interface. Every physical port associated with the VLAN requires extra configuration to ensure that all traffic passes through the QoS policies defined on the logical interface.
• Per-Port/Per-VLAN-Based QoS—This is not supported on all the Catalyst platforms, and the configuration commands are platform-specific. Per-Port/Per-VLAN-based QoS creates a nested hierarchical policy-map that operates on a trunk interface. A different policy-map can be applied on each logical SVI interface that is associated with the same physical port.

See Figure 55.

Figure 55 Depicts all three QoS implementation methods

[Figure 55 shows the three models: Port-Based QoS (a physical port attached with a single service-policy), VLAN-Based QoS (a single logical VLAN interface attached with a single service-policy across multiple physical ports), and Per-Port/Per-VLAN-Based QoS (multiple logical interfaces on one physical port, each attached with a different service-policy).]

The following sample configuration provides a guideline to deploy port-based QoS on the access-layer switches in the campus network:

• Catalyst 29xx, 3xxx and 4500-E (Multilayer and Routed-Access)

cr22-2960-LB(config)#interface FastEthernet0/1
cr22-2960-LB(config-if)# service-policy input UnTrusted-PC-Policy

cr22-2960-LB#show mls qos interface FastEthernet0/1
FastEthernet0/1
Attached policy-map for Ingress: UnTrusted-PC-Policy
trust state: not trusted
trust mode: not trusted
trust enabled flag: ena
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
qos mode: port-based

Applying Ingress Queuing

Fixed-configuration Cisco Catalyst switches (2960 and 3xxx) not only offer differentiated services on the network ports, but also internally on the switching fabric. Note that the Cisco Catalyst 2960-S Series platform does not support ingress queuing and buffer allocation. After enabling QoS and attaching inbound policies on the physical ports, all the packets that meet the specified policy are forwarded to the switching fabric for egress switching. The aggregate bandwidth from all edge ports may exceed the switching fabric bandwidth and cause internal congestion.

Cisco Catalyst 2960 and 3xxx platforms support two internal ingress queues: the normal queue and the priority queue. The ingress queue inspects the DSCP value on each incoming frame and assigns it to either the normal or the priority queue. High-priority traffic, such as DSCP EF marked packets, is placed in the priority queue and switched before the normal queue is processed.

The Catalyst 3750-X family of switches supports the weighted tail drop (WTD) congestion avoidance mechanism. WTD is implemented on queues to manage the queue length. WTD drops packets from the queue based on the DSCP value and the associated threshold. If the threshold is exceeded for a given internal DSCP value, the switch drops the packet. Each queue has three threshold values. The internal DSCP determines which of the three threshold values is applied to the frame. Two of the three thresholds are configurable (explicit) and one is not (implicit); this last threshold corresponds to the tail of the queue (the 100 percent limit).

Figure 56 depicts how the different class-of-service applications are mapped to the ingress queue structure (1P1Q3T) and how each queue is assigned a different WTD threshold.

Figure 56 Catalyst 2960 and 3xxx Ingress Queuing Model

[Figure 56 maps VoIP (EF), broadcast video (CS5), and realtime interactive (CS4) to the priority queue (Q2), and the remaining classes to the normal queue (Q1): network and internetwork control (CS7, CS6) at Q1T3, signaling (CS3) at Q1T2, and multimedia conferencing/streaming (AF4, AF3), transactional data (AF2), network management (CS2), bulk data (AF1), best effort (DF), and scavenger (CS1) at Q1T1.]

• Catalyst 2960 and 3xxx (Multilayer and Routed-Access)

cr22-3750-LB(config)#mls qos srr-queue input priority-queue 2 bandwidth 30
! Q2 is enabled as a strict-priority ingress queue with 30% BW

cr22-3750-LB(config)#mls qos srr-queue input bandwidth 70 30
! Q1 is assigned 70% BW via SRR shared weights
! Q2 SRR shared weight is ignored (as it has been configured as a PQ)

cr22-3750-LB(config)#mls qos srr-queue input threshold 1 80 90
! Q1 thresholds are configured at 80% (Q1T1) and 90% (Q1T2)
! Q1T3 is implicitly set at 100% (the tail of the queue)
! Q2 thresholds are all set (by default) to 100% (the tail of Q2)

! This section configures the ingress DSCP-to-Queue mappings
cr22-3750-LB(config)#mls qos srr-queue input dscp-map queue 1 threshold 1 0 8 10 12 14
! DSCP DF, CS1 and AF1 are mapped to ingress Q1T1
cr22-3750-LB(config)#mls qos srr-queue input dscp-map queue 1 threshold 1 16 18 20 22
! DSCP CS2 and AF2 are mapped to ingress Q1T1
cr22-3750-LB(config)#mls qos srr-queue input dscp-map queue 1 threshold 1 26 28 30 34 36 38
! DSCP AF3 and AF4 are mapped to ingress Q1T1
cr22-3750-LB(config)#mls qos srr-queue input dscp-map queue 1 threshold 2 24
! DSCP CS3 is mapped to ingress Q1T2
cr22-3750-LB(config)#mls qos srr-queue input dscp-map queue 1 threshold 3 48 56
! DSCP CS6 and CS7 are mapped to ingress Q1T3 (the tail of Q1)
cr22-3750-LB(config)#mls qos srr-queue input dscp-map queue 2 threshold 3 32 40 46
! DSCP CS4, CS5 and EF are mapped to ingress Q2T3 (the tail of the PQ)

cr22-3750-LB#show mls qos input-queue
Queue     :    1    2
----------------------------------------
buffers   :   90   10
bandwidth :   70   30
priority  :    0   30
threshold1:   80  100
threshold2:   90  100

cr22-3750-LB#show mls qos maps dscp-input-q
Dscp-inputq-threshold map:
d1 :d2    0     1     2     3     4     5     6     7     8     9
------------------------------------------------------------------
 0 : 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01
 1 : 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01
 2 : 01-01 01-01 01-01 01-01 01-02 01-01 01-01 01-01 01-01 01-01
 3 : 01-01 01-01 02-03 01-01 01-01 01-01 01-01 01-01 01-01 01-01
 4 : 02-03 02-01 02-01 02-01 02-01 02-01 02-03 02-01 01-03 01-01
 5 : 01-01 01-01 01-01 01-01 01-01 01-01 01-03 01-01 01-01 01-01
 6 : 01-01 01-01 01-01 01-01

Note The ingress queuing function on the Catalyst 4500-E Sup6E and Sup6L-E is not supported, as described in Figure 51.

Implementing Access-Layer Egress QoS

The QoS implementation for egress traffic towards network edge devices on access-layer switches is much simpler than that for ingress traffic, which requires stringent QoS policies to provide differentiated services and network bandwidth protection. Unlike the ingress QoS model, the egress QoS model must provide optimal queuing policies for each class and set the drop thresholds to prevent network congestion and an impact on application performance. With egress queuing in DSCP mode, the Cisco Catalyst switching platforms are bounded by a limited number of hardware queues.

Catalyst 2960 and 3xxx Egress QoS

The Cisco Catalyst 29xx and 3xxx Series platforms support four egress queues, which are required to support the variable-class QoS policies for the medium enterprise campus LAN network; specifically, the following queues would be considered a minimum:

• Realtime queue (to support a RFC 3246 EF PHB service)
• Guaranteed bandwidth queue (to support RFC 2597 AF PHB services)
• Default queue (to support a RFC 2474 DF service)
• Bandwidth constrained queue (to support a RFC 3662 scavenger service)

As a best practice, each physical or logical interface must be deployed with the IETF-recommended bandwidth allocations for the different class-of-service applications:

• The real-time queue should not exceed 33 percent of the link's bandwidth.
• The default queue should be at least 25 percent of the link's bandwidth.
• The bulk/scavenger queue should not exceed 5 percent of the link's bandwidth.

Figure 57 illustrates the egress bandwidth allocation best-practices design for the different classes.
Figure 57 Class-of-Service Egress Bandwidth Allocations

[Figure 57 shows the recommended egress bandwidth allocations: 30 percent for the realtime (priority) traffic, 30 percent for the guaranteed classes, 35 percent for best effort (DF), and 5 percent for bulk/scavenger traffic.]

Given these minimum queuing requirements and bandwidth allocation recommendations, the following application classes can be mapped to the respective queues:

• Realtime Queue—Voice, broadcast video, and realtime interactive may be mapped to the realtime queue (per RFC 4594).
• Guaranteed Queue—Network/internetwork control, signaling, network management, multimedia conferencing, multimedia streaming, and transactional data can be mapped to the guaranteed bandwidth queue. Congestion avoidance mechanisms (i.e., selective dropping tools), such as WRED, can be enabled on this class; furthermore, if configurable drop thresholds are supported on the platform, these may be enabled to provide intra-queue QoS to these application classes, in the respective order they are listed (such that control plane protocols receive the highest level of QoS within a given queue).
• Scavenger/Bulk Queue—Bulk data and scavenger traffic can be mapped to the bandwidth-constrained queue, and congestion avoidance mechanisms can be enabled on this class. If configurable drop thresholds are supported on the platform, these may be enabled to provide inter-queue QoS to drop scavenger traffic ahead of bulk data.
• Default Queue—Best-effort traffic can be mapped to the default queue; congestion avoidance mechanisms can be enabled on this class.

Like the ingress queuing structure, which maps the various applications based on DSCP value into two ingress queues, the egress queuing must be similarly designed to map them into four egress queues. The DSCP-to-queue mapping for egress queuing must be applied to each egress queue as stated above, which allows better queuing-policy granularity. A campus egress QoS model example for a platform that supports DSCP-to-queue mapping with a 1P3Q3T queuing structure is depicted in Figure 58.

Figure 58 1P3Q3T Egress QoS Model on Catalyst 29xx and 3xxx Platforms

[Figure 58 maps EF, CS5, and CS4 to the priority queue (Queue 1, 30 percent); CS7/CS6 at Q2T3, CS3 at Q2T2, and AF4, AF3, AF2, CS2 at Q2T1 in the guaranteed queue (Queue 2, 30 percent); DF to the default queue (Queue 3, 35 percent); and AF1 at Q4T2 with CS1 at Q4T1 in the scavenger/bulk queue (Queue 4, 5 percent).]

DSCP-marked packets are assigned to the appropriate queue, and each queue is configured with the appropriate WTD threshold as defined in Figure 58. Egress queuing settings are common between all the trust-independent network edge ports as well as on the Layer 2 or Layer 3 uplinks connected to the internal network. The following egress queue configuration, entered in global configuration mode, must be enabled on every access-layer switch in the network.

• Catalyst 2960, 2960-S and 3xxx (Multilayer and Routed-Access)

cr22-3750-LB(config)#mls qos queue-set output 1 buffers 15 30 35 20
! Queue buffers are allocated
cr22-3750-LB(config)#mls qos queue-set output 1 threshold 1 100 100 100 100
! All Q1 (PQ) Thresholds are set to 100%
cr22-3750-LB(config)#mls qos queue-set output 1 threshold 2 80 90 100 400
! Q2T1 is set to 80%; Q2T2 is set to 90%;
! Q2 Reserve Threshold is set to 100%;
! Q2 Maximum (Overflow) Threshold is set to 400%
cr22-3750-LB(config)#mls qos queue-set output 1 threshold 3 100 100 100 400
! Q3T1 is set to 100%, as all packets are marked the same weight in Q3
! Q3 Reserve Threshold is set to 100%;
! Q3 Maximum (Overflow) Threshold is set to 400%
cr22-3750-LB(config)#mls qos queue-set output 1 threshold 4 60 100 100 400
! Q4T1 is set to 60%; Q4T2 is set to 100%
! Q4 Reserve Threshold is set to 100%;
! Q4 Maximum (Overflow) Threshold is set to 400%

cr22-3750-LB(config)#mls qos srr-queue output dscp-map queue 1 threshold 3 32 40 46
! DSCP CS4, CS5 and EF are mapped to egress Q1T3 (tail of the PQ)
cr22-3750-LB(config)#mls qos srr-queue output dscp-map queue 2 threshold 1 16 18 20 22
! DSCP CS2 and AF2 are mapped to egress Q2T1
cr22-3750-LB(config)#mls qos srr-queue output dscp-map queue 2 threshold 1 26 28 30 34 36 38
! DSCP AF3 and AF4 are mapped to egress Q2T1
cr22-3750-LB(config)#mls qos srr-queue output dscp-map queue 2 threshold 2 24
! DSCP CS3 is mapped to egress Q2T2
cr22-3750-LB(config)#mls qos srr-queue output dscp-map queue 2 threshold 3 48 56
! DSCP CS6 and CS7 are mapped to egress Q2T3
cr22-3750-LB(config)#mls qos srr-queue output dscp-map queue 3 threshold 3 0
! DSCP DF is mapped to egress Q3T3 (tail of the best effort queue)
cr22-3750-LB(config)#mls qos srr-queue output dscp-map queue 4 threshold 1 8
! DSCP CS1 is mapped to egress Q4T1
cr22-3750-LB(config)#mls qos srr-queue output dscp-map queue 4 threshold 2 10 12 14
! DSCP AF1 is mapped to Q4T2 (tail of the less-than-best-effort queue)

! This section configures the edge and uplink port interfaces with common egress queuing parameters
cr22-3750-LB(config)#interface range GigabitEthernet1/0/1-48
cr22-3750-LB(config-if-range)# queue-set 1
! The interface(s) is assigned to queue-set 1
cr22-3750-LB(config-if-range)# srr-queue bandwidth share 1 30 35 5
! The SRR sharing weights are set to allocate 30% BW to Q2
! 35% BW to Q3 and 5% BW to Q4
! Q1 SRR sharing weight is ignored, as it will be configured as a PQ
cr22-3750-LB(config-if-range)# priority-queue out
! Q1 is enabled as a strict priority queue

cr22-3750-LB#show mls qos interface GigabitEthernet1/0/27 queueing
GigabitEthernet1/0/27
Egress Priority Queue : enabled
Shaped queue weights (absolute) :  25 0 0 0
Shared queue weights  :  1 30 35 5
The port bandwidth limit : 100 (Operational Bandwidth:100.0)
The port is mapped to qset : 1

• Catalyst 4500-E Sup6E and Sup6L-E Egress QoS

The enterprise-class 4500-E switch with the next-generation supervisor hardware architecture is designed to offer better egress QoS techniques, capabilities, and flexibility to provide a well-diversified queuing structure for multiple class-of-service traffic types. Deploying the next-generation Sup6E and Sup6L-E in the campus network provides more QoS granularity to map the 8-class traffic types to hardware-based egress queues, as illustrated in Figure 59.

Figure 59 8 Class-of-Service Egress Bandwidth Allocations

The Cisco Catalyst 4500-E Sup6E and Sup6L-E supervisors support a platform-specific congestion avoidance algorithm to provide Active Queue Management (AQM), namely Dynamic Buffer Limiting (DBL). DBL tracks the queue length for each traffic flow in the switch. When the queue length of a flow exceeds its limit, DBL drops packets or sets the Explicit Congestion Notification (ECN) bits in the TCP packet headers. With 8 egress (1P7Q1T) queues and the DBL capability in the Sup6E-based supervisor, the bandwidth distribution for the different classes changes. Figure 60 provides the new recommended bandwidth allocation.
Figure 60 1P7Q1T Egress QoS Model on Catalyst 4500-E with Sup6E and Sup6L-E

[Figure 60 shows the 1P7Q1T (+DBL) egress model: EF, CS5, and CS4 map to the priority queue (30 percent); CS7, CS6, CS3, and CS2 to Q7 (10 percent); AF4 to Q6 (10 percent); AF3 to Q5 (10 percent); AF2 to Q4 (10 percent); AF1 to Q3 (4 percent); CS1 to Q2 (1 percent); and DF to Q1 (25 percent).]

The QoS architecture and implementation procedure are identical between the Sup6E and Sup6L-E modules. Implementing QoS policies on a Sup6E-based Catalyst 4500 platform follows the IOS (MQC) configuration model instead of the Catalyst OS-based QoS model. To take advantage of hardware-based egress QoS, the queuing function using MQC must be applied on each member-link of the EtherChannel interface. Therefore, load-sharing egress per-flow traffic across the EtherChannel links offers the advantage of optimally using the distributed hardware resources.

The recommended DSCP markings for each traffic class can be classified in a different class-map for the egress QoS functions. Based on Figure 60, the following configuration uses the new egress policy-map with the queuing and DBL functions implemented on the Catalyst 4500-E deployed with a Sup6E or Sup6L-E supervisor module. All network edge ports and core-facing uplink ports must use a common egress policy-map.

• Catalyst 4500-E Sup6E and Sup6L-E (Multilayer and Routed-Access)

! Creating a class-map for each class, using match dscp statements as marked by edge systems
cr22-4507-LB(config)#class-map match-all PRIORITY-QUEUE
cr22-4507-LB(config-cmap)# match dscp ef
cr22-4507-LB(config-cmap)# match dscp cs5
cr22-4507-LB(config-cmap)# match dscp cs4
cr22-4507-LB(config-cmap)#class-map match-all CONTROL-MGMT-QUEUE
cr22-4507-LB(config-cmap)# match dscp cs7
cr24-4507-LB(config-cmap)# match dscp cs6
cr24-4507-LB(config-cmap)# match dscp cs3
cr24-4507-LB(config-cmap)# match dscp cs2
cr24-4507-LB(config-cmap)#class-map match-all MULTIMEDIA-CONFERENCING-QUEUE
cr24-4507-LB(config-cmap)# match dscp af41 af42 af43
cr24-4507-LB(config-cmap)#class-map match-all MULTIMEDIA-STREAMING-QUEUE
cr24-4507-LB(config-cmap)# match dscp af31 af32 af33
cr24-4507-LB(config-cmap)#class-map match-all TRANSACTIONAL-DATA-QUEUE
cr24-4507-LB(config-cmap)# match dscp af21 af22 af23
cr24-4507-LB(config-cmap)#class-map match-all BULK-DATA-QUEUE
cr24-4507-LB(config-cmap)# match dscp af11 af12 af13
cr24-4507-LB(config-cmap)#class-map match-all SCAVENGER-QUEUE
cr24-4507-LB(config-cmap)# match dscp cs1

! Creating the policy-map and configuring queuing for each class-of-service
cr22-4507-LB(config)#policy-map EGRESS-POLICY
cr22-4507-LB(config-pmap)# class PRIORITY-QUEUE
cr22-4507-LB(config-pmap-c)# priority
cr22-4507-LB(config-pmap-c)# class CONTROL-MGMT-QUEUE
cr22-4507-LB(config-pmap-c)# bandwidth remaining percent 10
cr22-4507-LB(config-pmap-c)# class MULTIMEDIA-CONFERENCING-QUEUE
cr22-4507-LB(config-pmap-c)# bandwidth remaining percent 10
cr22-4507-LB(config-pmap-c)# class MULTIMEDIA-STREAMING-QUEUE
cr22-4507-LB(config-pmap-c)# bandwidth remaining percent 10
cr22-4507-LB(config-pmap-c)# class TRANSACTIONAL-DATA-QUEUE
cr22-4507-LB(config-pmap-c)# bandwidth remaining percent 10
cr22-4507-LB(config-pmap-c)# dbl
cr22-4507-LB(config-pmap-c)# class BULK-DATA-QUEUE
cr22-4507-LB(config-pmap-c)# bandwidth remaining percent 4
cr22-4507-LB(config-pmap-c)# dbl
cr22-4507-LB(config-pmap-c)# class SCAVENGER-QUEUE
cr22-4507-LB(config-pmap-c)# bandwidth remaining percent 1
cr22-4507-LB(config-pmap-c)# class class-default
cr22-4507-LB(config-pmap-c)# bandwidth remaining percent 25
cr22-4507-LB(config-pmap-c)# dbl

! Attaching the egress service-policy on all physical member-link ports
cr24-4507-DO(config)#int range Ten3/1 , Ten4/1 , Ten5/1 , Ten5/4 , Gi1/1 - 6
cr24-4507-DO(config-if-range)# service-policy output EGRESS-POLICY
Policing Priority-Queue

EtherChannel is an aggregated logical bundle of interfaces that does not perform queuing; it relies on the individual member-links to queue egress traffic by using hardware-based queuing. The hardware-based priority-queue implementation on the Catalyst 4500-E does not support a built-in policer to restrict traffic during network congestion. To mitigate this challenge, it is recommended to implement an additional policy-map to rate-limit the priority-class traffic, and this policy-map must be attached to the EtherChannel to govern the aggregated egress traffic limits. The following additional policy-map must be created to classify priority-queue class traffic and rate-limit it to up to 30 percent of the egress link capacity:

cr22-4507-LB(config)#class-map match-any PRIORITY-QUEUE
cr22-4507-LB(config-cmap)# match dscp ef
cr22-4507-LB(config-cmap)# match dscp cs5
cr22-4507-LB(config-cmap)# match dscp cs4

cr22-4507-LB(config)#policy-map PQ-POLICER
cr22-4507-LB(config-pmap)# class PRIORITY-QUEUE
cr22-4507-LB(config-pmap-c)# police cir 300 m conform-action transmit exceed-action drop

cr22-4507-LB(config)#interface range Port-Channel 1
cr22-4507-LB(config-if-range)#service-policy output PQ-POLICER

Table 7 Summarized Access-Layer Ingress QoS Deployment Guidelines

End-Point | Trust Model | DSCP Trust | Classification | Marking | Policing | Queuing (1)
Unmanaged devices, printers, etc. | UnTrusted | Don't Trust (Default) | None | None | Yes | Yes
Managed secured devices, servers, etc. | Trusted | Trust | 8-Class Model | Yes | Yes | Yes
Phone | Trusted | Trust | Yes | Yes | Yes | Yes
Phone + Mobile PC | Conditionally-Trusted | Trust | Yes | Yes | Yes | Yes
IP video surveillance camera | Trusted | Trust | No | No | No | Yes
Digital Media Player | Trusted | Trust | No | No | No | Yes
Core-facing uplinks | Trusted | Trust | No | No | No | Yes

1. Catalyst 29xx and 3xxx only

Table 8 Summarized Access-Layer Egress QoS Deployment Guidelines

End-Point | Trust Model | Classification / Marking / Policing | Egress Queuing | Bandwidth Share
Unmanaged devices, printers, etc. | UnTrusted | None | Yes | Yes
Managed secured devices, servers, etc. | Trusted | None | Yes | Yes
Phone | Trusted | None | Yes | Yes
Phone + Mobile PC | Conditionally-Trusted | None | Yes | Yes
IP video surveillance camera | Trusted | None | Yes | Yes
Digital Media Player | Trusted | None | Yes | Yes
Core-facing uplinks | Trusted | Yes (PQ Policer) | Yes | Yes

Deploying Network-Layer QoS

Campus network systems at the main site and the remote campuses are managed and maintained by the enterprise IT administration to provide key network foundation services such as routing, switching, QoS, and virtualization. In a best-practice network environment, these systems must be implemented with the recommended configurations to provide differentiated network services on a per-hop basis. To allow for consistent application delivery through the network, it is recommended to implement bidirectional QoS policies on the distribution and core layer systems.

QoS Trust Boundary

All medium enterprise IT-managed campus LAN and WAN network systems can be classified as trusted devices and must follow the same QoS best practices recommended in the previous subsection. It is recommended to avoid deploying trusted or untrusted endpoints directly on the campus distribution and core layer systems.

Based on the global network QoS policy, each class-of-service application receives common treatment. Independent of the enterprise network tier (LAN/WAN), platform type, and platform capabilities, each device in the network will protect service quality and enable communication across the network without degrading application performance.
Implementing Network-Layer Ingress QoS

As described earlier, the internal campus core network must be considered trusted. The next-generation Cisco Catalyst access-layer platforms must be deployed with more application awareness and intelligence at the network edge. The campus core and distribution network devices should rely on the access-layer switches to implement QoS classification and marking based on the wide range of applications and IP-based devices deployed at the network edge.

To provide consistent and differentiated QoS services on a per-hop basis across the network, the distribution and core network must be deployed to trust incoming pre-marked DSCP traffic from the downstream Layer 2 or Layer 3 network devices. This medium enterprise LAN network design recommends deploying a broad range of Layer 3 Catalyst switching platforms in the campus distribution and core layers. As mentioned in the previous section, the hardware architecture of each switching platform is different, based on the platform capabilities and resources. This changes how the various class-of-service traffic types are handled in the different directions: ingress, switching fabric, and egress.

Cisco Catalyst access-layer switches must classify the application and device type to mark the DSCP value based on the trust model, with deep packet inspection using access-lists (ACLs) or protocol-based device discovery; therefore, there is no need to reclassify the same class-of-service at the campus distribution and core layers. The campus distribution and core layers can trust the DSCP markings from the access layer and provide QoS transparency without modifying the original parameters unless the network is congested.

Based on the simplified internal network trust model, the ingress QoS configuration also becomes more simplified and manageable. This subsection provides common ingress QoS deployment guidelines for the campus distribution and core layers at all locations.

QoS Trust Mode

As described earlier, the Catalyst 4500-E deployed with either a Sup6E or Sup6L-E supervisor module in the distribution or core layer automatically sets the physical ports in trust mode. The Catalyst 4500-E by default performs DSCP-to-CoS or CoS-to-DSCP mappings to transmit traffic transparently, without any QoS bit rewrites. However, the default QoS function on campus distribution or core platforms such as the Catalyst 3750-X and 6500-E Series switches is disabled.

The network administrator must manually enable QoS globally on the switch and explicitly enable DSCP trust mode on each logical EtherChannel and each member-link interface connected to upstream and downstream devices. The distribution-layer QoS trust configuration is the same for a multilayer or routed-access deployment. The following sample QoS configuration must be enabled on all the distribution and core layer switches deployed in the campus LAN network.

Distribution-Layer Catalyst 3750-X and 6500-E

• 3750-X and 6500-E (Multilayer or Routed Access)

cr22-6500-LB(config)#mls qos
cr22-6500-LB#show mls qos
QoS is enabled globally

Implement DSCP Trust Mode

• Catalyst 6500-E (Multilayer or Routed Access)

cr22-6500-LB(config)#interface Port-channel100
cr22-6500-LB(config-if)# description Connected to cr22-4507-LB
cr22-6500-LB(config-if)# mls qos trust dscp

The Catalyst 6500-E automatically replicates the "mls qos trust dscp" command from the port-channel interface to each bundled member-link.

cr22-6500-LB#show queueing interface Ten1/1/2 | inc QoS|Trust
Port QoS is enabled
Trust boundary disabled
Trust state: trust DSCP

• Catalyst 3750-X (Multilayer or Routed Access)

The Catalyst 3750-X does not support the mls qos trust dscp command on the port-channel interface; therefore, the network administrator must apply this command on each bundled member-link.

cr36-3750x-xSB(config)#interface range Ten1/0/1 - 2 , Ten2/0/1 - 2
cr36-3750x-xSB(config-if-range)# description Connected to cr23-VSS-Core
cr36-3750x-xSB(config-if-range)# mls qos trust dscp

cr36-3750x-xSB#show mls qos interface Ten1/0/1
TenGigabitEthernet1/0/1
trust state: trust dscp
trust mode: trust dscp
…

Applying Ingress Queuing

When the Cisco Catalyst 3750-X and 6500-E switching platforms receive various class-of-service requests from different physical ports, then, depending on the DSCP and CoS markings, they can queue the traffic prior to sending it to the switching fabric in a FIFO manner. Both Catalyst platforms support up to two ingress queues, but how the queues are implemented differs. The Cisco Catalyst 4500-E deployed with a Sup6E or Sup6L-E supervisor module does not support ingress queuing.

Implementing Catalyst 3750-X Ingress Queuing

The ingress queuing function on the distribution-layer Catalyst 3750-X StackWise Plus must be deployed to differentiate and place the normal versus high-priority class traffic in separate ingress queues before forwarding it to the switching fabric.
For consistent QoS within the campus network, the core and access layers should map DSCP-marked traffic into ingress queues the same way. Refer to the “Applying Ingress Queuing” section on page -54 for implementation detail.

Implementing Catalyst 6500-E Ingress Queuing

There are two main considerations relevant to ingress queuing design on the Catalyst 6500/6500-E:
• The degree of oversubscription (if any) of the linecard
• Whether the linecard requires trust-CoS to be enabled to engage ingress queuing

Some linecards may be designed with a degree of oversubscription, theoretically offering more traffic to the linecard than all of its GE/10GE switch ports can collectively send across the switching backplane at once. Since a scenario in which every port transmits at line rate simultaneously is extremely unlikely, it is often more cost-effective to use linecards that have a degree of oversubscription within the campus network. However, if this design choice has been made, it is important for network administrators to recognize the potential for drops due to oversubscribed linecard architectures. To manage application-class service levels during such extreme scenarios, ingress queuing models may be enabled.

While the presence of oversubscribed linecard architectures may be viewed as the sole consideration in deciding whether to enable ingress queuing, a second important consideration is that many Catalyst 6500-E linecards only support CoS-based ingress queuing models, which reduces classification and marking granularity—limiting the administrator to an 8-class 802.1Q/p model. Once CoS is trusted, DSCP values are overwritten (via the CoS-to-DSCP mapping table) and application classes sharing the same CoS values are no longer distinguishable from one another. Therefore, given this classification and marking limitation and the fact that the value of enabling ingress queuing is only achieved in extremely rare scenarios, it is not recommended to enable CoS-based ingress queuing on the Catalyst 6500-E; rather, limit the use of such linecards and deploy either non-oversubscribed linecards and/or linecards supporting DSCP-based queuing at the distribution and core layers of the campus network.

Table 9 summarizes these linecard considerations by listing the oversubscription ratios and whether the ingress queuing models are CoS- or DSCP-based.
Table 9 Catalyst 6500-E Switch Module Ingress Queuing Architecture

Switch Module                   Maximum Input  Maximum Output  Oversubscription  Ingress Queuing  CoS/DSCP           Ingress Queuing
                                               (To Backplane)  Ratio             Structure        Based              Recommendations
WS-6724-SFP (24 x GE ports)     24 Gbps        40 Gbps         -                 1P3Q8T           CoS based          Not Required
                                               (2 x 20 Gbps)
WS-6704-10GE (4 x 10GE ports)   40 Gbps        40 Gbps         -                 8Q8T             CoS or DSCP based  Not Required
WS-6708-10GE (8 x 10GE ports)   80 Gbps        40 Gbps         2:1               8Q4T             CoS or DSCP based  Use DSCP-based 8Q4T
                                                                                                                     ingress queuing
WS-6716-10GE (16 x 10GE ports)  160 Gbps       40 Gbps         4:1               8Q4T/1P7Q2T*     CoS or DSCP based  Use DSCP-based 1P7Q2T
                                                                                                                     ingress queuing

* Depends on the operating mode; see the Note below.
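Before selecting a queuing model, the ingress queue structure that a given port is actually running can be confirmed directly on the switch; for example (a sketch, re-using the interface name that appears in the configuration examples later in this section):

```
cr23-VSS-Core#show queueing interface TenGigabitEthernet 1/1/2 | include Receive queues
    Receive queues [type = 8q4t]:
```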
Note: The Catalyst WS-X6716-10GE can be configured to operate in Performance Mode (with an 8Q4T ingress queuing structure) or in Oversubscription Mode (with a 1P7Q2T ingress queuing structure). In Performance Mode, only one port in every group of four is operational (while the rest are administratively shut down), which eliminates any oversubscription on this linecard, and as such ingress queuing is not required (as only 4 x 10GE ports are active in this mode and the backplane access rate is also 40 Gbps). In Oversubscription Mode (the default mode), all ports are operational and the maximum oversubscription ratio is 4:1. Therefore, it is recommended to enable 1P7Q2T DSCP-based ingress queuing on this linecard in Oversubscription Mode.

Additional details on these WS-X6716-10GE operational modes can be found at the following URL:
http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/qa_cisco_catalyst_6500_series_16port_10gigabit_ethernet_module.html

If 6708 and 6716 linecards (with the latter operating in Oversubscription Mode) are used in the distribution and core layers of the campus network, then 8Q4T DSCP-based ingress queuing and 1P7Q2T DSCP-based ingress queuing (respectively) are recommended to be enabled. These queuing models are detailed in the following sections.

Figure 61 depicts how different class-of-service applications are mapped to the ingress queue structure (8Q4T) and how each queue is assigned a different WTD threshold.
Figure 61 Catalyst 6500-E Ingress Queuing Model

[Figure 61 shows the DSCP-to-ingress-queue assignments: VoIP (EF), Broadcast Video (CS5), and Realtime Interactive (CS4) map to the priority queue Q8 (30%); Network Control (CS7), Internetwork Control (CS6), Signaling (CS3), and Network Management (CS2) map to Q7 (10%); Multimedia Conferencing (AF4) to Q6 (10%); Multimedia Streaming (AF3) to Q5 (10%); Transactional Data (AF2) to Q4 (10%); Bulk Data (AF1) to Q3 (4%); Scavenger (CS1) to Q2 (1%); and Best Effort (DF) to Q1 (25%).]

The corresponding configuration for 8Q4T (DSCP-to-queue) ingress queuing on a Catalyst 6500-E VSS in the distribution and core layers is shown below. The PFC function is active on both the active and hot-standby virtual-switch nodes; therefore, ingress queuing must be configured on each distributed member-link of the Layer 2 or Layer 3 MEC.

• Distribution and Core-Layer Catalyst 6500-E in VSS mode

! This section configures the port for DSCP-based Ingress queuing
cr22-vss-core(config)#interface range TenGigabitEthernet 1/1/2 - 8 , 2/1/2 - 8
cr22-vss-core(config-if-range)# mls qos queue-mode mode-dscp
! Enables DSCP-to-Queue mapping

! This section configures the receive queues BW and limits
cr22-vss-core(config-if-range)# rcv-queue queue-limit 10 25 10 10 10 10 10 15
! Allocates 10% to Q1, 25% to Q2, 10% to Q3, 10% to Q4,
! Allocates 10% to Q5, 10% to Q6, 10% to Q7 and 15% to Q8
cr22-vss-core(config-if-range)# rcv-queue bandwidth 1 25 4 10 10 10 10 30
! Allocates 1% BW to Q1, 25% BW to Q2, 4% BW to Q3, 10% BW to Q4,
! Allocates 10% BW to Q5, 10% BW to Q6, 10% BW to Q7 & 30% BW to Q8

! This section enables WRED on all queues except Q8
cr22-vss-core(config-if-range)# rcv-queue random-detect 1
! Enables WRED on Q1
cr22-vss-core(config-if-range)# rcv-queue random-detect 2
! Enables WRED on Q2
cr22-vss-core(config-if-range)# rcv-queue random-detect 3
! Enables WRED on Q3
cr22-vss-core(config-if-range)# rcv-queue random-detect 4
! Enables WRED on Q4
cr22-vss-core(config-if-range)# rcv-queue random-detect 5
! Enables WRED on Q5
cr22-vss-core(config-if-range)# rcv-queue random-detect 6
! Enables WRED on Q6
cr22-vss-core(config-if-range)# rcv-queue random-detect 7
! Enables WRED on Q7
cr22-vss-core(config-if-range)# no rcv-queue random-detect 8
! Disables WRED on Q8

! This section configures WRED thresholds for Queues 1 through 7
cr22-vss-core(config-if-range)# rcv-queue random-detect max-threshold 1 100 100 100 100
! Sets all WRED max thresholds on Q1 to 100%
cr22-vss-core(config-if-range)# rcv-queue random-detect min-threshold 1 80 100 100 100
! Sets Q1T1 min WRED threshold to 80%
cr22-vss-core(config-if-range)# rcv-queue random-detect min-threshold 2 80 100 100 100
! Sets Q2T1 min WRED threshold to 80%
cr22-vss-core(config-if-range)# rcv-queue random-detect max-threshold 2 100 100 100 100
! Sets all WRED max thresholds on Q2 to 100%
cr22-vss-core(config-if-range)# rcv-queue random-detect min-threshold 3 70 80 90 100
! Sets WRED min thresholds for Q3T1, Q3T2, Q3T3 to 70%, 80% and 90%
cr22-vss-core(config-if-range)# rcv-queue random-detect max-threshold 3 80 90 100 100
! Sets WRED max thresholds for Q3T1, Q3T2, Q3T3 to 80%, 90% and 100%
cr22-vss-core(config-if-range)# rcv-queue random-detect min-threshold 4 70 80 90 100
! Sets WRED min thresholds for Q4T1, Q4T2, Q4T3 to 70%, 80% and 90%
cr22-vss-core(config-if-range)# rcv-queue random-detect max-threshold 4 80 90 100 100
! Sets WRED max thresholds for Q4T1, Q4T2, Q4T3 to 80%, 90% and 100%
cr22-vss-core(config-if-range)# rcv-queue random-detect min-threshold 5 70 80 90 100
! Sets WRED min thresholds for Q5T1, Q5T2, Q5T3 to 70%, 80% and 90%
cr22-vss-core(config-if-range)# rcv-queue random-detect max-threshold 5 80 90 100 100
! Sets WRED max thresholds for Q5T1, Q5T2, Q5T3 to 80%, 90% and 100%
cr22-vss-core(config-if-range)# rcv-queue random-detect min-threshold 6 70 80 90 100
! Sets WRED min thresholds for Q6T1, Q6T2, Q6T3 to 70%, 80% and 90%
cr22-vss-core(config-if-range)# rcv-queue random-detect max-threshold 6 80 90 100 100
! Sets WRED max thresholds for Q6T1, Q6T2, Q6T3 to 80%, 90% and 100%
cr22-vss-core(config-if-range)# rcv-queue random-detect min-threshold 7 60 70 80 90
! Sets WRED min thresholds for Q7T1, Q7T2, Q7T3 and Q7T4
! to 60%, 70%, 80% and 90%, respectively
cr22-vss-core(config-if-range)# rcv-queue random-detect max-threshold 7 70 80 90 100
! Sets WRED max thresholds for Q7T1, Q7T2, Q7T3 and Q7T4
! to 70%, 80%, 90% and 100%, respectively

! This section configures the DSCP-to-Receive-Queue mappings
cr22-vss-core(config-if-range)# rcv-queue dscp-map 1 1 8
! Maps CS1 (Scavenger) to Q1T1
cr22-vss-core(config-if-range)# rcv-queue dscp-map 2 1 0
! Maps DF (Best Effort) to Q2T1
cr22-vss-core(config-if-range)# rcv-queue dscp-map 3 1 14
! Maps AF13 (Bulk Data-Drop Precedence 3) to Q3T1
cr22-vss-core(config-if-range)# rcv-queue dscp-map 3 2 12
! Maps AF12 (Bulk Data-Drop Precedence 2) to Q3T2
cr22-vss-core(config-if-range)# rcv-queue dscp-map 3 3 10
! Maps AF11 (Bulk Data-Drop Precedence 1) to Q3T3
cr22-vss-core(config-if-range)# rcv-queue dscp-map 4 1 22
! Maps AF23 (Transactional Data-Drop Precedence 3) to Q4T1
cr22-vss-core(config-if-range)# rcv-queue dscp-map 4 2 20
! Maps AF22 (Transactional Data-Drop Precedence 2) to Q4T2
cr22-vss-core(config-if-range)# rcv-queue dscp-map 4 3 18
! Maps AF21 (Transactional Data-Drop Precedence 1) to Q4T3
cr22-vss-core(config-if-range)# rcv-queue dscp-map 5 1 30
! Maps AF33 (Multimedia Streaming-Drop Precedence 3) to Q5T1
cr22-vss-core(config-if-range)# rcv-queue dscp-map 5 2 28
! Maps AF32 (Multimedia Streaming-Drop Precedence 2) to Q5T2
cr22-vss-core(config-if-range)# rcv-queue dscp-map 5 3 26
! Maps AF31 (Multimedia Streaming-Drop Precedence 1) to Q5T3
cr22-vss-core(config-if-range)# rcv-queue dscp-map 6 1 38
! Maps AF43 (Multimedia Conferencing-Drop Precedence 3) to Q6T1
cr22-vss-core(config-if-range)# rcv-queue dscp-map 6 2 36
! Maps AF42 (Multimedia Conferencing-Drop Precedence 2) to Q6T2
cr22-vss-core(config-if-range)# rcv-queue dscp-map 6 3 34
! Maps AF41 (Multimedia Conferencing-Drop Precedence 1) to Q6T3
cr22-vss-core(config-if-range)# rcv-queue dscp-map 7 1 16
! Maps CS2 (Network Management) to Q7T1
cr22-vss-core(config-if-range)# rcv-queue dscp-map 7 2 24
! Maps CS3 (Signaling) to Q7T2
cr22-vss-core(config-if-range)# rcv-queue dscp-map 7 3 48
! Maps CS6 (Internetwork Control) to Q7T3
cr22-vss-core(config-if-range)# rcv-queue dscp-map 7 4 56
! Maps CS7 (Network Control) to Q7T4
cr22-vss-core(config-if-range)# rcv-queue dscp-map 8 4 32 40 46
! Maps CS4 (Realtime Interactive), CS5 (Broadcast Video),
! and EF (VoIP) to Q8

cr23-VSS-Core#show queueing interface Ten1/1/2 | begin Rx
Queueing Mode In Rx direction: mode-dscp
Receive queues [type = 8q4t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       01         WRR             04
       02         WRR             04
       03         WRR             04
       04         WRR             04
       05         WRR             04
       06         WRR             04
       07         WRR             04
       08         WRR             04

    WRR bandwidth ratios:   1[queue 1] 25[queue 2]  4[queue 3] 10[queue 4] 10[queue 5] 10[queue 6] 10[queue 7] 30[queue 8]
    queue-limit ratios:    10[queue 1] 25[queue 2] 10[queue 3] 10[queue 4] 10[queue 5] 10[queue 6] 10[queue 7] 15[queue 8]

    queue tail-drop-thresholds
    --------------------------
    1     70[1] 80[2] 90[3] 100[4]
    2     100[1] 100[2] 100[3] 100[4]
    3     100[1] 100[2] 100[3] 100[4]
    4     100[1] 100[2] 100[3] 100[4]
    5     100[1] 100[2] 100[3] 100[4]
    6     100[1] 100[2] 100[3] 100[4]
    7     100[1] 100[2] 100[3] 100[4]
    8     100[1] 100[2] 100[3] 100[4]

    queue random-detect-min-thresholds
    ----------------------------------
      1    80[1] 100[2] 100[3] 100[4]
      2    80[1] 100[2] 100[3] 100[4]
      3    70[1] 80[2] 90[3] 100[4]
      4    70[1] 80[2] 90[3] 100[4]
      5    70[1] 80[2] 90[3] 100[4]
      6    70[1] 80[2] 90[3] 100[4]
      7    60[1] 70[2] 80[3] 90[4]
      8    100[1] 100[2] 100[3] 100[4]

    queue random-detect-max-thresholds
    ----------------------------------
      1    100[1] 100[2] 100[3] 100[4]
      2    100[1] 100[2] 100[3] 100[4]
      3    80[1] 90[2] 100[3] 100[4]
      4    80[1] 90[2] 100[3] 100[4]
      5    80[1] 90[2] 100[3] 100[4]
      6    80[1] 90[2] 100[3] 100[4]
      7    70[1] 80[2] 90[3] 100[4]
      8    100[1] 100[2] 100[3] 100[4]

    WRED disabled queues:    8
…
    queue thresh dscp-map
    ---------------------------------------
    1     1      1 2 3 4 5 6 7 8 9 11 13 15 17 19 21 23 25 27 29 31 33 39 41 42 43 44 45 47
    1     2
    1     3
    1     4
    2     1      0
    2     2
    2     3
    2     4
    3     1      14
    3     2      12
    3     3      10
    3     4
    4     1      22
    4     2      20
    4     3      18
    4     4
    5     1      30 35 37
    5     2      28
    5     3      26
    5     4
    6     1      38 49 50 51 52 53 54 55 57 58 59 60 61 62 63
    6     2      36
    6     3      34
    6     4
    7     1      16
    7     2      24
    7     3      48
    7     4      56
    8     1
    8     2
    8     3
    8     4      32 40 46
…
  Packets dropped on Receive:
    BPDU packets:  0

    queue              dropped  [dscp-map]
    ---------------------------------------------
    1                  0  [1 2 3 4 5 6 7 8 9 11 13 15 17 19 21 23 25 27 29 31 33 39 41 42 43 44 45 47 ]
    2                  0  [0 ]
    3                  0  [14 12 10 ]
    4                  0  [22 20 18 ]
    5                  0  [30 35 37 28 26 ]
    6                  0  [38 49 50 51 52 53 54 55 57 58 59 60 61 62 63 36 34 ]
    7                  0  [16 24 48 56 ]
    8                  0  [32 40 46 ]

Implementing Network Core Egress QoS

The QoS implementation for egress traffic towards network edge devices on access-layer switches is much simpler than that for ingress traffic, which requires stringent QoS policies to provide differentiated services and network bandwidth protection. Unlike the ingress QoS model, the egress QoS model must provide optimal queuing policies for each class and set the drop thresholds to prevent network congestion and application performance impact. With egress queuing in DSCP mode, the Cisco Catalyst switching platforms and linecards are bound by a limited number of egress hardware queues.

Catalyst 3750-X and 4500-E

The configuration and implementation guidelines for egress QoS on the Catalyst 3750-X StackWise and the Catalyst 4500-E with Sup6E and Sup6L-E in distribution and access-layer roles remain consistent. All conformed traffic marked with DSCP values must be manually assigned to each egress queue based on a four class-of-service QoS model. Refer to the “Implementing Access-Layer Egress QoS” section on page -55 for the deployment details.
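As a minimal sketch of this four-class, DSCP-based egress queue assignment on the Catalyst 3750-X (the hostname, interface, queue numbers, and share weights are illustrative assumptions; the validated values are in the referenced access-layer section):

```
! Illustrative sketch -- queue numbers and weights are assumptions
cr22-3750-LB(config)#mls qos srr-queue output dscp-map queue 1 threshold 3 32 40 46
! Maps CS4, CS5, and EF to egress queue 1
cr22-3750-LB(config)#interface GigabitEthernet 1/0/1
cr22-3750-LB(config-if)#priority-queue out
! Services egress queue 1 as a strict-priority queue
cr22-3750-LB(config-if)#srr-queue bandwidth share 1 30 35 5
! Shares the remaining bandwidth among queues 2-4; the queue 1 weight
! is ignored once the priority queue is enabled
```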
Catalyst 6500-E – VSS

The Cisco Catalyst 6500-E in VSS mode operates in a centralized management mode but uses a distributed forwarding architecture. The Policy Feature Card (PFC) is functional on both the active and hot-standby nodes, independent of the virtual-switch role. As with ingress queuing, the network administrator must implement egress queuing on each of the member-links of the Layer 2 or Layer 3 MEC. The egress queuing model on the Catalyst 6500-E is based on the linecard type and its capabilities; when deploying the Catalyst 6500-E in VSS mode, only the WS-67xx series 1G/10G linecards with the CFC or DFC3C/DFC3CXL daughter card are supported.

Table 62 describes the deployment guidelines for the Catalyst 6500-E Series linecard modules in the campus distribution and core layer network. In the solutions lab, the WS-6724-SFP and WS-6708-10GE were validated in the campus distribution and core layers. The two modules support different egress queuing models; this subsection provides deployment guidelines for both module types.

Table 62 Catalyst 6500-E Switch Module Egress Queuing Architecture

Switch Module                   Daughter Card  Egress Queue and  Egress Queue  Total Buffer  Egress Buffer
                                               Drop Thresholds   Scheduler     Size          Size
WS-6724-SFP                     CFC or DFC3    1P3Q8T            DWRR          1.3 MB        1.2 MB
WS-6704-10GE                    CFC or DFC3    1P7Q8T            DWRR          16 MB         14 MB
WS-6708-10GE                    DFC3           1P7Q4T            DWRR, SRR     198 MB        90 MB
WS-6716-10GE (Oversubscription  DFC3           1P7Q8T            DWRR, SRR     198 MB(1)     90 MB(1)
and Perf. Mode)                                                                91 MB(2)      1 MB(2)

1. Per-port capacity in Performance Mode
2. Per-port capacity in Oversubscription Mode

WS-6724-SFP – 1P3Q8T Egress Queuing Model


On the WS-6724-SFP module the egress queuing functions on per physical port basis
and independent of link-layer and above protocols settings, these functions remain
consistent when the physical port is deployed in standalone or bundled into an
EtherChannel. Each 1G physical port support 4 egress queues with default CoS based on
the transmit side. This module is a cost-effective 1G non-blocking high speed network
module but does not provide deep application granularity based on different DSCP
markings. It does not have the flexibility to use various class-of-service egress queue for
applications. Campus LAN QoS consolidation to a 4 class model occurs on the physical
paths that connects to the WAN or Internet Edge routers, which forwards traffic across a
private WAN or the Internet. Deploying the WS-6724-SFP module in 4 class model would
be recommended in that design. Figure 63 illustrates 1P3Q8T egress queuing model to
be applied on Catalyst 6500-E – WS-6724-SF module.
Figure 63 1P3Q8T Egress Queuing Model

[Figure 63 shows the CoS-to-egress-queue assignments: CoS 5 (VoIP and Broadcast Video) and CoS 4 (Realtime Interactive and Multimedia Conferencing) map to the priority queue (30%); CoS 7 (Network Control) to Q3T4, CoS 6 (Internetwork Control) to Q3T3, CoS 3 (Signaling and Multimedia Streaming) to Q3T2, and CoS 2 (Network Management and Transactional Data) to Q3T1 in Queue 3 (40%); CoS 0 (Best Effort) to Queue 2 (25%); and CoS 1 (Scavenger and Bulk Data) to Queue 1 (5%).]

The following corresponding 1P3Q8T egress queuing configuration must be applied on each member-link of the MEC.

• Catalyst 6500-E VSS (Distribution and Core)

cr23-vss-core(config)#interface range GigabitEthernet 1/2/1 - 24 , Gi2/2/1 - 24
cr23-vss-core(config-if-range)# wrr-queue queue-limit 20 25 40
! Allocates 20% of the buffers to Q1, 25% to Q2 and 40% to Q3
cr23-vss-core(config-if-range)# priority-queue queue-limit 15
! Allocates 15% of the buffers to the PQ
cr23-vss-core(config-if-range)# wrr-queue bandwidth 5 25 40
! Allocates 5% BW to Q1, 25% BW to Q2 and 40% BW to Q3

! This section enables WRED on Queues 1 through 3
cr23-vss-core(config-if-range)# wrr-queue random-detect 1
! Enables WRED on Q1
cr23-vss-core(config-if-range)# wrr-queue random-detect 2
! Enables WRED on Q2
cr23-vss-core(config-if-range)# wrr-queue random-detect 3
! Enables WRED on Q3

! This section configures WRED thresholds for Queues 1 through 3
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100
! Sets all WRED max thresholds on Q1 to 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 1 80 100 100 100 100 100 100 100
! Sets Q1T1 min WRED threshold to 80%; all others set to 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 2 100 100 100 100 100 100 100 100
! Sets all WRED max thresholds on Q2 to 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 2 80 100 100 100 100 100 100 100
! Sets Q2T1 min WRED threshold to 80%; all others set to 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 3 70 80 90 100 100 100 100 100
! Sets Q3T1 max WRED threshold to 70%; Q3T2 max WRED threshold to 80%;
! Sets Q3T3 max WRED threshold to 90%; Q3T4 max WRED threshold to 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 3 60 70 80 90 100 100 100 100
! Sets Q3T1 min WRED threshold to 60%; Q3T2 min WRED threshold to 70%;
! Sets Q3T3 min WRED threshold to 80%; Q3T4 min WRED threshold to 90%

! This section configures the CoS-to-Queue/Threshold mappings
cr23-vss-core(config-if-range)# wrr-queue cos-map 1 1 1
! Maps CoS 1 (Scavenger and Bulk Data) to Q1T1
cr23-vss-core(config-if-range)# wrr-queue cos-map 2 1 0
! Maps CoS 0 (Best Effort) to Q2T1
cr23-vss-core(config-if-range)# wrr-queue cos-map 3 1 2
! Maps CoS 2 (Network Management and Transactional Data) to Q3T1
cr23-vss-core(config-if-range)# wrr-queue cos-map 3 2 3
! Maps CoS 3 (Signaling and Multimedia Streaming) to Q3T2
cr23-vss-core(config-if-range)# wrr-queue cos-map 3 3 6
! Maps CoS 6 (Internetwork Control) to Q3T3
cr23-vss-core(config-if-range)# wrr-queue cos-map 3 4 7
! Maps CoS 7 (Network Control) to Q3T4
cr23-vss-core(config-if-range)# priority-queue cos-map 1 4 5
! Maps CoS 4 (Realtime Interactive and Multimedia Conferencing) and
! Maps CoS 5 (VoIP and Broadcast Video) to the PQ

cr23-VSS-Core#show queueing interface GigabitEthernet 1/2/1
Interface GigabitEthernet1/2/1 queueing strategy: Weighted Round-Robin
  Port QoS is enabled
Trust boundary disabled

  Trust state: trust DSCP
  Extend trust state: not trusted [COS = 0]
  Default COS is 0
Queueing Mode In Tx direction: mode-cos
Transmit queues [type = 1p3q8t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       01         WRR             08
       02         WRR             08
       03         WRR             08
       04         Priority        01

    WRR bandwidth ratios:   5[queue 1] 25[queue 2] 40[queue 3]
    queue-limit ratios:    20[queue 1] 25[queue 2] 40[queue 3] 15[Pri Queue]

    queue tail-drop-thresholds
    --------------------------
    1     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    2     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    3     100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]

    queue random-detect-min-thresholds
    ----------------------------------
      1    80[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      2    80[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      3    60[1] 70[2] 80[3] 90[4] 100[5] 100[6] 100[7] 100[8]

    queue random-detect-max-thresholds
    ----------------------------------
      1    100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      2    100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      3    70[1] 80[2] 90[3] 100[4] 100[5] 100[6] 100[7] 100[8]

    WRED disabled queues:

    queue thresh cos-map
    ---------------------------------------
    1     1      1
    1     2
    1     3
    1     4
    1     5
    1     6
    1     7
    1     8
    2     1      0
    2     2
    2     3
    2     4
    2     5
    2     6
    2     7
    2     8
    3     1      2
    3     2      3
    3     3      6
    3     4      7
    3     5
    3     6
    3     7
    3     8
    4     1      4 5
…

WS-6708-10GE and WS-6716-10GE – 1P7Q4T Egress Queuing Model

The next-generation 10G linecards are designed with advanced ASICs and higher capacity to ensure the campus backbones of large enterprise networks are ready for the future. Both modules support a DSCP-based eight-queue model to deploy flexible and scalable QoS in the campus core. With eight-egress-queue support, the WS-6708-10G and WS-6716-10G modules increase the application granularity based on the various DSCP markings done at the network edge. Figure 64 illustrates the DSCP-based 1P7Q4T egress queuing model.

Figure 64 1P7Q4T Egress Queuing Model

[Figure 64 shows the DSCP-to-egress-queue assignments: VoIP (EF), Broadcast Video (CS5), and Realtime Interactive (CS4) map to the priority queue Q8 (30%); Network Control (CS7), Internetwork Control (CS6), Signaling (CS3), and Network Management (CS2) map to Q7 (10%); Multimedia Conferencing (AF4) to Q6 (10%); Multimedia Streaming (AF3) to Q5 (10%); Transactional Data (AF2) to Q4 (10%); Bulk Data (AF1) to Q3 (4%); Scavenger (CS1) to Q2 (1%); and Best Effort (DF) to Q1 (25%).]

The following corresponding 1P7Q4T egress queuing configuration must be applied on each member-link of the MEC.
• Catalyst 6500-E VSS (Distribution and Core)

cr23-vss-core(config)#interface range TenGigabitEthernet 1/1/2 - 8 , 2/1/2 - 8
cr23-vss-core(config-if-range)# wrr-queue queue-limit 10 25 10 10 10 10 10
! Allocates 10% of the buffers to Q1, 25% to Q2, 10% to Q3, 10% to Q4,
! Allocates 10% to Q5, 10% to Q6 and 10% to Q7
cr23-vss-core(config-if-range)# wrr-queue bandwidth 1 25 4 10 10 10 10
! Allocates 1% BW to Q1, 25% BW to Q2, 4% BW to Q3, 10% BW to Q4,
! Allocates 10% BW to Q5, 10% BW to Q6 and 10% BW to Q7
cr23-vss-core(config-if-range)# priority-queue queue-limit 15
! Allocates 15% of the buffers to the PQ

! This section enables WRED on Queues 1 through 7
cr23-vss-core(config-if-range)# wrr-queue random-detect 1
! Enables WRED on Q1
cr23-vss-core(config-if-range)# wrr-queue random-detect 2
! Enables WRED on Q2
cr23-vss-core(config-if-range)# wrr-queue random-detect 3
! Enables WRED on Q3
cr23-vss-core(config-if-range)# wrr-queue random-detect 4
! Enables WRED on Q4
cr23-vss-core(config-if-range)# wrr-queue random-detect 5
! Enables WRED on Q5
cr23-vss-core(config-if-range)# wrr-queue random-detect 6
! Enables WRED on Q6
cr23-vss-core(config-if-range)# wrr-queue random-detect 7
! Enables WRED on Q7

! This section configures WRED thresholds for Queues 1 through 7
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 1 100 100 100 100
! Sets all WRED max thresholds on Q1 to 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 1 80 100 100 100
! Sets Q1T1 min WRED threshold to 80%
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 2 100 100 100 100
! Sets all WRED max thresholds on Q2 to 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 2 80 100 100 100
! Sets Q2T1 min WRED threshold to 80%
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 3 80 90 100 100
! Sets WRED max thresholds for Q3T1, Q3T2, Q3T3 to 80%, 90% and 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 3 70 80 90 100
! Sets WRED min thresholds for Q3T1, Q3T2, Q3T3 to 70%, 80% and 90%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 4 70 80 90 100
! Sets WRED min thresholds for Q4T1, Q4T2, Q4T3 to 70%, 80% and 90%
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 4 80 90 100 100
! Sets WRED max thresholds for Q4T1, Q4T2, Q4T3 to 80%, 90% and 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 5 70 80 90 100
! Sets WRED min thresholds for Q5T1, Q5T2, Q5T3 to 70%, 80% and 90%
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 5 80 90 100 100
! Sets WRED max thresholds for Q5T1, Q5T2, Q5T3 to 80%, 90% and 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 6 70 80 90 100
! Sets WRED min thresholds for Q6T1, Q6T2, Q6T3 to 70%, 80% and 90%
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 6 80 90 100 100
! Sets WRED max thresholds for Q6T1, Q6T2, Q6T3 to 80%, 90% and 100%
cr23-vss-core(config-if-range)# wrr-queue random-detect min-threshold 7 60 70 80 90
! Sets WRED min thresholds for Q7T1, Q7T2, Q7T3 and Q7T4
! to 60%, 70%, 80% and 90%, respectively
cr23-vss-core(config-if-range)# wrr-queue random-detect max-threshold 7 70 80 90 100
! Sets WRED max thresholds for Q7T1, Q7T2, Q7T3 and Q7T4
! to 70%, 80%, 90% and 100%, respectively

! This section configures the DSCP-to-Queue/Threshold mappings
cr23-vss-core(config-if-range)# wrr-queue dscp-map 1 1 8
! Maps CS1 (Scavenger) to Q1T1
cr23-vss-core(config-if-range)# wrr-queue dscp-map 2 1 0
! Maps DF (Best Effort) to Q2T1
cr23-vss-core(config-if-range)# wrr-queue dscp-map 3 1 14
! Maps AF13 (Bulk Data-Drop Precedence 3) to Q3T1
cr23-vss-core(config-if-range)# wrr-queue dscp-map 3 2 12
! Maps AF12 (Bulk Data-Drop Precedence 2) to Q3T2
cr23-vss-core(config-if-range)# wrr-queue dscp-map 3 3 10
! Maps AF11 (Bulk Data-Drop Precedence 1) to Q3T3
cr23-vss-core(config-if-range)# wrr-queue dscp-map 4 1 22
! Maps AF23 (Transactional Data-Drop Precedence 3) to Q4T1
cr23-vss-core(config-if-range)# wrr-queue dscp-map 4 2 20
! Maps AF22 (Transactional Data-Drop Precedence 2) to Q4T2
cr23-vss-core(config-if-range)# wrr-queue dscp-map 4 3 18
! Maps AF21 (Transactional Data-Drop Precedence 1) to Q4T3
cr23-vss-core(config-if-range)# wrr-queue dscp-map 5 1 30
! Maps AF33 (Multimedia Streaming-Drop Precedence 3) to Q5T1
cr23-vss-core(config-if-range)# wrr-queue dscp-map 5 2 28
! Maps AF32 (Multimedia Streaming-Drop Precedence 2) to Q5T2
cr23-vss-core(config-if-range)# wrr-queue dscp-map 5 3 26
! Maps AF31 (Multimedia Streaming-Drop Precedence 1) to Q5T3
cr23-vss-core(config-if-range)# wrr-queue dscp-map 6 1 38
! Maps AF43 (Multimedia Conferencing-Drop Precedence 3) to Q6T1
cr23-vss-core(config-if-range)# wrr-queue dscp-map 6 2 36
! Maps AF42 (Multimedia Conferencing-Drop Precedence 2) to Q6T2
cr23-vss-core(config-if-range)# wrr-queue dscp-map 6 3 34
! Maps AF41 (Multimedia Conferencing-Drop Precedence 1) to Q6T3
cr23-vss-core(config-if-range)# wrr-queue dscp-map 7 1 16
! Maps CS2 (Network Management) to Q7T1
cr23-vss-core(config-if-range)# wrr-queue dscp-map 7 2 24
! Maps CS3 (Signaling) to Q7T2
cr23-vss-core(config-if-range)# wrr-queue dscp-map 7 3 48
! Maps CS6 (Internetwork Control) to Q7T3
cr23-vss-core(config-if-range)# wrr-queue dscp-map 7 4 56
! Maps CS7 (Network Control) to Q7T4
cr23-vss-core(config-if-range)# priority-queue dscp-map 1 32 40 46
! Maps CS4 (Realtime Interactive), CS5 (Broadcast Video),
! and EF (VoIP) to the PQ

Note: Due to the default WRED threshold settings, at times the maximum threshold needs to be configured before the minimum (as is the case on queues 1 through 3 in the example above); at other times, the minimum threshold needs to be configured before the maximum (as is the case on queues 4 through 7 in the example above).

High-Availability in LAN Network Design

Network reliability and availability is not a new demand; it must be well planned during the early network design phase. To prevent a catastrophic failure during an unplanned network outage event, it is important to identify network fault domains and define rapid recovery plans to minimize the application impact during minor and major network outage conditions.

Because every tier of the LAN network design can be classified as a fault domain, deploying redundant systems can be effective. However, this introduces a new set of challenges, such as higher cost and the added complexity of managing more systems. Network reliability and availability can be simplified using several Cisco high availability technologies that offer complete failure transparency to end users and applications during planned or unplanned network outages.

Cisco high availability technologies can be deployed based on critical versus non-critical platform roles in the network. Some of the high availability techniques can be achieved with the LAN design inherent within the medium enterprise network design without making major network changes. However, the critical network systems deployed in the main campus that provide global connectivity may require additional hardware and software components to provide non-stop communications. The following three major resiliency requirements encompass most of the common types of failure conditions; depending on the LAN design tier, the resiliency option appropriate to the role and network service type must be deployed:

• Network resiliency—Provides redundancy during physical link failures, such as fiber cuts, bad transceivers, incorrect cabling, and so on.
• Device resiliency—Protects the network during abnormal node failures triggered by hardware or software, such as software crashes, a non-responsive supervisor, and so on.
• Operational resiliency—Takes resiliency capabilities to the next level, providing complete network availability even during planned network outage conditions, using In Service Software Upgrade (ISSU) features.

Medium Enterprise High-Availability Framework

Independent of the business function, the network architect must build a strong, scalable, and resilient next-generation IP network. Networks built on these three fundamentals offer the high availability needed to use the network as a core platform, one that enables the flexibility to overlay advanced and emerging technologies and provides non-stop network communications. The medium enterprise campus network must be built on these same fundamentals so that it can provide a constantly “on” network service for uninterrupted business operations and protect campus physical security and assets.

Network fault domains in this reference architecture are identifiable, but the failure conditions within the domains are unpredictable. An improper network design or non-resilient network systems can experience a higher number of faults, which not only degrades the user experience but may severely impact application performance and may fail to capture critical physical security video information. For example, the failure of a 1-Gigabit Ethernet backbone connection for 10 seconds can drop more than 1 Gb of network information, which may include critical medium enterprise data or captured video surveillance data. Fault levels can range from a network interruption to a disaster, and can be triggered by systems, by humans, or even by nature. Network failures can be classified in one of the following two ways:

• Planned failure—A planned network outage occurs when a network system is administratively disabled in the network for a scheduled event (e.g., a software upgrade).
• Unplanned failure—Any unforeseen failure of a network element can be considered an unplanned failure. Such failures include internal faults in a network device caused by hardware or software malfunctions, such as software crashes, linecard failures, or link transceiver failure conditions.

Baselining Campus High Availability

Typical application response time is in milliseconds when the campus network is built with high-speed backbone connections and is in a fully-operational state. When end users constantly work in a deterministic network response time environment, their learning and work practice is rapid; however, an abnormal network failure causing traffic loss, congestion, and application retries impacts performance and alerts users to network faults. During a major network fault event, a user, based on routine experience, often notices the network connection problem even before an application protocol
Medium Enterprise Design Profile (MEDP)—LAN Design

detects the connection problem (i.e., slow Internet browsing response times). Protocol-based delayed failure detection is intentional; it is designed to minimize the overall productivity impact and allows the network to gracefully adjust and recover during minor failure conditions. Every protocol operates differently in the network; while retries are acceptable for non-critical data traffic, they may not be for applications running in real time. Figure 65 shows a sample real-time VoIP application in the campus network and the sequence of user experience in different phases during minor and major unplanned network outages.

Figure 65 VoIP Impact During Minor and Major Network Outage
[Chart: data loss in seconds plotted against the resulting user experience. A minor network outage causes no impact to minimal voice impact; a major network outage causes multi-second data loss, ranging from the user hanging up to a phone reset.]

This high availability framework is based on three major resiliency strategies to address the wide range of planned and unplanned network outage types described in the previous section. Several high availability technologies must be deployed at each layer to provide higher network availability and rapid recovery during failure conditions, and to prevent communication failure or degraded network-wide application performance. (See Figure 66.)

Figure 66 High-Availability Goals, Strategy, and Technologies
[Diagram: the goal is resilient network service availability. The strategies are network resiliency, device resiliency, and operational resiliency. The supporting technologies are EtherChannel/MEC, UDLD, and IP event dampening for network resiliency; NSF/SSO and StackWise for device resiliency; and ISSU and eFSU for operational resiliency.]

Network Resiliency Overview

The most common network fault in the LAN is a link failure between two systems. Link failures can be caused by issues such as a fiber cut, miswiring, a linecard module failure, and so on. In a modular platform design, redundant parallel physical links between distributed modules on two systems reduce the fault probability and can increase network availability. It is important to remember that multiple parallel paths between two systems also change how the higher-layer protocols construct their adjacencies and the loop-free forwarding topology.

Deploying redundant parallel paths in the recommended medium enterprise LAN design by default develops a non-optimal topology that keeps the network underutilized and requires protocol-based network recovery. In the same network design, the routed access model eliminates such limitations and enables full load balancing capabilities to increase bandwidth capacity and minimize the application impact during a single path failure. To develop a consistent network resiliency service in the centralized main and remote campus sites, the following basic principles apply:

• Deploying redundant parallel paths is the basic requirement for network resiliency at any tier. It is critical to simplify control plane and forwarding plane operation by bundling all physical paths into a single logical bundled interface (EtherChannel). Implement a defense-in-depth approach to failure detection and recovery mechanisms. An example of this is configuring the UniDirectional Link Detection (UDLD) protocol, which uses a Layer 2 keepalive to test that the switch-to-switch links are connected and operating correctly, and acts as a backup to the native Layer 1 unidirectional link detection capabilities provided by the 802.3z and 802.3ae standards. UDLD is not an EtherChannel function; it operates independently over each individual physical port at Layer 2 and remains transparent to the rest of the port configuration. Therefore, UDLD can be deployed on ports implemented in Layer 2 or Layer 3 modes.

• Ensure that the network design is self-stabilizing. Hardware or software errors may cause ports to flap, which creates false alarms and destabilizes the network topology. Implementing route summarization advertises a concise topology view to the network, which prevents core network instability. However, within the summarized boundary, the flooding may not be contained. Deploy IP event dampening as a tool to prevent control plane and forwarding plane impact caused by physical topology instability.

These principles are intended to be a complementary part of the overall structured modular approach to the campus design, and serve primarily to reinforce good resilient design practices.

Device Resiliency Overview

Another major component of the overall campus high availability framework is device- or node-level protection, which can be triggered by any type of abnormal internal hardware or software condition within a system. Some common internal failures are a software-triggered crash, power outages, line card failures, and so on. A LAN network device can be a single point of failure, and its failure is considered a major failure condition because recovery may require a network administrator to mitigate the failure and restore the system. The network recovery time can remain nondeterministic, causing a complete or partial network outage, depending on the network design.
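Once deployed, the device-level protections described in the following subsections can be checked from the CLI. The following is a minimal verification sketch; the hostname is illustrative and the exact command set varies by platform:

cr24-4507e-MB#show redundancy states
! Confirms the supervisor redundancy mode (SSO) and the ACTIVE/STANDBY HOT peer state
cr24-4507e-MB#show module
! Verifies supervisor and linecard operational status
cr24-4507e-MB#show power
! Verifies power supply operating state and redundancy mode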

Redundant hardware components for device resiliency vary between fixed configuration and modular Cisco Catalyst switches. To protect against common network faults or resets, all critical medium enterprise campus network devices must be deployed with a similar device resiliency configuration. This subsection provides basic redundant hardware deployment guidelines for the access layer and collapsed core switching platforms in the campus network.

Redundant Power System

Redundant power supplies protect network systems against power outages, power supply failures, and so on. It is important to protect not only the internal network system but also the endpoints that rely on power delivered over the Ethernet network. Redundant power systems can be deployed in the following two configuration modes:

• Modular switch—Dual power supplies can be deployed in modular switching platforms such as the Cisco Catalyst 6500-E and 4500-E Series. By default, the power supplies operate in redundant mode, offering a 1+1 redundancy option. Overall power capacity planning must be done to dynamically allow for network growth. Lower-capacity power supplies can be combined to allocate power to all internal and external resources, but may not be able to offer power redundancy.

• Fixed configuration switch—Depending on switch capability, the fixed configuration Catalyst switches offer a wide range of power redundancy options, including the latest Cisco StackPower innovation in the Catalyst 3750-X Series. To prevent a network outage, fixed configuration Catalyst switches must be deployed with Cisco StackPower technology, with internal redundant power supplies on the Catalyst 3560-X, or with the Cisco RPS 2300 external power supply solution on Catalyst 2960-S Series switches. A single Cisco RPS 2300 uses modular power supplies and fans for flexibility, and can deliver power to multiple switches. Deploying an internal and external power supply solution protects critical access layer switches during power outages, and provides complete fault transparency and constant network availability.

Redundant Control Plane

Device or node resiliency in modular Cisco Catalyst 6500-E/4500-E platforms and Cisco StackWise provides a 1+1 redundancy option with enterprise-class high availability and deterministic network recovery time. The following subsections provide high availability design details, as well as graceful network recovery techniques that do not impact the control plane and provide constant forwarding capabilities during failure events.

Stateful Switchover

The stateful switchover (SSO) capability in modular switching platforms such as the Cisco Catalyst 4500 and 6500 provides complete carrier-class high availability in the campus network. Cisco recommends that the distribution and core layer design model be the center point of the entire enterprise communication network. Deploying redundant supervisors in the mission-critical distribution and core systems provides non-stop communication throughout the network. To provide 99.999 percent service availability in the access layer, the Catalyst 4500 must be equipped with redundant supervisors when serving critical endpoints, such as Cisco TelePresence.

Cisco StackWise is a low-cost solution for device-level high availability. Cisco StackWise is designed with unique hardware and software capabilities that distribute, synchronize, and protect common forwarding information across all member switches in a stack ring. During a master switch failure, the new master switch re-election remains transparent to the network devices and endpoints. Deploying Cisco StackWise according to the recommended guidelines protects against network interruption, and recovers the network in sub-seconds during master switch re-election.

Bundling SSO with the NSF capability and the NSF-awareness function allows the network to operate without errors during a primary supervisor module failure. Users of real-time applications such as VoIP do not hang up the phone, and IP video surveillance cameras do not freeze.

Non-Stop Forwarding

Cisco VSS and single, highly resilient redundant-supervisor systems provide uninterrupted network availability using non-stop forwarding (NSF) without impacting end-to-end application performance. The Cisco VSS and redundant supervisor systems are NSF-capable platforms; thus, every network device that connects to a VSS or a redundant supervisor system must be NSF-aware to provide optimal resiliency. By default, most Cisco Layer 3 network devices are NSF-aware and operate in NSF helper mode for graceful network recovery. (See Figure 67.)

Figure 67 Medium Enterprise NSF/SSO Capable and Aware Systems
[Diagram: at each tier—the edge (ASR), the core and distribution (Catalyst VSS pairs with active/standby supervisors over a VSL), and the access layer—systems are marked as NSF-capable or NSF-aware to show the helper relationships between adjacent layers.]
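As a sketch of enabling these capabilities, the following configures SSO supervisor redundancy on a dual-supervisor chassis and NSF graceful restart under the IGP. The hostname and EIGRP autonomous system number are illustrative; verify exact syntax for your platform and software release:

cr24-4507e-MB(config)#redundancy
cr24-4507e-MB(config-red)#mode sso
! NSF is enabled per routing protocol instance
cr24-4507e-MB(config)#router eigrp 100
cr24-4507e-MB(config-router)#nsf

With SSO active and NSF enabled, NSF-aware neighbors continue forwarding on the previously learned routes while the new active supervisor rebuilds its routing adjacencies.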

Operational Resiliency Overview

Designing the network to recover from failure events is only one aspect of the overall campus non-stop design. Converged network environments are continuing to move toward requiring true 7x24x365 availability. The medium enterprise LAN network is part of the backbone of the enterprise network and must be designed to enable standard operational processes, configuration changes, and software and hardware upgrades without disrupting network services.

The ability to make changes, upgrade software, and replace or upgrade hardware becomes challenging without a redundant system in the campus core. Upgrading individual devices without taking them out of service is similarly based on having internal component redundancy (such as with power supplies and supervisors), complemented with the system software capabilities. The Cisco Catalyst 4500-E and 6500-E and the ASR 1000 Series platforms support in-service software upgrades in the campus. Cisco In-Service Software Upgrade (ISSU) and Enhanced Fast Software Upgrade (eFSU) leverage NSF/SSO technology to provide continuous network availability while upgrading critical systems, which eliminates the need to plan network service downtime and maintenance windows. Figure 68 demonstrates the platform-independent Cisco IOS software upgrade flow using ISSU technology.

Having the ability to operate the campus as a non-stop system depends on the appropriate capabilities being designed in from the start. Network and device level redundancy, along with the necessary software control mechanisms, guarantee controlled and fast recovery of all data flows following any network failure, while concurrently providing the ability to proactively manage the non-stop infrastructure.

Figure 68 Cisco ISSU Software Process Cycle
[Diagram: the ISSU cycle on a redundant-supervisor system. (1) issu loadversion—the standby supervisor reboots with the new software version; (2) issu runversion—a supervisor switchover occurs and the new image becomes active; (3) issu acceptversion—the administrator acknowledges successful software activation, which stops the ISSU rollback timer; (4) issu commitversion—the new version is committed and the standby reboots with the new IOS. An issu abortversion operation is available to roll back before the commit.]

Catalyst 4500—ISSU

Full-image ISSU on the Cisco Catalyst 4500-E leverages dual redundant supervisors to allow a full, in-place Cisco IOS upgrade, such as moving from IOS Release 12.2(53)SG to 12.2(53)SG1. It leverages NSF/SSO capabilities and the unique ability of the uplink ports to remain in an operational and forwarding state even while a supervisor module resets. This design retains bandwidth capacity while both supervisor modules are upgraded, at the cost of less than a second of traffic loss during a full Cisco IOS upgrade.

Catalyst 6500 VSS—eFSU

A network upgrade normally requires planned network and system downtime. VSS offers unmatched network availability to the core. With the Enhanced Fast Software Upgrade (eFSU) feature, the VSS can continue to provide network services during the upgrade, and the upgrade remains transparent and hitless to the applications and end users. Because eFSU works in conjunction with NSF/SSO technology, the network devices can gracefully restore control and forwarding information during the upgrade process, while the bandwidth capacity operates at 50 percent and the data plane converges within sub-seconds.

For a hitless software update, the ISSU process requires three sequential upgrade events for an error-free software install on both virtual switch systems. Each upgrade event causes traffic to be re-routed to a redundant MEC path, causing sub-second traffic loss that does not impact real-time network applications, such as VoIP.

Design Strategies for Network Survivability

Network reliability and availability is not a new demand; it is a critical, integrated component that must be well planned during the early network design phase. To prevent catastrophic network failure during an unplanned network outage event, it is important to identify network fault domains and define rapid recovery plans to minimize the application impact during minor and major network outage conditions.

Because each network tier can be classified as a fault domain, deploying redundant components and systems increases redundancy and load-sharing capabilities. However, it introduces a new set of challenges: higher cost and the complexity of managing more systems. Network reliability and availability can be simplified using several Cisco high-availability and virtual-system technologies, such as VSS, which offer complete failure transparency to end users and applications during planned or unplanned network outages. "Minor" and "major" network failures are broad terms that include several types of network faults, which must be taken into consideration when implementing rapid recovery solutions.

Cisco high-availability technologies can be deployed based on critical versus non-critical platform roles in the network. Some of the high-availability techniques can be achieved with the inherent campus network design without making major network changes; however, critical network systems that are deployed in the center of the network to provide global connectivity may require additional hardware and software components to offer non-stop communication. The network survivability strategy can be categorized into the following three major resiliency requirements, which encompass most of the common types of failure conditions. Depending on the network system tier, role, and network service type, the appropriate resiliency option must be deployed. (See Table 69.)

Table 69 Medium Enterprise Network High Availability Strategy

Platform         Role           Network Resiliency    Device Resiliency             Operational Efficiency
Catalyst 2960-S  Access         EtherChannel (1),     RPS 2300,                     Cisco FlexStack
                                FlexStack, UDLD,      NSF-Aware
                                Dampening
Catalyst 3560-X                                       Redundant Power Supplies      None. Standalone systems
Catalyst 3750-X
Catalyst 3750ME  WAN Edge
Catalyst 3750-X  Access,        StackWise             Cisco StackPower,             StackWise Plus
                 Distribution                         NSF-Capable and Aware
Catalyst 4500-E  Access,                              Red. Power Supplies (2),      ISSU
                 Distribution,                        Red. Linecard modules (2),
                 Core                                 Red. Supervisor modules (3),
                                                      SSO/NSF Capable & Aware (2)
Catalyst 6500-E  Distribution,  VSS                   Red. Power Supplies (2),      eFSU
                 Core                                 Red. Linecard modules (2),
                                                      Red. Supervisor modules (3),
                                                      SSO/NSF Capable & Aware (2)
ASR 1006         WAN Edge       EtherChannel,         Red. Power Supplies,          ISSU
                                Dampening             Red. ESP modules,
                                                      Red. Route Processors,
                                                      SSO/NSF Capable & Aware
ASR 1004         Internet Edge                        Red. Power Supplies,          ISSU
                                                      SSO/NSF Capable & Aware (4)
Cisco ISR        PSTN Gateway                         None. Standalone system

1. Redundant uplinks from each 3750-E member switch in the stack ring and each 6500-E virtual switch in the VSS domain.
2. Redundant power and hardware components in each 3750-E member switch in the stack ring and each 6500-E virtual switch in the VSS domain.
3. Redundant supervisor per VSS domain (one per virtual-switch node). Starting with 12.2(33)SXI4, it is recommended to deploy a redundant supervisor on each virtual switch in a VSS domain.
4. Software-based SSO redundancy.
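To illustrate the ISSU workflow listed in the Operational Efficiency column, the following sketches the four-step upgrade sequence on a redundant-supervisor Catalyst 4500-E. The hostname, slot numbers, and image name are illustrative placeholders; verify the exact syntax for your software release:

cr24-4507e-MB#issu loadversion 3 bootflash:cat4500e-new-image.bin 4 slavebootflash:cat4500e-new-image.bin
! The standby supervisor reloads with the new image
cr24-4507e-MB#issu runversion
! Supervisor switchover; the new image becomes active
cr24-4507e-MB#issu acceptversion
! Acknowledges activation and stops the automatic rollback timer
cr24-4507e-MB#issu commitversion
! Commits the new image and reloads the standby with the new IOS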

Implementing Network Resiliency

The medium enterprise design guide recommends deploying a mix of hardware and software resiliency designed to address the most common campus LAN network faults and instabilities. It is important to analyze the network and application impact from a top-down level to adapt and implement the appropriate high availability solution for creating a resilient network. Implementing a resilient hardware and software design increases network resiliency and maintains the availability of all upper layer network services that are deployed in a medium enterprise campus network design.

EtherChannel / Multi-Chassis EtherChannel

In a non-EtherChannel network environment, a network protocol requires fault detection, topology synchronization, and best-path recomputation to reroute traffic, which takes a variable amount of time before forwarding resumes. Conversely, EtherChannel or MEC network environments provide significant benefits in such conditions, because the network protocol remains unaware of the topology change and the hardware self-recovers from the fault. Re-routing traffic over an alternate member link of an EtherChannel or MEC is based on

minor system-internal EtherChannel hash recomputations instead of an entire network topology recomputation. Hence, an EtherChannel- and MEC-based network provides deterministic sub-second network recovery from minor to major network faults.

The design and implementation considerations for deploying diverse physical connectivity across redundant standalone systems and virtual systems to create a single point-to-point logical EtherChannel are explained in the "Designing EtherChannel Network" section on page -23.

EtherChannel/MEC Network Recovery Analysis

Network recovery with EtherChannel and MEC depends on the platform and the diversity of the physical paths rather than on a Layer 2 or Layer 3 network protocol. The medium enterprise campus LAN network design deploys EtherChannel and MEC throughout the network to develop a simplified, single point-to-point network topology that does not build any parallel routing paths between devices at any network tier.

During an individual member-link failure, the Layer 2 and Layer 3 protocols dynamically adjust the metrics of the aggregated port-channel interface. Spanning Tree updates the port cost, and Layer 3 routing protocols update their metrics: EIGRP updates the composite metric and OSPF may change the interface cost. In such events, the metric change requires only minor update messages in the network and does not require the end-to-end topology recomputation that would slow the overall network recovery process. Because the network topology remains intact during individual link failures, the recomputation to select alternate member links in an EtherChannel or MEC is locally significant to each end of the impacted EtherChannel. EtherChannel recomputation requires creating a new logical hash table and reprogramming the hardware to re-route the traffic over the remaining available paths in the bundled interface. Layer 2 or Layer 3 EtherChannel and MEC recomputation is rapid and independent of network scale.

Catalyst 6500-E VSS MEC Link Recovery Analysis

Several types of network faults can trigger link failures in the network (for example, a fiber pullout or GBIC failure). Network recovery remains consistent and deterministic in all network fault conditions. In standalone or non-virtual systems such as the Catalyst 2960-S or 4500-E, the EtherChannel recomputation is fairly straightforward because the alternate member link resides within the system. However, the distributed forwarding architecture in virtual systems such as the Catalyst 6500-E VSS and Catalyst 3750-X StackWise Plus may require extra computation to select an alternate member-link path through the inter-chassis backplane interface (VSL or stack ring). Such designs still provide deterministic recovery, but with an additional delay to recompute a new forwarding path through the remote virtual-switch node. The link-failure analysis chart with inter-chassis reroute in Figure 70 summarizes several types of faults induced in a large-scale Cisco lab while developing this validated design guide.

Figure 70 Catalyst 6500-E VSS Inter-Chassis MEC Link Recovery Analysis
[Chart: y-axis 0 to 0.7 seconds; upstream, downstream, and multicast recovery times for member-link failures with 2960 L2 MEC, 3560E L3 MEC, 3750E L2 MEC, 3750 StackWise L3 MEC, and 4500E L3 MEC neighbors.]

The medium enterprise LAN can be designed optimally for deterministic and bidirectionally symmetric network recovery for unicast and multicast traffic. Refer to the "Redundant Linecard Network Recovery Analysis" section on page -78 for intra-chassis recovery analysis with the same network faults tested in inter-chassis scenarios.

Catalyst 4507R-E EtherChannel Link Recovery Analysis

In the medium enterprise campus reference design, a single Catalyst 4507R-E with redundant hardware components is deployed in the different campus LAN network tiers. A Cisco Catalyst 4507R-E can only be deployed in standalone mode with in-chassis supervisor and module redundancy, so traffic load balancing and rerouting across the different EtherChannel member links occurs within the local chassis. The centralized forwarding architecture in the Catalyst 4500-E can rapidly detect link failures and reprogram the hardware with new EtherChannel hash results. The test results in Figure 71 confirm deterministic and consistent network recovery during individual Layer 2/3 EtherChannel member-link failures.

Figure 71 Catalyst 4507R-E EtherChannel Link Recovery Analysis
[Chart: y-axis 0 to 0.5 seconds; upstream, downstream, and multicast recovery times for member-link failures with 2960 L2 MEC, 3560E L3 MEC, 3750E L2 MEC, 3750 StackWise L3 MEC, and 4500E L3 MEC neighbors.]
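The member-link bundling on which this recovery behavior depends can be sketched as follows. The hostname, interface range, channel-group number, and load-balance hash are illustrative; select PAgP or LACP negotiation per your design:

cr24-4507e-MB(config)#interface range GigabitEthernet1/1 , GigabitEthernet2/1
cr24-4507e-MB(config-if-range)#channel-group 1 mode desirable
! Diverse physical links on separate modules join logical interface Port-Channel 1
cr24-4507e-MB(config)#port-channel load-balance src-dst-ip
! The hash input determines how flows are spread across the remaining member links after a failure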

Unidirectional Link Detection (UDLD)

UDLD is a Layer 2 protocol that works with the Layer 1 features to determine the physical status of a link. At Layer 1, auto-negotiation takes care of physical signaling and fault detection. UDLD performs tasks that auto-negotiation cannot, such as detecting the identity of neighbors and shutting down misconnected ports. When auto-negotiation and UDLD are enabled together, the Layer 1 and Layer 2 detection methods work together to prevent physical and logical unidirectional connections and the malfunctioning of other protocols.

Copper media ports use Ethernet link pulses as a link monitoring tool and are not susceptible to unidirectional link problems. However, because one-way communication is possible in fiber-optic environments, mismatched transmit/receive pairs can cause a link up/up condition even though bidirectional upper-layer protocol communication has not been established. When such physical connection errors occur, they can cause loops or traffic black holes. UDLD functions transparently on Layer 2 or Layer 3 physical ports. UDLD operates in one of two modes:

• Normal mode (recommended)—If the bidirectional UDLD protocol state information times out, it is assumed there is no fault in the network, and no further action is taken. The port state for UDLD is marked as undetermined and the port behaves according to its STP state.

• Aggressive mode—If the bidirectional UDLD protocol state information times out, UDLD attempts to re-establish the state of the port if it detects that the link on the port is operational. Failure to re-establish communication with the UDLD neighbor forces the port into the err-disable state, which must be recovered manually by the user, or the switch can be configured for auto-recovery within a specified interval.

The following illustrates a configuration example to implement the UDLD protocol:

cr22-6500-LB#config t
cr22-6500-LB(config)#interface range gi1/2/3 , gi2/2/3
cr22-6500-LB(config-if-range)#udld port

cr22-6500-LB#show udld neighbors
Port     Device Name    Device ID  Port ID   Neighbor State
----     -----------    ---------  -------   --------------
Gi1/2/3  FDO1328R0E2    1          Gi1/0/49  Bidirectional
Gi2/2/3  FDO1328R0E2    1          Gi1/0/50  Bidirectional

IP Event Dampening

Unstable physical network connectivity with poor signaling or a loose connection may cause continuous port flaps. When the medium enterprise network is not deployed using the best practice guideline of summarizing the network boundaries at the aggregation layer, a single interface flap can severely impact the stability and availability of the entire campus network. Route summarization is one technique used to isolate the fault domain and contain local network faults within the domain.

To ensure local network domain stability during port flaps, all Layer 3 interfaces can be implemented with IP event dampening, which uses the same fundamental principles as BGP route-flap dampening. Each time a Layer 3 interface flaps, IP dampening tracks and records the flap event. On multiple flaps, a logical penalty is assigned to the port, and link status notifications to IP routing are suppressed until the port becomes stable.

IP event dampening is a locally significant function and does not have any signaling mechanism to communicate with remote systems. It can be implemented on each individual physical or logical Layer 3 interface (physical ports, SVIs, or port-channels):

• Layer 3 port-channel

cr24-4507e-MB(config)#interface Port-Channel 1
cr24-4507e-MB(config-if)#no switchport
cr24-4507e-MB(config-if)#dampening

• Layer 2 port-channel

cr24-4507e-MB(config)#interface Port-Channel 15
cr24-4507e-MB(config-if)#switchport
cr24-4507e-MB(config-if)#dampening

• SVI interface

cr24-4507e-MB(config)#interface range Vlan101 - 120
cr24-4507e-MB(config-if-range)#dampening

cr24-4507e-MB#show interface dampening
Vlan101
  Flaps Penalty    Supp ReuseTm HalfL ReuseV SuppV MaxSTm  MaxP Restart
      3       0   FALSE       0     5   1000  2000     20 16000       0
TenGigabitEthernet3/1 Connected to cr23-VSS-Core
  Flaps Penalty    Supp ReuseTm HalfL ReuseV SuppV MaxSTm  MaxP Restart
     10       0   FALSE       0     5   1000  2000     20 16000       0
Port-channel1 Connected to cr23-VSS-Core
  Flaps Penalty    Supp ReuseTm HalfL ReuseV SuppV MaxSTm  MaxP Restart
      3       0   FALSE       0     5   1000  2000     20 16000       0
Port-channel15 Connected to cr24-2960-S-MB
  Flaps Penalty    Supp ReuseTm HalfL ReuseV SuppV MaxSTm  MaxP Restart
      3       0   FALSE       0     5   1000  2000     20 16000       0
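When aggressive-mode UDLD is used, ports that UDLD places into the err-disable state can be configured for the automatic recovery mentioned earlier. A minimal sketch, with an illustrative 300-second recovery interval:

cr22-6500-LB(config)#errdisable recovery cause udld
cr22-6500-LB(config)#errdisable recovery interval 300
! Err-disabled ports are re-enabled automatically after the interval expires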

Implementing Device Resiliency

Each device in the medium enterprise LAN and WAN network design is connected to a critical system or endpoint to provide network connectivity and services for business operations. Like network resiliency, device resiliency solves the problem by integrating redundant hardware components and software-based solutions into a single standalone or virtual system. Depending on the platform architecture of the Cisco router or switch deployed in the campus network design, device redundancy is divided into four major categories: redundant power supplies, redundant line cards, redundant supervisors/route processors, and Non-Stop Forwarding (NSF) with Stateful Switchover (SSO).

Redundant Power

To provide non-stop network communication during power outages, critical network devices must be deployed with redundant power supplies. Network administrators must identify the network systems that provide network connectivity and services to mission-critical servers. This also includes Layer 1 services such as PoE to boot IP phones and IP video surveillance cameras for campus physical security and communications. Depending on the Cisco platform design, the in-chassis power redundancy option allows the flexibility to deploy dual power supplies in a single system. The next-generation, borderless-network-ready Cisco Catalyst 3750-X introduces the latest Cisco StackPower innovation, which creates a global pool of power to provide power load sharing and redundancy options. The Cisco Catalyst 3560-X Series switches are designed to increase device resiliency with dual redundant power supplies and fans, while Catalyst platforms such as the 2960 and 2960-S can be deployed with the Cisco RPS 2300 external power redundancy solution. Figure 72 shows the complete power redundancy design and solution for the various Cisco Catalyst switching platforms.

Figure 72 Power Supply Redundancy Design
[Diagram: redundant internal power supplies in the Catalyst 6509-E and 4507R-E chassis; redundant external power via the Cisco RPS 2300 for the Catalyst 2960-S; redundant dual internal power supplies in the Catalyst 3750-X; and Cisco StackPower cabling across a Catalyst 3750-X StackWise Plus stack of master and member switches.]

During an individual power supply fault, a switch in the stack can regain power from the global power pool to provide seamless operation of the network. With the modular power supply design in the Catalyst 3750-X Series, a defective power supply can be swapped without disrupting network operation. Cisco StackPower can be deployed in the following two modes:

• Sharing mode—All input power is available to be used for power loads. The total aggregated available power of all switches in the power stack (up to four) is treated as a single large power supply, and all switches in the stack can share the available power among all powered devices connected to PoE ports. In this mode, the total available power is used for power budgeting decisions and no power is reserved to accommodate power-supply failures. If a power supply fails, powered devices and switches could be shut down. This is the default mode.

• Redundant mode—The power from the largest power supply in the system is subtracted from the power budget, which reduces the total available power but provides backup power in case of a power-supply failure. Although there is less available power in the pool for switches and powered devices to draw from, the possibility of having to shut down switches or powered devices in case of a power failure or extreme power load is reduced. It is recommended to budget the required power, deploy each Catalyst 3750-X switch in the stack with dual power supplies, and enable redundant mode to provide backup power during a power supply failure event.

Because Cisco StackWise Plus can group up to nine 3750-X Series switches in the stack ring, and each power stack accommodates up to four switches, Cisco StackPower must be deployed with multiple power stack groups. The following sample configuration demonstrates deploying Cisco StackPower redundant mode and grouping stack members into a power stack group. For the new power configuration to take effect, the network administrator must plan downtime, because all switches in the stack ring must be reloaded:

cr36-3750X-xSB(config)#stack-power stack PowerStack
cr36-3750X-xSB(config-stackpower)#mode redundant

cr36-3750X-xSB(config)#stack-power switch 1
cr36-3750X-xSB(config-switch-stackpower)#stack-id PowerStack
%The change may not take effect until the entire data stack is reloaded

cr36-3750X-xSB(config)#stack-power switch 2
cr36-3750X-xSB(config-switch-stackpower)#stack-id PowerStack
%The change may not take effect until the entire data stack is reloaded

The following configuration examples provide guidelines to deploy in-chassis and external power redundancy in the Catalyst switching platforms.
Catalyst 2960 (External Power Redundancy)
The Cisco Redundant Power Supply (RPS) 2300 can support up to 6 RPS ports to provide
Catalyst 3750-X—Cisco StackPower Redundancy
seamless power backup to critical access-layer switches in the campus network.
The next-generation Catalyst 3750-X Series platform introduces innovative Cisco Additional power resiliency can be added by deploying dual power supply to backup to
StackPower technology to provide power-redundancy solution for fixed configuration two devices simultaneously. Cisco RPS 2300 can be provisioned for the 3750-E or 3560-E
switches. Cisco StackPower unifies the individual power supplies installed in the switches series switches through CLI:
and creates a pool of power, directing that power where it is needed. Up to four switches
can be configured in a StackPower stack with the special Cisco proprietary StackPower
cable. The StackPower cable is different than the StackWise data cables and is available
on all Cisco Catalyst 3750-X models.
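Stepping back to the StackPower budgeting described earlier for the 3750-X: the redundant-mode rule (the largest supply is held in reserve and subtracted from the shared pool) reduces to simple arithmetic. The following Python sketch illustrates that rule; the wattage figures are hypothetical and not taken from this guide:

```python
def stackpower_budget(supplies_w, redundant=True):
    """Available power in a StackPower pool.

    In redundant mode the largest supply is held in reserve, so it is
    subtracted from the budget; otherwise the full pool is usable.
    """
    pool = sum(supplies_w)
    return pool - max(supplies_w) if redundant else pool

# Hypothetical four-switch power stack, two 715 W supplies per switch
supplies = [715, 715] * 4
print(stackpower_budget(supplies, redundant=False))  # full pool: 5720 W
print(stackpower_budget(supplies, redundant=True))   # 5005 W usable, 715 W reserved
```

The reserved wattage is exactly what the text means by "less available power in the pool" in exchange for surviving a single supply failure.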
Medium Enterprise Design Profile (MEDP)—LAN Design
Catalyst 4500-E and 6500-E (In-Chassis Power Redundancy)

The Cisco Catalyst 4500-E and 6500-E Series modular platforms allocate power to several internal hardware components and to external powered devices such as IP Phones and wireless access points. All of the power allocation is assigned from the internal power supplies. Dual power supplies in these systems can operate in two different modes, as listed below:

• Redundant mode—By default, power supplies operate in redundant mode, offering a 1+1 redundancy option. The system determines the power capacity and the number of power supplies required based on the power allocated to all internal and external power components. Each power supply must have sufficient capacity to power all of the installed modules in order to operate in 1+1 redundant mode.

cr24-4507e-LB(config)#power redundancy-mode redundant

cr24-4507e-LB#show power supplies
Power supplies needed by system : 1
Power supplies currently available : 2

cr22-vss-core(config)#power redundancy-mode redundant switch 1
cr22-vss-core(config)#power redundancy-mode redundant switch 2

cr2-6500-vss#show power switch 1 | inc Switch|mode
Switch Number: 1
system power redundancy mode = redundant

cr2-6500-vss#show power switch 2 | inc Switch|mode
Switch Number: 2
system power redundancy mode = redundant

• Combined mode—If the system power requirement exceeds the capacity of a single power supply, the network administrator can utilize both power supplies in combined mode to increase capacity. However, combined mode does not offer 1+1 power redundancy during a primary power supply failure event. The following global configuration enables the power supplies to operate in combined mode:

cr24-4507e-LB(config)#power redundancy-mode combined

cr24-4507-LB#show power supplies
Power supplies needed by system : 2
Power supplies currently available : 2

Network Recovery Analysis with Power Redundancy

Each campus LAN router and switch providing critical network services must be protected with either an in-chassis or an external redundant power supply system. This best practice applies to both standalone and virtual-system devices. Each physical Catalyst 6500-E chassis in VSS mode at the campus distribution and core layer must be deployed with a redundant in-chassis power supply. The Catalyst 3750-X StackWise Plus must be deployed following the same rule: the master and member switches in the stack ring must be deployed with an external redundant power system. Protecting virtual systems with redundant power supplies prevents reduced network bandwidth capacity, topology changes, and poor application performance.

Several power failures were induced on power-redundant systems to characterize the overall network and application impact. The lab test results shown in Figure 73, performed on all power-redundant campus systems, confirm zero packet loss during an individual power supply failure. Note that the network administrator must analyze the power capacity that will be drawn by the different hardware components (e.g., network modules, PoE+, etc.).

Figure 73 Redundant Power Analysis
[Chart: upstream, downstream, and multicast packet loss in seconds during a power supply failure for the 2960 L2 MEC, 3560E L3 MEC, 3750E L2 MEC, 3750 StackWise L3 MEC, and 4500E L3 MEC — zero loss in all cases]

Redundant Linecard Modules

Modular Catalyst platforms support a wide range of linecards for network connectivity to the network core and edge. The high-speed core-design linecards are equipped with special hardware components to build the campus backbone, whereas the network-edge linecards are developed with more intelligence and application awareness. Using internal system protocols, each linecard communicates with the centralized control-plane processing supervisor module through the internal backplane. Any internal communication failure or protocol malfunction may disrupt the communication between the linecard and the supervisor, which may cause the linecard and all of its physical ports to be forcibly reset in order to resynchronize with the supervisor.

When the distribution- and core-layer Catalyst 4500-E and 6500-E systems are deployed with multiple redundant linecards, the network administrator must design the network by diversifying the physical cables across multiple linecard modules. A per-system "V"-shaped, full-mesh physical design must have quad paths to address multiple types of faults. Deploying redundant linecards and diversifying paths across the modules allows for intra-chassis re-route and, more importantly, Cisco VSS traffic engineering will prevent VSL re-route, which may cause network congestion if there is not sufficient bandwidth to accommodate the re-routed traffic. Figure 74 demonstrates inter-chassis re-route (without linecard redundancy) and intra-chassis re-route (with linecard redundancy).
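The 1+1 redundant versus combined mode trade-off discussed earlier in this section reduces to a capacity check: redundant mode requires that either supply alone can carry the full system load. A small Python sketch of that decision, with hypothetical wattages (not values from this guide):

```python
def power_mode(supplies_w, system_load_w):
    """Pick a dual-supply operating mode, mirroring the 1+1 rule above.

    Redundant (1+1) mode requires that each supply alone can carry the
    full system load; otherwise both supplies must be combined.
    """
    if all(s >= system_load_w for s in supplies_w):
        return "redundant"     # either supply alone covers the load
    if sum(supplies_w) >= system_load_w:
        return "combined"      # both supplies needed; no 1+1 protection
    return "insufficient"

print(power_mode([4200, 4200], 3000))   # redundant
print(power_mode([4200, 4200], 6000))   # combined
```

This is why "show power supplies" reports "needed by system: 1" in redundant mode but "needed by system: 2" in combined mode.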
Figure 74 Intra-Chassis versus Inter-Chassis Traffic Re-Route
[Diagram: without linecard redundancy, SW1 re-routes traffic across the VSL to SW2 (inter-chassis); with linecard redundancy, SW1 re-routes traffic to a surviving local member-link (intra-chassis)]

A single standalone Catalyst 4500-E in the distribution or core layer must be deployed with linecard redundancy. Without it, the campus LAN network may face a complete network outage during a linecard failure, because the linecard becomes a single point of failure.

Redundant Linecard Network Recovery Analysis

Catalyst 6500-E VSS Linecard Module Recovery Analysis

The distributed forwarding architecture in Catalyst 6500-Es operating in VSS mode is designed with unique traffic-engineering capabilities. The centralized control-plane design on the active virtual-switch node builds Layer 2/3 peerings with the neighboring devices. However, with MEC, both virtual-switch nodes program their local linecard modules to switch egress data-plane traffic. This design minimizes data-traffic re-routing across the VSL links. Data traffic traverses the VSL links as a "last resort" in hardware if either of the virtual-switch nodes loses a local member-link from the MEC link due to a fiber cut or linecard failure. The impact on traffic could be in the sub-second to seconds range and may create congestion on the VSL EtherChannel link if the re-routed traffic exceeds the overall VSL bandwidth capacity.

At the critical large-campus LAN core and distribution layer, traffic loss can be minimized and consistent bi-directional sub-second network recovery can be achieved by deploying redundant network modules on a per-virtual-switch-node basis. Additionally, proper Cisco VSS traffic engineering will prevent traffic from routing over the VSL, which may cause network congestion during an individual link or entire high-speed network module failure. Figure 74 provides an example of asymmetric traffic-loss statistics when traffic is re-routed via the remote virtual-switch node across the VSL links. Figure 75 illustrates the intra-chassis network recovery analysis, showing symmetric sub-second traffic loss during failures of individual member-links and of the entire linecard module at the campus core and distribution layer.

Figure 75 Catalyst 6500-E VSS Intra-Chassis Link and Linecard Module Recovery Analysis
[Chart: upstream, downstream, and multicast traffic loss in seconds for the 2960 L2 MEC, 3560E L3 MEC, 3750E L2 MEC, 3750 StackWise L3 MEC, and 4500E L3 MEC — all under 0.5 second]

Catalyst 4507R-E Linecard Module Recovery Analysis

The centralized forwarding architecture in a Catalyst 4507R-E programs all the forwarding information on the active and standby Sup6E or Sup6L-E supervisor modules. All the redundant linecards in the chassis are stubs and maintain low-level information to handle ingress and egress forwarding. During a link or linecard module failure, the new forwarding information gets rapidly reprogrammed on both supervisors in the chassis. However, deploying the EtherChannel utilizing diversified fibers across different linecard modules will provide consistent sub-second network recovery during an abnormal failure or the removal of a linecard from the Catalyst 4507R-E chassis. The chart in Figure 76 provides the results of tests conducted by removing a linecard from a Catalyst 4507R-E chassis deployed in the campus network in various roles.

Figure 76 Catalyst 4507R-E Linecard Recovery Analysis
[Chart: upstream, downstream, and multicast traffic loss in seconds for the 2960 L2 MEC, 3560E L3 MEC, 3750E L2 MEC, 3750 StackWise L3 MEC, and 4500E L3 MEC — all under 0.5 second]
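The benefit of diversifying MEC or EtherChannel member-links across linecards, as described above, can be expressed as a simple design check: a single-linecard failure should always leave a surviving local member-link, so traffic is never forced across the VSL. The following sketch is illustrative only, and the node and slot names are hypothetical:

```python
# Each MEC member-link is modeled as (virtual-switch node, linecard slot).
def survives_linecard_failure(member_links):
    """True if no single linecard failure removes all of a node's local
    member-links (which would force an inter-chassis VSL re-route)."""
    by_node = {}
    for node, slot in member_links:
        by_node.setdefault(node, set()).add(slot)
    return all(len(slots) > 1 for slots in by_node.values())

# Diversified: each node's two links land on different linecards
print(survives_linecard_failure([("SW1", 1), ("SW1", 2), ("SW2", 1), ("SW2", 2)]))  # True
# Not diversified: both SW1 links share linecard 1
print(survives_linecard_failure([("SW1", 1), ("SW1", 1), ("SW2", 1), ("SW2", 2)]))  # False
```

The quad-path "V"-shaped design described in the text is exactly the topology that makes this check pass on both virtual-switch nodes.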
Redundant Supervisor

Enterprise-class modular Cisco Catalyst 4500-E and 6500-E platforms support dual redundant supervisor modules to prevent disruption of the network control plane and topology during an abnormal supervisor module failure or an admin-forced reset. The Cisco Catalyst 4507R-E and 4510R-E Series platforms, and all current-generation Catalyst 6500-E Series chassis and supervisors, support in-chassis redundant supervisor modules. However, with Cisco's latest Virtual Switching System (VSS) innovation and the next-generation Sup720-10GE supervisor module, supervisor redundancy can be extended across dual chassis by logically clustering them into one single large virtual switch. See Figure 77.

Figure 77 Intra-Chassis versus Inter-Chassis SSO Redundancy

• Intra-Chassis SSO Redundancy—Intra-chassis SSO redundancy in the Catalyst 4500-E switch provides continuous network availability across all the installed modules and the uplink ports of the active and standby supervisor modules. The uplink ports remain in an operational and forwarding state during an active-supervisor switchover, so the system provides full network capacity even during an SSO switchover. A Cisco Catalyst 6500-E deployed in standalone mode also synchronizes all of the hardware and software state-machine information in order to provide constant network availability during an intra-chassis supervisor switchover.

• Inter-Chassis SSO Redundancy—The Cisco VSS solution extends supervisor redundancy by synchronizing SSO and all system internal communication over the special VSL EtherChannel interface between the paired virtual systems. Note that VSS does not currently support intra-chassis supervisor redundancy on each individual virtual node. The virtual-switch node running in active supervisor mode will be forced to reset during the switchover. This may disrupt the network topology if the network is not deployed with the best practices defined in this design guide. The "V"-shaped, distributed, full-mesh fiber paths, combined with single point-to-point EtherChannel or MEC links, play a vital role during such network events. During the failure, the new active virtual-switch node performs a Layer 3 protocol graceful recovery with its neighbors in order to provide constant network availability over the local interfaces.

• Implementing SSO Redundancy—To deploy supervisor redundancy, it is important to remember that both supervisor modules must be identical in type, and all the internal hardware components, such as memory and bootflash, must be the same to provide complete operational transparency during a failure.

The default redundancy mode on the Catalyst 4500-E and Catalyst 6500-E Series platforms is SSO; hence, no additional configuration is required to enable SSO redundancy. The following sample configuration illustrates how to implement VSS in SSO mode:

cr23-VSS-Core#config t
cr23-VSS-Core(config)#redundancy
cr23-VSS-Core(config-red)#mode sso

cr23-VSS-Core#show switch virtual redundancy
My Switch Id = 1
Peer Switch Id = 2
Configured Redundancy Mode = sso
Operating Redundancy Mode = sso

Switch 1 Slot 5 Processor Information :
-----------------------------------------------
Current Software state = ACTIVE
<snippet>
Fabric State = ACTIVE
Control Plane State = ACTIVE

Switch 2 Slot 5 Processor Information :
-----------------------------------------------
Current Software state = STANDBY HOT (switchover target)
<snippet>
Fabric State = ACTIVE
Control Plane State = STANDBY

Non-Stop Forwarding (NSF)

When NSF technology is implemented on systems in SSO redundancy mode, network disruption remains transparent, and campus users and applications retain seamless availability while the control-plane processing module (supervisor/route-processor) is reset. During a failure, the underlying Layer 3 NSF-capable protocols perform graceful network topology re-synchronization, and the forwarding information previously programmed in hardware on the redundant processor or distributed linecards remains intact in order to continue switching network packets. This service availability significantly lowers the Mean Time To Repair (MTTR) and increases the Mean Time Between Failures (MTBF), achieving the highest level of network availability.

NSF is an integral part of a routing protocol and depends on the following fundamental principles of Layer 3 packet forwarding:

• Cisco Express Forwarding (CEF)—CEF is the primary mechanism used to program the network path into the hardware for packet forwarding. NSF relies on the separation of the control-plane update and the forwarding-plane information. The control plane provides the routing protocol graceful restart, and the forwarding plane switches packets using
hardware acceleration where available. CEF enables this separation by programming the hardware with FIB entries in all Catalyst switches. This ability plays a critical role in NSF/SSO failover.

• Routing protocol—The motivation behind NSF is route-convergence avoidance. From the protocol-operation perspective, this requires the adjacent routers to support a routing protocol with special intelligence that allows a neighbor to be aware that NSF-capable routers can undergo a switchover, so that its peer can continue to forward packets but may bring its adjacency to hold-down (NSF recovery mode) for a brief period and request the routing protocol information to be resynchronized.

A router that has the capability for continuous forwarding during a switchover is NSF-capable. Devices that support the routing protocol extensions to the extent that they continue to forward traffic to a restarting router are NSF-aware. A Cisco device that is NSF-capable is also NSF-aware. The NSF capability must be manually enabled on each redundant system on a per-routing-protocol basis. The NSF-aware function is enabled by default on all Layer 3 platforms. Table 69 describes the Layer 3 NSF-capable and NSF-aware platforms deployed in the campus network environment.

The following configuration illustrates how to enable the NSF capability within EIGRP on each Layer 3 campus LAN/WAN system deployed with redundant supervisors or route-processors, or in virtual-switching modes (i.e., Cisco VSS and StackWise Plus):

cr23-vss-core(config)#router eigrp 100
cr23-vss-core(config-router)#nsf

cr23-vss-core#show ip protocols | inc NSF
*** IP Routing is NSF aware ***
EIGRP NSF-aware route hold timer is 240
EIGRP NSF enabled
NSF signal timer is 20s
NSF converge timer is 120s

cr23-vss-core#show ip protocols | inc NSF
*** IP Routing is NSF aware ***
EIGRP NSF-aware route hold timer is 240

Graceful Restart Example

The following example demonstrates how the EIGRP protocol gracefully recovers when an active supervisor/chassis switchover is forced by a reset on a Cisco VSS core system:

• NSF-Capable System

cr23-VSS-Core#redundancy force-switchover
This will reload the active unit and force switchover to standby[confirm]y

• NSF-Aware/Helper System

! VSS active system reset will force all linecards and ports to go down.
! The following logs confirm connectivity loss to the core system.
%LINK-3-UPDOWN: Interface TenGigabitEthernet2/1/2, changed state to down
%LINK-3-UPDOWN: Interface TenGigabitEthernet2/1/4, changed state to down

! Downed interfaces are automatically removed from the EtherChannel/MEC;
! however, the additional interface to the new active chassis retains the
! port-channel in up/up state.
%EC-SW1_SP-5-UNBUNDLE: Interface TenGigabitEthernet2/1/2 left the port-channel Port-channel100
%EC-SW1_SP-5-UNBUNDLE: Interface TenGigabitEthernet2/1/4 left the port-channel Port-channel100

! EIGRP protocol completes graceful recovery with the new active virtual-switch.
%DUAL-5-NBRCHANGE: EIGRP-IPv4:(613) 100: Neighbor 10.125.0.12 (Port-channel100) is resync: peer graceful-restart

NSF Timers

As depicted in the above show commands, the NSF-aware system can hold the routing information for up to 240 seconds while the routing protocol gracefully synchronizes the routing database. Lowering the timer values may abruptly terminate the graceful recovery, causing network instability. The default timer settings are well tuned for a well-structured and concise campus LAN network topology. It is recommended to retain the default route hold timer in the network unless it is observed that NSF recovery takes more than 240 seconds. 600 seconds after the protocol graceful recovery starts, the NSF route hold timer expires on the NSF-aware system, clearing the stale NSF route markings, and the system continues to use the synchronized routing database.

NSF/SSO Recovery Analysis

As described in the previous section, the NSF/SSO implementation and its recovery process differ between the Catalyst 4507R-E (intra-chassis) and the Catalyst 6500-E VSS (inter-chassis) in the medium enterprise campus LAN network design. In both deployment scenarios, Cisco validated the network recovery and application performance by inducing several types of active-supervisor faults that trigger Layer 3 protocol graceful recovery. During each test, the switches continued to provide network accessibility during the recovery stage.

During the SSO switchover process, a Cisco Catalyst 4507R-E deployed with a redundant Sup6E or Sup6L-E retains the operational and forwarding state of the uplink ports and the linecard modules in the chassis.

The inter-chassis SSO implementation in the Catalyst 6500-E VSS differs from the single-chassis redundant implementation in that, during an active virtual-switch node failure, the entire chassis and all the installed linecards reset. However, with Layer 2/3 MEC links, the network protocols and forwarding information remain protected via the remote virtual-switch node, which provides seamless network availability.
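The NSF route-hold behavior described in the timers discussion above can be pictured as a small state machine: routes learned from a restarting peer are kept (marked stale) while graceful recovery runs, and flushed only if the hold timer expires before resynchronization completes. This is an illustrative model of that logic, not Cisco's implementation:

```python
def nsf_route_state(elapsed_s, resync_done, hold_timer_s=240):
    """Disposition of routes learned from a restarting NSF-capable peer."""
    if resync_done:
        return "synchronized"    # stale markings cleared after graceful resync
    if elapsed_s < hold_timer_s:
        return "stale-retained"  # keep forwarding on pre-switchover routes
    return "flushed"             # hold timer expired before resync completed

print(nsf_route_state(30, resync_done=False))    # stale-retained
print(nsf_route_state(100, resync_done=True))    # synchronized
print(nsf_route_state(300, resync_done=False))   # flushed
```

The model makes the guidance above concrete: shrinking `hold_timer_s` widens the window in which a slow graceful recovery ends in "flushed", i.e., an abrupt topology change.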
Catalyst 4507R-E NSF/SSO Recovery Analysis

Figure 78 illustrates the intra-chassis NSF/SSO recovery analysis for the Catalyst 4507R-E chassis deployed with Sup6E or Sup6L-E in redundant mode. With EIGRP NSF/SSO capability, the unicast traffic consistently recovers within 200 msec or less. However, the Catalyst 4507R-E does not currently support redundancy for the Layer 3 multicast routing and forwarding information. Therefore, there may be around 2 seconds of multicast traffic loss, since the switch has to re-establish all the multicast routing and forwarding information during the Sup6E or Sup6L-E switchover event.

Figure 78 Catalyst 4507R-E NSF/SSO Recovery Analysis
[Chart: upstream and downstream unicast loss well under 0.5 second and multicast loss around 2 seconds for the 2960 L2 MEC, 3560E L3 MEC, 3750E L2 MEC, 3750 StackWise L3 MEC, and 4500E L3 MEC]

In the remote medium campus, the Catalyst 4507R-E is also deployed as the PIM-SM RP, with MSDP Anycast-RP peering to the Cisco VSS core at the main campus location. If a user at the remote medium campus location joins a multicast source at the main campus location, then during a Sup6E switchover there could be around 3 seconds of multicast packet loss. However, unicast recovery still remains in the 200 msec or less range in the same scenario.

Catalyst 4507R-E Standby Supervisor Failure and Recovery Analysis

The standby Sup6E or Sup6L-E supervisor remains in redundant mode while the active supervisor is in the operational state. If the standby supervisor is reset or re-inserted, the event does not trigger a protocol graceful recovery or any network topology change. The uplink ports of the standby supervisor remain in an operational and forwarding state, and the network bandwidth capacity remains intact during a standby supervisor removal or insertion event.

Catalyst 6500-E VSS NSF/SSO Recovery Analysis

As described earlier, the entire chassis and all installed linecard modules reset during an active virtual-switch switchover event. With a diverse full-mesh fiber network design, the Layer 2/3 remote device perceives this event as the loss of a member-link, since the alternate link to the standby switch is in an operational and forwarding state. The standby virtual-switch detects the loss of the VSL EtherChannel, transitions into the active role, and initializes Layer 3 protocol graceful recovery with the remote devices. Since there are no major network topology changes and member-links remain in an operational state, the NSF/SSO recovery in the Catalyst 6500-E VSS system is identical to losing an individual link.

Additionally, the Cisco Catalyst 6500-E supports Multicast Multilayer Switching (MMLS) NSF with SSO, enabling the system to maintain the multicast forwarding state in PFC3- and DFC3-based hardware during an active virtual-switch reset. The new active virtual-switch re-establishes PIM adjacency while continuing to switch multicast traffic based on the pre-switchover programmed information. See Figure 79.

Figure 79 Catalyst 6500-E VSS NSF/SSO Recovery Analysis
[Chart: upstream, downstream, and multicast traffic loss in seconds for the 2960 L2 MEC, 3560E L3 MEC, 3750E L2 MEC, 3750 StackWise L3 MEC, and 4500E L3 MEC — all sub-second]

Catalyst 6500-E VSS Standby Failure and Recovery Analysis

The network impact during a VSS standby failure is similar to a failure of the VSS active virtual-switch node. The primary difference with a standby virtual-switch failure is that it does not trigger a Layer 3 protocol graceful recovery, since the active virtual-switch remains in an operational state. Each MEC neighbor loses its physical path to the standby switch and re-routes traffic to the remaining MEC member-links connected to the active virtual-switch node. The VSS standby virtual-switch failure triggers a bidirectional sub-second loss, as illustrated in Figure 79.

Since VSS is developed with a distributed forwarding architecture, it can create certain race conditions during the standby re-initialization state, because the virtual-switch receives traffic from the network while it is not yet fully ready to switch that traffic. The amount and the direction of traffic loss depend on multiple factors: the VSL interface, the ingress and egress module types, boot-up ordering, etc.

When the upstream device is a Catalyst 6500-E deployed in standalone mode, Cisco recommends configuring the port-channel load-defer command under the port-channel to prevent traffic loss during the standby initialization state. It is possible to configure the same command under the MEC interface when the upstream device is a Catalyst 6500-E deployed in VSS mode instead of standalone mode. However, Cisco recommends not configuring the port-channel load-defer command under the MEC, as it will create an adverse impact on the downstream unicast and multicast traffic:
• The port-channel load-defer command was primarily developed for Catalyst 6500-E-based standalone systems and does not have much effect when the campus upstream device is a Catalyst 6500-E deployed in VSS mode.
• There is no software restriction on turning on the feature on VSS systems. However, it may create an adverse impact on downstream multicast traffic. With the default multicast replication configuration, the MEC may drop multicast traffic until the defer timer expires (120-second default timer). Therefore, the user may experience traffic loss for a long period of time.
• Modifying the default (egress) multicast replication mode to ingress replication mode may resolve the multicast traffic loss problem. However, depending on the network scale, it may degrade performance and scalability.

Implementing Operational Resiliency

Path redundancy is often used to facilitate access during periods of maintenance activity, but single points of failure sometimes exist in standalone systems, or the network design simply does not allow for access if a critical node is taken out of service. Leveraging enterprise-class high-availability features like NSF/SSO, the distribution- and core-layer Catalyst 4500-E and 6500-E Series platforms support ISSU to enable real-time network upgrade capability. Using ISSU and eFSU technology, the network administrator can upgrade the Cisco IOS software to implement new features, software bug fixes, or critical security fixes in real time.

Catalyst 4500-E ISSU Software Design and Upgrade Process

Figure 80 Catalyst 4500-E ISSU Software Upgrade Process

ISSU Software Upgrade Prerequisites

ISSU Compatibility Matrix

When a redundant Catalyst 4500-E system is brought up with a different Cisco IOS software version, the stored ISSU compatibility matrix information is analyzed internally to determine interoperability between the software running on the active and standby supervisors. ISSU provides SSO compatibility between several versions of software releases shipped during an 18-month period. Prior to upgrading the software, the network administrator must verify ISSU software compatibility with the following command. Incompatible software may cause the standby supervisor to boot in RPR mode, which may result in a network outage:

cr24-4507e-MB#show issu comp-matrix stored

Number of Matrices in Table = 1
My Image ver: 12.2(53)SG
Peer Version     Compatibility
-----------------  -------------
12.2(44)SG       Base(2)
12.2(46)SG       Base(2)
12.2(44)SG1      Base(2)

Managing System Parameters

Software

Prior to starting the software upgrade process, it is recommended to copy the old and new Cisco IOS software images on the Catalyst 4500-E active and standby supervisors into the local file systems—bootflash or compact flash.

cr24-4507e-MB#dir slot0:
Directory of slot0:/
1 -rw- 25442405 Nov 23 2009 17:53:48 -05:00 cat4500e-entservicesk9-mz.122-53.SG1 <- new image
2 -rw- 25443451 Aug 22 2009 13:26:52 -04:00 cat4500e-entservicesk9-mz.122-53.SG <- old image

cr24-4507e-MB#dir slaveslot0:
Directory of slaveslot0:/
1 -rw- 25443451 Aug 22 2009 13:22:00 -04:00 cat4500e-entservicesk9-mz.122-53.SG <- old image
2 -rw- 25442405 Nov 23 2009 17:56:46 -05:00 cat4500e-entservicesk9-mz.122-53.SG1 <- new image

Configuration

It is recommended to save the running configuration to NVRAM and to other local or remote locations, such as bootflash or a TFTP server, prior to upgrading the IOS software.
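The compatibility-matrix verification described above is, in effect, a table lookup: the running image carries a stored list of peer versions with which it can form SSO, and anything else falls back to RPR. A small illustrative model (the matrix contents mirror the sample output; the helper is not Cisco's internal logic):

```python
# Stored compatibility matrix for the running image, as in the sample output
COMPAT_MATRIX = {
    "12.2(53)SG": {
        "12.2(44)SG": "Base(2)",
        "12.2(46)SG": "Base(2)",
        "12.2(44)SG1": "Base(2)",
    },
}

def sso_compatible(my_version, peer_version):
    """True if the peer image can form SSO with the running image;
    incompatible pairs leave the standby supervisor in RPR mode."""
    return peer_version in COMPAT_MATRIX.get(my_version, {})

print(sso_compatible("12.2(53)SG", "12.2(46)SG"))   # True  -> SSO
print(sso_compatible("12.2(53)SG", "12.2(31)SG"))   # False -> standby boots in RPR
```

Running this check before `issu loadversion` is the programmatic equivalent of inspecting the `show issu comp-matrix stored` output by hand.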
Boot Variable and String

The system default boot variable is to boot from the local file system. Make sure the default setting is not changed and that the configuration register is set to 0x2102. Modify the boot string to point to the new image so that the system boots the new IOS software version after the next reset triggered during the ISSU upgrade process. Refer to the following URL for additional ISSU prerequisites:
http://www.cisco.com/en/US/partner/docs/switches/lan/catalyst4500/12.2/53SG/configuration/issu.html#wp1072849

Catalyst 4500-E ISSU Software Upgrade Procedure

This subsection provides the real-time software upgrade procedure for a Catalyst 4500-E deployed in the medium enterprise campus LAN network design in several different roles—access, distribution, core, collapsed core, and Metro Ethernet WAN edge. ISSU is supported on the Catalyst 4500-E Sup6E and Sup6L-E supervisors running the Cisco IOS Enterprise feature set.

In the following sample output, Sup6E supervisors are installed in Slot3 and Slot4. The Slot3 supervisor is in the SSO active role and the Slot4 supervisor is in the standby role. Both supervisors are running the identical 12.2(53)SG Cisco IOS software version and are fully synchronized with SSO.

cr24-4507e-MB#show module | inc Chassis|Sup|12.2
Chassis Type : WS-C4507R-E
! Common supervisor module type
3 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXQ3
4 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXRQ
! Common operating system version
3 0021.d8f5.45c0 to 0021.d8f5.45c5 0.4 12.2(33r)SG( 12.2(53)SG Ok
4 0021.d8f5.45c6 to 0021.d8f5.45cb 0.4 12.2(33r)SG( 12.2(53)SG Ok
! SSO synchronized
3 Active Supervisor SSO Active
4 Standby Supervisor SSO Standby hot

The following provides the step-by-step procedure to upgrade from Cisco IOS Release 12.2(53)SG to the 12.2(53)SG1 Cisco IOS release without causing network topology and forwarding disruption. Each upgrade step can be aborted at any stage by issuing the issu abortversion command if the software detects any failure.

• ISSU loadversion—This first step directs the active supervisor to initialize the ISSU software upgrade process.

cr24-4507e-MB#issu loadversion 3 slot0:cat4500e-entservicesk9-mz.122-53.SG1 4 slaveslot0:cat4500e-entservicesk9-mz.122-53.SG1

After issuing the above command, the active supervisor ensures the new IOS software is downloaded on both supervisors' file systems and performs several additional checks on the standby supervisor for the graceful software upgrade process. ISSU changes the

Note: Resetting the standby supervisor will not trigger a network protocol graceful recovery, and all standby supervisor uplink ports will remain in an operational and forwarding state for the transparent upgrade process.

With the broad range of ISSU version compatibility to form SSO communication, the standby supervisor will successfully boot up again in its original standby state; see the following output:

cr24-4507e-MB#show module | inc Chassis|Sup|12.2
Chassis Type : WS-C4507R-E
! Common supervisor module type
3 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXQ3
4 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXRQ
! Mismatched operating system versions
3 0021.d8f5.45c0 to 0021.d8f5.45c5 0.4 12.2(33r)SG( 12.2(53)SG Ok
4 0021.d8f5.45c6 to 0021.d8f5.45cb 0.4 12.2(33r)SG( 12.2(53)SG1 Ok
! SSO synchronized
3 Active Supervisor SSO Active
4 Standby Supervisor SSO Standby hot

This boot-up process forces the active supervisor to re-synchronize all SSO redundancy and checkpoints, the VLAN database, and forwarding information with the standby supervisor, and it notifies the user to proceed with the next ISSU step:

%C4K_REDUNDANCY-5-CONFIGSYNC: The config-reg has been successfully synchronized to the standby supervisor
%C4K_REDUNDANCY-5-CONFIGSYNC: The startup-config has been successfully synchronized to the standby supervisor
%C4K_REDUNDANCY-5-CONFIGSYNC: The private-config has been successfully synchronized to the standby supervisor
%C4K_REDUNDANCY-5-CONFIGSYNC_RATELIMIT: The vlan database has been successfully synchronized to the standby supervisor

%ISSU_PROCESS-7-DEBUG: Peer state is [ STANDBY HOT ]; Please issue the issu runversion command

• ISSU runversion—After performing several steps to ensure the newly loaded software is stable on the standby supervisor, the network administrator must proceed to the second step.

cr24-4507e-MB#issu runversion 4
This command will reload the Active unit. Proceed ? [confirm]y
%RF-5-RF_RELOAD: Self reload. Reason: Admin ISSU runversion CLI
%SYS-5-RELOAD: Reload requested by console. Reload reason: Admin ISSU runversion
boot variable with the new IOS software version if no errors are found and resets the This step will force the current active supervisor to reset itself which will trigger network
standby supervisor module. protocol graceful recovery with peer devices, however the uplink ports of the active
%RF-5-RF_RELOAD: Peer reload. Reason: ISSU Loadversion supervisor remains intact and the data plane will remain un-impacted during the
switchover process. From the overall network perspective, the active supervisor reset
caused by the issu runversion command will be no different than similar switchover
Medium Enterprise Design Profile (MEDP)—LAN Design

procedures (i.e., an administrator-forced switchover or supervisor online insertion and removal). During the entire software upgrade procedure, this is the only step that performs an SSO-based network graceful recovery. The following syslogs on various Layer 3 systems confirm stable EIGRP graceful recovery with the new supervisor running the new Cisco IOS software version.

• NSF-Aware Core

cr23-VSS-Core#
%DUAL-5-NBRCHANGE: EIGRP-IPv4:(415) 100: Neighbor 10.125.0.15 (Port-channel102) is resync: peer graceful-restart

• NSF-Aware Layer 3 Access

cr24-3560-MB#
%DUAL-5-NBRCHANGE: EIGRP-IPv4:(100) 100: Neighbor 10.125.0.10 (Port-channel1) is resync: peer graceful-restart

The previously active supervisor module will boot up in the standby role with the older IOS software version instead of the new IOS software version.

cr24-4507e-MB#show module | inc Chassis|Sup|12.2
Chassis Type : WS-C4507R-E
! Common Supervisor Module Type
3 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXQ3
4 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXRQ
! Mismatch operating system version
3 0021.d8f5.45c0 to 0021.d8f5.45c5 0.4 12.2(33r)SG( 12.2(53)SG Ok
4 0021.d8f5.45c6 to 0021.d8f5.45cb 0.4 12.2(33r)SG( 12.2(53)SG1 Ok
!SSO Synchronized
3 Active Supervisor SSO Standby hot
4 Standby Supervisor SSO Active

This safeguarded software design provides an opportunity to roll back to the previous IOS software if the system upgrade causes any type of network abnormality. At this stage, ISSU automatically starts an internal rollback timer to re-install the old IOS image. The default rollback timer is up to 45 minutes, which provides a network administrator an opportunity to perform several sanity checks. In small to mid-size network designs, the default timer may be sufficient; however, for large networks, network administrators may want to adjust the timer up to 2 hours:

cr24-4507e-MB#show issu rollback-timer
Rollback Process State = In progress
Configured Rollback Time = 45:00
Automatic Rollback Time = 19:51

The system will notify the network administrator with the following syslog to instruct them to move to the next ISSU upgrade step if no stability issues are observed and all the network services are operating as expected.

%ISSU_PROCESS-7-DEBUG: Peer state is [ STANDBY HOT ]; Please issue the acceptversion command

• ISSU acceptversion—This step provides confirmation from the network administrator that the system and network are stable after the IOS install and that they are ready to accept the new IOS software on the standby supervisor. This step stops the rollback timer and instructs the network administrator to issue the final commit command. However, it does not perform any additional steps to install the new software on the standby supervisor.

cr24-4507e-MB#issu acceptversion 4
% Rollback timer stopped. Please issue the commitversion command.

cr24-4507e-MB#show issu rollback-timer
Rollback Process State = Not in progress
Configured Rollback Time = 45:00

cr24-4507e-MB#show module | inc Chassis|Sup|12.2
Chassis Type : WS-C4507R-E
! Common Supervisor Module Type
3 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXQ3
4 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXRQ
! Mismatch operating system version
3 0021.d8f5.45c0 to 0021.d8f5.45c5 0.4 12.2(33r)SG( 12.2(53)SG Ok
4 0021.d8f5.45c6 to 0021.d8f5.45cb 0.4 12.2(33r)SG( 12.2(53)SG1 Ok
!SSO Synchronized
3 Active Supervisor SSO Standby hot
4 Standby Supervisor SSO Active

• ISSU commitversion—This final ISSU step forces the active supervisor to synchronize its configuration with the standby supervisor and forces it to reboot with the new IOS software. This stage concludes the ISSU upgrade procedure; the new IOS version is permanently committed on both supervisor modules. If for some reason the network administrator wants to roll back to the older image, it is recommended to perform an ISSU-based downgrade procedure to retain the network operational state without any downtime planning.

cr24-4507e-MB#issu commitversion 3
Building configuration...
Compressed configuration from 24970 bytes to 10848 bytes[OK]
%C4K_REDUNDANCY-5-CONFIGSYNC: The private-config has been successfully synchronized to the standby supervisor
%RF-5-RF_RELOAD: Peer reload. Reason: ISSU Commitversion

cr24-4507e-MB#show module | inc Chassis|Sup|12.2
Chassis Type : WS-C4507R-E
! Common Supervisor Module Type
3 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXQ3
4 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) WS-X45-SUP6-E JAE1132SXRQ

! Common new operating system version
3 0021.d8f5.45c0 to 0021.d8f5.45c5 0.4 12.2(33r)SG( 12.2(53)SG1 Ok
4 0021.d8f5.45c6 to 0021.d8f5.45cb 0.4 12.2(33r)SG( 12.2(53)SG1 Ok
!SSO Synchronized
3 Active Supervisor SSO Standby hot
4 Standby Supervisor SSO Active

Catalyst 6500-E VSS eFSU Software Design and Upgrade Process

Cisco Catalyst VSS was introduced in the initial IOS Release 12.2(33)SXH, which supported Fast Software Upgrade (FSU). In this initial introduction, it had limited high-availability consideration for upgrading the IOS software release. Mismatched ISSU software version compatibility was not supported by the FSU infrastructure, which could cause network downtime. This may not be a desirable solution when deploying the Catalyst 6500-E in the critical aggregation or core network tier.

Starting with IOS Release 12.2(33)SXI, the Catalyst 6500-E supports true hitless IOS software upgrade in standalone and virtual-switch network designs. Enhanced Fast Software Upgrade (eFSU) made it completely ISSU infrastructure compliant and enhances the software and hardware design to retain its functional state during the graceful upgrade process.

Figure 81 Catalyst 6500-E VSS eFSU Software Upgrade Process

Since eFSU in the Catalyst 6500-E system is built on the ISSU infrastructure, most of the eFSU prerequisites and IOS upgrade procedures remain consistent with those explained in the previous subsection. As described earlier, Cisco VSS technology enables inter-chassis SSO communication between two virtual-switch nodes. However, while the software upgrade procedure for inter-chassis eFSU upgrades is similar, the network operation slightly differs compared to ISSU implemented on an intra-chassis SSO design.

Catalyst 6500-E eFSU Software Upgrade Procedure

This subsection provides the software upgrade procedure for Catalyst 6500-Es deployed in VSS mode in the medium enterprise campus LAN network design. eFSU is supported on the Catalyst 6500-E Sup720-10GE supervisor module running a Cisco IOS release with the Enterprise feature set.

In the following sample output, a VSS-capable Sup720-10G supervisor module is installed in Slot5 of virtual-switch SW1 and SW2 respectively. The virtual-switch SW1 supervisor is in the SSO Active role and the SW2 supervisor is in the Standby hot role. In addition, with MEC and the distributed forwarding architecture, the forwarding plane is in an active state on both virtual-switch nodes. Both supervisors are running the identical Cisco IOS Release 12.2(33)SXI2a software version and are fully synchronized with SSO.

cr23-VSS-Core#show switch virtual redundancy | inc Mode|Switch|Image|Control
! VSS switch node with control-plane ownership
My Switch Id = 1
Peer Switch Id = 2
! SSO Synchronized
Configured Redundancy Mode = sso
Operating Redundancy Mode = sso
! Common operating system version
Switch 1 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI2a
Control Plane State = ACTIVE
Switch 2 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI2a
Control Plane State = STANDBY

The following provides a step-by-step procedure to upgrade from Cisco IOS Release 12.2(33)SXI2a to 12.2(33)SXI3 without causing network topology and forwarding disruption. Each upgrade step can be aborted at any stage by issuing the issu abortversion command if the software detects any failures.

• ISSU loadversion—This first step directs the active virtual-switch node to initialize the ISSU software upgrade process.

cr23-VSS-Core#issu loadversion 1/5 disk0:s72033-adventerprisek9_wan-mz.122-33.SXI3 2/5 slavedisk0:s72033-adventerprisek9_wan-mz.122-33.SXI3

After issuing the above command, the active virtual-switch ensures the new IOS software is downloaded on both supervisors' file systems and performs several additional checks on the standby supervisor on the remote virtual-switch for a graceful software upgrade process. ISSU changes the boot variable to the new IOS software version if no error is found and resets the standby virtual-switch and its installed modules.

%RF-SW1_SP-5-RF_RELOAD: Peer reload. Reason: ISSU Loadversion
%SYS-SW2_SPSTBY-5-RELOAD: Reload requested - From Active Switch (Reload peer unit).
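Throughout the ISSU/eFSU procedure, the key operational check is whether both supervisors report the same image version (the show outputs annotate this as "Common" versus "Mismatch operating system version"). The following is an illustrative sketch only, not part of the original design guide: a small script that extracts the version tokens from "Image Version" lines and flags a mismatch. The sample text and function names are hypothetical.

```python
import re
from typing import List


def image_versions(show_output: str) -> List[str]:
    """Extract IOS version tokens from 'Image Version' lines of
    'show switch virtual redundancy' style output."""
    # Matches e.g. 'Version 12.2(33)SXI3,' and captures '12.2(33)SXI3'
    return re.findall(r"Version\s+(\S+?),", show_output)


def versions_match(show_output: str) -> bool:
    """True when every supervisor reports the same image version."""
    versions = image_versions(show_output)
    return len(versions) > 0 and len(set(versions)) == 1


# Hypothetical sample, mirroring the mismatch state after 'issu loadversion'
sample = """
Switch 1 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software
(s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI2a, RELEASE SOFTWARE (fc2)
Switch 2 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software
(s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI3, RELEASE SOFTWARE (fc2)
"""
print(versions_match(sample))  # -> False (mismatch while the upgrade is in flight)
```

A mismatch is the expected, healthy state between the loadversion and commitversion steps; such a check is only a failure signal once the upgrade has been committed.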

Note Resetting the standby virtual-switch node will not trigger the network protocol graceful recovery process and will not reset the linecards on the active virtual-switch. The active virtual-switch remains in an operational and forwarding state for the transparent upgrade process.

With the broad range of ISSU version compatibility for forming SSO communication, the standby supervisor will successfully boot up again in its original standby state, as shown in the following output.

cr23-VSS-Core#show switch virtual redundancy | inc Mode|Switch|Image|Control
! VSS switch node with control-plane ownership
My Switch Id = 1
Peer Switch Id = 2
! SSO Synchronized
Configured Redundancy Mode = sso
Operating Redundancy Mode = sso
! Mismatch operating system version
Switch 1 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI2a, RELEASE SOFTWARE (fc2)
Control Plane State = ACTIVE
Switch 2 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI3, RELEASE SOFTWARE (fc2)
Control Plane State = STANDBY

To rejoin the virtual-switch domain, both nodes will re-establish VSL EtherChannel communication and force the active supervisor to resynchronize all SSO redundancy and checkpoints, the VLAN database, and forwarding information with the standby virtual-switch; the network administrator is then notified to proceed with the next ISSU step.

%HA_CONFIG_SYNC-6-BULK_CFGSYNC_SUCCEED: Bulk Sync succeeded
%PFREDUN-SW2_SPSTBY-6-STANDBY: Ready for SSO mode
%ISSU_PROCESS-SW1_SP-7-DEBUG: Peer state is [ STANDBY HOT ]; Please issue the runversion command

• ISSU runversion—After performing several steps to assure that the newly loaded software is stable on the standby virtual-switch, the network administrator is now ready to proceed to the runversion step.

cr23-VSS-Core#issu runversion 2/5
This command will reload the Active unit. Proceed ? [confirm]y
%issu runversion initiated successfully
%RF-SW1_SP-5-RF_RELOAD: Self reload. Reason: Admin ISSU runversion CLI

This step will force the current active virtual-switch (SW1) to reset itself, which will trigger network protocol graceful recovery with peer devices; however, the linecards on the current standby virtual-switch (SW2) will remain intact and the data plane traffic will continue to get switched during the switchover process. From the network perspective, the effects of the active supervisor resetting during the ISSU runversion step will be no different than the normal switchover procedure (i.e., an administrator-forced switchover or supervisor online insertion and removal). In the entire eFSU software upgrade procedure, this is the only time that the systems will perform an SSO-based network graceful recovery. The following syslogs confirm stable EIGRP graceful recovery on the virtual-switch running the new Cisco IOS software version.

• NSF-Aware Distribution

cr24-4507e-MB#
%DUAL-5-NBRCHANGE: EIGRP-IPv4:(100) 100: Neighbor 10.125.0.14 (Port-channel1) is resync: peer graceful-restart

After re-negotiating and establishing the VSL EtherChannel link and going through the VSLP protocol negotiation process, the rebooted virtual-switch module boots up in the standby role with the older IOS software version instead of the new IOS software version.

cr23-VSS-Core#show switch virtual redundancy | inc Mode|Switch|Image|Control
! VSS switch node with control-plane ownership changed to SW2
My Switch Id = 2
Peer Switch Id = 1
! SSO Synchronized
Configured Redundancy Mode = sso
Operating Redundancy Mode = sso
! Mismatch operating system version
Switch 2 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI3, RELEASE SOFTWARE (fc2)
Control Plane State = ACTIVE
Switch 1 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI2a, RELEASE SOFTWARE (fc2)
Control Plane State = STANDBY

Like the intra-chassis ISSU implementation, eFSU also provides a safeguarded software design for additional network stability and an opportunity to roll back to the previous IOS software if the system upgrade causes any type of network abnormality. At this stage, ISSU automatically starts an internal rollback timer to re-install the old IOS image if there are any problems. The default rollback timer is up to 45 minutes, which provides the network administrator an opportunity to perform several sanity checks. In small to mid-size network designs, the default timer may be sufficient; however, for large networks, the network administrator may want to adjust the timer up to 2 hours:

cr23-VSS-Core#show issu rollback-timer
Rollback Process State = In progress
Configured Rollback Time = 00:45:00
Automatic Rollback Time = 00:36:08

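The rollback timer lends itself to simple monitoring: if the acceptversion step is forgotten, the system re-installs the old image when the automatic timer expires. As an illustrative sketch only (not from the design guide), the following parses the "Automatic Rollback Time" field from hypothetical show issu rollback-timer text and converts it to remaining seconds, so a script could warn an operator well before an unattended rollback.

```python
import re
from typing import Optional


def rollback_seconds_remaining(show_output: str) -> Optional[int]:
    """Parse 'Automatic Rollback Time' (HH:MM:SS or MM:SS) from
    'show issu rollback-timer' output; None when no rollback is pending."""
    m = re.search(r"Automatic Rollback Time\s*=\s*([\d:]+)", show_output)
    if m is None:
        return None
    seconds = 0
    for part in m.group(1).split(":"):  # works for MM:SS and HH:MM:SS alike
        seconds = seconds * 60 + int(part)
    return seconds


# Hypothetical sample mirroring the timer output shown in this section
sample = """Rollback Process State = In progress
Configured Rollback Time = 00:45:00
Automatic Rollback Time = 00:36:08"""
print(rollback_seconds_remaining(sample))  # -> 2168
```

When the timer has been stopped by acceptversion, the "Automatic Rollback Time" line is absent and the sketch returns None, which a monitoring job could treat as the all-clear state.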
The system will notify the network administrator with the following syslog to continue to the next ISSU upgrade step if no stability issues are observed and all the network services are operating as expected.

%ISSU_PROCESS-SW2_SP-7-DEBUG: Peer state is [ STANDBY HOT ]; Please issue the acceptversion command

• ISSU acceptversion—This eFSU step provides confirmation from the network administrator regarding system and network stability after installing the new software, and confirms that they are ready to accept the new IOS software on the standby supervisor. This step stops the rollback timer and instructs the network administrator to continue to the final commit state. However, it does not perform any additional steps to install the new software on the standby supervisor.

cr23-VSS-Core#issu acceptversion 2/5
% Rollback timer stopped. Please issue the commitversion command.

cr23-VSS-Core#show issu rollback-timer
Rollback Process State = Not in progress
Configured Rollback Time = 00:45:00

cr23-VSS-Core#show switch virtual redundancy | inc Mode|Switch|Image|Control
! VSS switch node with control-plane ownership changed to SW2
My Switch Id = 2
Peer Switch Id = 1
! SSO Synchronized
Configured Redundancy Mode = sso
Operating Redundancy Mode = sso
! Mismatch operating system version
Switch 2 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI3, RELEASE SOFTWARE (fc2)
Control Plane State = ACTIVE
Switch 1 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI2a, RELEASE SOFTWARE (fc2)
Control Plane State = STANDBY

• ISSU commitversion—The final eFSU step forces the active virtual-switch to synchronize the configuration with the standby supervisor and forces it to reboot with the new IOS software. This stage concludes the eFSU upgrade procedure; the new IOS version is permanently committed on both virtual-switches. If for some reason the network administrator needs to roll back to the older image, it is recommended to perform the eFSU-based downgrade procedure to maintain the network operational state without any downtime planning.

cr23-VSS-Core#issu commitversion 1/5
Building configuration...
[OK]
%RF-SW2_SP-5-RF_RELOAD: Peer reload. Reason: Proxy request to reload peer
%SYS-SW1_SPSTBY-5-RELOAD: Reload requested - From Active Switch (Reload peer unit).
%issu commitversion executed successfully

cr23-VSS-Core#show switch virtual redundancy | inc Mode|Switch|Image|Control
! VSS switch node with control-plane ownership
My Switch Id = 2
Peer Switch Id = 1
! SSO Synchronized
Configured Redundancy Mode = sso
Operating Redundancy Mode = sso
! Common operating system version
Switch 2 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI3, RELEASE SOFTWARE (fc2)
Control Plane State = ACTIVE
Switch 1 Slot 5 Processor Information :
Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXI3, RELEASE SOFTWARE (fc2)
Control Plane State = STANDBY

Summary

Designing the LAN network aspects for the medium enterprise network design establishes the foundation for all other aspects within the service fabric (WAN, security, mobility, and UC), as well as laying the foundation to provide safety and security, operational efficiencies, virtual learning environments, and secure classrooms.

This chapter reviews the two LAN design models recommended by Cisco, as well as where to apply these models within the various locations of a medium enterprise network. Each of the layers is discussed, and design guidance is provided on where to place and how to deploy these layers. Finally, key network foundation services such as routing, switching, QoS, multicast, and high availability best practices are given for the entire medium enterprise design.