white-paper-c11-CISCO IMP
Cisco public
Fiber-Optic Cabling Connectivity Guide for 40-Gbps Bidirectional and Parallel Optical Transceivers
The new Cisco Nexus 9000 Series provides high 1-, 10-, 40-, and (future) 100-Gbps Ethernet densities
with outstanding performance and a comprehensive feature set. The Cisco Nexus 9000 Series provides a
versatile platform that can be deployed in multiple scenarios - direct-attach 1-, 10-, and 40-Gbps access
and collapsed aggregation and access deployments, leaf-and-spine architecture, and compact
aggregation solutions.
Structured cabling requires additional initial investment to create the cabling infrastructure, but the
recurring benefits more than outweigh the incremental cost. Imagine the cost of deploying
a two-fiber optical jumper each time a new server is placed in the data center. Further, regardless of
whether the data center has a raised floor or uses overhead cabling, both result in time-consuming and
inefficient deployment in an unstructured environment. Likewise, management of such an environment is
cumbersome, increasing the risk of outages caused by human errors.
Structured cabling uses fiber termination connector panels that are connected through permanent links of
optical cabling, typically configured in a star topology. All cabling in the data center coming from the server
area is consolidated in a central location near the core, aggregation-layer, or spine switch in the network
(analogous to the breaker or power panel in the home electrical system analogy). The permanent pre-
terminated trunk cables branch to the zones in the data center, which contain servers, storage, or network
devices. Note that with structured cabling, you still need some device-to-device connections at the access
layer. As you can see in Figure 1, when you make these short connections within the same cabinet or even
a few cabinets away, patch panels may not be required. Likewise, patch panels would not be required for
inter-switch link connections.
Unstructured cabling occurs when optical links are deployed point to point or device to device with no
patch panels installed in the link. In this situation, cabling pathways become congested with an entangled
mess of two-fiber optical patch cords (Figure 2). Likewise, routing new patch cords in ceiling or floor trays
all the way across a data center each time a new device is deployed is extremely inefficient.
Figure 2.
Unstructured Cabling
SFP+ is the dominant transceiver form factor used for 1 and 10 Gigabit Ethernet applications. The
transceiver uses an LC optical connector interface. For more information, see
https://www.cisco.com/en/US/prod/collateral/modules/ps5455/data_sheet_c78-455693.html.
The QSFP+ transceiver is the dominant transceiver form factor used for 40 Gigabit Ethernet applications. In
2010, the IEEE 802.3ba standard defined several 40-Gbps solutions, including the 40GBASE-SR4
parallel optics solution for MMF. Since then, several engineered solutions have been released, including
40GBASE-CSR4, which is similar to 40GBASE-SR4 but extends the distance capabilities. Another solution
released by Cisco is a bidirectional 40-Gbps transceiver that uses a two-fiber LC optical interface. For
more information, see https://www.cisco.com/en/US/prod/collateral/modules/ps5455/data_sheet_c78-
660083.html.
Pluggable optical modules
Options: QSFP-40G-SR-BD, QSFP-40G-SR4, QSFP-40G-CSR4, QSFP-40GE-LR4, QSFP-40G-CSR
Advantages: Allow extended-reach capabilities (up to 400m on MMF and 10 km on single-mode fiber [SMF]); cable links and optical engines are separate and thus can be upgraded independently
Disadvantages: More expensive than other short-reach direct-attach options

Active optical cable (AOC) assemblies
Options: QSFP+-AOC (1, 2, 3, 5, 7, and 10m)
Advantages: Allows lower-cost short-reach capability with more flexible cabling; typically used for ToR-to-server connectivity
Disadvantages: Limited to less than 10m; reconfiguration of length or a failed transceiver requires replacement of the entire assembly
MTP trunk cable: These fiber trunk cables are typically 12 to 144 fibers and create the permanent fiber
links between patch panels in a structured environment. They are pre-terminated from the manufacturer
with MTP connectors at a specified length and have a pulling grip for easy installation.

2x3 conversion module: The 2x3 conversion module is used for 4-channel (8-fiber) parallel optics
applications, such as 40GBASE-SR4. It allows 100% utilization of trunk cables by converting a pair of
12-fiber MTP connections into three 8-fiber MTP connections. The trunk cables plug into the rear MTP of
the module, and the MTP jumpers plug into the front of the module to make the connection to the switch.
Learn more about each of these products in the Corning product catalog; see “Indoor Preterminated
Systems” at http://catalog.corning.com/CableSystems/en-US/Default.aspx.
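The 2x3 conversion arithmetic described above can be sketched in a few lines. This is an illustrative calculation only (not a Cisco or Corning tool); the constants mirror the 12-fiber trunk and 8-fiber link figures in the text.

```python
# Illustrative sketch of 2x3 conversion arithmetic: a pair of 12-fiber MTP
# trunks (24 fibers) is regrouped into three 8-fiber parallel optics links,
# leaving no stranded fibers.

TRUNK_FIBERS = 12  # fibers per MTP trunk connection
LINK_FIBERS = 8    # fibers per 4-channel parallel optics link (4 Tx + 4 Rx)

def links_without_conversion(trunks):
    """One 12-fiber trunk per link; 4 fibers of every trunk sit idle."""
    links = trunks
    utilization = links * LINK_FIBERS / (trunks * TRUNK_FIBERS)
    return links, utilization

def links_with_conversion(trunks):
    """Every 2 trunks (24 fibers) are converted into 3 x 8-fiber links."""
    links = trunks // 2 * 3
    utilization = links * LINK_FIBERS / (trunks * TRUNK_FIBERS)
    return links, utilization
```

For 12 trunks, conversion yields 18 links at 100 percent fiber utilization, versus 12 links at roughly 67 percent without it.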
Cisco Nexus 9500 platform: The Cisco Nexus 9500 platform consists of modular switches. The Cisco
Nexus 9508 Switch is the first switch released for this platform. With more than 30 terabits per second
(Tbps) of backplane bandwidth, the switch supports 1, 10, 40, and (future) 100 Gigabit Ethernet interfaces
through a comprehensive selection of modular line cards. Configurable with up to 1152 10 Gigabit
Ethernet or 288 40 Gigabit Ethernet ports, the switch provides sufficient capacity for both access- and
aggregation-layer deployments.

Cisco Nexus 9396PX Switch: The Cisco Nexus 9300 platform consists of fixed-port switches designed for
ToR and Middle-of-Row (MoR) deployment in data centers. The Cisco Nexus 9396PX is a 2RU
non-blocking Layer 2 and 3 switch with 48 1- and 10-Gbps SFP+ ports and 12 40-Gbps QSFP+ ports.

Cisco Nexus 93128TX Switch: The Cisco Nexus 9300 platform consists of fixed-port switches designed for
ToR and MoR deployment in data centers. The Cisco Nexus 93128TX is a 3RU 1.28-Tbps Layer 2 and 3
switch with 96 1/10GBASE-T ports and 8 40-Gbps QSFP+ ports.
Cisco Nexus 9000 Series Switches can run in two operating modes: the standard Cisco Nexus device
mode with enhanced Cisco NX-OS Software as the operating system, or the Cisco Application Centric
Infrastructure (ACI) mode to take full advantage of an automated policy-based approach to system
management.
While operating in the standard Cisco Nexus device mode, Cisco Nexus 9000 Series Switches can be
deployed in a variety of data center network designs. Cisco Nexus 9500 platform switches, with their high-
density line-rate 40-Gbps line cards, can be placed at the aggregation layer to provide aggregated 40-
Gbps connectivity for the access switches. Cisco Nexus 9500 platform switches, with their 1- and 10-
Gbps line cards, can be deployed as high-performance access-layer End-of-Row (EoR) or MoR switches
for 1- and 10-Gbps server connectivity. Cisco Nexus 9300 platform switches are well designed for the
access layer as ToR switches. The Cisco Nexus 93128TX is also a good choice as a MoR access switch.
As shown in Figure 3, Cisco Nexus 9500 and 9300 platform switches are deployed in the traditional three-
tier design. The Cisco Nexus 9500 platform switches are placed at the core and aggregation tiers, and the
Cisco Nexus 9500 or 9300 platform switches are deployed at the access tier.
Cisco Nexus 9500 and 9300 platform switches both support Cisco Nexus 2000 Series Fabric Extenders
(FEXs). By using the Cisco Nexus 2000 Series, Cisco Nexus 9500 and 9300 platform switches can build a
cost-effective and scalable collapsed aggregation and access layer for 1- and 10-Gbps server
connectivity with 40-Gbps uplinks to the network. Figures 4 and 5 show Cisco Nexus 9500 and 9300
platform switches, respectively, in this collapsed two-tier design.
Figure 4.
Cisco Nexus 9500 platform switches and fabric extenders for collapsed aggregation and access layer
Modern data center applications, such as big data and High-Frequency Trading (HFT) applications, and
virtualized and clustered environments have shifted the data center traffic load from north-south client-
server traffic to east-west server-to-server traffic. Data center architects are starting to gravitate to the
spine-leaf topology, which flattens the network to two tiers and increases traffic-forwarding efficiency for
the increasing amount of east-west traffic. With their non-blocking architecture and high 10- and 40-Gbps
port densities, the Cisco Nexus 9500 and 9300 platforms are excellent choices for a spine-and-leaf
network design. Figure 6 shows a sample spine-and-leaf network constructed with Cisco Nexus 9500 and
9300 platform switches. With their versatile line-card options, Cisco Nexus 9500 platform switches can be
deployed at both the spine and leaf layers.
Figure 6.
Cisco Nexus 9500 and 9300 platforms for spine-and-leaf designs
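The port counts quoted earlier allow a back-of-the-envelope sizing of such a fabric. The switch figures below come from this document (Nexus 9396PX: 48 x 10-Gbps plus 12 x 40-Gbps ports; Nexus 9508: up to 288 x 40-Gbps ports); the assumption that every leaf runs exactly one 40-Gbps uplink to every spine is an illustrative convention, not a validated design rule.

```python
# Rough spine-leaf sizing sketch using port counts quoted in this document.
LEAF_DOWN_10G = 48     # server-facing 10-Gbps ports per 9396PX leaf
LEAF_UP_40G = 12       # 40-Gbps uplink ports per leaf
SPINE_40G_PORTS = 288  # maximum 40-Gbps ports on a 9508 spine

def fabric_size(num_spines):
    """Assume each leaf runs one 40-Gbps uplink to every spine."""
    assert num_spines <= LEAF_UP_40G, "leaf lacks uplinks for this many spines"
    leaves = SPINE_40G_PORTS  # each leaf consumes one port on every spine
    # Downlink-to-uplink bandwidth ratio per leaf (1.0 = non-oversubscribed).
    oversubscription = (LEAF_DOWN_10G * 10) / (num_spines * 40)
    return {"leaves": leaves,
            "server_ports": leaves * LEAF_DOWN_10G,
            "oversubscription": oversubscription}
```

With all 12 leaf uplinks in use, the sketch gives 288 leaves, 13824 server-facing 10-Gbps ports, and a 1:1 oversubscription ratio.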
All the preceding deployment scenarios use 1-, 10-, and 40-Gbps cabling for physical connectivity. 1- and
10-Gbps cabling infrastructure is well understood and commonly deployed, but 40-Gbps cabling remains a
new challenge to data center operators because of its different transceiver technologies and cabling
requirements. The rest of this document provides guidance and options for 40-Gbps fiber cabling designs.
The change from two-fiber serial transmission to parallel transmission required some changes in the
cabling. First, the connector type was converted from the traditional 2-fiber LC duplex connector to a 12-
fiber MTP connector. This change created some new challenges: in particular, with the pinning and polarity
of the connector. The traditional LC duplex connector uses a ceramic ferrule on each connector, which is
aligned in an adapter panel with the use of a ceramic alignment sleeve. However, the MTP connector uses
a pinned and non-pinned connector alignment system, making it imperative to always maintain the correct
pinning. Likewise, polarity correction of a 2-fiber system can be easily achieved by flipping the position of
the LC connector in the duplex clip. However, correction of polarity in a 12-fiber MTP connector can be
more challenging because all 12 fibers are in a single ferrule.
In response to these challenges, Cisco developed a two-fiber 40-Gbps Bidirectional (BiDi) multimode
solution. This solution uses two different transmission windows (850 and 900 nm) that are transmitted
bidirectionally over the same fiber. This approach allows the use of the same cabling infrastructure for 40
Gigabit Ethernet as was used for 1 and 10 Gigabit Ethernet. The pluggable bidirectional transceiver has the
same QSFP+ format as the existing 40GBASE-SR4 transceivers. Alternatively, Cisco also offers the
40G-CSR transceiver, which uses four wavelengths, each operating at 10 Gbps, over MMF, supporting
300m over OM3 and 400m over OM4. Therefore, the same switch line card with QSFP+ ports can support
either parallel optics 40GBASE-SR4 (or 40GBASE-CSR4) or bidirectional optics 40GBASE-SR-BD (or
40GBASE-CSR) solutions.
Thus, when directly connecting a 40 Gigabit Ethernet bidirectional transceiver to another bidirectional
transceiver, a Type A-to-B standard LC duplex patch cord can be used. As defined in TIA-568-C.3, a
Type A-to-B duplex patch cord is constructed with one (blue) fiber in connector position A on one end and
in connector position B on the other end, and likewise for the second (orange) fiber, as shown in Figure 7.
This reverse fiber positioning allows a signal to be directed from the transmit position on one end of the
network to the receive position on the other end of the network.
Figure 7.
Type A-to-B jumper fiber mapping (per TIA-568-C.3)
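The A-to-B crossover can be modeled in a couple of lines. This is a minimal illustration, not text from the standard; the convention that a transceiver transmits on position A and receives on position B is assumed here for the example.

```python
# Minimal model of a TIA-568-C.3 Type A-to-B duplex patch cord: the two
# fiber positions are swapped end to end, so a near-end transmit signal
# arrives at the far end's receive position.

def a_to_b(position):
    """Map a duplex connector position through an A-to-B patch cord."""
    return {"A": "B", "B": "A"}[position]

# Illustrative convention: transceivers transmit on A and receive on B.
assert a_to_b("A") == "B"          # near-end Tx lands on far-end Rx
assert a_to_b(a_to_b("A")) == "A"  # two crossovers in series cancel out,
                                   # which is why polarity must be planned
                                   # across every segment of a link
```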
This type of direct connectivity is suggested only within a given row of cabinets. The jumper assembly is
tested only to the requirements of an interconnect cable, as defined in ANSI/ICEA S-83-596-2001. It has
less robustness (less tensile strength, less crush and impact resistance, etc.) than a distribution-style trunk
cable. Figure 8 depicts a situation in which two 40 Gigabit Ethernet bidirectional ports on two switches are
directly linked using a Type A-to-B LC duplex jumper.
However, when considering structured cabling, you must consider deployment of more permanent links.
The simplest structured cabling link includes a patch panel on both ends of the link, with a jumper
assembly making the connection to the electronic ports. This type of cabling is called an interconnect,
because both jumpers in the link connect from the structured cabling patch panel to the electronics ports.
Figure 9 shows an interconnect link between two bidirectional ports installed in a switch. The link consists
of an MTP-based trunk, MTP-LC modules, and LC jumpers. By installing the MMF MTP assembly, you can
provide more scalability to accommodate future data rates that may require parallel optics for transmission.
This future migration can be accomplished simply by changing the patch panels on each end of the link,
without the need to disrupt the cabling infrastructure.
Figure 9.
Interconnect for 40 Gigabit Ethernet bidirectional transceiver
The final cabling approach to consider is a cross-connect design, as shown in Figure 10. In this scenario,
two separate structured cabling links connect the two switches through a centralized cross-connect. The
advantage of this approach is that it allows the most flexible network configuration. The electronics can be
placed in various locations throughout the data center, with structured cabling links between the cross-
connect location and designated zone cabinets. When new equipment is installed, only patch cords are
required to make the connection from the equipment to the patch panels. Moreover, any port-to-any port
connectivity can be achieved at the cross-connect location. This connection is achieved in an orderly and
manageable way, unlike what can occur with a direct connectivity scheme in which patch cord assemblies
are used to make all port-to-port connections in the data center without structured cabling.
Figure 10.
Cross-Connect for 40 Gigabit Ethernet bidirectional transceiver
As previously mentioned, parallel optics requires a change from traditional cabling methods; that learning
curve creates an incentive to move to the bidirectional solution at 40 Gigabit Ethernet. The main
advantage of the parallel optics transceiver over the bidirectional transceiver at 40
Gigabit Ethernet is reach. For example, if you cable your data center with OM3 fiber at 10 Gigabit Ethernet,
you can support distances up to 300m. Then if you move to 40 Gigabit Ethernet, you can support the same
300m distance with the same OM3 fiber and a 40GBASE-CSR4 transceiver. However, if your cabling
distances do not justify the extra distance capability, then the bidirectional solution would be used
(Figure 11).
Figure 11.
40GBASE-SR4 and 40GBASE-CSR4 lane assignments
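The reach trade-off above lends itself to a simple lookup check. The 40GBASE-CSR4 distances (300m over OM3, 400m over OM4) come from this document; the 40GBASE-SR4 distances (100m over OM3, 150m over OM4) are the IEEE 802.3ba limits. A sketch only, not a link-budget tool.

```python
# Reach limits (meters) for the 40G MMF transceiver options discussed above.
REACH_M = {
    ("40GBASE-SR4", "OM3"): 100,   # IEEE 802.3ba
    ("40GBASE-SR4", "OM4"): 150,   # IEEE 802.3ba
    ("40GBASE-CSR4", "OM3"): 300,  # per this document
    ("40GBASE-CSR4", "OM4"): 400,  # per this document
}

def supports(transceiver, fiber, link_m):
    """True if the planned link distance is within the transceiver's reach."""
    return link_m <= REACH_M[(transceiver, fiber)]
```

For example, a 300m OM3 link that worked at 10 Gigabit Ethernet is within reach of 40GBASE-CSR4 but not of 40GBASE-SR4.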
The dilemma is that MTP cable assemblies, which have been used for more than a decade for cabling in
the data center, are built on 12-fiber position connectors. Thus, each link has four unused fibers. There are
several basic cabling options for parallel optics connectivity. One approach is to ignore the unused fibers
and continue to deploy 12 fibers. Another approach is to use a conversion device to convert two 12-fiber
links into three 8-fiber links. Three solutions exist (summarized in Table 7 and Figure 12).
● Solution 1: The no-conversion scenario retains the whole 12-fiber based cabling system, but 33
percent of the fiber is not used. Additional cost is associated with the purchase of additional fiber,
and your system includes unused fiber.
Solution 1: No conversion
Description: Uses traditional 12-fiber MTP connectivity and ignores unused fiber
Advantages: Simplicity and lowest link attenuation
Disadvantages: Does not use 33% of the installed fiber, and thus requires more cable, adding raceway congestion

Solution 2: Conversion module
Description: Converts two 12-fiber links to three 8-fiber links through a conversion patch panel
Advantages: Uses all backbone fiber and creates a clean, manageable patch panel with off-the-shelf components
Disadvantages: Entails additional connectivity costs and attenuation associated with the conversion device

Solution 3: Conversion assembly
Description: Converts two 12-fiber links to three 8-fiber links through a conversion assembly and standard MTP patch panels
Advantages: Uses all backbone fiber with additional connectivity
Disadvantages: Creates cabling challenges with dangling connectors and non-optimized-length patch cords that require customization
Figure 12.
Cabling solutions for 40-Gbps connectivity
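The trunk-count impact of the solutions above can be compared with simple arithmetic. This is an illustrative planning sketch under the fiber counts given in the text, not a cabling design tool.

```python
# Number of 12-fiber MTP trunks needed to cable a given count of 8-fiber
# parallel optics links, with and without conversion.
import math

def solution1_trunks(links):
    """Solution 1 (no conversion): one 12-fiber trunk per 8-fiber link."""
    return links

def conversion_trunks(links):
    """Solutions 2/3: all 12 fibers of every trunk are carried to links."""
    return math.ceil(links * 8 / 12)
```

For 18 parallel optics links, the no-conversion approach consumes 18 trunks while the conversion approaches need only 12.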
When directly connecting a parallel optics 40 Gigabit Ethernet transceiver to another 40 Gigabit Ethernet
transceiver, a Type-B pinless-pinless MTP jumper should be used. As shown in Figure 13, a Type-B MTP
jumper assembly, as defined in TIA-568-C.3, has the blue fiber 1 assembled in connector position 1 on
one end of the assembly, and this same fiber assembled in connector position 12 on the other end of the
assembly. This reverse fiber positioning allows the signal to flow from transmission on one end of the link
to reception on the other end.
Figure 13.
Type-B array patch cord (per TIA-568-C.3)
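The Type-B flip can be expressed as a one-line position mapping. The 40GBASE-SR4 lane placement used below (Tx on positions 1 through 4, Rx on positions 9 through 12, middle four positions unused) is the commonly cited convention, assumed here for illustration.

```python
# TIA-568-C.3 Type-B MTP polarity: the fiber in position p on one end of
# the jumper lands in position 13 - p on the other end, flipping the row.

def type_b(position):
    """Map a 12-fiber MTP position through a Type-B jumper."""
    assert 1 <= position <= 12
    return 13 - position

# Assumed SR4 lane map: Tx on positions 1-4, Rx on positions 9-12.
TX_POSITIONS = [1, 2, 3, 4]
RX_POSITIONS = [12, 11, 10, 9]

assert [type_b(p) for p in TX_POSITIONS] == RX_POSITIONS  # Tx meets Rx
assert type_b(type_b(5)) == 5  # the flip is its own inverse
```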
This type of direct connectivity is suggested only within a given row of cabinets. The jumper assembly is
tested only to the requirements of an interconnect cable, as defined in ANSI/ICEA S-83-596-2001. It has
less robustness (less tensile strength, less crush and impact resistance, etc.) than a distribution-style
cable, which would be used for structured cabling trunks. Figure 14 shows two switch ports directly cabled
with an MTP jumper patch cord.
Figure 14.
Direct connection for 40 Gigabit Ethernet parallel optic transceiver
Similar to the bidirectional cabling approach, the most basic structured cabling solution is an interconnect.
The only difference for parallel optics is that the connector type of the patch panels is MTP instead of LC.
Figure 15 shows several interconnect link scenarios with various patch-panel
options. As previously discussed, the 2x3 conversion modules, depicted in Figure 15a, allow 100 percent
fiber utilization and constitute the most commonly deployed method. Another advantage of the conversion
module is reduced jumper complexity. Notice that a G jumper, which has a Type-B polarity and is pinless,
is used to directly connect two parallel optics transceivers. That same jumper is used on both ends of the
interconnect link, thus eliminating concerns about correct pinning.
The combined solution shown in Figure 15c might be deployed when cabling between a spine switch,
where the module is placed, and a ToR leaf switch, where the conversion harness and panel are located.
The QSFP ports on the leaf switch are closely clustered, so the short breakouts of the 2x3 harness
assembly should not be a concern. However, use of the 2x3 harness assembly at the core spine switch is
not desirable because patching across blades and chassis is a common practice.
Figure 15.
40 Gigabit Ethernet parallel optics interconnect link with (a) Conversion Modules, (b) No Conversion, and (c) Combined
conversion module and harness solution
A: ECM-UM24-93-93Q; 2x3 conversion module; MTP (pinned) to MTP (pinned) and OM3/4 cable.
Note: A higher-density 4x6 module is also available.
F: J937512TE8-NB010F; MTP (pinned) to MTP (pinless) OM3 jumper with Type-B polarity; 10 ft
G: J757512TE8-NB010F; MTP (pinless) to MTP (pinless) OM3 jumper with Type-B polarity; 10 ft
I: H937524QPHKLZ010F ~ H937524QPHKLZ300F; 2x3 conversion harness assembly; 12-fiber MTP connections are pinned, and 8-fiber MTP connections are pinless; 24-inch breakout legs; OM3/4 cable; 10 to 300 ft
As with bidirectional cabling, a cross-connect design allows the most network flexibility. Figure 16 shows
two cross-connect network link designs for cabling a 40 Gigabit Ethernet parallel optics transceiver. Figure
16a shows a conversion module example, which again is the most common and preferred method. Notice
in this design that all three jumpers (two at the electronics on the left side of the figure and the one at the
cross-connect on the right side of the figure) in the link are G jumpers, which according to the BoM in
Table 9 are Type-B polarity, and both MTP cables are pinless. Thus, in a conversion module deployment,
only one jumper type is used for a direct-connect, interconnect, or cross-connect cabling scenario.
However, notice in Figure 16b that this is not the case for a non-conversion cabling scenario, in which
standard MTP patch panels are deployed. Here the patch cords at the electronics are pinless (into the
electronics) to pinned (into the patch panel), although the patch cords at the cross-connect are both
pinned going into the patch panel. Thus, for a direct-connect, interconnect, and cross-connect cabling
scenario, three different pinned jumpers are required.
An alternative approach is to install pinned MTP trunks in the structured cabling, but this approach can be
used mainly in new installations because the traditional MTP trunks installed over the past decade have
been pinless.
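The pinning rule running through this whole discussion can be captured as a single check. The rule itself (an MTP mating requires exactly one pinned and one pinless connector) follows from the alignment system described earlier; the note that transceiver MTP receptacles are pinned matches the pinless jumper ends used into the electronics throughout this section.

```python
# MTP gender (pinning) rule: every mating must pair exactly one pinned
# connector with one pinless connector, or the alignment pins collide
# or are absent entirely.

def valid_mating(end_a, end_b):
    """True only for a pinned-to-pinless pairing."""
    assert end_a in ("pinned", "pinless") and end_b in ("pinned", "pinless")
    return end_a != end_b

# Transceiver MTP receptacles are pinned, so jumpers plug in pinless-first.
assert valid_mating("pinless", "pinned")
assert not valid_mating("pinless", "pinless")
```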
Figure 16.
40 Gigabit Ethernet parallel optics cross-connect link with (a) Conversion Modules and (b) No Conversion
A: ECM-UM24-93-93Q; 2x3 conversion module; MTP (pinned) to MTP (pinned) and OM3/4 cable.
Note: A higher-density 4x6 module is also available.
F: J937512TE8-NB010F; MTP (pinned) to MTP (pinless) OM3 jumper with Type-B polarity; 10 ft
G: J757512TE8-NB010F; MTP (pinless) to MTP (pinless) OM3 jumper with Type-B polarity; 10 ft
Conclusion
Structured cabling using an MTP cabling infrastructure can be used with current 10 Gigabit Ethernet
environments while maintaining investment protection for 40-Gbps environments and beyond. With the
new 40 Gigabit Ethernet bidirectional transceivers, no changes to the cabling infrastructure are required
when transitioning from 10 to 40 Gigabit Ethernet. Extended 40 Gigabit Ethernet link distances, which
match the distances at 10 Gigabit Ethernet, can be achieved by converting to parallel optics transceivers.
These transceivers require a change in traditional cabling practices. However, if structured cabling has
been implemented with MTP-based trunk cables, then making the conversion is as simple as swapping the
patch panels. Thus, the existing MTP-LC modules that were used in the two-fiber serial transmission
would be replaced with MTP conversion modules for parallel optics.
New data center switching platforms, such as the Cisco Nexus 9000 Series, are now using the cost-
effective, lower-power optics at 40 Gbps to deploy innovative and flexible networking solutions. These
solutions allow easy integration into existing environments and deployment of new options regardless of
your zone (EoR or MoR) or ToR deployment needs.
For more information on Cisco optical transceiver products, visit the website
https://www.cisco.com/en/US/products/hw/modules/ps5455/prod_module_series_home.html.
For additional assistance in designing your cabling infrastructure or for more information on Corning
cabling solutions, contact a Corning customer service representative. Likewise, if you would like to read
Corning’s design guide or request that the support team contact you, visit
http://cablesystems.corning.com/1-NX6Cabling.